190+ JavaScript interview questions and answers in quiz-style format, answered by ex-FAANG interviewers
Solved by ex-interviewers
Covers critical topics
Tired of scrolling through low-quality JavaScript interview questions? You’ve found the right place!
Our JavaScript interview questions are crafted by experienced ex-FAANG senior / staff engineers, not random unverified sources or AI.
With over 190 questions covering everything from core JavaScript concepts to advanced JavaScript features (async / await, promises, etc.), you'll be fully prepared.
Each quiz question comes with:
Concise answers (TL;DR): Clear and to-the-point solutions to help you respond confidently during interviews.
Comprehensive explanations: In-depth insights to ensure you fully understand the concepts and can elaborate when required. Don’t waste time elsewhere—start practicing with the best!
If you're looking for JavaScript coding questions, we've got you covered as well, with:
280+ JavaScript coding questions
In-browser coding workspace similar to real interview environment
== is the abstract equality operator while === is the strict equality operator. The == operator will compare for equality after doing any necessary type conversions. The === operator will not do type conversion, so if two values are not of the same type, === will simply return false. When using ==, funny things can happen, such as:
1 == '1'; // true
1 == [1]; // true
1 == true; // true
0 == ''; // true
0 == '0'; // true
0 == false; // true
As a general rule, never use the == operator, except for convenience when comparing against null or undefined, where a == null will return true if a is null or undefined.
var a = null;
console.log(a == null); // true
console.log(a == undefined); // true
What is the difference between a variable that is: `null`, `undefined` or undeclared?
Undeclared variables are created when you assign a value to an identifier that was not previously created using var, let, or const. Undeclared variables are defined globally, outside of the current scope. In strict mode, a ReferenceError is thrown when you try to assign to an undeclared variable. Undeclared variables are bad, just as global variables are bad. Avoid them at all costs! To check for them, wrap their usage in a try/catch block.
function foo() {
  x = 1; // Throws a ReferenceError in strict mode
}
foo();
console.log(x); // 1
A variable that is undefined is a variable that has been declared but not assigned a value. It is of type 'undefined'. If a function does not return a value as the result of its execution and its result is assigned to a variable, the variable also has a value of undefined. To check for it, compare using the strict equality operator (===) or typeof, which will return the string 'undefined'. Note that you should not use the abstract equality operator to check, as it will also return true if the value is null.
var foo;
console.log(foo); // undefined
console.log(foo === undefined); // true
console.log(typeof foo === 'undefined'); // true
console.log(foo == null); // true. Wrong, don't use this to check!
function bar() {}
var baz = bar();
console.log(baz); // undefined
A variable that is null will have been explicitly assigned the value null. It represents no value and is different from undefined in the sense that it has been explicitly assigned. To check for null, simply compare using the strict equality operator. Note that, as above, you should not use the abstract equality operator (==) to check, as it will also return true if the value is undefined.
var foo = null;
console.log(foo === null); // true
console.log(typeof foo === 'object'); // true
console.log(foo == undefined); // true. Wrong, don't use this to check!
As a good habit, never leave your variables undeclared or unassigned. Explicitly assign null to them after declaring them if you don't intend to use them yet. If you use some static analysis tooling in your workflow (e.g. ESLint, TypeScript Compiler), it can usually also check that you are not referencing undeclared variables.
.call and .apply are both used to invoke functions, and the first parameter will be used as the value of this within the function. However, .call takes comma-separated arguments as the next arguments, while .apply takes an array of arguments as the next argument. An easy way to remember this is C for call and comma-separated arguments, and A for apply and an array of arguments.
function add(a, b) {
  return a + b;
}
console.log(add.call(null, 1, 2)); // 3
console.log(add.apply(null, [1, 2])); // 3
What is the difference between `mouseenter` and `mouseover` event in JavaScript and browsers?
The main difference lies in the bubbling behavior of mouseenter and mouseover events. mouseenter does not bubble while mouseover bubbles.
mouseenter events do not bubble. The mouseenter event is triggered only when the mouse pointer enters the element itself, not its descendants. If a parent element has child elements and the mouse pointer enters a child element, the mouseenter event will not be triggered on the parent element again; it is only triggered once upon entering the parent element, regardless of its contents. If both parent and child have mouseenter listeners attached and the mouse pointer moves from the parent element to the child element, mouseenter will only fire for the child.
mouseover events bubble up the DOM tree. The mouseover event is triggered when the mouse pointer enters the element or one of its descendants. If a parent element has child elements, and the mouse pointer enters child elements, the mouseover event will be triggered on the parent element again as well. If the parent element has multiple child elements, this can result in multiple event callbacks fired. If there are child elements, and the mouse pointer moves from the parent element to the child element, mouseover will fire for both the parent and the child.
| Property | mouseenter | mouseover |
| -------- | ---------- | --------- |
| Bubbling | No | Yes |
| Trigger | Only when entering itself | When entering itself and when entering descendants |
mouseenter event:
Does not bubble: The mouseenter event does not bubble. It is only triggered when the mouse pointer enters the element to which the event listener is attached, not when it enters any child elements.
Triggered once: The mouseenter event is triggered only once when the mouse pointer enters the element, making it more predictable and easier to manage in certain scenarios.
A use case for mouseenter is when you want to detect the mouse entering an element without worrying about child elements triggering the event multiple times.
mouseover event:
Bubbles up the DOM: The mouseover event bubbles up through the DOM. This means that if you have an event listener on a parent element, it will also trigger when the mouse pointer moves over any child elements.
Triggered multiple times: The mouseover event is triggered every time the mouse pointer moves over an element or any of its child elements. This can lead to multiple triggers if you have nested elements.
A use case for mouseover is when you want to detect when the mouse enters an element or any of its children and are okay with the events triggering multiple times.
Example
Here's an example demonstrating the difference between mouseover and mouseenter events:
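A minimal sketch of such a demo (assuming a .parent element that contains a .child element; the class names are just for illustration):
const parent = document.querySelector('.parent'); // assumes a .parent element wrapping a .child

parent.addEventListener('mouseenter', () => {
  console.log('mouseenter on parent'); // fires once, when the pointer enters the parent itself
});

parent.addEventListener('mouseover', () => {
  console.log('mouseover on parent'); // fires again when the pointer moves onto the child, because the event bubbles
});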
In JavaScript, data types can be categorized into primitive and non-primitive types:
Primitive data types
Number: Represents both integers and floating-point numbers.
String: Represents sequences of characters.
Boolean: Represents true or false values.
Undefined: A variable that has been declared but not assigned a value.
Null: Represents the intentional absence of any object value.
Symbol: A unique and immutable value used as object property keys. Read more in our deep dive on Symbols
BigInt: Represents integers with arbitrary precision.
Non-primitive (Reference) data types
Object: Used to store collections of data.
Array: An ordered collection of data.
Function: A callable object.
Date: Represents dates and times.
RegExp: Represents regular expressions.
Map: A collection of keyed data items.
Set: A collection of unique values.
The primitive types store a single value, while non-primitive types can store collections of data or complex entities.
Data types in JavaScript
JavaScript, like many programming languages, has a variety of data types to represent different kinds of data. The main data types in JavaScript can be divided into two categories: primitive and non-primitive (reference) types.
Primitive data types
Number: Represents both integer and floating-point numbers. JavaScript only has one type of number.
let age = 25;
let price = 99.99;
console.log(price); // 99.99
String: Represents sequences of characters. Strings can be enclosed in single quotes, double quotes, or backticks (for template literals).
let myName = 'John Doe';
let greeting = 'Hello, world!';
let message = `Welcome, ${myName}!`;
console.log(message); // "Welcome, John Doe!"
Boolean: Represents logical entities and can have two values: true or false.
let isActive = true;
let isOver18 = false;
console.log(isOver18); // false
Undefined: A variable that has been declared but not assigned a value is of type undefined.
let user;
console.log(user); // undefined
Null: Represents the intentional absence of any object value. It is a primitive value and is treated as a falsy value.
let user = null;
console.log(user); // null
if (!user) {
  console.log('user is a falsy value');
}
Symbol: A unique and immutable primitive value, typically used as the key of an object property.
let sym1 = Symbol();
let sym2 = Symbol('description');
console.log(sym1); // Symbol()
console.log(sym2); // Symbol(description)
BigInt: Used for representing integers with arbitrary precision, useful for working with very large numbers.
let bigNumber = BigInt(9007199254740991);
let anotherBigNumber = 1234567890123456789012345678901234567890n;
Object: It is used to store collections of data and more complex entities. Objects are created using curly braces {}.
let person = {
  name: 'Alice',
  age: 30,
};
console.log(person); // {name: "Alice", age: 30}
Array: A special type of object used for storing ordered collections of data. Arrays are created using square brackets [].
let numbers = [1, 2, 3, 4, 5];
console.log(numbers); // [1, 2, 3, 4, 5]
Function: Functions in JavaScript are objects. They can be defined using function declarations or expressions.
function greet() {
  console.log('Hello!');
}
let add = function (a, b) {
  return a + b;
};
greet(); // "Hello!"
console.log(add(2, 3)); // 5
Date: Represents dates and times. The Date object is used to work with dates.
let today = new Date().toLocaleTimeString();
console.log(today); // e.g. "10:30:15 AM" (depends on locale and current time)
RegExp: Represents regular expressions, which are patterns used to match character combinations in strings.
let pattern = /abc/;
let str = '123abc456';
console.log(pattern.test(str)); // true
Map: A collection of keyed data items, similar to an object but allows keys of any type.
let map = new Map();
map.set('key1', 'value1');
console.log(map); // Map(1) {'key1' => 'value1'}
Set: A collection of unique values.
let set = new Set();
set.add(1);
set.add(2);
console.log(set); // { 1, 2 }
Determining data types
JavaScript is a dynamically-typed language, which means variables can hold values of different data types over time. The typeof operator can be used to determine the data type of a value or variable.
console.log(typeof 42); // "number"
console.log(typeof 'hello'); // "string"
console.log(typeof true); // "boolean"
console.log(typeof undefined); // "undefined"
console.log(typeof null); // "object" (this is a historical bug in JavaScript)
console.log(typeof Symbol()); // "symbol"
console.log(typeof BigInt(123)); // "bigint"
console.log(typeof {}); // "object"
console.log(typeof []); // "object"
console.log(typeof function () {}); // "function"
Pitfalls
Type coercion
JavaScript often performs type coercion, converting values from one type to another, which can lead to unexpected results.
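For example (a minimal illustration of the two cases discussed below):
console.log(1 + '2'); // "12"
console.log('5' - 2); // 3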
In the first example, since strings can be concatenated with the + operator, the number is converted into a string and the two strings are concatenated. In the second example, strings do not work with the minus operator (-), but two numbers can be subtracted, so the string is first converted into a number and the result is the difference.
Both Map objects and plain objects in JavaScript can store key-value pairs, but they have several key differences:
| Feature | Map | Plain object |
| ------- | --- | ------------ |
| Key type | Any data type | String (or Symbol) |
| Key order | Maintained | Not guaranteed |
| Size property | Yes (size) | None |
| Iteration | forEach, keys(), values(), entries() | for...in, Object.keys(), etc. |
| Inheritance | No | Yes |
| Performance | Generally better for larger datasets and frequent additions/deletions | Faster for small datasets and simple operations |
| Serializable | No | Yes |
Map vs plain JavaScript objects
In JavaScript, Map objects and a plain object (also known as a "POJO" or "plain old JavaScript object") are both used to store key-value pairs, but they have different characteristics, use cases, and behaviors.
Plain JavaScript objects (POJO)
A plain object is a basic JavaScript object created using the {} syntax. It is a collection of key-value pairs, where each key is a string (or a symbol, in modern JavaScript) and each value can be any type of value, including strings, numbers, booleans, arrays, objects, and more.
const person = { name: 'John', age: 30, occupation: 'Developer' };
console.log(person);
Map objects
A Map object, introduced in ECMAScript 2015 (ES6), is a more advanced data structure that allows you to store key-value pairs with additional features. A Map is an iterable, which means you can use it with for...of loops, and it provides methods for common operations like get, set, has, and delete.
const person = new Map([
  ['name', 'John'],
  ['age', 30],
  ['occupation', 'Developer'],
]);
console.log(person);
Key differences
Here are the main differences between a Map object and a plain object:
Key types: In a plain object, keys are always strings (or symbols). In a Map, keys can be any type of value, including objects, arrays, and even other Maps.
Key ordering: In a plain object, the order of keys is not guaranteed. In a Map, the order of keys is preserved, and you can iterate over them in the order they were inserted.
Iteration: A Map is iterable, which means you can use for...of loops to iterate over its key-value pairs. A plain object is not iterable by default, but you can use Object.keys() or Object.entries() to iterate over its properties.
Performance: Map objects are generally faster and more efficient than plain objects, especially when dealing with large datasets.
Methods: A Map object provides additional methods, such as get, set, has, and delete, which make it easier to work with key-value pairs.
Serialization: When a Map object is serialized with JSON.stringify(), it becomes an empty object ({}) and its entries are lost. A plain object, on the other hand, is serialized to a JSON object with the same structure.
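For example, passing each to JSON.stringify shows the difference:
console.log(JSON.stringify({ a: 1 })); // '{"a":1}'
console.log(JSON.stringify(new Map([['a', 1]]))); // '{}' (the Map's entries are dropped)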
When to use which
Use a plain object (POJO) when:
You need a simple, lightweight object with string keys.
You're working with a small dataset.
You need to serialize the object to JSON (e.g. to send over the network).
Use a Map object when:
You need to store key-value pairs with non-string keys (e.g., objects, arrays).
You need to preserve the order of key-value pairs.
You need to iterate over the key-value pairs in a specific order.
You're working with a large dataset and need better performance.
In summary, while both plain objects and Map objects can be used to store key-value pairs, Map objects offer more advanced features, better performance, and additional methods, making them a better choice for more complex use cases.
Notes
Map objects cannot be serialized to JSON out of the box to be sent in HTTP requests, but libraries like superjson allow them to be serialized and deserialized.
In JavaScript, a proxy is an object that acts as an intermediary between an object and the code. Proxies are used to intercept and customize the fundamental operations of JavaScript objects, such as property access, assignment, function invocation, and more.
Here's a basic example of using a Proxy to log every property access:
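A minimal sketch of such a logging proxy (the object and property names are just for illustration):
const user = { name: 'Alice' };

const loggedUser = new Proxy(user, {
  get(target, property) {
    console.log(`Accessed property: ${String(property)}`);
    return target[property];
  },
});

loggedUser.name;
// Logs: Accessed property: name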
Property access interception: Intercept and customize property access on an object.
Property assignment validation: Validate property values before they are set on the target object.
Logging and debugging: Create wrappers for logging and debugging interactions with an object
Creating reactive systems: Trigger updates in other parts of your application when object properties change (data binding).
Data transformation: Transforming data being set or retrieved from an object.
Mocking and stubbing in tests: Create mock or stub objects for testing purposes, allowing you to isolate dependencies and focus on the unit under test
Function invocation interception: Used to cache and return the result of frequently accessed methods if they involve network calls or computationally intensive logic, improving performance
Dynamic property creation: Useful for defining properties on-the-fly with default values and avoiding storing redundant data in objects.
JavaScript proxies
In JavaScript, a proxy is an object that allows you to customize the behavior of another object, often referred to as the target object. Proxies can intercept and redefine various operations for the target object, such as property access, assignment, enumeration, function invocation, and more. This makes proxies a powerful tool for a variety of use cases, including but not limited to validation, logging, performance monitoring, and implementing advanced data structures.
Here are some common use cases and examples of how proxies can be used in JavaScript:
Property access interception
Proxies can be used to intercept and customize property access on an object.
const target = {
  message: 'Hello, world!',
};
const handler = {
  get: function (target, property) {
    if (property in target) {
      return target[property];
    }
    return `Property ${property} does not exist.`;
  },
};
const proxy = new Proxy(target, handler);
console.log(proxy.message); // Hello, world!
console.log(proxy.nonExistentProperty); // Property nonExistentProperty does not exist.
Creating wrappers for logging and debugging
This is useful for creating wrappers for logging and debugging interactions with an object.
const target = {
  name: 'Alice',
  age: 30,
};
const handler = {
  get: function (target, property) {
    console.log(`Getting property ${property}`);
    return target[property];
  },
  set: function (target, property, value) {
    console.log(`Setting property ${property} to ${value}`);
    target[property] = value;
    return true;
  },
};
const proxy = new Proxy(target, handler);
console.log(proxy.name); // Output: Getting property name
// Alice
proxy.age = 31; // Output: Setting property age to 31
console.log(proxy.age); // Output: Getting property age
// 31
Property assignment validation
Proxies can be used to validate property values before they are set on the target object.
const target = {
  age: 25,
};
const handler = {
  set: function (target, property, value) {
    if (property === 'age' && typeof value !== 'number') {
      throw new TypeError('Age must be a number');
    }
    target[property] = value;
    return true;
  },
};
const proxy = new Proxy(target, handler);
proxy.age = 30; // Works fine
proxy.age = 'thirty'; // Throws TypeError: Age must be a number
Creating reactive systems
Proxies are often used to trigger updates in other parts of your application when object properties change (data binding).
const target = {
  firstName: 'John',
};
const handler = {
  set: function (target, property, value) {
    console.log(`Property ${property} set to ${value}`);
    target[property] = value;
    // Automatically update the UI or perform other actions
    return true;
  },
};
const proxy = new Proxy(target, handler);
proxy.firstName = 'Jane'; // Output: Property firstName set to Jane
Other use cases for access interception include:
Mocking and stubbing: Proxies can be used to create mock or stub objects for testing purposes, allowing you to isolate dependencies and focus on the unit under test.
Function invocation interception
Proxies can intercept and customize function calls.
const target = function (name) {
  return `Hello, ${name}!`;
};
const handler = {
  apply: function (target, thisArg, argumentsList) {
    console.log(`Called with arguments: ${argumentsList}`);
    return target.apply(thisArg, argumentsList);
  },
};
const proxy = new Proxy(target, handler);
console.log(proxy('Alice')); // Called with arguments: Alice
// Hello, Alice!
// Hello, Alice!
This interception can be used to cache and return the result of frequently accessed methods if they involve network calls or computationally intensive logic, improving performance by reducing the number of requests/computations made.
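A sketch of that idea using the apply trap (the function and the cache-key strategy here are illustrative):
const slowSquare = (n) => {
  // Imagine an expensive computation or network call here
  return n * n;
};

const cache = new Map();
const memoizedSquare = new Proxy(slowSquare, {
  apply(target, thisArg, args) {
    const key = JSON.stringify(args);
    if (!cache.has(key)) {
      cache.set(key, Reflect.apply(target, thisArg, args));
    }
    return cache.get(key);
  },
});

console.log(memoizedSquare(4)); // 16 (computed)
console.log(memoizedSquare(4)); // 16 (returned from the cache)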
Dynamic property creation
Proxies can be used to dynamically create properties or methods on an object. This is useful for defining properties on-the-fly with default values and avoiding storing redundant data in objects.
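For example, a get trap can supply a default for any property that has not been set (a sketch; the property names are illustrative):
const settings = new Proxy(
  {},
  {
    get(target, property) {
      // Return a default value for any property that has not been set explicitly
      return property in target ? target[property] : 'default';
    },
  },
);

settings.theme = 'dark';
console.log(settings.theme); // 'dark'
console.log(settings.language); // 'default'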
Proxies can be used to create objects for database records by intercepting property access to lazily load data from the database. This provides a more object-oriented interface to interact with a database.
Real world use cases
Many popular libraries, especially state management solutions, are built on top of JavaScript proxies:
Vue.js: Vue.js is a progressive framework for building user interfaces. In Vue 3, proxies are used extensively to implement the reactivity system.
MobX: MobX uses proxies to make objects and arrays observable, allowing components to automatically react to state changes.
Immer: Immer is a library that allows you to work with immutable state in a more convenient way. It uses proxies to track changes and produce the next immutable state.
Summary
Proxies in JavaScript provide a powerful and flexible way to intercept and customize operations on objects. They are useful for a wide range of applications, including validation, logging, debugging, dynamic property creation, and implementing reactive systems. By using proxies, developers can create more robust, maintainable, and feature-rich applications.
A callback function is a function passed as an argument to another function, which is then invoked inside the outer function to complete some kind of routine or action. In asynchronous operations, callbacks are used to handle tasks that take time to complete, such as network requests or file I/O, without blocking the execution of the rest of the code. For example:
function fetchData(callback) {
  setTimeout(() => {
    const data = { name: 'John', age: 30 };
    callback(data);
  }, 1000);
}
fetchData((data) => {
  console.log(data);
});
What is a callback function?
A callback function is a function that is passed as an argument to another function and is executed after some operation has been completed. This is particularly useful in asynchronous programming, where operations like network requests, file I/O, or timers need to be handled without blocking the main execution thread.
Synchronous vs. asynchronous callbacks
Synchronous callbacks are executed immediately within the function they are passed to. They are blocking and the code execution waits for them to complete.
Asynchronous callbacks are executed after a certain event or operation has been completed. They are non-blocking and allow the code execution to continue while waiting for the operation to finish.
Example of a synchronous callback
function greet(name, callback) {
  console.log('Hello ' + name);
  callback();
}
function sayGoodbye() {
  console.log('Goodbye!');
}
greet('Alice', sayGoodbye);
// Output:
// Hello Alice
// Goodbye!
Example of an asynchronous callback
function fetchData(callback) {
  setTimeout(() => {
    const data = { name: 'John', age: 30 };
    callback(data);
  }, 1000);
}
fetchData((data) => {
  console.log(data);
});
// Output after 1 second:
// { name: 'John', age: 30 }
Common use cases
Network requests: Fetching data from an API
File I/O: Reading or writing files
Timers: Delaying execution using setTimeout or setInterval
Event handling: Responding to user actions like clicks or key presses
Handling errors in callbacks
When dealing with asynchronous operations, it's important to handle errors properly. A common pattern is to use the first argument of the callback function to pass an error object, if any.
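A sketch of this error-first convention (the function and data are made up for illustration):
function readConfig(callback) {
  setTimeout(() => {
    const error = null; // or new Error('Failed to read config') on failure
    const data = { darkMode: true };
    callback(error, data);
  }, 500);
}

readConfig((err, data) => {
  if (err) {
    console.error('Something went wrong:', err.message);
    return;
  }
  console.log(data); // { darkMode: true }
});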
The microtask queue is a queue of tasks that need to be executed after the currently executing script and before any other task. Microtasks are typically used for tasks that need to be executed immediately after the current operation, such as promise callbacks. The microtask queue is processed before the macrotask queue, ensuring that microtasks are executed as soon as possible.
The concept of a microtask queue
What is a microtask queue?
The microtask queue is a part of the JavaScript event loop mechanism. It is a queue that holds tasks that need to be executed immediately after the currently executing script and before any other task in the macrotask queue. Microtasks are typically used for operations that need to be executed as soon as possible, such as promise callbacks and MutationObserver callbacks.
How does the microtask queue work?
Execution order: The microtask queue is processed after the currently executing script and before the macrotask queue. This means that microtasks are given higher priority over macrotasks.
Event loop: During each iteration of the event loop, the JavaScript engine first processes all the microtasks in the microtask queue before moving on to the macrotask queue.
Adding microtasks: Microtasks can be added to the microtask queue using methods like Promise.resolve().then() and queueMicrotask().
Example
Here is an example to illustrate how the microtask queue works:
console.log('Script start');
setTimeout(() => {
  console.log('setTimeout');
}, 0);
Promise.resolve()
  .then(() => {
    console.log('Promise 1');
  })
  .then(() => {
    console.log('Promise 2');
  });
console.log('Script end');
Output:
Script start
Script end
Promise 1
Promise 2
setTimeout
In this example:
The synchronous code (console.log('Script start') and console.log('Script end')) is executed first.
The promise callbacks (Promise 1 and Promise 2) are added to the microtask queue and executed next.
The setTimeout callback is added to the macrotask queue and executed last.
Use cases
Promise callbacks: Microtasks are commonly used for promise callbacks to ensure they are executed as soon as possible after the current operation.
MutationObserver: The MutationObserver API uses microtasks to notify changes in the DOM.
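Besides promise callbacks, a microtask can be scheduled directly with queueMicrotask():
queueMicrotask(() => {
  console.log('microtask'); // runs before any queued macrotask (e.g. setTimeout)
});
console.log('sync code'); // logged first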
Caching is a technique used to store copies of files or data in a temporary storage location to reduce the time it takes to access them. It improves performance by reducing the need to fetch data from the original source repeatedly. In front end development, caching can be implemented using browser cache, service workers, and HTTP headers like Cache-Control.
The concept of caching and how it can be used to improve performance
What is caching?
Caching is a technique used to store copies of files or data in a temporary storage location, known as a cache, to reduce the time it takes to access them. The primary goal of caching is to improve performance by minimizing the need to fetch data from the original source repeatedly.
Types of caching
Browser cache
The browser cache stores copies of web pages, images, and other resources locally on the user's device. When a user revisits a website, the browser can load these resources from the cache instead of fetching them from the server, resulting in faster load times.
Service workers
Service workers are scripts that run in the background and can intercept network requests. They can cache resources and serve them from the cache, even when the user is offline. This can significantly improve performance and provide a better user experience.
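A minimal cache-first fetch handler inside a service worker might look like this (a sketch; registration and pre-caching are omitted):
// Inside a service worker file (e.g. sw.js); the Cache Storage API is available in worker scope
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request)),
  );
});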
HTTP caching
HTTP caching involves using HTTP headers to control how and when resources are cached. Common headers include Cache-Control, Expires, and ETag.
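For example, response headers like the following (the values are illustrative) let the browser reuse a resource for an hour and revalidate it cheaply afterwards:
Cache-Control: public, max-age=3600
ETag: "33a64df5"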
How caching improves performance
Reduced latency
By storing frequently accessed data closer to the user, caching reduces the time it takes to retrieve that data. This results in faster load times and a smoother user experience.
Reduced server load
Caching reduces the number of requests made to the server, which can help decrease server load and improve overall performance.
Offline access
With service workers, cached resources can be served even when the user is offline, providing a seamless experience.
Code coverage is a metric that measures the percentage of code that is executed when the test suite runs. It helps in assessing the quality of tests by identifying untested parts of the codebase. Higher code coverage generally indicates more thorough testing, but it doesn't guarantee the absence of bugs. Tools like Istanbul or Jest can be used to measure code coverage.
What is code coverage?
Code coverage is a software testing metric that determines the amount of code that is executed during automated tests. It provides insights into which parts of the codebase are being tested and which are not.
Types of code coverage
Statement coverage: Measures the number of statements in the code that have been executed.
Branch coverage: Measures whether each branch (e.g., if and else blocks) has been executed.
Function coverage: Measures whether each function in the code has been called.
Line coverage: Measures the number of lines of code that have been executed.
Condition coverage: Measures whether each boolean sub-expression has been evaluated to both true and false.
Example
Consider the following JavaScript function:
function isEven(num) {
  if (num % 2 === 0) {
    return true;
  } else {
    return false;
  }
}
A test suite for this function might look like this:
test('isEven returns true for even numbers', () => {
  expect(isEven(2)).toBe(true);
});
test('isEven returns false for odd numbers', () => {
  expect(isEven(3)).toBe(false);
});
Running code coverage tools on this test suite would show 100% statement, branch, function, and line coverage because all parts of the code are executed.
How to measure code coverage
Tools
Istanbul: A popular JavaScript code coverage tool.
Jest: A testing framework that includes built-in code coverage reporting.
Karma: A test runner that can be configured to use Istanbul for code coverage.
Example with Jest
To measure code coverage with Jest, you can add the --coverage flag when running your tests:
jest --coverage
This will generate a coverage report that shows the percentage of code covered by your tests.
Assessing test quality with code coverage
Benefits
Identifies untested code: Helps in finding parts of the codebase that are not covered by tests.
Improves test suite: Encourages writing more comprehensive tests.
Increases confidence: Higher coverage can increase confidence in the stability of the code.
Limitations
False sense of security: High coverage does not guarantee the absence of bugs.
Quality over quantity: 100% coverage does not mean the tests are of high quality. Tests should also check for edge cases and potential errors.
Content Security Policy (CSP) is a security feature that helps prevent various types of attacks, such as Cross-Site Scripting (XSS) and data injection attacks, by specifying which content sources are trusted. It works by allowing developers to define a whitelist of trusted sources for content like scripts, styles, and images. This is done through HTTP headers or meta tags. For example, you can use the Content-Security-Policy header to specify that only scripts from your own domain should be executed:
Content-Security-Policy: script-src 'self'
What is Content Security Policy (CSP)?
Content Security Policy (CSP) is a security standard introduced to mitigate a range of attacks, including Cross-Site Scripting (XSS) and data injection attacks. CSP allows web developers to control the resources that a user agent is allowed to load for a given page. By specifying a whitelist of trusted content sources, CSP helps to prevent the execution of malicious content.
How CSP works
CSP works by allowing developers to define a set of rules that specify which sources of content are considered trustworthy. These rules are delivered to the browser via HTTP headers or meta tags. When the browser loads a page, it checks the CSP rules and blocks any content that does not match the specified sources.
Example of a CSP header
Here is an example of a simple CSP header that only allows scripts from the same origin:
Content-Security-Policy: script-src 'self'
This policy tells the browser to only execute scripts that are loaded from the same origin as the page itself.
Common directives
default-src: Serves as a fallback for other resource types when they are not explicitly defined.
script-src: Specifies valid sources for JavaScript.
style-src: Specifies valid sources for CSS.
img-src: Specifies valid sources for images.
connect-src: Specifies valid sources for AJAX, WebSocket, and EventSource connections.
font-src: Specifies valid sources for fonts.
object-src: Specifies valid sources for plugins like Flash.
Benefits of using CSP
Mitigates XSS attacks: By restricting the sources from which scripts can be loaded, CSP helps to prevent the execution of malicious scripts.
Prevents data injection attacks: CSP can block the loading of malicious resources that could be used to steal data or perform other harmful actions.
Improves security posture: Implementing CSP is a proactive measure that enhances the overall security of a web application.
Implementing CSP
CSP can be implemented using HTTP headers or meta tags. The HTTP header approach is generally preferred because it is more secure and cannot be easily overridden by attackers.
Cross-Site Request Forgery (CSRF) is an attack where a malicious website tricks a user's browser into making an unwanted request to another site where the user is authenticated. This can lead to unauthorized actions being performed on behalf of the user. Mitigation techniques include using anti-CSRF tokens, SameSite cookies, and ensuring proper CORS configurations.
Cross-Site Request Forgery (CSRF) and its mitigation techniques
What is CSRF?
Cross-Site Request Forgery (CSRF) is a type of attack that occurs when a malicious website causes a user's browser to perform an unwanted action on a different site where the user is authenticated. This can lead to unauthorized actions such as changing account details, making purchases, or other actions that the user did not intend to perform.
How does CSRF work?
User authentication: The user logs into a trusted website (e.g., a banking site) and receives an authentication cookie.
Malicious site: The user visits a malicious website while still logged into the trusted site.
Unwanted request: The malicious site contains code that makes a request to the trusted site, using the user's authentication cookie to perform actions on behalf of the user.
Mitigation techniques
Anti-CSRF tokens
One of the most effective ways to prevent CSRF attacks is by using anti-CSRF tokens. These tokens are unique and unpredictable values that are generated by the server and included in forms or requests. The server then validates the token to ensure the request is legitimate.
On the server side, the token is validated to ensure it matches the expected value.
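A rough sketch of the client side of this flow (the meta tag name, endpoint, and header name are illustrative):
// Read the anti-CSRF token the server embedded in the page
const token = document.querySelector('meta[name="csrf-token"]').content;

fetch('/transfer', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json', 'X-CSRF-Token': token },
  body: JSON.stringify({ amount: 100 }),
});
// The server compares the X-CSRF-Token value against the token stored for the session
// and rejects the request if they do not match.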
SameSite cookies
The SameSite attribute on cookies can help mitigate CSRF attacks by restricting how cookies are sent with cross-site requests. The SameSite attribute can be set to Strict, Lax, or None.
Set-Cookie: sessionId=abc123; SameSite=Strict
Strict: Cookies are only sent in a first-party context and not with requests initiated by third-party websites.
Lax: Cookies are not sent on normal cross-site subrequests (e.g., loading images), but are sent when a user navigates to the URL from an external site (e.g., following a link).
None: Cookies are sent in all contexts, including cross-origin requests.
CORS (Cross-Origin Resource Sharing)
Properly configuring CORS can help prevent CSRF attacks by ensuring that only trusted origins can make requests to your server. This involves setting appropriate headers on the server to specify which origins are allowed to access resources.
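For example, a response header like the following (the origin is illustrative) restricts which origins may read cross-origin responses:
Access-Control-Allow-Origin: https://www.example.com
Access-Control-Allow-Credentials: true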
Debouncing and throttling are techniques used to control the rate at which a function is executed. Debouncing ensures that a function is only called after a specified delay has passed since the last time it was invoked. Throttling ensures that a function is called at most once in a specified time interval.
Debouncing delays the execution of a function until a certain amount of time has passed since it was last called. This is useful for scenarios like search input fields where you want to wait until the user has stopped typing before making an API call.
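For example, a minimal debounce implementation and a debouncedHello built from it (a sketch; production utilities such as Lodash's _.debounce handle more edge cases):
function debounce(fn, delay) {
  let timeoutId;
  return function (...args) {
    clearTimeout(timeoutId);
    timeoutId = setTimeout(() => fn.apply(this, args), delay);
  };
}

const debouncedHello = debounce(() => console.log('Hello world!'), 2000);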
debouncedHello(); // Prints 'Hello world!' after 2 seconds
Throttling ensures that a function is called at most once in a specified time interval. This is useful for scenarios like window resizing or scrolling where you want to limit the number of times a function is called.
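Likewise, a minimal throttle implementation and a throttled handleResize handler (a sketch):
function throttle(fn, interval) {
  let lastCall = 0;
  return function (...args) {
    const now = Date.now();
    if (now - lastCall >= interval) {
      lastCall = now;
      fn.apply(this, args);
    }
  };
}

const handleResize = throttle(() => console.log('Window resized'), 2000);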
// Simulate rapid calls to handleResize every 100ms
let intervalId = setInterval(() => {
  handleResize();
}, 100);
// 'Window resized' is logged only every 2 seconds due to throttling
Debouncing and throttling
Debouncing
Debouncing is a technique used to ensure that a function is only executed after a certain amount of time has passed since it was last invoked. This is particularly useful in scenarios where you want to limit the number of times a function is called, such as when handling user input events like keypresses or mouse movements.
Example use case
Imagine you have a search input field and you want to make an API call to fetch search results. Without debouncing, an API call would be made every time the user types a character, which could lead to a large number of unnecessary calls. Debouncing ensures that the API call is only made after the user has stopped typing for a specified amount of time.
Throttling
Throttling is a technique used to ensure that a function is called at most once in a specified time interval. This is useful in scenarios where you want to limit the number of times a function is called, such as when handling events like window resizing or scrolling.
Example use case
Imagine you have a function that updates the position of elements on the screen based on the window size. Without throttling, this function could be called many times per second as the user resizes the window, leading to performance issues. Throttling ensures that the function is only called at most once in a specified time interval.
Destructuring assignment is a syntax in JavaScript that allows you to unpack values from arrays or properties from objects into distinct variables. For arrays, you use square brackets, and for objects, you use curly braces. For example:
// Array destructuring
const [a, b] = [1, 2];
// Object destructuring
const { name, age } = { name: 'John', age: 30 };
Destructuring assignment for objects and arrays
Destructuring assignment is a convenient way to extract values from arrays and objects into separate variables. This can make your code more readable and concise.
Array destructuring
Array destructuring allows you to unpack values from arrays into distinct variables using square brackets.
Basic example
const numbers = [1, 2, 3];
const [first, second, third] = numbers;
console.log(first); // 1
console.log(second); // 2
console.log(third); // 3
Skipping values
You can skip values in the array by leaving an empty space between commas.
const numbers = [1, 2, 3];
const [first, , third] = numbers;
console.log(first); // 1
console.log(third); // 3
Default values
You can assign default values in case the array does not have enough elements.
const numbers = [1];
const [first, second = 2] = numbers;
console.log(first); // 1
console.log(second); // 2
Object destructuring
Object destructuring allows you to unpack properties from objects into distinct variables using curly braces.
Basic example
const person = { name: 'John', age: 30 };
const { name, age } = person;
console.log(name); // John
console.log(age); // 30
Renaming variables
You can rename the variables while destructuring.
const person = { name: 'John', age: 30 };
const { name: personName, age: personAge } = person;
console.log(personName); // John
console.log(personAge); // 30
Default values
You can assign default values in case the property does not exist in the object.
const person = { name: 'John' };
const { name, age = 25 } = person;
console.log(name); // John
console.log(age); // 25
Nested objects
You can destructure nested objects as well.
const person = { name: 'John', address: { city: 'New York', zip: '10001' } };
const { address: { city, zip } } = person; // city === 'New York', zip === '10001'
Error propagation in JavaScript refers to how errors are passed through the call stack. When an error occurs in a function, it can be caught and handled using try...catch blocks. If not caught, the error propagates up the call stack until it is either caught or causes the program to terminate. For example:
function a() {
  throw new Error('An error occurred');
}
function b() {
  a();
}
try {
  b();
} catch (e) {
  console.error(e.message); // Outputs: An error occurred
}
Error propagation in JavaScript
Error propagation in JavaScript is a mechanism that allows errors to be passed up the call stack until they are caught and handled. This is crucial for debugging and ensuring that errors do not cause the entire application to crash unexpectedly.
How errors propagate
When an error occurs in a function, it can either be caught and handled within that function or propagate up the call stack to the calling function. If the calling function does not handle the error, it continues to propagate up the stack until it reaches the global scope, potentially causing the program to terminate.
Using try...catch blocks
To handle errors and prevent them from propagating further, you can use try...catch blocks. Here is an example:
function a() {
  throw new Error('An error occurred');
}
function b() {
  a();
}
try {
  b();
} catch (e) {
  console.error(e.message); // Outputs: An error occurred
}
In this example, the error thrown in function a propagates to function b, and then to the try...catch block where it is finally caught and handled.
Propagation with asynchronous code
Error propagation works differently with asynchronous code, such as promises and async/await. For promises, you can use .catch() to handle errors:
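A sketch (the failing function is made up for illustration):
function fetchSomething() {
  // Illustrative async operation that always fails
  return Promise.reject(new Error('Request failed'));
}

// With promises, the error travels down the chain until a .catch() handles it
fetchSomething()
  .then((data) => console.log(data))
  .catch((error) => console.error(error.message)); // Request failed

// With async/await, try...catch can be used again
async function run() {
  try {
    await fetchSomething();
  } catch (error) {
    console.error(error.message); // Request failed
  }
}
run();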
Hoisting in JavaScript is a behavior where function declarations are moved to the top of their containing scope during the compile phase. This means you can call a function before it is defined in the code. However, this does not apply to function expressions or arrow functions, which are not hoisted in the same way.
// Function declaration
hoistedFunction(); // Works fine
function hoistedFunction() {
  console.log('This function is hoisted');
}
// Function expression
nonHoistedFunction(); // Throws an error
var nonHoistedFunction = function () {
  console.log('This function is not hoisted');
};
What is hoisting?
Hoisting is a JavaScript mechanism where variables and function declarations are moved to the top of their containing scope during the compile phase. This allows functions to be called before they are defined in the code.
Function declarations
Function declarations are fully hoisted. This means you can call a function before its declaration in the code.
hoistedFunction(); // Works fine
function hoistedFunction() {
  console.log('This function is hoisted');
}
Function expressions
Function expressions, including arrow functions, are not hoisted in the same way. They are treated as variable assignments, so only the variable declaration is hoisted and it is initialized to undefined (when declared with var).
nonHoistedFunction(); // Throws an error: TypeError: nonHoistedFunction is not a function
var nonHoistedFunction = function () {
  console.log('This function is not hoisted');
};
Arrow functions
Arrow functions behave similarly to function expressions in terms of hoisting.
arrowFunction(); // Throws an error: TypeError: arrowFunction is not a function
var arrowFunction = () => {
  console.log('This arrow function is not hoisted');
};
Inheritance in ES2015 classes allows one class to extend another, enabling the child class to inherit properties and methods from the parent class. This is done using the extends keyword. The super keyword is used to call the constructor and methods of the parent class. Here's a quick example:
class Animal {
  constructor(name) {
    this.name = name;
  }
  speak() {
    console.log(`${this.name} makes a noise.`);
  }
}
class Dog extends Animal {
  constructor(name, breed) {
    super(name);
    this.breed = breed;
  }
  speak() {
    console.log(`${this.name} barks.`);
  }
}
const dog = new Dog('Rex', 'German Shepherd');
dog.speak(); // Rex barks.
Inheritance in ES2015 classes
Basic concept
Inheritance in ES2015 classes allows a class (child class) to inherit properties and methods from another class (parent class). This promotes code reuse and a hierarchical class structure.
Using the extends keyword
The extends keyword is used to create a class that is a child of another class. The child class inherits all the properties and methods of the parent class.
class ParentClass {
  constructor() {
    this.parentProperty = 'I am a parent property';
  }
  parentMethod() {
    console.log('This is a parent method');
  }
}
class ChildClass extends ParentClass {
  constructor() {
    super(); // Calls the parent class constructor
    this.childProperty = 'I am a child property';
  }
  childMethod() {
    console.log('This is a child method');
  }
}
const child = new ChildClass();
console.log(child.parentProperty); // I am a parent property
child.parentMethod(); // This is a parent method
Using the super keyword
The super keyword is used to call the constructor of the parent class and to access its methods. This is necessary when you want to initialize the parent class properties in the child class.
class Animal {
  constructor(name) {
    this.name = name;
  }
  speak() {
    console.log(`${this.name} makes a noise.`);
  }
}
class Dog extends Animal {
  constructor(name, breed) {
    super(name); // Calls the parent class constructor
    this.breed = breed;
  }
  speak() {
    super.speak(); // Calls the parent class method
    console.log(`${this.name} barks.`);
  }
}
const dog = new Dog('Rex', 'German Shepherd');
dog.speak();
// Rex makes a noise.
// Rex barks.
Method overriding
Child classes can override methods from the parent class. This allows the child class to provide a specific implementation of a method that is already defined in the parent class.
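A small illustrative example (the class names are made up):
class Shape {
  describe() {
    return 'A generic shape';
  }
}

class Circle extends Shape {
  // Overrides the parent implementation entirely (no call to super.describe())
  describe() {
    return 'A circle';
  }
}

console.log(new Circle().describe()); // A circle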
Input validation is the process of ensuring that user input is correct, safe, and meets the application's requirements. It is crucial for security because it helps prevent attacks like SQL injection, cross-site scripting (XSS), and other forms of data manipulation. By validating input, you ensure that only properly formatted data enters your system, reducing the risk of malicious data causing harm.
Input validation and its importance in security
What is input validation?
Input validation is the process of verifying that the data provided by a user or other external sources meets the expected format, type, and constraints before it is processed by the application. This can include checking for:
Correct data type (e.g., string, number)
Proper format (e.g., email addresses, phone numbers)
Acceptable value ranges (e.g., age between 0 and 120)
Required fields being filled
Types of input validation
Client-side validation: This occurs in the user's browser before the data is sent to the server. It provides immediate feedback to the user and can improve the user experience. However, it should not be solely relied upon for security purposes, as it can be easily bypassed.
Server-side validation: This occurs on the server after the data has been submitted. It is essential for security because it ensures that all data is validated regardless of the client's behavior.
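As an illustration, a client-side check might mirror the server-side rule enforced in the Express handler that follows (the selector and field name here are hypothetical):
const form = document.querySelector('#signup-form'); // hypothetical form id
form.addEventListener('submit', (event) => {
  const username = form.elements.username.value; // assumes an input named "username"
  // Same rule the server enforces below: at least 5 alphanumeric characters
  if (!/^[A-Za-z0-9]{5,}$/.test(username)) {
    event.preventDefault();
    alert('Invalid username');
  }
});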
const express = require('express');
const app = express();
app.use(express.json()); // needed so req.body is populated for JSON payloads
app.post('/submit', (req, res) => {
  const username = req.body.username;
  if (!/^[A-Za-z0-9]{5,}$/.test(username)) {
    return res.status(400).send('Invalid username');
  }
  // Proceed with processing the valid input
});
Importance of input validation in security
Preventing SQL injection: By validating and sanitizing input, you can prevent attackers from injecting malicious SQL code into your database queries.
const username = req.body.username;
const query = 'SELECT * FROM users WHERE username = ?';
db.query(query, [username], (err, results) => {
  // Handle results
});
Preventing cross-site scripting (XSS): Input validation helps ensure that user input does not contain malicious scripts that could be executed in the browser.
const sanitizeHtml = require('sanitize-html');
const userInput = req.body.comment;
const sanitizedInput = sanitizeHtml(userInput);
Preventing buffer overflow attacks: By validating the length of input data, you can prevent attackers from sending excessively large inputs that could cause buffer overflows and crash your application.
Ensuring data integrity: Input validation helps maintain the integrity of your data by ensuring that only properly formatted and expected data is processed and stored.
Best practices for input validation
Always validate input on the server side, even if you also validate on the client side
Use built-in validation functions and libraries where possible
Sanitize input to remove or escape potentially harmful characters
Implement whitelisting (allowing only known good input) rather than blacklisting (blocking known bad input)
Regularly update and review your validation rules to address new security threats
Lazy loading is a design pattern that delays the loading of resources until they are actually needed. This can significantly improve performance by reducing initial load times and conserving bandwidth. For example, images on a webpage can be lazy-loaded so that they only load when they come into the viewport. This can be achieved using the loading="lazy" attribute in HTML or by using JavaScript libraries.
The concept of lazy loading and how it can improve performance
What is lazy loading?
Lazy loading is a design pattern used to defer the initialization of an object until the point at which it is needed. This can be applied to various types of resources such as images, videos, scripts, and even data fetched from APIs.
How does lazy loading work?
Lazy loading works by delaying the loading of resources until they are actually needed. For example, images on a webpage can be lazy-loaded so that they only load when they come into the viewport. This can be achieved using the loading="lazy" attribute in HTML or by using JavaScript libraries.
Benefits of lazy loading
Improved performance: By loading only the necessary resources initially, the page load time is reduced, leading to a faster and more responsive user experience.
Reduced bandwidth usage: Lazy loading helps in conserving bandwidth by loading resources only when they are needed.
Better user experience: Users can start interacting with the content faster as the initial load time is reduced.
Implementing lazy loading
Using the loading attribute in HTML
The simplest way to implement lazy loading for images is by using the loading attribute in HTML.
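For instance, an image can opt in with <img src="photo.jpg" loading="lazy" alt="...">. For finer-grained control, a similar effect can be implemented in JavaScript with an IntersectionObserver (a sketch; the data-src convention is just one common approach):
// Assume images are written as <img data-src="real.jpg"> so they do not load eagerly
const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src; // start the real download only when near the viewport
      obs.unobserve(img);
    }
  });
});

document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));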
Lexical scoping means that the scope of a variable is determined by its location within the source code, and nested functions have access to variables declared in their outer scope. For example:
function outerFunction() {
  let outerVariable = 'I am outside!';
  function innerFunction() {
    console.log(outerVariable); // 'I am outside!'
  }
  innerFunction();
}
outerFunction();
In this example, innerFunction can access outerVariable because of lexical scoping.
Lexical scoping
Lexical scoping is a fundamental concept in JavaScript and many other programming languages. It determines how variable names are resolved in nested functions. The scope of a variable is defined by its position in the source code, and nested functions have access to variables declared in their outer scope.
How lexical scoping works
When a function is defined, it captures the scope in which it was created. This means that the function has access to variables in its own scope as well as variables in any containing (outer) scopes.
Example
Consider the following example:
function outerFunction() {
  let outerVariable = 'I am outside!';
  function innerFunction() {
    console.log(outerVariable); // 'I am outside!'
  }
  innerFunction();
}
outerFunction();
In this example:
outerFunction declares a variable outerVariable.
innerFunction is nested inside outerFunction and logs outerVariable to the console.
When innerFunction is called, it has access to outerVariable because of lexical scoping.
Nested functions and closures
Lexical scoping is closely related to closures. A closure is created when a function retains access to its lexical scope, even when the function is executed outside that scope.
function outerFunction() {
  let outerVariable = 'I am outside!';
  function innerFunction() {
    console.log(outerVariable);
  }
  return innerFunction;
}
const myInnerFunction = outerFunction();
myInnerFunction(); // 'I am outside!'
In this example:
outerFunction returns innerFunction.
myInnerFunction is assigned the returned innerFunction.
When myInnerFunction is called, it still has access to outerVariable because of the closure created by lexical scoping.
Partial application is a technique in functional programming where a function is applied to some of its arguments, producing a new function that takes the remaining arguments. This allows you to create more specific functions from general ones. For example, if you have a function add(a, b), you can partially apply it to create a new function add5 that always adds 5 to its argument.
function add(a, b) {
  return a + b;
}
const add5 = add.bind(null, 5);
console.log(add5(10)); // Outputs 15
Partial application
Partial application is a functional programming technique where a function is applied to some of its arguments, producing a new function that takes the remaining arguments. This can be useful for creating more specific functions from general ones, improving code reusability and readability.
Example
Consider a simple add function that takes two arguments:
function add(a, b) {
  return a + b;
}
Using partial application, you can create a new function add5 that always adds 5 to its argument:
const add5 = add.bind(null, 5);
console.log(add5(10)); // Outputs 15
How it works
In the example above, add.bind(null, 5) creates a new function where the first argument (a) is fixed to 5. The null value is used as the this context, which is not relevant in this case.
Benefits
Code reusability: You can create more specific functions from general ones, making your code more modular and reusable.
Readability: Partially applied functions can make your code easier to read and understand by reducing the number of arguments you need to pass around.
Real-world example
Partial application is often used in libraries like Lodash. For example, Lodash's _.partial function allows you to create partially applied functions easily:
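A small sketch (the greeting function is illustrative; assumes lodash is installed, e.g. via npm install lodash):
const _ = require('lodash');

function greet(greeting, name) {
  return `${greeting}, ${name}!`;
}

const sayHello = _.partial(greet, 'Hello');
console.log(sayHello('Alice')); // "Hello, Alice!"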
In JavaScript, scope determines the accessibility of variables and functions at different parts of the code. There are three main types of scope: global scope, function scope, and block scope. Global scope means the variable is accessible everywhere in the code. Function scope means the variable is accessible only within the function it is declared. Block scope, introduced with ES6, means the variable is accessible only within the block (e.g., within curly braces {}) it is declared.
Global scope
Variables declared outside any function or block have global scope. They are accessible from anywhere in the code.
var globalVar = 'I am global';
function myFunction() {
  console.log(globalVar); // Accessible here
}
myFunction();
console.log(globalVar); // Accessible here
Function scope
Variables declared within a function are in function scope. They are accessible only within that function.
function myFunction() {
  var functionVar = 'I am in a function';
  console.log(functionVar); // Accessible here
}
myFunction();
console.log(functionVar); // Uncaught ReferenceError: functionVar is not defined
Block scope
Variables declared with let or const within a block (e.g., within curly braces {}) have block scope. They are accessible only within that block.
if (true) {
  let blockVar = 'I am in a block';
  console.log(blockVar); // Accessible here
}
console.log(blockVar); // Uncaught ReferenceError: blockVar is not defined
Lexical scope
JavaScript uses lexical scoping, meaning that the scope of a variable is determined by its location within the source code. Nested functions have access to variables declared in their outer scope.
Tagged templates in JavaScript allow you to parse template literals with a function. The function receives the literal strings and the values as arguments, enabling custom processing of the template. For example:
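Assuming a minimal tag function that simply re-assembles the pieces:
function tag(strings, ...values) {
  // Re-assemble the literal parts and interpolated values into a single string
  return strings.reduce((result, str, i) => result + str + (values[i] ?? ''), '');
}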
const result = tag`Hello ${'world'}! How are ${'you'}?`;
console.log(result); // "Hello world! How are you?"
Tagged templates
What are tagged templates?
Tagged templates are a feature in JavaScript that allows you to call a function (the "tag") with a template literal. The tag function can then process the template literal's parts (both the literal strings and the interpolated values) in a custom way.
Syntax
The syntax for tagged templates involves placing a function name before a template literal:
function tag(strings, ...values) {
  // Custom processing
}

tag`template literal with ${values}`;
How it works
When a tagged template is invoked, the tag function receives:
An array of literal strings (the parts of the template that are not interpolated)
The interpolated values as additional arguments
For example:
function tag(strings, ...values) {
  console.log(strings); // ["Hello ", "! How are ", "?"]
  console.log(values); // ["world", "you"]
}

tag`Hello ${'world'}! How are ${'you'}?`;
Use cases
Tagged templates can be used for various purposes, such as:
String escaping: Preventing XSS attacks by escaping user input
Localization: Translating template literals into different languages
Custom formatting: Applying custom formatting to the interpolated values
Example
Here is a simple example of a tagged template that escapes HTML:
function escapeHTML(strings, ...values) {
  return strings.reduce((result, string, i) => {
    const value = values[i - 1];
    return (
      result +
      (value
        ? String(value)
            .replace(/&/g, '&amp;')
            .replace(/</g, '&lt;')
            .replace(/>/g, '&gt;')
        : '') +
      string
    );
  });
}

const userInput = '<script>alert("XSS")</script>';
const result = escapeHTML`User input: ${userInput}`;
console.log(result); // User input: &lt;script&gt;alert("XSS")&lt;/script&gt;
Test-driven development (TDD) is a software development approach where you write tests before writing the actual code. The process involves writing a failing test, writing the minimum code to pass the test, and then refactoring the code while keeping the tests passing. This ensures that the code is always tested and helps in maintaining high code quality.
What is test-driven development (TDD)?
Test-driven development (TDD) is a software development methodology that emphasizes writing tests before writing the actual code. The primary goal of TDD is to ensure that the code is thoroughly tested and meets the specified requirements. The TDD process can be broken down into three main steps: Red, Green, and Refactor.
Red: Write a failing test
Write a test for a new feature or functionality.
Run the test to ensure it fails, confirming that the feature is not yet implemented.
// Example using Jest
test('adds 1 + 2 to equal 3', () => {
  expect(add(1, 2)).toBe(3);
});
Green: Write the minimum code to pass the test
Write the simplest code possible to make the test pass.
Run the test to ensure it passes.
function add(a, b) {
  return a + b;
}
Refactor: Improve the code
Refactor the code to improve its structure and readability without changing its behavior.
Ensure that all tests still pass after refactoring.
// Refactored code (if needed)
function add(a, b) {
  return a + b; // In this simple example, no refactoring is needed
}
Benefits of TDD
Improved code quality
TDD ensures that the code is thoroughly tested, which helps in identifying and fixing bugs early in the development process.
Better design
Writing tests first forces developers to think about the design and requirements of the code, leading to better-structured and more maintainable code.
Faster debugging
Since tests are written for each piece of functionality, it becomes easier to identify the source of a bug when a test fails.
Documentation
Tests serve as documentation for the code, making it easier for other developers to understand the functionality and purpose of the code.
Challenges of TDD
Initial learning curve
Developers new to TDD may find it challenging to adopt this methodology initially.
Time-consuming
Writing tests before writing the actual code can be time-consuming, especially for complex features.
Overhead
Maintaining a large number of tests can become an overhead, especially when the codebase changes frequently.
The Prototype pattern is a creational design pattern used to create new objects by copying an existing object, known as the prototype. This pattern is useful when the cost of creating a new object is more expensive than cloning an existing one. In JavaScript, this can be achieved using the Object.create method or by using the prototype property of a constructor function.
const prototypeObject = {
  greet() {
    console.log('Hello, world!');
  },
};

const newObject = Object.create(prototypeObject);
newObject.greet(); // Outputs: Hello, world!
The Prototype pattern
The Prototype pattern is a creational design pattern that allows you to create new objects by copying an existing object, known as the prototype. This pattern is particularly useful when the cost of creating a new object is more expensive than cloning an existing one.
How it works
In the Prototype pattern, an object is used as a blueprint for creating new objects. This blueprint object is called the prototype. New objects are created by copying the prototype, which can be done in various ways depending on the programming language.
Implementation in JavaScript
In JavaScript, the Prototype pattern can be implemented using the Object.create method or by using the prototype property of a constructor function.
Using Object.create
The Object.create method creates a new object with the specified prototype object and properties.
const prototypeObject = {
  greet() {
    console.log('Hello, world!');
  },
};

const newObject = Object.create(prototypeObject);
newObject.greet(); // Outputs: Hello, world!
In this example, newObject is created with prototypeObject as its prototype. This means that newObject inherits the greet method from prototypeObject.
Using constructor functions and the prototype property
Another way to implement the Prototype pattern in JavaScript is by using constructor functions and the prototype property.
function Person(name) {
  this.name = name;
}

Person.prototype.greet = function () {
  console.log(`Hello, my name is ${this.name}`);
};

const person1 = new Person('Alice');
const person2 = new Person('Bob');

person1.greet(); // Outputs: Hello, my name is Alice
person2.greet(); // Outputs: Hello, my name is Bob
In this example, the Person constructor function is used to create new Person objects. The greet method is added to the Person.prototype, so all instances of Person inherit this method.
Advantages
Reduces the cost of creating new objects by cloning existing ones
Simplifies the creation of complex objects
Promotes code reuse and reduces redundancy
Disadvantages
Cloning objects can be less efficient than creating new ones in some cases
Can lead to issues with deep cloning if the prototype object contains nested objects
The Singleton pattern ensures that a class has only one instance and provides a global point of access to that instance. This is useful when exactly one object is needed to coordinate actions across the system. In JavaScript, this can be implemented using closures or ES6 classes.
class Singleton {
  constructor() {
    if (!Singleton.instance) {
      Singleton.instance = this;
    }
    return Singleton.instance;
  }
}

const instance1 = new Singleton();
const instance2 = new Singleton();
console.log(instance1 === instance2); // true
Singleton pattern
The Singleton pattern is a design pattern that restricts the instantiation of a class to one single instance. This is particularly useful when exactly one object is needed to coordinate actions across the system.
Key characteristics
Single instance: Ensures that a class has only one instance.
Global access: Provides a global point of access to the instance.
Lazy initialization: The instance is created only when it is needed.
Implementation in JavaScript
There are several ways to implement the Singleton pattern in JavaScript. Here are two common methods:
Using closures
const Singleton = (function () {
  let instance;

  function createInstance() {
    const object = new Object('I am the instance');
    return object;
  }

  return {
    getInstance: function () {
      if (!instance) {
        instance = createInstance();
      }
      return instance;
    },
  };
})();

const instance1 = Singleton.getInstance();
const instance2 = Singleton.getInstance();
console.log(instance1 === instance2); // true
Using ES6 classes
class Singleton {
  constructor() {
    if (!Singleton.instance) {
      Singleton.instance = this;
    }
    return Singleton.instance;
  }
}

const instance1 = new Singleton();
const instance2 = new Singleton();
console.log(instance1 === instance2); // true
Use cases
Configuration objects: When you need a single configuration object shared across the application.
Logging: A single logging instance to manage log entries.
Database connections: Ensuring only one connection is made to the database.
The spread operator (...) in JavaScript allows you to expand elements of an iterable (like an array or object) into individual elements. It is commonly used for copying arrays or objects, merging arrays or objects, and passing elements of an array as arguments to a function.
Copying arrays
The spread operator can be used to create a shallow copy of an array. This is useful when you want to duplicate an array without affecting the original array.
const arr1 = [1, 2, 3];
const arr2 = [...arr1];
console.log(arr2); // Output: [1, 2, 3]
Merging arrays
You can use the spread operator to merge multiple arrays into one. This is a concise and readable way to combine arrays.
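For example:
const arr1 = [1, 2];
const arr2 = [3, 4];
const merged = [...arr1, ...arr2];
console.log(merged); // Output: [1, 2, 3, 4]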
Copying objects
Similar to arrays, the spread operator can be used to create a shallow copy of an object. This is useful for duplicating objects without affecting the original object.
const obj1 = { a: 1, b: 2 };
const obj2 = { ...obj1 };
console.log(obj2); // Output: { a: 1, b: 2 }
Merging objects
The spread operator can also be used to merge multiple objects into one. This is particularly useful for combining properties from different objects.
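For example:
const obj1 = { a: 1 };
const obj2 = { b: 2 };
const merged = { ...obj1, ...obj2 };
console.log(merged); // Output: { a: 1, b: 2 }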
Passing elements as function arguments
The spread operator allows you to pass elements of an array as individual arguments to a function. This is useful for functions that accept multiple arguments.
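For example, Math.max() accepts individual numbers, so an array can be spread into its arguments:
const numbers = [1, 5, 3];
console.log(Math.max(...numbers)); // Output: 5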
The Strategy pattern is a behavioral design pattern that allows you to define a family of algorithms, encapsulate each one as a separate class, and make them interchangeable. This pattern lets the algorithm vary independently from the clients that use it. For example, if you have different sorting algorithms, you can define each one as a strategy and switch between them without changing the client code.
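A quick sketch of what such a setup might look like (the implementation here is illustrative and simply follows the components described in the detailed answer below):
class ConcreteStrategyA {
  doAlgorithm(data) {
    return `Algorithm A was run on ${data}`;
  }
}

class Context {
  constructor(strategy) {
    this.strategy = strategy;
  }

  executeStrategy(data) {
    console.log(this.strategy.doAlgorithm(data));
  }
}

const context = new Context(new ConcreteStrategyA());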
context.executeStrategy('someData'); // Output: Algorithm A was run on someData
The Strategy pattern
Definition
The Strategy pattern is a behavioral design pattern that enables selecting an algorithm's behavior at runtime. It defines a family of algorithms, encapsulates each one, and makes them interchangeable. This pattern allows the algorithm to vary independently from the clients that use it.
Components
Context: Maintains a reference to a Strategy object and is configured with a ConcreteStrategy object.
Strategy: An interface common to all supported algorithms. The Context uses this interface to call the algorithm defined by a ConcreteStrategy.
ConcreteStrategy: Implements the Strategy interface to provide a specific algorithm.
Example
Consider a scenario where you have different sorting algorithms and you want to switch between them without changing the client code.
// Strategy interface
class Strategy {
  doAlgorithm(data) {
    throw new Error('This method should be overridden!');
  }
}

// ConcreteStrategyA
class ConcreteStrategyA extends Strategy {
  doAlgorithm(data) {
    return data.sort((a, b) => a - b); // Example: ascending sort
  }
}

// ConcreteStrategyB
class ConcreteStrategyB extends Strategy {
  doAlgorithm(data) {
    return data.sort((a, b) => b - a); // Example: descending sort
  }
}
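To tie the example together, a Context class can hold a reference to the current strategy and delegate to it. This is a minimal sketch; the setStrategy and executeStrategy method names are illustrative:
// Context
class Context {
  constructor(strategy) {
    this.strategy = strategy;
  }

  setStrategy(strategy) {
    this.strategy = strategy;
  }

  executeStrategy(data) {
    return this.strategy.doAlgorithm(data);
  }
}

const context = new Context(new ConcreteStrategyA());
console.log(context.executeStrategy([3, 1, 2])); // [1, 2, 3]

context.setStrategy(new ConcreteStrategyB());
console.log(context.executeStrategy([3, 1, 2])); // [3, 2, 1]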
The WebSocket API provides a way to open a persistent connection between a client and a server, allowing for real-time, two-way communication. Unlike HTTP, which is request-response based, WebSocket enables full-duplex communication, meaning both the client and server can send and receive messages independently. This is particularly useful for applications like chat apps, live updates, and online gaming.
The example in the detailed answer below uses Postman's WebSocket echo service, which simply echoes back any message you send, to demonstrate how WebSockets work.
The WebSocket API is a technology that provides a way to establish a persistent, low-latency, full-duplex communication channel between a client (usually a web browser) and a server. This is different from the traditional HTTP request-response model, which is stateless and requires a new connection for each request.
Key features
Full-duplex communication: Both the client and server can send and receive messages independently.
Low latency: The persistent connection reduces the overhead of establishing a new connection for each message.
Real-time updates: Ideal for applications that require real-time data, such as chat applications, live sports updates, and online gaming.
How it works
Connection establishment: The client initiates a WebSocket connection by sending a handshake request to the server.
Handshake response: The server responds with a handshake response, and if successful, the connection is established.
Data exchange: Both the client and server can now send and receive messages independently over the established connection.
Connection closure: Either the client or server can close the connection when it is no longer needed.
Example usage
Here is a basic example of how to use the WebSocket API in JavaScript, using Postman's WebSocket Echo Service.
// Postman's echo server that will echo back messages you send
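// NOTE: The endpoint below is an assumption based on Postman's public WebSocket
// echo service; replace it with the URL you actually want to test against.
const socket = new WebSocket('wss://ws.postman-echo.com/raw');

// Fired once the connection has been established
socket.addEventListener('open', () => {
  console.log('Connection established');
  socket.send('Hello, server!');
});

// Fired whenever the server sends a message (the echo service sends back what we sent)
socket.addEventListener('message', (event) => {
  console.log('Message from server:', event.data);
});

// Fired when the connection is closed
socket.addEventListener('close', () => {
  console.log('Connection closed');
});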
In JavaScript, the this keyword refers to the object that is currently executing the code. In event handlers, this typically refers to the element that triggered the event. However, the value of this can change depending on how the event handler is defined and called. To ensure this refers to the desired object, you can use methods like bind(), arrow functions, or assign the context explicitly.
The concept of this binding in event handlers
Understanding this in JavaScript
In JavaScript, the this keyword is a reference to the object that is currently executing the code. The value of this is determined by how a function is called, not where it is defined. This can lead to different values of this in different contexts.
this in event handlers
In the context of event handlers, this usually refers to the DOM element that triggered the event. For example:
// Create a button element and append it to the DOM
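const button = document.createElement('button');
button.textContent = 'Click me';
document.body.appendChild(button);

// With a regular function, `this` refers to the element that triggered the event
button.addEventListener('click', function () {
  console.log(this === button); // true
});

// With an arrow function, `this` is inherited from the enclosing lexical scope instead
button.addEventListener('click', () => {
  console.log(this === button); // false
});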
Tree shaking is a technique used in module bundling to eliminate dead code, which is code that is never used or executed. This helps to reduce the final bundle size and improve application performance. It works by analyzing the dependency graph of the code and removing any unused exports. Tools like Webpack and Rollup support tree shaking when using ES6 module syntax (import and export).
The concept of tree shaking in module bundling
Tree shaking is a term commonly used in the context of JavaScript module bundlers like Webpack and Rollup. It refers to the process of eliminating dead code from the final bundle, which helps in reducing the bundle size and improving the performance of the application.
How tree shaking works
Tree shaking works by analyzing the dependency graph of the code. It looks at the import and export statements to determine which parts of the code are actually used and which are not. The unused code, also known as dead code, is then removed from the final bundle.
Example
Consider the following example:
// utils.js
export function add(a, b) {
  return a + b;
}

export function subtract(a, b) {
  return a - b;
}

// main.js
import { add } from './utils';

console.log(add(2, 3));
In this example, the subtract function is never used in main.js. A tree-shaking-enabled bundler will recognize this and exclude the subtract function from the final bundle.
Requirements for tree shaking
ES6 module syntax: Tree shaking relies on the static structure of ES6 module syntax (import and export). CommonJS modules (require and module.exports) are not statically analyzable and thus not suitable for tree shaking.
Bundler support: The bundler you are using must support tree shaking. Both Webpack and Rollup have built-in support for tree shaking.
Tools that support tree shaking
Webpack: Webpack supports tree shaking out of the box when using ES6 modules. You can enable it by setting the mode to production in your Webpack configuration (a minimal configuration sketch is shown after this list).
Rollup: Rollup is designed with tree shaking in mind and provides excellent support for it.
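A minimal webpack.config.js sketch, assuming a typical single entry point (the file paths are illustrative):
// webpack.config.js
module.exports = {
  mode: 'production', // enables production optimizations, including tree shaking
  entry: './src/main.js',
  output: {
    filename: 'bundle.js',
  },
};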
Benefits of tree shaking
Reduced bundle size: By removing unused code, the final bundle size is reduced, leading to faster load times.
Improved performance: Smaller bundles mean less code to parse and execute, which can improve the performance of your application.
Classical inheritance is a model where classes inherit from other classes, typically seen in languages like Java and C++. Prototypal inheritance, used in JavaScript, involves objects inheriting directly from other objects. In classical inheritance, you define a class and create instances from it. In prototypal inheritance, you create an object and use it as a prototype for other objects.
Difference between classical inheritance and prototypal inheritance
Classical inheritance
Classical inheritance is a pattern used in many object-oriented programming languages like Java, C++, and Python. It involves creating a class hierarchy where classes inherit properties and methods from other classes.
Class definition: You define a class with properties and methods.
Instantiation: You create instances (objects) of the class.
Inheritance: A class can inherit from another class, forming a parent-child relationship.
Example in Java:
class Animal {
void eat() {
System.out.println("This animal eats food.");
}
}
class Dog extends Animal {
void bark() {
System.out.println("The dog barks.");
}
}
public class Main {
public static void main(String[] args) {
Dog dog = new Dog();
dog.eat(); // Inherited method
dog.bark(); // Own method
}
}
Prototypal inheritance
Prototypal inheritance is a feature of JavaScript where objects inherit directly from other objects. There are no classes; instead, objects serve as prototypes for other objects.
Object creation: You create an object directly.
Prototype chain: Objects can inherit properties and methods from other objects through the prototype chain.
Flexibility: You can dynamically add or modify properties and methods.
Example in JavaScript:
const animal = {
  eat() {
    console.log('This animal eats food.');
  },
};

const dog = Object.create(animal);

dog.bark = function () {
  console.log('The dog barks.');
};

dog.eat(); // Inherited method (Output: This animal eats food.)
dog.bark(); // Own method (Output: The dog barks.)
Key differences
Class-based vs. prototype-based: Classical inheritance uses classes, while prototypal inheritance uses objects.
Inheritance model: Classical inheritance forms a class hierarchy, whereas prototypal inheritance forms a prototype chain.
Flexibility: Prototypal inheritance is more flexible and dynamic, allowing for changes at runtime.
document.querySelector() and document.getElementById() are both methods used to select elements from the DOM, but they have key differences. document.querySelector() can select any element using a CSS selector and returns the first match, while document.getElementById() selects an element by its ID and returns the element with that specific ID.
// Using document.querySelector()
const element = document.querySelector('.my-class');

// Using document.getElementById()
const elementById = document.getElementById('my-id');
Selector type: document.querySelector() uses CSS selectors, while document.getElementById() uses only the ID attribute.
Return value: document.querySelector() returns the first matching element, whereas document.getElementById() returns the element with the specified ID.
Performance: document.getElementById() is generally faster because it directly accesses the element by ID, while document.querySelector() has to parse the CSS selector.
Dot notation and bracket notation are two ways to access properties of an object in JavaScript. Dot notation is more concise and readable but can only be used with valid JavaScript identifiers. Bracket notation is more flexible and can be used with property names that are not valid identifiers, such as those containing spaces or special characters.
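For example (the object and property names are only illustrative):
const user = {
  name: 'Alice',
  'favorite color': 'blue',
};

console.log(user.name); // Dot notation: 'Alice'
console.log(user['favorite color']); // Bracket notation, required because of the space: 'blue'

const key = 'name';
console.log(user[key]); // Bracket notation also works with dynamic property names: 'Alice'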
Global scope means variables are accessible from anywhere in the code. Function scope means variables are accessible only within the function they are declared in. Block scope means variables are accessible only within the block (e.g., within {}) they are declared in.
var globalVar ="I'm global";// Global scope
functionmyFunction(){
var functionVar ="I'm in a function";// Function scope
if(true){
let blockVar ="I'm in a block";// Block scope
console.log(blockVar);// Accessible here
}
// console.log(blockVar); // Uncaught ReferenceError: blockVar is not defined
}
// console.log(functionVar); // Uncaught ReferenceError: functionVar is not defined
myFunction();
Global scope, function scope, and block scope
Global scope
Variables declared in the global scope are accessible from anywhere in the code. In a browser environment, these variables become properties of the window object.
var globalVar ="I'm global";
functioncheckGlobal(){
console.log(globalVar);// Accessible here
}
checkGlobal();// Output: "I'm global"
console.log(globalVar);// Output: "I'm global"
Function scope
Variables declared within a function are only accessible within that function. This is true for variables declared using var, let, or const.
function myFunction() {
  var functionVar = "I'm in a function";
  console.log(functionVar); // Accessible here
}

myFunction(); // Output: "I'm in a function"
console.log(functionVar); // Uncaught ReferenceError: functionVar is not defined
Block scope
Variables declared with let or const within a block (e.g., within {}) are only accessible within that block. This is not true for var, which is function-scoped.
if (true) {
  let blockVar = "I'm in a block";
  console.log(blockVar); // Accessible here
}
// console.log(blockVar); // Uncaught ReferenceError: blockVar is not defined

if (true) {
  var blockVarVar = "I'm in a block but declared with var";
  console.log(blockVarVar); // Accessible here
}
console.log(blockVarVar); // Output: "I'm in a block but declared with var"
A shallow copy duplicates the top-level properties of an object, but nested objects are still referenced. A deep copy duplicates all levels of an object, creating entirely new instances of nested objects. For example, using Object.assign() creates a shallow copy, while using libraries like Lodash or structuredClone() in modern JavaScript can create deep copies.
A shallow copy creates a new object and copies the values of the original object's top-level properties into the new object. However, if any of these properties are references to other objects, only the reference is copied, not the actual object. This means that changes to nested objects in the copied object will affect the original object.
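Below is a minimal sketch of such a shallow copy, using the spread operator (Object.assign() behaves the same way here):
const obj1 = { a: 1, b: { c: 2 } };
const shallowCopy = { ...obj1 };

shallowCopy.a = 10; // Top-level property: only the copy changes
shallowCopy.b.c = 30; // Nested object: shared reference, so the original changes too

console.log(obj1.a); // 1
console.log(obj1.b.c); // 30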
In this example, shallowCopy is a shallow copy of obj1. Changing shallowCopy.b.c also changes obj1.b.c because b is a reference to the same object in both obj1 and shallowCopy.
Deep copy
A deep copy creates a new object and recursively copies all properties and nested objects from the original object. This means that the new object is completely independent of the original object, and changes to nested objects in the copied object do not affect the original object.
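For example, structuredClone() (available in modern browsers and recent Node.js versions) creates a deep copy:
const obj1 = { a: 1, b: { c: 2 } };
const deepCopy = structuredClone(obj1);

deepCopy.b.c = 30; // Only the copy changes

console.log(obj1.b.c); // 2
console.log(deepCopy.b.c); // 30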
Unit testing focuses on testing individual components or functions in isolation to ensure they work as expected. Integration testing checks how different modules or services work together. End-to-end testing simulates real user scenarios to verify the entire application flow from start to finish.
Difference between unit testing, integration testing, and end-to-end testing
Unit testing
Unit testing involves testing individual components or functions in isolation. The goal is to ensure that each part of the code works correctly on its own. These tests are usually written by developers and are the first line of defense against bugs.
Scope: Single function or component
Tools: Jest, Mocha, Jasmine
Example: Testing a function that adds two numbers
function add(a, b) {
  return a + b;
}

test('adds 1 + 2 to equal 3', () => {
  expect(add(1, 2)).toBe(3);
});
Integration testing
Integration testing focuses on verifying the interactions between different modules or services. The goal is to ensure that combined parts of the application work together as expected. These tests are usually more complex than unit tests and may involve multiple components.
Scope: Multiple components or services
Tools: Jest, Mocha, Jasmine, Postman (for API testing)
Example: Testing a function that fetches data from an API and processes it
async function fetchData(apiUrl) {
  const response = await fetch(apiUrl);
  const data = await response.json();
  return processData(data);
}

test('fetches and processes data correctly', async () => {
  const apiUrl = 'https://api.example.com/data';
  const data = await fetchData(apiUrl);
  expect(data).toEqual(expectedProcessedData);
});
End-to-end testing
End-to-end (E2E) testing simulates real user scenarios to verify the entire application flow from start to finish. The goal is to ensure that the application works as a whole, including the user interface, backend, and any external services.
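Scope: The entire application flow
Tools: Cypress, Playwright, Selenium
Example: A minimal Cypress sketch of a login flow (the URL, selectors, and expected text here are assumptions for illustration)
// Example using Cypress
describe('Login flow', () => {
  it('logs the user in and shows the dashboard', () => {
    cy.visit('https://example.com/login'); // assumed URL
    cy.get('input[name="email"]').type('user@example.com'); // assumed selectors
    cy.get('input[name="password"]').type('password123');
    cy.get('button[type="submit"]').click();
    cy.contains('Welcome back'); // assumed text on the dashboard
  });
});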
var declarations are hoisted to the top of their scope and initialized with undefined, allowing them to be used before their declaration. let and const declarations are also hoisted but are not initialized, resulting in a ReferenceError if accessed before their declaration. const additionally requires an initial value at the time of declaration.
Hoisting differences between var, let, and const
var hoisting
var declarations are hoisted to the top of their containing function or global scope. This means the variable is available throughout the entire function or script, even before the line where it is declared. However, the variable is initialized with undefined until the actual declaration is encountered.
console.log(a); // Output: undefined
var a = 10;
console.log(a); // Output: 10
let hoisting
let declarations are also hoisted to the top of their block scope, but they are not initialized. This creates a "temporal dead zone" (TDZ) from the start of the block until the declaration is encountered. Accessing the variable in the TDZ results in a ReferenceError.
console.log(b); // ReferenceError: Cannot access 'b' before initialization
let b = 20;
console.log(b); // Output: 20
const hoisting
const declarations behave similarly to let in terms of hoisting. They are hoisted to the top of their block scope but are not initialized, resulting in a TDZ. Additionally, const requires an initial value at the time of declaration and cannot be reassigned.
console.log(c); // ReferenceError: Cannot access 'c' before initialization
const c = 30;
console.log(c); // Output: 30
A Promise in JavaScript can be in one of three states: pending, fulfilled, or rejected. When a Promise is created, it starts in the pending state. If the operation completes successfully, the Promise transitions to the fulfilled state, and if it fails, it transitions to the rejected state. Here's a quick example:
let promise = new Promise((resolve, reject) => {
  // some asynchronous operation
  if (success) {
    resolve('Success!');
  } else {
    reject('Error!');
  }
});
Different states of a Promise
Pending
When a Promise is first created, it is in the pending state. This means that the asynchronous operation has not yet completed.
let promise = new Promise((resolve, reject) => {
  // asynchronous operation
});
Fulfilled
A Promise transitions to the fulfilled state when the asynchronous operation completes successfully. The resolve function is called to indicate this.
let promise = new Promise((resolve, reject) => {
  resolve('Success!');
});
Rejected
A Promise transitions to the rejected state when the asynchronous operation fails. The reject function is called to indicate this.
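For example:
let promise = new Promise((resolve, reject) => {
  reject('Error!');
});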
The this keyword in JavaScript can be bound in several ways:
Default binding: In non-strict mode, this refers to the global object (window in browsers). In strict mode, this is undefined.
Implicit binding: When a function is called as a method of an object, this refers to the object.
Explicit binding: Using call, apply, or bind methods to explicitly set this.
New binding: When a function is used as a constructor with the new keyword, this refers to the newly created object.
Arrow functions: Arrow functions do not have their own this and inherit this from the surrounding lexical context.
Default binding
In non-strict mode, if a function is called without any context, this refers to the global object (window in browsers). In strict mode, this is undefined.
function showThis() {
  console.log(this);
}

showThis(); // In non-strict mode: window, in strict mode: undefined
Implicit binding
When a function is called as a method of an object, this refers to the object.
const obj = {
  name: 'Alice',
  greet: function () {
    console.log(this.name);
  },
};

obj.greet(); // 'Alice'
Explicit binding
Using call, apply, or bind methods, you can explicitly set this.
Using call
function greet() {
  console.log(this.name);
}

const person = { name: 'Bob' };
greet.call(person); // 'Bob'
Using apply
function greet(greeting) {
  console.log(greeting + ', ' + this.name);
}

const person = { name: 'Charlie' };
greet.apply(person, ['Hello']); // 'Hello, Charlie'
Using bind
function greet() {
  console.log(this.name);
}

const person = { name: 'Dave' };
const boundGreet = greet.bind(person);
boundGreet(); // 'Dave'
New binding
When a function is used as a constructor with the new keyword, this refers to the newly created object.
function Person(name) {
  this.name = name;
}

const person = new Person('Eve');
console.log(person.name); // 'Eve'
Arrow functions
Arrow functions do not have their own this and inherit this from the surrounding lexical context.
const obj = {
  firstName: 'Frank',
  greet: () => {
    console.log(this.firstName);
  },
};

obj.greet(); // undefined, because `this` is inherited from the surrounding (global) scope, not from obj
In a browser, events go through three phases: capturing, target, and bubbling. During the capturing phase, the event travels from the root to the target element. In the target phase, the event reaches the target element. Finally, in the bubbling phase, the event travels back up from the target element to the root. You can control event handling using addEventListener with the capture option.
Event phases in a browser
Capturing phase
The capturing phase, also known as the trickling phase, is the first phase of event propagation. During this phase, the event starts from the root of the DOM tree and travels down to the target element. Event listeners registered for this phase will be triggered in the order from the outermost ancestor to the target element.
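For example, passing true (or { capture: true }) as the third argument to addEventListener registers the listener for the capturing phase:
// `element` and `handler` are placeholders for a DOM element and a callback
element.addEventListener('click', handler, true); // capturing phase
element.addEventListener('click', handler, { capture: true }); // equivalent, using the options object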
Target phase
The target phase is the second phase of event propagation. In this phase, the event has reached the target element itself. Event listeners registered directly on the target element will be triggered during this phase.
element.addEventListener('click', handler); // without the capture option, the listener fires at the target and during bubbling
Bubbling phase
The bubbling phase is the final phase of event propagation. During this phase, the event travels back up from the target element to the root of the DOM tree. Event listeners registered for this phase will be triggered in the order from the target element to the outermost ancestor.
You can control event propagation using methods like stopPropagation and stopImmediatePropagation. These methods can be called within an event handler to stop the event from propagating further.
element.addEventListener('click', function (event) {
  event.stopPropagation(); // Stops the event from propagating further
});
The Observer pattern is a design pattern where an object, known as the subject, maintains a list of its dependents, called observers, and notifies them of any state changes. This pattern is useful for implementing distributed event-handling systems, such as updating the user interface in response to data changes or implementing event-driven architectures.
What is the Observer pattern?
The Observer pattern is a behavioral design pattern that defines a one-to-many dependency between objects. When the state of the subject (the one) changes, all its observers (the many) are notified and updated automatically. This pattern is particularly useful for scenarios where changes in one object need to be reflected in multiple other objects without tightly coupling them.
Key components
Subject: The object that holds the state and sends notifications to observers.
Observer: The objects that need to be notified of changes in the subject.
ConcreteSubject: A specific implementation of the subject.
ConcreteObserver: A specific implementation of the observer.
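A minimal sketch of these components in JavaScript (the addObserver and update method names are illustrative; notifyObservers matches the usage line that follows):
// Subject: keeps track of observers and notifies them of changes
class Subject {
  constructor() {
    this.observers = [];
  }

  addObserver(observer) {
    this.observers.push(observer);
  }

  notifyObservers() {
    this.observers.forEach((observer) => observer.update());
  }
}

// Observer: reacts to notifications from the subject
class Observer {
  constructor(name) {
    this.name = name;
  }

  update() {
    console.log(`${this.name} has been notified`);
  }
}

const subject = new Subject();
subject.addObserver(new Observer('Observer 1'));
subject.addObserver(new Observer('Observer 2'));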
subject.notifyObservers(); // Both observers will be updated
Use cases
User interface updates
In front end development, the Observer pattern is commonly used to update the user interface in response to changes in data. For example, in a Model-View-Controller (MVC) architecture, the view can observe the model and update itself whenever the model's state changes.
Event handling
The Observer pattern is useful for implementing event-driven systems. For instance, in JavaScript, the addEventListener method allows you to register multiple event handlers (observers) for a single event (subject).
Real-time data feeds
Applications that require real-time updates, such as stock tickers or chat applications, can benefit from the Observer pattern. Observers can subscribe to data feeds and get notified whenever new data is available.
Dependency management
In complex applications, managing dependencies between different modules can be challenging. The Observer pattern helps decouple these modules, making the system more modular and easier to maintain.
Closures in JavaScript can be used to create private variables by defining a function within another function. The inner function has access to the outer function's variables, but those variables are not accessible from outside the outer function. This allows you to encapsulate and protect the variables from being accessed or modified directly.
function createCounter() {
  let count = 0; // private variable
  return {
    increment: function () {
      count++;
      return count;
    },
    decrement: function () {
      count--;
      return count;
    },
    getCount: function () {
      return count;
    },
  };
}

const counter = createCounter();
console.log(counter.increment()); // 1
console.log(counter.getCount()); // 1
console.log(counter.count); // undefined
How can closures be used to create private variables?
Understanding closures
A closure is a feature in JavaScript where an inner function has access to the outer (enclosing) function's variables. This includes:
Variables declared within the outer function's scope
Parameters of the outer function
Variables from the global scope
Creating private variables
To create private variables using closures, you can define a function that returns an object containing methods. These methods can access and modify the private variables, but the variables themselves are not accessible from outside the function.
Example
Here's a detailed example to illustrate how closures can be used to create private variables:
function createCounter() {
  let count = 0; // private variable
  return {
    increment: function () {
      count++;
      return count;
    },
    decrement: function () {
      count--;
      return count;
    },
    getCount: function () {
      return count;
    },
  };
}

const counter = createCounter();
console.log(counter.increment()); // 1
console.log(counter.increment()); // 2
console.log(counter.decrement()); // 1
console.log(counter.getCount()); // 1
console.log(counter.count); // undefined
Explanation
Outer function: createCounter is the outer function that defines a private variable count.
Inner functions: The object returned by createCounter contains methods (increment, decrement, and getCount) that form closures. These methods have access to the count variable.
Encapsulation: The count variable is not accessible directly from outside the createCounter function. It can only be accessed and modified through the methods provided.
Benefits
Encapsulation: Private variables help in encapsulating the state and behavior of an object, preventing unintended interference.
Data integrity: By restricting direct access to variables, you can ensure that they are modified only through controlled methods.