JavaScript Interview Questions

190+ JavaScript interview questions and answers in quiz-style format, answered by ex-FAANG interviewers
Questions and solutions by ex-interviewers
Covers critical topics

Tired of scrolling through low-quality JavaScript interview questions? You’ve found the right place!

Our JavaScript interview questions are crafted by experienced ex-FAANG senior / staff engineers, not random unverified sources or AI.

With over 190 questions covering everything from core JavaScript concepts to advanced JavaScript features (async / await, promises, etc.), you'll be fully prepared.

Each quiz question comes with:

  • Concise answers (TL;DR): Clear and to-the-point solutions to help you respond confidently during interviews.
  • Comprehensive explanations: In-depth insights to ensure you fully understand the concepts and can elaborate when required. Don’t waste time elsewhere—start practicing with the best!
If you're looking for JavaScript coding questions, we've got you covered as well, with:
  • 280+ JavaScript coding questions
  • In-browser coding workspace similar to real interview environment
  • Reference solutions from Big Tech Ex-interviewers
  • Automated test cases
  • Instantly preview your code for UI questions

Explain the concept of "hoisting" in JavaScript

Topics: JavaScript

TL;DR

Hoisting is a JavaScript mechanism where variable and function declarations are moved ("hoisted") to the top of their containing scope during the compile phase.

  • Variable declarations (var): Declarations are hoisted, but not initializations. The value of the variable is undefined if accessed before initialization.
  • Variable declarations (let and const): Declarations are hoisted, but not initialized. Accessing them results in ReferenceError until the actual declaration is encountered.
  • Function expressions (var): Declarations are hoisted, but not initializations. The value of the variable is undefined if accessed before initialization.
  • Function declarations (function): Both declaration and definition are fully hoisted.
  • Class declarations (class): Declarations are hoisted, but not initialized. Accessing them results in ReferenceError until the actual declaration is encountered.
  • Import declarations (import): Declarations are hoisted, and side effects of importing the module are executed before the rest of the code.

The following behavior summarizes the result of accessing the variables before they are declared.

| Declaration | Accessing before declaration |
| --- | --- |
| var foo | undefined |
| let foo | ReferenceError |
| const foo | ReferenceError |
| class Foo | ReferenceError |
| var foo = function() { ... } | undefined |
| function foo() { ... } | Normal |
| import | Normal |

Hoisting

Hoisting is a term used to explain the behavior of declarations in JavaScript code.

Variables declared with the var keyword have their declaration "moved" up to the top of their containing scope during compilation, which we refer to as hoisting.

Only the declaration is hoisted; the initialization/assignment (if there is one) will stay where it is. Note that the declaration is not actually moved – the JavaScript engine parses the declarations during compilation and becomes aware of variables and their scopes, but it is easier to understand this behavior by visualizing the declarations as being "hoisted" to the top of their scope.

Let's explain with a few code samples. Note that the code for these examples should be executed within a module scope instead of being entered line by line into a REPL like the browser console.

Hoisting of variables declared using var

Hoisting is visible here: even though foo is declared and initialized after the first console.log(), the first console.log() prints undefined.

console.log(foo); // undefined
var foo = 1;
console.log(foo); // 1

You can visualize the code as:

var foo;
console.log(foo); // undefined
foo = 1;
console.log(foo); // 1

Hoisting of variables declared using let, const, and class

Variables declared via let, const, and class are hoisted as well. However, unlike var and function, they are not initialized and accessing them before the declaration will result in a ReferenceError exception. The variable is in a "temporal dead zone" from the start of the block until the declaration is processed.

y; // ReferenceError: Cannot access 'y' before initialization
let y = 'local';
z; // ReferenceError: Cannot access 'z' before initialization
const z = 'local';
Foo; // ReferenceError: Cannot access 'Foo' before initialization
class Foo {
  constructor() {}
}

Hoisting of function expressions

A function expression is a function assigned to a variable binding. When the binding uses var, only the variable declaration is hoisted — the function body is not.

console.log(bar); // undefined
bar(); // Uncaught TypeError: bar is not a function
var bar = function () {
  console.log('BARRRR');
};

Arrow functions are function expressions too, so the same rule applies: only the variable binding is hoisted. What you observe before the declaration depends on the declaration keyword used; var bindings read undefined, while let and const bindings remain in the TDZ until their declaration runs.

console.log(baz); // undefined
var baz = () => 'arrow';
console.log(baz()); // 'arrow'
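
For comparison, here is a minimal sketch of the same arrow function bound with let; the binding is hoisted but stays in the temporal dead zone, so the early access throws instead of reading undefined:

console.log(qux); // ReferenceError: Cannot access 'qux' before initialization
let qux = () => 'arrow';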

Hoisting of function declarations

Function declarations use the function keyword. Unlike function expressions, function declarations have both the declaration and definition hoisted, thus they can be called even before they are declared.

console.log(foo); // [Function: foo]
foo(); // 'FOOOOO'
function foo() {
  console.log('FOOOOO');
}

The same applies to generator functions (function*), async functions (async function), and async generator functions (async function*).
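
For instance, an async function declaration can be called before the line on which it appears, exactly like a plain function declaration (a small illustrative sketch):

ping(); // works: returns a Promise, no ReferenceError
async function ping() {
  return 'pong';
}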

Hoisting of import statements

Import declarations are hoisted. The identifiers the imports introduce are available in the entire module scope, and their side effects are produced before the rest of the module's code runs.

foo.doSomething(); // Works normally.
import foo from './modules/foo';

Under the hood

In reality, JavaScript creates all variables in the current scope before it even tries to execute the code. Variables created using the var keyword will have the value of undefined, whereas variables created using the let and const keywords will be marked as <value unavailable>. Thus, accessing them will cause a ReferenceError, preventing you from accessing them before initialization.

In the ECMAScript specification, let and const declarations are explained as below:

The variables are created when their containing Environment Record is instantiated but may not be accessed in any way until the variable's LexicalBinding is evaluated.

However, this statement is a little different for the var keyword:

Var variables are created when their containing Environment Record is instantiated and are initialized to undefined when created.

MDN groups hoisting into four observable behaviors, which map to the declaration kinds covered above:

  1. Value hoisting — the value is usable before the declaration. Applies to function declarations.
  2. Declaration hoisting — the binding is usable before the declaration but reads undefined. Applies to var.
  3. Scope tainting — the binding exists from the top of the scope but any access throws (the TDZ). Applies to let, const, and class.
  4. Side effects — the declaration's side effects run before the rest of the module evaluates. Applies to import.

Modern practices

In practice, modern codebases avoid using var and use let and const exclusively. It is recommended to declare and initialize your variables and import statements at the top of the containing scope/module to eliminate the mental overhead of tracking when a variable can be used.

ESLint is a static code analyzer that can flag such issues with the following rules (a sample configuration is shown after the list):

  • no-use-before-define: Warns when an identifier is referenced before its declaration appears in source.
  • no-undef: Warns when an identifier is referenced without being declared anywhere in scope.
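
A minimal eslint.config.js sketch enabling both rules; the severity levels and the functions option are illustrative choices, not requirements:

// eslint.config.js (flat config)
export default [
  {
    rules: {
      // Allow function declarations to be called before they appear; flag everything else.
      'no-use-before-define': ['error', { functions: false }],
      'no-undef': 'error',
    },
  },
];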

Additional examples

The examples below cover hoisting behaviors that are less obvious from the summary table and that commonly cause confusion.

Function declaration compared with function expression

console.log(declared());
console.log(expressed());
function declared() {
  return 'function declaration';
}
var expressed = function () {
  return 'function expression';
};
  • declared() returns 'function declaration'. Function declarations are fully hoisted — both the identifier binding and the function body are available from the top of the scope.
  • expressed() throws TypeError: expressed is not a function. The var expressed binding is hoisted and initialized to undefined, but the assignment of the function expression happens at its source location. Calling undefined() produces the TypeError.

var in a for loop with setTimeout

for (var i = 0; i < 3; i++) {
  setTimeout(() => console.log(i), 0);
}
// Output: 3, 3, 3

var i is function-scoped rather than block-scoped, so all three callbacks close over the same binding. The loop increments i to 3 before any setTimeout callback runs, because macrotasks run after the current synchronous code completes. Each callback then reads the current value of the shared i, which is 3.

Two fixes, both sketched below:

  • Replace var with let. let is block-scoped, so each iteration creates a fresh binding that the callback closes over.
  • Wrap the body in an IIFE that captures the current value as a parameter: (i => setTimeout(() => console.log(i), 0))(i). This was the pre-ES6 workaround.
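
A minimal sketch of both fixes:

// Fix 1: let gives each iteration its own binding.
for (let i = 0; i < 3; i++) {
  setTimeout(() => console.log(i), 0);
}
// Output: 0, 1, 2

// Fix 2: pre-ES6 IIFE capturing the current value as a parameter.
for (var j = 0; j < 3; j++) {
  ((i) => setTimeout(() => console.log(i), 0))(j);
}
// Output: 0, 1, 2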

var escapes block scope

if (true) {
  var a = 1;
  let b = 2;
}
console.log(a); // 1
console.log(b); // ReferenceError: b is not defined

var is scoped to the nearest function or script, not to the enclosing block. The declaration is hoisted past if, for, while, and plain block statements to the containing function or module scope, which is why a is still visible after the if. let and const are block-scoped, so b only exists inside the block.

Redeclaration

var x = 1;
var x = 2; // OK — x is now 2
let y = 1;
let y = 2; // SyntaxError: Identifier 'y' has already been declared

var allows the same name to be redeclared in the same scope; the second declaration is a no-op and only the assignment runs. let, const, and class throw SyntaxError if the same name is declared twice in the same scope. Like hoisting, this is resolved statically before execution — duplicate lexical declarations are an early error detected during parsing, so no code runs at all.

Class declarations

console.log(typeof Foo);
class Foo {}

This throws ReferenceError: Cannot access 'Foo' before initialization.

Class declarations are hoisted — the binding is created at the top of the enclosing block — but they remain in the Temporal Dead Zone until the class declaration is evaluated. Any access before that point throws, including typeof.

This behavior can be confused with "classes are not hoisted". The two statements are observably different:

  • If Foo were not hoisted, typeof Foo would return 'undefined' (the behavior for truly undeclared identifiers).
  • Because Foo is hoisted but uninitialized, typeof Foo throws.

The distinction also matters for extends clauses, which are evaluated at class declaration time. class A extends B {} throws if B is hoisted but still in the TDZ at that point.
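
A minimal sketch of that failure mode:

class A extends B {} // ReferenceError: Cannot access 'B' before initialization
class B {}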

typeof and the Temporal Dead Zone

console.log(typeof undeclaredVariable); // 'undefined'
console.log(typeof someLet); // ReferenceError
let someLet = 1;

typeof does not throw when applied to an identifier that has no declaration anywhere in scope — it returns the string 'undefined'. However, typeof does throw when applied to an identifier that is declared but still in the Temporal Dead Zone. The binding exists, and reading it (which typeof must do) triggers the TDZ error.

This distinguishes "undeclared" (no binding in any enclosing scope) from "declared but uninitialized" (binding exists, initialization has not yet occurred).

Shared names across var and function declarations

function outer() {
  console.log(inner);
  inner();
  function inner() {
    console.log('inner called');
  }
  var inner = 'overwritten';
}
outer();
outer();
// Output:
// [Function: inner]
// inner called

Two behaviors combine here:

  1. Both var inner and the function inner declaration are hoisted to the top of outer.
  2. When a var declaration and a function declaration share a name in the same scope, the function declaration takes precedence during initialization. inner is initialized with the function object rather than undefined.

The var inner = 'overwritten' assignment takes effect only after the two console.log calls, so those calls observe the function. A console.log(inner) after the assignment would print 'overwritten'.

A let or const declaration in the same scope as a var of the same name produces a SyntaxError at parse time, before any code runs.
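
For example:

var x = 1;
let x = 2; // SyntaxError: Identifier 'x' has already been declared
console.log('never runs'); // the duplicate declaration is a parse-time error, so nothing in this script executes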

Common misconceptions

The following statements appear frequently in explanations of hoisting, including in material generated by large language models, but are incorrect or imprecise:

  1. Classes are not hoisted. Class declarations are hoisted. They remain in the Temporal Dead Zone until the declaration is evaluated, which is observably different from not being hoisted at all — most notably, typeof throws on a class in the TDZ but returns 'undefined' for a truly undeclared identifier.
  2. var is hoisted; let and const are not. All three are hoisted. They differ in initialization: var is initialized to undefined at hoist time, while let and const remain uninitialized in the TDZ until their declaration is evaluated.
  3. typeof never throws on undeclared variables. typeof is safe for identifiers that have no declaration anywhere in scope, but it throws in the TDZ. The typeof x === 'undefined' guard is only safe if x is not declared anywhere in the enclosing scope.
  4. Function declarations are hoisted; function expressions are not. Both are hoisted, but only the binding. In var fn = function () {}, the var fn declaration is hoisted and initialized to undefined. In let fn = function () {}, the binding is hoisted but remains in the TDZ. The function body is never hoisted for expressions.


What are the differences between JavaScript variables created using `let`, `var` or `const`?

Topics: JavaScript

TL;DR

In JavaScript, let, var, and const are all keywords used to declare variables, but they differ significantly in terms of scope, initialization rules, whether they can be redeclared or reassigned, and the behavior when they are accessed before declaration:

| Behavior | var | let | const |
| --- | --- | --- | --- |
| Scope | Function or Global | Block | Block |
| Initialization | Optional | Optional | Required |
| Redeclaration | Yes | No | No |
| Reassignment | Yes | Yes | No |
| Accessing before declaration | undefined | ReferenceError | ReferenceError |

Differences in behavior

Let's look at the difference in behavior between var, let, and const.

Scope

Variables declared using the var keyword are scoped to the function in which they are created, or if created outside of any function, to the global object. let and const are block scoped, meaning they are only accessible within the nearest set of curly braces (function, if-else block, or for-loop).

function foo() {
  // All variables are accessible within functions.
  var bar = 1;
  let baz = 2;
  const qux = 3;
  console.log(bar); // 1
  console.log(baz); // 2
  console.log(qux); // 3
}
foo(); // Prints each variable successfully
console.log(bar); // ReferenceError: bar is not defined
console.log(baz); // ReferenceError: baz is not defined
console.log(qux); // ReferenceError: qux is not defined

In the following example, bar is accessible outside of the if block but baz and qux are not.

if (true) {
  var bar = 1;
  let baz = 2;
  const qux = 3;
}
// var variables are accessible anywhere in the function scope.
console.log(bar); // 1
// let and const variables are not accessible outside of the block they were defined in.
console.log(baz); // ReferenceError: baz is not defined
console.log(qux); // ReferenceError: qux is not defined

Initialization

var and let variables can be initialized without a value but const declarations must be initialized.

var foo; // Ok
let bar; // Ok
const baz; // SyntaxError: Missing initializer in const declaration

Redeclaration

Redeclaring a variable with var will not throw an error, but let and const will.

var foo = 1;
var foo = 2; // Ok
console.log(foo); // Should print 2, but SyntaxError from baz prevents the code executing.
let baz = 3;
let baz = 4; // Uncaught SyntaxError: Identifier 'baz' has already been declared

Reassignment

var and let allow reassigning the variable's value while const does not.

var foo = 1;
foo = 2; // This is fine.
let bar = 3;
bar = 4; // This is fine.
const baz = 5;
baz = 6; // Uncaught TypeError: Assignment to constant variable.

Accessing before declaration

Variables declared with var, let, and const are all hoisted. var-declared variables are auto-initialized with an undefined value. However, let and const variables are not initialized, and accessing them before the declaration results in a ReferenceError because they are in a "temporal dead zone" from the start of the block until the declaration is processed.

console.log(foo); // undefined
var foo = 'foo';
console.log(baz); // ReferenceError: Cannot access 'baz' before initialization
let baz = 'baz';
console.log(bar); // ReferenceError: Cannot access 'bar' before initialization
const bar = 'bar';

Notes

  • In modern JavaScript, it's generally recommended to use const by default for variables that don't need to be reassigned. This promotes immutability and prevents accidental changes.
  • Use let when you need to reassign a variable within its scope.
  • Avoid using var due to its potential for scoping issues and hoisting behavior.
  • If you need to target older browsers, write your code using let/const, and use a transpiler like Babel to compile your code to older syntax.


What is the difference between `==` and `===` in JavaScript?

Topics: JavaScript

TL;DR

== is the abstract equality operator while === is the strict equality operator. == performs type coercion before comparing, following the Abstract Equality Comparison algorithm defined in the ECMAScript specification. === does not perform coercion and returns false whenever the operand types differ. === is generally preferred in application code because it eliminates a class of bugs caused by unexpected coercion. The most common exception is x == null, which checks for both null and undefined in a single comparison.

| Operator | == | === |
| --- | --- | --- |
| Name | Loose (abstract) equality operator | Strict equality operator |
| Type coercion | Yes — per the Abstract Equality Comparison algorithm | No |
| Comparison behavior | Types may be coerced before the value comparison | Types are compared first |

Don't confuse = with == and ===. = is the assignment operator — it sets a variable's value (x = 5) and does not compare anything.


The Abstract Equality Comparison algorithm

The behavior of == is defined by the IsLooselyEqual algorithm in ECMA-262 §7.2.15. Given operands x and y, the algorithm proceeds as follows:

  1. If Type(x) is the same as Type(y), return the result of x === y (strict equality, without coercion).
  2. If x is null and y is undefined, return true.
  3. If x is undefined and y is null, return true.
  4. If Type(x) is Number and Type(y) is String, return x == ToNumber(y).
  5. If Type(x) is String and Type(y) is Number, return ToNumber(x) == y.
  6. If Type(x) is BigInt and Type(y) is String, convert y with StringToBigInt. Return false if the conversion is undefined; otherwise compare the resulting BigInts.
  7. If Type(x) is String and Type(y) is BigInt, swap operands and apply step 6.
  8. If Type(x) is Boolean, return ToNumber(x) == y.
  9. If Type(y) is Boolean, return x == ToNumber(y).
  10. If Type(x) is String, Number, BigInt, or Symbol, and Type(y) is Object, return x == ToPrimitive(y).
  11. If Type(x) is Object and Type(y) is String, Number, BigInt, or Symbol, return ToPrimitive(x) == y.
  12. If one operand is a BigInt and the other is a Number, return true if the mathematical values are equal; otherwise false.
  13. Return false.

Four properties of the algorithm that are not apparent from a truth table alone:

  • Boolean operands are always converted to Number first (via step 8 or 9). This is why true == '1' is true: true becomes 1, then step 4 converts '1' to 1, producing 1 == 1.
  • Object operands are reduced to primitives via ToPrimitive (steps 10 and 11), which invokes Symbol.toPrimitive, then valueOf, then toString. For example, [1] == 1 is true because [1].toString() is '1', which then coerces to 1.
  • null and undefined are only loose-equal to each other and to themselves (steps 2 and 3). They are not coerced to 0 or false elsewhere in the algorithm, which is why a == null is a valid idiom for testing "null or undefined".
  • NaN is not equal to any value, including itself, under any equality operator. Use Number.isNaN(x) or Object.is(x, NaN) to test for it.

The coercion helpers used by ==

== dispatches to three type-conversion routines defined in ECMA-262 §7.1:

  • ToPrimitive(input, hint) — returns the input unchanged if it is already a primitive. Otherwise invokes input[Symbol.toPrimitive](hint), then falls back to valueOf() and toString(). If none returns a primitive, a TypeError is thrown.
  • ToNumber(argument)undefined becomes NaN; null becomes +0; true and false become 1 and +0; strings are parsed with whitespace trimming, with the empty string becoming 0; Symbols and BigInts throw TypeError; objects are first reduced via ToPrimitive(argument, "number") and the result is recursed on.
  • ToString(argument)undefined becomes "undefined"; null becomes "null"; Booleans become "true" and "false"; Numbers use Number::toString; Symbols throw TypeError; objects are first reduced via ToPrimitive(argument, "string").
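
For example, an object's valueOf participates in == through ToPrimitive, which is why a loose comparison can succeed where a strict one fails:

const box = {
  valueOf() {
    return 42;
  },
};
console.log(box == 42); // true: step 11 reduces box to 42, then 42 == 42
console.log(box == '42'); // true: 42 == '42', then step 4 coerces the string to 42
console.log(box === 42); // false: different types, no coercion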

Examples

The examples below apply the algorithm above to values that commonly produce unexpected results.

Array compared with boolean

console.log([] == false); // true
console.log([0] == false); // true
console.log([1] == true); // true
console.log([1, 2] == '1,2'); // true

Walking through [] == false:

  1. Step 9: false is coerced to 0, producing [] == 0.
  2. Step 11: the array is coerced via ToPrimitive. Array.prototype.toString joins elements with commas, so [].toString() is '', producing '' == 0.
  3. Step 5: '' == 0 coerces the string to 0, producing 0 == 0.
  4. Step 1: same types, strict equality returns true.

The same process explains the remaining cases.

[] == ![]

console.log([] == ![]); // true

Evaluation order:

  1. ![] is evaluated first. Applying ToBoolean to any object yields true, so ![] is false.
  2. The expression is now [] == false, which evaluates to true via the steps shown above.

Object compared with boolean

console.log({} == false); // false

Walking through:

  1. Step 9: false becomes 0, producing {} == 0.
  2. Step 11: the plain object is coerced via ToPrimitive. Object.prototype.toString returns '[object Object]'.
  3. Step 5: '[object Object]' == 0 calls ToNumber('[object Object]'), which produces NaN.
  4. Step 1 with NaN == 0: strict equality returns false.

This differs from [] == false because the two objects have different toString results. This case is a frequent source of incorrect output in AI-generated explanations that treat all objects as equivalent to [] for coercion purposes.

null and undefined

console.log(null == undefined); // true
console.log(null == 0); // false
console.log(null == false); // false
console.log(null >= 0); // true
  • null == undefined is true by the special case in step 2.
  • null == 0 is false because no step in == converts null to 0.
  • null == false is false for the same reason — false becomes 0, but null is not coerced, and step 13 returns false.
  • null >= 0 is true because relational operators do not use the Abstract Equality algorithm. They apply ToNumber directly, converting null to 0, producing 0 >= 0.

A consequence: the three expressions null >= 0, null <= 0, and null != 0 are all true simultaneously.

Same-type string comparison does not coerce

console.log(0 == ''); // true
console.log(0 == '0'); // true
console.log('' == '0'); // false

0 == '' and 0 == '0' both convert the string to a number (step 4). '' == '0' is a comparison between two strings; step 1 defers to strict equality, which compares the string contents directly. The strings '' and '0' are not equal.

A consequence: == is not transitive. a == b and a == c together do not imply b == c.

Symbol equality

const s = Symbol('x');
console.log(s == s); // true
console.log(s == 'x'); // false
console.log(s === s); // true

Two Symbols are loosely equal only if they are the same value (step 1). A Symbol compared with a string falls through the algorithm to step 13 and returns false without attempting any coercion that would throw.

Common misconceptions

The following statements appear frequently in documentation, teaching material, and responses generated by large language models, but are incorrect:

  1. "{} == false is true": It is false. {} coerces to '[object Object]', which coerces to NaN, which is not equal to 0.
  2. "[] == ![] is false": It is true. ![] is false, and [] == false follows the coercion steps to true.
  3. "null == false is true because null is falsy": It is false. The == algorithm has no step that coerces null to a boolean or number; ToBoolean is a separate operation used by conditional expressions, not by ==.
  4. "== is transitive": It is not. 0 == '' and 0 == '0' are both true, but '' == '0' is false.
  5. "NaN == NaN is true": NaN is not equal to any value under ==, ===, or a relational comparison. Use Number.isNaN(x) or Object.is(x, NaN).

Object.is()

Object.is(x, y) returns the same result as === with two exceptions:

  • Object.is(NaN, NaN) is true, whereas NaN === NaN is false.
  • Object.is(+0, -0) is false, whereas +0 === -0 is true.

Object.is() is the third value-comparison operation in JavaScript, alongside == and ===. It never coerces its operands, and apart from the two cases above (signed zeros and NaN) it returns the same result as ===.
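
In code:

console.log(NaN === NaN); // false
console.log(Object.is(NaN, NaN)); // true
console.log(+0 === -0); // true
console.log(Object.is(+0, -0)); // false
console.log(Object.is('a', 'a')); // true, same as === for every other pair of values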

Conclusion

  • Using === (strict equality) is generally recommended to avoid the pitfalls of type coercion, which can lead to unexpected behavior and bugs in your code. It makes the intent of your comparisons clearer and ensures that you are comparing both the value and the type.
  • Use x == null when a single check for null or undefined is required. ESLint's eqeqeq rule allows this pattern via the { "null": "ignore" } option (see the rule entry after this list).
  • Use Object.is when NaN equality or distinguishing +0 from -0 is required.
  • When questioned about an unexpected == result, work through the algorithm steps rather than relying on memorized truth tables. The algorithm is short and fully specifies the behavior.
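
A minimal eslint.config.js rule entry for that pattern (the severity level is an illustrative choice):

// eslint.config.js (flat config)
export default [
  {
    rules: {
      // Require === everywhere except comparisons against null.
      eqeqeq: ['error', 'always', { null: 'ignore' }],
    },
  },
];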


What is the event loop in JavaScript runtimes?

What is the difference between call stack and task queue?
Topics: JavaScript

TL;DR

The event loop is the mechanism in JavaScript runtime environments that coordinates how asynchronous operations are executed alongside the single-threaded JavaScript engine. It works as such:

  1. The JavaScript engine starts executing scripts, placing synchronous operations on the call stack.
  2. When an asynchronous operation is encountered (e.g., setTimeout(), HTTP request), it is offloaded to the respective Web API or Node.js API to handle the operation in the background.
  3. Once the asynchronous operation completes, its callback function is placed in the respective queues – task queues (also known as macrotask queues / callback queues) or microtask queues. We will refer to "task queue" as "macrotask queue" from here on to better differentiate from the microtask queue.
  4. The event loop continuously monitors the call stack and executes items on the call stack. If/when the call stack is empty:
    1. Microtask queue is processed. Microtasks include promise callbacks (then, catch, finally), await continuations, MutationObserver callbacks, and calls to queueMicrotask(). The event loop takes the first callback from the microtask queue and pushes it to the call stack for execution. This repeats until the microtask queue is empty.
    2. Macrotask queue is processed. Macrotasks include web APIs like setTimeout(), HTTP requests, user interface event handlers like clicks, scrolls, etc. The event loop dequeues the first callback from the macrotask queue and pushes it onto the call stack for execution. However, after a macrotask queue callback is processed, the event loop does not proceed with the next macrotask yet! The event loop first checks the microtask queue. Checking the microtask queue is necessary as microtasks have higher priority than macrotask queue callbacks. The macrotask queue callback that was just executed could have added more microtasks!
      1. If the microtask queue is non-empty, process them as per the previous step.
      2. If the microtask queue is empty, the next macrotask queue callback is processed. This repeats until the macrotask queue is empty.
  5. This process continues indefinitely, allowing the JavaScript engine to handle both synchronous and asynchronous operations efficiently without blocking the call stack.

Event loop in JavaScript

The event loop is the mechanism that lets JavaScript handle asynchronous operations without blocking its single-threaded execution.

Parts of the event loop

To understand it better, we need to understand all the parts of the system. These components are part of the event loop:

Call stack

The call stack keeps track of the functions being executed in a program. When a function is called, it is added to the top of the call stack. When the function completes, it is removed from the call stack. This allows the program to keep track of where it is in the execution of a function and return to the correct location when the function completes. As the name suggests, it is a stack data structure which follows last-in-first-out.

Web APIs/Node.js APIs

Asynchronous operations like setTimeout(), HTTP requests, file I/O, etc., are handled by Web APIs (in the browser) or C++ APIs (in Node.js). These APIs are not part of the JavaScript engine and run on separate threads, allowing them to execute concurrently without blocking the call stack.

Task queue / Macrotask queue / Callback queue

The macrotask queue (also called the task queue, callback queue, or event queue) holds callbacks waiting to run when the call stack and microtask queue are empty.

Microtask queue

The microtask queue holds higher-priority callbacks that drain after the call stack empties and between every macrotask.

Event loop order

  1. The JavaScript engine starts executing scripts, placing synchronous operations on the call stack.
  2. When an asynchronous operation is encountered (e.g., setTimeout(), HTTP request), it is offloaded to the respective Web API or Node.js API to handle the operation in the background.
  3. Once the asynchronous operation completes, its callback function is placed in the respective queues – task queues (also known as macrotask queues / callback queues) or microtask queues. We will refer to "task queue" as "macrotask queue" from here on to better differentiate from the microtask queue.
  4. The event loop continuously monitors the call stack and executes items on the call stack. If/when the call stack is empty:
    1. Microtask queue is processed. The event loop takes the first callback from the microtask queue and pushes it to the call stack for execution. This repeats until the microtask queue is empty.
    2. Macrotask queue is processed. The event loop dequeues the first callback from the macrotask queue and pushes it onto the call stack for execution. However, after a macrotask queue callback is processed, the event loop does not proceed with the next macrotask yet! The event loop first checks the microtask queue. Checking the microtask queue is necessary as microtasks have higher priority than macrotask queue callbacks. The macrotask queue callback that was just executed could have added more microtasks!
      1. If the microtask queue is non-empty, process them as per the previous step.
      2. If the microtask queue is empty, the next macrotask queue callback is processed. This repeats until the macrotask queue is empty.
  5. This process continues indefinitely, allowing the JavaScript engine to handle both synchronous and asynchronous operations efficiently without blocking the call stack.

Example

The example below mixes synchronous logs with two timer callbacks and two promise callbacks. The first timer's callback enqueues a microtask, and the first promise callback enqueues another timer — small additions that exercise every ordering rule the event loop applies, while keeping each line individually trivial to read.

console.log('Start');
setTimeout(() => {
  console.log('Timeout 1');
  Promise.resolve().then(() => console.log('Promise 2'));
}, 0);
Promise.resolve().then(() => {
  console.log('Promise 1');
  setTimeout(() => console.log('Timeout 3'), 0);
});
setTimeout(() => console.log('Timeout 2'), 0);
console.log('End');
// Console output:
// Start
// End
// Promise 1
// Timeout 1
// Promise 2
// Timeout 2
// Timeout 3

Queue entries in the trace below are labeled by the message their callback will log (so [Promise 1] means "the queued callback that will log Promise 1"). Names match registration order: Timeout 1 is the first timer registered, Promise 2 is the microtask scheduled later by Timeout 1's callback, and so on.

| Step | What just happened | Call stack | Microtask queue | Macrotask queue | Output |
| --- | --- | --- | --- | --- | --- |
| 1 | console.log('Start') runs | empty | empty | empty | Start |
| 2 | The first setTimeout registers a timer with the Web API | empty | empty | empty | Start |
| 3 | Promise.resolve().then(...) enqueues its callback as a microtask | empty | [Promise 1] | empty | Start |
| 4 | The second setTimeout registers another timer | empty | [Promise 1] | empty | Start |
| 5 | console.log('End') runs; sync script finishes. Both 0 ms timers have elapsed and their callbacks have moved from the Web API into the macrotask queue, in registration order | empty | [Promise 1] | [Timeout 1, Timeout 2] | Start, End |
| 6 | Stack empty → microtask queue drains: Promise 1 runs and logs, then schedules a new timer whose callback will log Timeout 3. The new macrotask is appended to the end of the macrotask queue | empty | empty | [Timeout 1, Timeout 2, Timeout 3] | …, Promise 1 |
| 7 | Microtask queue empty → one macrotask runs: Timeout 1 logs, then enqueues a new microtask that will log Promise 2 | empty | [Promise 2] | [Timeout 2, Timeout 3] | …, Timeout 1 |
| 8 | Microtask queue is re-checked before the next macrotask (non-empty → drain): Promise 2 runs and logs | empty | empty | [Timeout 2, Timeout 3] | …, Promise 2 |
| 9 | Microtask queue empty → next macrotask: Timeout 2 runs and logs | empty | empty | [Timeout 3] | …, Timeout 2 |
| 10 | Microtask queue re-checked (empty) → next macrotask: Timeout 3 runs and logs | empty | empty | empty | …, Timeout 3 |

Three rules the trace makes explicit:

  • Microtasks drain before any macrotask. Step 6 runs Promise 1 before either timer, even though both timers were scheduled before the promise callback ran.
  • A macrotask that schedules a microtask interleaves. Step 7 runs Timeout 1 and enqueues Promise 2; step 8 runs Promise 2 before the next macrotask, not after. The event loop re-checks the microtask queue between every macrotask, which is why a single drain at the end of synchronous code is not enough to model behavior correctly.
  • A microtask that schedules a macrotask appends to the queue. Step 6 runs Promise 1 and schedules Timeout 3; Timeout 3 then runs last, after both timers that were already in the macrotask queue. Microtasks cannot promote a macrotask to the front of the line.

Advanced examples

The examples below demonstrate event loop behaviors that commonly appear in production code and more advanced interview questions.

async/await scheduling

async/await is specified in terms of promise chaining. When execution reaches an await, the function is paused, its continuation is scheduled as a microtask on resolution of the awaited value, and control returns to the caller.

console.log('1');
async function run() {
  console.log('2');
  await Promise.resolve();
  console.log('3');
}
run();
setTimeout(() => console.log('4'), 0);
Promise.resolve().then(() => console.log('5'));
console.log('6');
// Output: 1, 2, 6, 3, 5, 4

Explanation:

  1. 1 is logged from the first synchronous statement.
  2. run() is invoked. Synchronous code in the function runs up to the await, logging 2.
  3. The continuation of run() (everything after the await) is scheduled as a microtask. Control returns to the top-level script.
  4. setTimeout schedules a macrotask.
  5. Promise.resolve().then(...) schedules a microtask.
  6. 6 is logged from the last synchronous statement.
  7. The script completes and the microtask queue drains in FIFO order: run()'s continuation logs 3, then the .then callback logs 5.
  8. The macrotask queue is then processed, logging 4.

The common misconception is that await blocks execution. It does not — the function is paused, but control returns immediately to the caller, and the continuation runs as a microtask once the awaited value settles.

Microtask starvation

Macrotasks run only once the microtask queue has fully drained. If microtasks continually schedule more microtasks, the macrotask queue never advances, which prevents rendering, user input handling, and timer callbacks from running.

let count = 0;
function scheduleMicrotask() {
  Promise.resolve().then(() => {
    count++;
    if (count < 5) scheduleMicrotask();
    console.log('microtask', count);
  });
}
setTimeout(() => console.log('macrotask fired'), 0);
scheduleMicrotask();
// Output: microtask 1, microtask 2, microtask 3, microtask 4, microtask 5, macrotask fired

With a bounded recursion depth, the macrotask eventually runs. An unbounded chain (for example if (true) instead of if (count < 5)) would prevent any macrotask from running and would block rendering in the browser.

To yield to the browser for rendering or input handling, a macrotask is required — for example setTimeout(fn, 0), MessageChannel, or scheduler.yield() in environments that support it. A microtask such as queueMicrotask or Promise.resolve().then does not yield.

Yielding the main thread to split long tasks

A synchronous block that runs longer than 50 ms is classified as a long task and blocks the browser from rendering, handling input, and processing timers for that duration. The fix is to break the work into chunks and yield to the event loop between chunks so that rendering and other macrotasks can run.

A loop that runs as one task — the entire computation blocks until it finishes:

function heavyWork() {
  let sum = 0;
  for (let i = 0; i < 1e8; i++) sum += i;
  return sum;
}
heavyWork(); // ~hundreds of ms; the page is unresponsive for the duration

The same work split across macrotasks via setTimeout:

function chunked(total, chunkSize, onDone) {
  let i = 0;
  let sum = 0;
  function tick() {
    const end = Math.min(i + chunkSize, total);
    while (i < end) {
      sum += i;
      i++;
    }
    if (i < total) {
      setTimeout(tick, 0);
    } else {
      onDone(sum);
    }
  }
  tick();
}
chunked(1e7, 1e6, (sum) => console.log('done', sum));

Between every chunk, the browser can paint a frame, dispatch input events, and run other macrotasks. The drawback is that the HTML specification clamps nested setTimeout delays to a minimum of 4 ms after 5 levels of recursion, which adds noticeable latency to long chunked computations.

MessageChannel schedules a macrotask without that clamp:

function yieldToMain() {
  return new Promise((resolve) => {
    const channel = new MessageChannel();
    channel.port1.onmessage = () => resolve();
    channel.port2.postMessage(null);
  });
}
async function chunked(total, chunkSize) {
  let i = 0;
  let sum = 0;
  while (i < total) {
    const end = Math.min(i + chunkSize, total);
    while (i < end) {
      sum += i;
      i++;
    }
    if (i < total) await yieldToMain();
  }
  return sum;
}
chunked(1e7, 1e6).then((sum) => console.log('done', sum));

postMessage enqueues a macrotask immediately without delay clamping, so the next chunk runs as soon as the browser has finished its render and any earlier pending tasks. React's scheduler used this pattern before scheduler.postTask was widely available. Production code should reuse a single MessageChannel instance instead of creating one per yield.
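
One possible shape of that reuse, assuming yields are awaited one at a time or resolved in FIFO order:

// A single channel shared by every yield; each postMessage resolves the oldest pending promise.
const channel = new MessageChannel();
const pendingResolvers = [];
channel.port1.onmessage = () => pendingResolvers.shift()();
function yieldToMain() {
  return new Promise((resolve) => {
    pendingResolvers.push(resolve);
    channel.port2.postMessage(null);
  });
}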

A comparison of the available yielding mechanisms:

| Mechanism | Schedule type | Yields to render? | Notes |
| --- | --- | --- | --- |
| queueMicrotask / Promise.then | Microtask | No | Drains before render — used for sequencing, not yielding |
| setTimeout(fn, 0) | Macrotask | Yes | Clamped to ≥ 4 ms after 5 nested calls per the HTML specification |
| MessageChannel.postMessage | Macrotask | Yes | No clamp; ~ 0 ms in practice |
| scheduler.postTask(fn, { priority }) | Macrotask | Yes | Built-in priority levels (user-blocking, user-visible, background); Chromium-only |
| scheduler.yield() | Macrotask | Yes | Returns a promise that resolves on the next yield; preserves task continuation priority; Chromium-only |

Microtasks cannot be used to yield. They drain before rendering, which is the behavior the microtask-starvation example demonstrates.

queueMicrotask compared to Promise.resolve().then

Both schedule a microtask in the same FIFO queue and run at the same point in the event loop. They differ in how exceptions thrown inside the callback are surfaced.

queueMicrotask(() => {
  throw new Error('from queueMicrotask');
});
Promise.resolve().then(() => {
  throw new Error('from promise.then');
});
setTimeout(() => console.log('timeout ran'), 0);
  • An exception thrown from a queueMicrotask callback is reported as an uncaught error and reaches window.onerror (in browsers) or uncaughtException (in Node).
  • An exception thrown from a .then callback causes the resulting promise to reject, surfacing through unhandledrejection if no downstream .catch handles it.

queueMicrotask is appropriate when the callback is conceptually standalone and its errors should behave like any other thrown exception. .then is appropriate when the callback is part of a promise chain where errors are expected to be caught downstream.

Differences across runtimes

The event loop is specified differently in browsers, Node.js, and Web Workers. Code that relies on precise scheduling may behave differently across these environments.

Browsers

Specified in the HTML Living Standard:

  • Each agent has its own event loop.
  • The macrotask queue is partitioned into multiple task sources (timers, network I/O, UI events, postMessage, and others). FIFO order is guaranteed within a source but not across sources — the user agent may choose any non-empty source each turn.
  • requestAnimationFrame callbacks run in a separate phase of the event loop, before the render step, rather than on the macrotask queue.
  • Rendering (style, layout, paint) occurs between macrotasks, not between microtasks. This is why a long microtask chain can freeze the UI.

Where requestAnimationFrame and requestIdleCallback fit

Within a single iteration of the event loop, the browser visits these phases in order:

  1. Run one task from a macrotask queue source.
  2. Drain the microtask queue (including any microtasks scheduled by step 1).
  3. If a render is due this turn, run all requestAnimationFrame callbacks queued for the next frame.
  4. Style, layout, and paint.
  5. During any remaining idle time before the next frame deadline, run requestIdleCallback callbacks.

requestAnimationFrame schedules work for the next paint, making it the right tool for visual updates synchronized with the display refresh rate (~ 16.7 ms per frame at 60 Hz). requestIdleCallback schedules work for the period after rendering and only if the browser has idle time, making it suitable for non-urgent background work.

console.log('1: sync');
queueMicrotask(() => console.log('2: microtask'));
setTimeout(() => console.log('3: macrotask'), 0);
requestAnimationFrame(() => console.log('4: rAF'));
typeof requestIdleCallback === 'function' &&
  requestIdleCallback(() => console.log('5: rIC'));
console.log('6: sync');
// Typical output: 1, 6, 2, 3, 4, 5
// `5: rIC` may run later or be deferred under load

setTimeout(fn, 0) typically logs before the requestAnimationFrame callback because the timer's macrotask is dispatched on the next event loop turn, while rAF waits for the next paint (often a few milliseconds later at typical refresh rates). requestIdleCallback runs only after the browser finishes rendering, which is why it appears last and is the only callback in this example that may be deferred.

Node.js

Built on libuv, with additional phases beyond what the HTML spec describes:

  • process.nextTick() has a higher-priority queue that drains before the promise microtask queue on every phase transition.
  • Macrotasks are divided into named phases: timers, pending callbacks, idle/prepare, poll (I/O), check (setImmediate), and close callbacks. Phases run in order, and microtasks together with nextTick drain between each.
  • At the top level of a script, the execution order of setImmediate(fn) and setTimeout(fn, 0) is not deterministic and depends on loop timing. Inside an I/O callback, setImmediate is guaranteed to run before setTimeout(fn, 0).

A comparison of the Node-specific scheduling primitives:

| Mechanism | Queue | Runs at | Notes |
| --- | --- | --- | --- |
| process.nextTick(fn) | nextTick queue | Every phase transition, before the promise microtask queue | Highest priority — recursive use can starve I/O |
| queueMicrotask(fn) / Promise.then | Microtask queue | Every phase transition, after the nextTick queue drains | Same semantics as in browsers |
| setImmediate(fn) | Check phase | Once per loop iteration, after the poll (I/O) phase | Use to defer work until after the current I/O cycle |
| setTimeout(fn, 0) | Timers phase | At the top of the next loop iteration once the delay elapses | Minimum delay clamped to 1 ms |

Observed ordering at the top of a script:

setImmediate(() => console.log('setImmediate'));
setTimeout(() => console.log('setTimeout'), 0);
Promise.resolve().then(() => console.log('promise'));
process.nextTick(() => console.log('nextTick'));
// Output:
// nextTick
// promise
// setTimeout (or setImmediate — order between these two is not guaranteed at the top level)
// setImmediate (or setTimeout)
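
Inside an I/O callback, the ordering is deterministic. A small sketch (it reads this script's own file via __filename just to trigger an I/O callback):

const fs = require('fs');
fs.readFile(__filename, () => {
  setTimeout(() => console.log('setTimeout inside I/O'), 0);
  setImmediate(() => console.log('setImmediate inside I/O'));
});
// Output:
// setImmediate inside I/O
// setTimeout inside I/O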

Web Workers

  • Each Worker has an independent event loop, with its own microtask and macrotask queues.
  • Messages posted via postMessage are enqueued as macrotasks on the receiving Worker's event loop (see the sketch after this list).
  • No access to requestAnimationFrame or the DOM.
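
A small sketch (the worker is built from a Blob URL so the example is self-contained): the worker evaluates its own script and drains its microtasks before the posted message's macrotask is handled.

const workerCode = `
  Promise.resolve().then(() => console.log('worker microtask'));
  self.onmessage = (event) => console.log('worker got:', event.data);
`;
const workerUrl = URL.createObjectURL(new Blob([workerCode], { type: 'text/javascript' }));
const worker = new Worker(workerUrl);
worker.postMessage('ping');
// Worker console output:
// worker microtask
// worker got: ping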

Common misconceptions

Several statements about the event loop appear frequently in explanations and in responses generated by large language models but are inaccurate:

  1. "setTimeout(fn, 0) runs immediately after the current synchronous code." Microtasks drain first. A Promise.resolve().then(fn) scheduled after a setTimeout(fn, 0) still runs before the timer callback.
  2. "await blocks the event loop." The await expression pauses the containing async function and returns control to the caller. The continuation is scheduled as a microtask and does not block other tasks.
  3. "Microtasks run on a separate thread." JavaScript execution is single-threaded. Microtasks run on the main thread, interleaved with macrotasks under the event loop's scheduling rules.
  4. "Promise.resolve() is synchronous when the promise is already resolved." The resolution is synchronous, but .then callbacks are always scheduled asynchronously as microtasks. This is a Promises/A+ requirement intended to guarantee consistent execution ordering.
  5. "process.nextTick is a microtask." In Node.js, nextTick has its own queue that drains before the promise microtask queue.
  6. "setTimeout(fn, 0) fires after 0 milliseconds." Both the HTML specification and Node.js clamp the minimum to a small non-zero value (4ms for nested timers in browsers; 1ms in Node). A delay of 0 is a lower bound, not a guarantee.


Explain event delegation in JavaScript

Topics: Web APIs, JavaScript

TL;DR

Event delegation is a technique in JavaScript where a single event listener is attached to a parent element instead of attaching event listeners to multiple child elements. When an event occurs on a child element, the event bubbles up the DOM tree, and the parent element's event listener handles the event based on the target element.

Event delegation provides the following benefits:

  • Improved performance: Attaching a single event listener is more efficient than attaching multiple event listeners to individual elements, especially for large or dynamic lists. This reduces memory usage and improves overall performance.
  • Simplified event handling: With event delegation, you only need to write the event handling logic once in the parent element's event listener. This makes the code more maintainable and easier to update.
  • Dynamic element support: Event delegation automatically handles events for dynamically added or removed elements within the parent element. There's no need to manually attach or remove event listeners when the DOM structure changes.

However, do note that:

  • It is important to identify the target element that triggered the event.
  • Not all events can be delegated because they are not bubbled. Non-bubbling events include: focus, blur, scroll, mouseenter, mouseleave, resize, etc.

Event delegation

Event delegation is a design pattern in JavaScript used to efficiently manage and handle events on multiple child elements by attaching a single event listener to a common ancestor element. This pattern is particularly valuable in scenarios where you have a large number of similar elements, such as list items, and want to optimize event handling.

How event delegation works

  1. Attach a listener to a common ancestor: Instead of attaching individual event listeners to each child element, you attach a single event listener to a common ancestor element higher in the DOM hierarchy.
  2. Event bubbling: When an event occurs on a child element, it bubbles up through the DOM tree to the common ancestor element. During this propagation, the event listener on the common ancestor can intercept and handle the event.
  3. Determine the target: Within the event listener, you can inspect the event object to identify the actual target of the event (the child element that triggered the event). You can use properties like event.target or event.currentTarget to determine which specific child element was interacted with.
  4. Perform action based on target: Based on the target element, you can perform the desired action or execute code specific to that element. This allows you to handle events for multiple child elements with a single event listener.

Benefits of event delegation

  1. Efficiency: Event delegation reduces the number of event listeners, improving memory usage and performance, especially when dealing with a large number of elements.
  2. Dynamic elements: It works seamlessly with dynamically added or removed child elements, as the common ancestor continues to listen for events on them.

Example

Here's a simple example:

// HTML:
// <ul id="item-list">
//   <li>Item 1</li>
//   <li>Item 2</li>
//   <li>Item 3</li>
// </ul>
const itemList = document.getElementById('item-list');
itemList.addEventListener('click', (event) => {
  if (event.target.tagName === 'LI') {
    console.log(`Clicked on ${event.target.textContent}`);
  }
});

In this example, a single click event listener is attached to the <ul> element. When a click event occurs on an <li> element, the event bubbles up to the <ul> element, where the event listener checks the target's tag name to identify whether a list item was clicked. It's crucial to check the identity of the event.target as there can be other kinds of elements in the DOM tree.

Use cases

Event delegation is commonly used in scenarios like:

Handling dynamic content in single-page applications

// HTML:
// <div id="button-container">
//   <button>Button 1</button>
//   <button>Button 2</button>
// </div>
// <button id="add-button">Add Button</button>
const buttonContainer = document.getElementById('button-container');
const addButton = document.getElementById('add-button');
buttonContainer.addEventListener('click', (event) => {
  if (event.target.tagName === 'BUTTON') {
    console.log(`Clicked on ${event.target.textContent}`);
  }
});
addButton.addEventListener('click', () => {
  const newButton = document.createElement('button');
  newButton.textContent = `Button ${buttonContainer.children.length + 1}`;
  buttonContainer.appendChild(newButton);
});

In this example, a click event listener is attached to the <div> container. When a new button is added dynamically and clicked, the event listener on the container handles the click event.

Simplifying code by avoiding the need to attach and remove event listeners for elements that change

// HTML:
// <form id="user-form">
//   <input type="text" name="username" placeholder="Username">
//   <input type="email" name="email" placeholder="Email">
//   <input type="password" name="password" placeholder="Password">
// </form>
const userForm = document.getElementById('user-form');
userForm.addEventListener('input', (event) => {
  const { name, value } = event.target;
  console.log(`Changed ${name}: ${value}`);
});

In this example, a single input event listener is attached to the form element. It can respond to input changes for all child input elements, simplifying the code by eliminating the need for individual listeners on each <input> element.

More real-world delegation patterns

Beyond the simple list-item example, three patterns show up constantly in production code.

Form-wide delegated change handler

A single change or input listener on the form catches every input update, which is useful for autosave, dirty-tracking, and validation:

// HTML: <form id="profile-form"> with many inputs/selects/textareas inside
const form = document.getElementById('profile-form');
form.addEventListener('input', (event) => {
  // Works for any <input>, <select>, or <textarea> the form contains,
  // even ones added later by the user.
  const field = event.target;
  console.log(`${field.name} changed to ${field.value}`);
  scheduleAutosave(field.name, field.value);
});

No matter how many fields the form has, or whether new fields are appended dynamically, only one listener is needed.

Data-table row actions

Modern data-table UIs commonly use a single click handler on the table that reads data-action from the clicked element to know what to do. This is delegation plus the data-attribute pattern:

// HTML rows look like:
// <tr>
//   <td>...</td>
//   <td>
//     <button data-action="edit" data-id="42">Edit</button>
//     <button data-action="delete" data-id="42">Delete</button>
//   </td>
// </tr>
document.querySelector('table').addEventListener('click', (event) => {
  const button = event.target.closest('[data-action]');
  if (!button) {
    return;
  }
  const { action, id } = button.dataset;
  if (action === 'edit') {
    openEditor(id);
  }
  if (action === 'delete') {
    confirmDelete(id);
  }
});

event.target.closest() is the workhorse here. It walks up from the click target to the nearest matching ancestor, which makes the handler robust against inner spans, icons, and styling wrappers.

Click-to-edit cells

A spreadsheet-style cell editor uses delegation to promote whichever cell was clicked into an editable input, without attaching one listener per cell:

document.querySelector('table').addEventListener('click', (event) => {
  const cell = event.target.closest('td.editable');
  if (!cell || cell.querySelector('input')) {
    return; // already editing
  }
  const original = cell.textContent;
  const input = document.createElement('input');
  input.value = original;
  cell.textContent = '';
  cell.appendChild(input);
  input.focus();
  input.addEventListener('blur', () => {
    cell.textContent = input.value;
    if (input.value !== original) saveCell(cell.dataset.id, input.value);
  });
});

Delegating non-bubbling events

The TL;DR notes that focus, blur, scroll, mouseenter, mouseleave, and resize do not bubble, so the obvious delegation pattern (one listener on the parent) does not work for them. Two solid workarounds:

Use the capture phase

Pass a third argument of true (or { capture: true }) to listen during the capture phase. The event is visible to ancestors on the way down to the target, even if it does not bubble back up:

document.body.innerHTML = `
  <div id="form">
    <input id="a" placeholder="A">
    <input id="b" placeholder="B">
  </div>
`;
document.getElementById('form').addEventListener(
  'focus',
  (event) => {
    console.log('focused:', event.target.id);
  },
  true, // capture: catch focus before it stops at the input
);
// In a real page, focus events fire automatically when the user clicks or
// tabs into an input. Here we dispatch them by hand so the demo prints the
// same sequence in this playground.
['a', 'b'].forEach((id) => {
  document.getElementById(id).dispatchEvent(new FocusEvent('focus'));
});
// Logs: focused: a, then focused: b

Use the bubbling siblings: focusin and focusout

focus does not bubble, but focusin does. The same is true for blur and focusout. The pointer events have similar pairs: mouseover and mouseout bubble, while mouseenter and mouseleave do not (note that mouseover and mouseout also fire when the pointer crosses descendant boundaries, so the semantics are slightly different). The bubbling variants exist specifically to support delegation:

document.body.innerHTML = `
  <form id="form">
    <input id="a" placeholder="A">
    <input id="b" placeholder="B">
  </form>
`;
const form = document.getElementById('form');
form.addEventListener('focusin', (event) =>
  console.log('focusin:', event.target.id),
);
form.addEventListener('focusout', (event) =>
  console.log('focusout:', event.target.id),
);
// In a real page, focusin and focusout fire automatically when the user
// moves between inputs. Here we dispatch them by hand to simulate the user
// moving focus from input A to input B inside this playground.
const a = document.getElementById('a');
const b = document.getElementById('b');
[
  [a, 'focusin'], // focusin: a
  [a, 'focusout'], // focusout: a
  [b, 'focusin'], // focusin: b
].forEach(([el, type]) => {
  el.dispatchEvent(new FocusEvent(type, { bubbles: true }));
});

For scroll, the bubbling-sibling approach does not exist. Capture-phase delegation technically works (capture-phase listeners on ancestors do receive scroll events from descendants), but it is rarely a good idea: scroll events fire many times per second, you would receive them for every nested scroller in the subtree, and there is no event.target filtering that beats just attaching a listener directly to the scrollable element. Use one direct listener instead. (window scroll has nothing higher to delegate to anyway.)
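
If you do need to react to scrolling, a minimal sketch of the direct-listener approach looks like this. The .results selector is a hypothetical scroll container, and the requestAnimationFrame guard limits the work to one update per frame:

const scroller = document.querySelector('.results'); // hypothetical scroll container
let ticking = false;
scroller.addEventListener(
  'scroll',
  () => {
    if (ticking) return;
    ticking = true;
    requestAnimationFrame(() => {
      console.log('scrollTop:', scroller.scrollTop);
      ticking = false;
    });
  },
  { passive: true }, // scroll cannot be canceled anyway; passive is a harmless hint
);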

Performance: when delegation actually helps

The common claim is "delegation is faster because there are fewer listeners." That is partly true but often overstated.

  • Listener count is rarely the bottleneck. On modern browsers, attaching 100 vs 10,000 click listeners is still sub-millisecond. Adding listeners is almost free, and dispatching a click is fast either way.
  • Memory is the meaningful win. Each direct listener creates a closure that holds references to its enclosing scope. With 10,000 rows, that is 10,000 closures kept alive, which can become noticeable. One delegated listener is one closure.
  • Dynamic content is the structural win. With direct listeners, you have to attach (and detach) listeners on every DOM mutation, which is easy to leak. Delegation just works for elements added later, and that is the main reason most code uses it.
  • Delegation is not always faster at runtime. A delegated handler runs event.target.closest(...) on every event, which is cheap but not free. For a small fixed number of elements with stable handlers, attaching directly is fine and arguably cleaner.

Use delegation for memory, dynamic content, and code simplicity, not because direct listeners are inherently slow.

Pitfalls

Event delegation comes with several pitfalls:

  • Incorrect target handling. Use event.target.closest(selector) rather than checking event.target.tagName directly. Clicks on inner elements (icons, spans) will otherwise miss the match.
  • Not all events bubble. focus, blur, scroll, mouseenter, mouseleave, and resize do not bubble. Use the capture phase or the bubbling alternatives (focusin, focusout, mouseover, mouseout) shown above.
  • stopPropagation() inside the tree breaks delegation. If a child handler calls event.stopPropagation(), the delegated handler at the ancestor never fires. This is a common source of "my handler doesn't run" bugs.
  • Handler complexity. Complex routing logic inside the root listener can become hard to maintain. Use small dispatch tables ({ edit: ..., delete: ... }) rather than long if/else chains, as in the sketch below.
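
As a rough sketch of that last point, the data-table handler from earlier could route through a lookup object instead of an if/else chain. Here openEditor, confirmDelete, and duplicateRow are assumed to exist elsewhere:

// Dispatch table: maps data-action values to handlers.
const actions = {
  edit: (id) => openEditor(id),
  delete: (id) => confirmDelete(id),
  duplicate: (id) => duplicateRow(id), // hypothetical additional action
};
document.querySelector('table').addEventListener('click', (event) => {
  const button = event.target.closest('[data-action]');
  if (!button) return;
  const handler = actions[button.dataset.action];
  if (handler) handler(button.dataset.id);
});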

Event delegation in JavaScript frameworks

In React, event handlers are attached to the React root's DOM container into which the React tree is rendered. Even though onClick is added to child elements, the actual event listeners are attached to the root DOM node, leveraging event delegation to optimize event handling and improve performance.

When an event occurs, React's event listener captures it and determines which React component rendered the target element based on its internal bookkeeping. React then dispatches the event to the appropriate component's event handler by calling the handler function with a synthetic event object. This synthetic event object wraps the native browser event, providing a consistent interface across different browsers and capturing information about the event.

By using event delegation, React avoids attaching individual event handlers to each component instance, which would create significant overhead, especially for large component trees. Instead, React leverages the browser's native event bubbling mechanism to capture events at the root and distribute them to the appropriate components.

Further reading

Explain how `this` works in JavaScript

Topics
JavaScript, OOP

TL;DR

There's no simple explanation for this; it is one of the most confusing concepts in JavaScript because its behavior differs from many other programming languages. The one-liner explanation of the this keyword is that it is a dynamic reference to the context in which a function is executed.

A longer explanation is that this follows these rules:

  1. If the new keyword is used when calling the function, meaning the function was used as a function constructor, the this inside the function is the newly-created object instance.
  2. If this is used in a class constructor, the this inside the constructor is the newly-created object instance.
  3. If apply(), call(), or bind() is used to call/create a function, this inside the function is the object that is passed in as the argument.
  4. If a function is called as a method (e.g. obj.method()) — this is the object that the function is a property of.
  5. If a function is invoked as a free function invocation, meaning it was invoked without any of the conditions present above, this is the global object. In the browser, the global object is the window object. If in strict mode ('use strict';), this will be undefined instead of the global object.
  6. If multiple of the above rules apply, the rule that appears earlier in this list wins and will set the this value.
  7. If the function is an ES2015 arrow function, it ignores all the rules above and receives the this value of its surrounding scope at the time it is created.

For an in-depth explanation, do check out Arnav Aggrawal's article on Medium.


this keyword

In JavaScript, this is a keyword that refers to the current execution context of a function or script. It's a fundamental concept in JavaScript, and understanding how this works is crucial for building robust and maintainable applications.

Used globally

In the global scope, this refers to the global object, which is the window object in a web browser or the global object in a Node.js environment.

console.log(this); // In a browser, this will log the window object (for non-strict mode).

Within a regular function call

When a function is called in the global context or as a standalone function, this refers to the global object (in non-strict mode) or undefined (in strict mode).

function showThis() {
console.log(this);
}
showThis(); // In non-strict mode: Window (global object). In strict mode: undefined.

Within a method call

When a function is called as a method of an object, this refers to the object that the method is called on.

const obj = {
name: 'John',
showThis: function () {
console.log(this);
},
};
obj.showThis(); // { name: 'John', showThis: ƒ }

Note that if you do the following, it is as good as a regular function call and not a method call. this has lost its context and no longer points to obj.

const obj = {
name: 'John',
showThis: function () {
console.log(this);
},
};
const showThisStandalone = obj.showThis;
showThisStandalone(); // In non-strict mode: Window (global object). In strict mode: undefined.

Within a function constructor

When a function is used as a constructor (called with the new keyword), this refers to the newly-created instance. In the following example, this refers to the Person object being created, and the name property is set on that object.

function Person(name) {
this.name = name;
}
const person = new Person('John');
console.log(person.name); // "John"

Within class constructor and methods

In ES2015 classes, this behaves as it does in object methods. It refers to the instance of the class.

class Person {
constructor(name) {
this.name = name;
}
showThis() {
console.log(this);
}
}
const person = new Person('John');
person.showThis(); // Person {name: 'John'}
const showThisStandalone = person.showThis;
showThisStandalone(); // `undefined` because in JavaScript class bodies, all methods are strict mode by default, even if you don't add 'use strict'

Explicitly binding this

You can use bind(), call(), or apply() to explicitly set the value of this for a function.

The call() and apply() methods allow you to explicitly set the value of this when calling the function.

function showThis() {
console.log(this);
}
const obj = { name: 'John' };
showThis.call(obj); // { name: 'John' }
showThis.apply(obj); // { name: 'John' }

The bind() method creates a new function with this bound to the specified value.

function showThis() {
console.log(this);
}
const obj = { name: 'John' };
const boundFunc = showThis.bind(obj);
boundFunc(); // { name: 'John' }

Within arrow functions

Arrow functions do not have their own this context. Instead, this is lexically scoped, which means it inherits the this value from its surrounding scope at the time the arrow function is defined.

In this example, this refers to the global object (window or global), because the arrow function is not bound to the person object.

const person = {
firstName: 'John',
sayHello: () => {
console.log(`Hello, my name is ${this.firstName}!`);
},
};
person.sayHello(); // "Hello, my name is undefined!"

In the following example, the this in the arrow function will be the this value of its enclosing context, so it depends on how showThis() is called.

const obj = {
name: 'John',
showThis: function () {
const arrowFunc = () => {
console.log(this);
};
arrowFunc();
},
};
obj.showThis(); // { name: 'John', showThis: ƒ }
const showThisStandalone = obj.showThis;
showThisStandalone(); // In non-strict mode: Window (global object). In strict mode: undefined.

Therefore, the this value in arrow functions cannot be set by bind(), apply() or call() methods, nor does it point to the current object in object methods.

const obj = {
name: 'Alice',
regularFunction: function () {
console.log('Regular function:', this.name);
},
arrowFunction: () => {
console.log('Arrow function:', this.name);
},
};
const anotherObj = {
name: 'Bob',
};
// Using call/apply/bind with a regular function
obj.regularFunction.call(anotherObj); // Regular function: Bob
obj.regularFunction.apply(anotherObj); // Regular function: Bob
const boundRegularFunction = obj.regularFunction.bind(anotherObj);
boundRegularFunction(); // Regular function: Bob
// Using call/apply/bind with an arrow function, `this` refers to the global scope and cannot be modified.
obj.arrowFunction.call(anotherObj); // Arrow function: window/undefined (depending if strict mode)
obj.arrowFunction.apply(anotherObj); // Arrow function: window/undefined (depending if strict mode)
const boundArrowFunction = obj.arrowFunction.bind(anotherObj);
boundArrowFunction(); // Arrow function: window/undefined (depending if strict mode)

Within event handlers

When a function is called as a DOM event handler, this refers to the element that triggered the event. In this example, this refers to the <button> element that was clicked.

<button id="my-button" onclick="console.log(this)">Click me</button>
<!-- Logs the button element -->

When setting an event handler using JavaScript, this also refers to the element that received the event.

document.getElementById('my-button').addEventListener('click', function () {
console.log(this); // Logs the button element
});

As mentioned above, ES2015 introduced arrow functions, which use the enclosing lexical scope. This is usually convenient, but it does prevent the caller from defining the this context via .call/.apply/.bind. One of the consequences is that DOM event handlers will not properly bind this in your event handler functions if you define the callback parameters to .addEventListener() using arrow functions.

document.getElementById('my-button').addEventListener('click', () => {
console.log(this); // Window / undefined (depending on whether strict mode) instead of the button element.
});

In summary, this in JavaScript refers to the current execution context of a function or script, and its value can change depending on the context in which it is used. Understanding how this works is essential for building robust and maintainable JavaScript applications.

Further reading

Describe the difference between a cookie, `sessionStorage` and `localStorage` in browsers

Topics
Web APIs, JavaScript

TL;DR

All of the following are mechanisms of storing data on the client, the user's browser in this case. localStorage and sessionStorage both implement the Web Storage API interface.

  • Cookies: Suitable for server-client communication, small storage capacity, can be persistent or session-based, domain-specific. Sent to the server on every request.
  • localStorage: Suitable for long-term storage, data persists even after the browser is closed, accessible across all tabs and windows of the same origin, highest storage capacity among the three.
  • sessionStorage: Suitable for temporary data within a single page session, data is cleared when the tab or window is closed, has a higher storage capacity compared to cookies.

Here's a table summarizing the 3 client storage mechanisms.

Property | Cookie | localStorage | sessionStorage
Initiator | Client or server. Server can use the Set-Cookie header | Client | Client
Lifespan | As specified | Until deleted | Until tab is closed
Persistent across browser sessions | Only if a future expiry date is set | Yes | No
Sent to server with every HTTP request | Yes, via the Cookie header | No | No
Total capacity | ~4KB per domain | ~5MB per origin | ~5MB per origin
Access | Across windows/tabs | Across windows/tabs | Same tab only
Security | JavaScript cannot access HttpOnly cookies | None | None

Storage on the web

Cookies, localStorage, and sessionStorage, are all storage mechanisms on the client (web browser). It is useful to store data on the client for client-only state like access tokens, themes, personalized layouts, so that users can have a consistent experience on a website across tabs and usage sessions.

These client-side storage mechanisms have the following common properties:

  • Stored on the client: client-side JavaScript can read and modify the values (except for HttpOnly cookies).
  • Key-value based storage.
  • They are only able to store values as strings. Non-strings will have to be serialized into a string (e.g. JSON.stringify()) in order to be stored.

Use cases for each storage mechanism

Since cookies have a relatively low maximum size, it is not advisable to store all your client-side data within cookies. The distinguishing properties about cookies are that cookies are sent to the server on every HTTP request so the low maximum size is a feature that prevents your HTTP requests from being too large due to cookies. Automatic expiry of cookies is a useful feature as well.

With that in mind, the best kind of data to store within cookies is small pieces of data that need to be transmitted to the server, such as auth tokens, session IDs, analytics tracking IDs, GDPR cookie consent, language preferences that are important for authentication, authorization, and rendering on the server. These values are sometimes sensitive and can benefit from the HttpOnly, Secure, and Expires/Max-Age capabilities that cookies provide.

localStorage and sessionStorage both implement the Web Storage API interface. Web Storage has a generous total capacity of 5MB, so storage size is usually not a concern. The key difference is that values stored in Web Storage are not automatically sent along with HTTP requests.

While you can manually include values from Web Storage when making AJAX/fetch() requests, the browser does not include them in the initial request / first load of the page. Hence Web Storage should not be used to store data that is relied on by the server for the initial rendering of the page if server-side rendering is being used (typically authentication/authorization-related information). localStorage is most suitable for user preferences data that do not expire, like themes and layouts (if it is not important for the server to render the final layout). sessionStorage is most suitable for temporary data that only needs to be accessible within the current browsing session, such as form data (useful to preserve data during accidental reloads).
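
As a small sketch of the sessionStorage use case, a comment box could persist its draft across accidental reloads like this (the #comment element is hypothetical):

const textarea = document.querySelector('#comment'); // hypothetical element
// Restore any draft saved earlier in this tab's session.
textarea.value = sessionStorage.getItem('comment-draft') ?? '';
textarea.addEventListener('input', () => {
  sessionStorage.setItem('comment-draft', textarea.value);
});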

The following sections dive deeper into each client storage mechanism.

Cookies

Cookies are used to store small pieces of data on the client side that can be sent back to the server with every HTTP request.

  • Storage capacity: Limited to around 4KB for all cookies.
  • Lifespan: Cookies can have a specific expiration date set using the Expires or Max-Age attributes. If no expiration date is set, the cookie is deleted when the browser is closed (session cookie).
  • Access: Cookies are domain-specific and can be shared across different pages and subdomains within the same domain.
  • Security: Cookies can be marked as HttpOnly to prevent access from JavaScript, reducing the risk of XSS attacks. They can also be secured with the Secure flag to ensure they are sent only when HTTPS is used.
// Set a cookie for the name/key `auth_token` with an expiry.
document.cookie =
'auth_token=abc123def; expires=Fri, 31 Dec 2024 23:59:59 GMT; path=/';
// Read all cookies. There's no way to read specific cookies using `document.cookie`.
// You have to parse the string yourself.
console.log(document.cookie); // auth_token=abc123def
// Delete the cookie with the name/key `auth_token` by setting an
// expiry date in the past. The value doesn't matter.
document.cookie = 'auth_token=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/';

It is a pain to read/write to cookies. document.cookie returns a single string containing all the key/value pairs delimited by ; and you have to parse the string yourself. The js-cookie npm library provides a simple and lightweight API for reading/writing cookies in JavaScript.
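
For illustration, reading and writing a cookie with js-cookie might look like this (assuming the library has been installed and imported):

import Cookies from 'js-cookie';

Cookies.set('theme', 'dark', { expires: 7, path: '/' }); // expires in 7 days
console.log(Cookies.get('theme')); // "dark"
Cookies.remove('theme', { path: '/' });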

A modern native way of accessing cookies is via the Cookie Store API which is only available on HTTPS pages.

// Set a cookie. More options are available too.
cookieStore.set('auth_token', 'abc123def');
// Async method to access a single cookie and do something with it.
cookieStore.get('auth_token').then(...);
// Async method to get all cookies.
cookieStore.getAll().then(...);
// Async method to delete a single cookie.
cookieStore.delete('auth_token').then(() =>
console.log('Cookie deleted')
);

The CookieStore API is relatively new and may not be supported in all browsers (supported in latest Chrome and Edge as of June 2024). Refer to caniuse.com for the latest compatibility.

localStorage

localStorage is used for storing data that persists even after the browser is closed and reopened. It is designed for long-term storage of data.

  • Storage capacity: Typically around 5MB per origin (varies by browser).
  • Lifespan: Data in localStorage persists until explicitly deleted by the user or the application.
  • Access: Data is accessible within all tabs and windows of the same origin.
  • Security: All JavaScript on the page has access to values within localStorage.
// Set a value in localStorage.
localStorage.setItem('key', 'value');
// Get a value from localStorage.
console.log(localStorage.getItem('key'));
// Remove a value from localStorage.
localStorage.removeItem('key');
// Clear all data in localStorage.
localStorage.clear();

sessionStorage

sessionStorage is used to store data for the duration of the page session. It is designed for temporary storage of data.

  • Storage Capacity: Typically around 5MB per origin (varies by browser).
  • Lifespan: Data in sessionStorage is cleared when the page session ends (i.e., when the browser or tab is closed). Reloading the page does not destroy data within sessionStorage.
  • Access: Data is accessible only within the current tab (or browsing context). Different tabs share different sessionStorage objects even if they belong to the same browser window. In this context, window refers to a browser window that can contain multiple tabs.
  • Security: All JavaScript on the same page has access to values within sessionStorage for that page.
// Set a value in sessionStorage.
sessionStorage.setItem('key', 'value');
// Get a value from sessionStorage.
console.log(sessionStorage.getItem('key'));
// Remove a value from sessionStorage.
sessionStorage.removeItem('key');
// Clear all data in sessionStorage.
sessionStorage.clear();

Security

A side-by-side feature comparison hides the most important practical difference between these three: how each one behaves under XSS.

Capability | Cookie | localStorage / sessionStorage
Reachable from arbitrary JS on the origin | Only if not HttpOnly | Always
HttpOnly flag (cannot be read from JS) | Yes | No equivalent
Secure flag (HTTPS-only transport) | Yes | N/A (values never leave the client unless your code sends them)
SameSite flag (CSRF defense: Strict, Lax, None) | Yes | N/A
Partitioned cookies (CHIPS, isolated per top-level site) | Yes (modern browsers) | N/A
Survives an XSS exploit | An HttpOnly cookie does | No; all values are exfiltrable

The practical takeaways:

  • Auth and session tokens belong in HttpOnly; Secure; SameSite=Lax cookies, not in localStorage. Any XSS bug in any script on the origin can read everything in localStorage. An HttpOnly cookie cannot be read by JavaScript at all, so the attacker would have to make requests through the user's browser, which CSRF defenses (SameSite, double-submit tokens) are designed to block. A cookie-setting sketch follows this list.
  • CSRF vs XSS is the trade-off. Cookies need CSRF protection; localStorage tokens (with the Authorization-header pattern) do not. The cookie plus SameSite=Lax combination is generally considered safer because it survives XSS, and SameSite=Lax blocks the most common CSRF cases by default.
  • localStorage is fine for non-sensitive client state: theme preferences, layout settings, draft text, recently viewed items.
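
To make the first takeaway concrete, here is a minimal sketch of a server issuing a session cookie with those flags, assuming an Express server; the session value is hypothetical and would normally come from your session store:

const express = require('express');
const app = express();

app.post('/login', (req, res) => {
  const sessionId = 'abc123'; // hypothetical: normally generated by a session store
  res.cookie('session', sessionId, {
    httpOnly: true, // not readable from JavaScript, so XSS cannot exfiltrate it
    secure: true, // only sent over HTTPS
    sameSite: 'lax', // blocks the most common cross-site request patterns
    maxAge: 7 * 24 * 60 * 60 * 1000, // 7 days, in milliseconds
  });
  // Produces a response header similar to:
  // Set-Cookie: session=abc123; Max-Age=604800; Path=/; HttpOnly; Secure; SameSite=Lax
  res.sendStatus(200);
});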

Real-world set, get, and remove

The three APIs look superficially similar but have meaningfully different ergonomics. Here is the same operation in each:

// localStorage: synchronous, string-only
localStorage.setItem('user', JSON.stringify({ id: 1, name: 'Ada' }));
const user = JSON.parse(localStorage.getItem('user') ?? 'null');
localStorage.removeItem('user');
// sessionStorage: same shape, scoped to the tab
sessionStorage.setItem('draft', 'hello');
const draft = sessionStorage.getItem('draft');
sessionStorage.removeItem('draft');
// Cookie: modern async API (where supported)
await cookieStore.set({
name: 'session',
value: 'abc123',
sameSite: 'Lax',
secure: true,
expires: Date.now() + 7 * 24 * 60 * 60 * 1000,
});
const session = await cookieStore.get('session');
await cookieStore.delete('session');
// Cookie: legacy `document.cookie` API (universally supported)
document.cookie =
'session=abc123; Path=/; Max-Age=604800; SameSite=Lax; Secure';
// Reading a specific cookie still requires parsing the string yourself:
const value = document.cookie
.split('; ')
.find((row) => row.startsWith('session='))
?.split('=')[1];

A few common mistakes:

  • localStorage and sessionStorage only store strings. localStorage.setItem('count', 0) stores "0" and localStorage.getItem('count') returns "0" (a string). Always serialize and deserialize explicitly.
  • Assigning document.cookie = '...' does not clear other cookies. Each assignment sets or updates a single cookie. To delete one, set it again with Max-Age=0 or an expires date in the past.
  • cookieStore.set is async; localStorage.setItem is synchronous. Mixing them in the same logic without awaiting the cookie call leads to ordering bugs.

Beyond these three: IndexedDB and Cache Storage

Modern apps frequently need more than what these three APIs offer. Two more are worth knowing, along with a rule of thumb for choosing between them:

  • IndexedDB: an in-browser, asynchronous, transactional database. Use it for large structured data (offline app state, large user-generated content, search indexes), MBs to GBs of storage, and queryable data. Wrappers like Dexie.js and idb make the API more pleasant. A raw-API sketch follows this list.
  • Cache Storage (caches): paired with Service Workers, this stores HTTP request/response pairs for offline-capable apps and PWAs. It is not a general-purpose key-value store; it is specifically for caching network responses.
  • localStorage is for simple key-value config only. If you find yourself JSON-stringifying complex nested data into localStorage, IndexedDB is usually a better fit.
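
As a rough sketch of the raw IndexedDB API, storing and reading one record looks like this (the app-db and settings names are arbitrary; wrappers like idb hide most of this ceremony):

const request = indexedDB.open('app-db', 1);
request.onupgradeneeded = () => {
  // Runs when the database is created or its version number increases.
  request.result.createObjectStore('settings', { keyPath: 'key' });
};
request.onsuccess = () => {
  const db = request.result;
  const tx = db.transaction('settings', 'readwrite');
  tx.objectStore('settings').put({ key: 'theme', value: 'dark' });
  tx.oncomplete = () => {
    const getRequest = db
      .transaction('settings', 'readonly')
      .objectStore('settings')
      .get('theme');
    getRequest.onsuccess = () => {
      console.log(getRequest.result); // { key: 'theme', value: 'dark' }
    };
  };
};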

Notes

There are also other client-side storage mechanisms like IndexedDB which is more powerful than the above-mentioned technologies but more complicated to use.

References

Describe the difference between `<script>`, `<script async>` and `<script defer>`

Topics
HTML, JavaScript

TL;DR

All of these ways (<script>, <script async>, and <script defer>) are used to load and execute JavaScript files in an HTML document, but they differ in how the browser handles loading and execution of the script:

  • <script> is the default way of including JavaScript. The browser blocks HTML parsing while the script is being downloaded and executed. The browser will not continue rendering the page until the script has finished executing.
  • <script async> downloads the script asynchronously, in parallel with parsing the HTML. Executes the script as soon as it is available, potentially interrupting the HTML parsing. Multiple <script async> tags do not wait for each other and execute in no particular order.
  • <script defer> downloads the script asynchronously, in parallel with parsing the HTML. However, the execution of the script is deferred until HTML parsing is complete, in the order they appear in the HTML.

Here's a table summarizing the 4 ways of loading <script>s in an HTML document. Modern apps almost always use modules, which deserve their own row.

Feature | <script> | <script async> | <script defer> | <script type="module">
Parsing behavior | Blocks HTML parsing | Downloads in parallel; execution still blocks parsing | Downloads in parallel; execution deferred until after parsing | Downloads in parallel; execution deferred until after parsing
Execution order | In order of appearance | Not guaranteed | In order of appearance | In order of appearance, with each script's import dependencies resolved first
DOM dependency | No | No | Yes (waits for DOM) | Yes (waits for DOM)

What <script> tags are for

<script> tags are used to include JavaScript on a web page. The async and defer attributes are used to change how/when the loading and execution of the script happens.

<script>

For normal <script> tags without any async or defer, when they are encountered, HTML parsing is blocked, the script is fetched and executed immediately. HTML parsing resumes after the script is executed. This can block rendering of the page if the script is large.

Use <script> for critical scripts that the page relies on to render properly.

<!doctype html>
<html>
<head>
<title>Regular Script</title>
</head>
<body>
<!-- Content before the script -->
<h1>Regular Script Example</h1>
<p>This content will be rendered before the script executes.</p>
<!-- Regular script -->
<script src="regular.js"></script>
<!-- Content after the script -->
<p>This content will be rendered after the script executes.</p>
</body>
</html>

<script async>

In <script async>, the browser downloads the script file asynchronously (in parallel with HTML parsing) and executes it as soon as it is available (potentially before HTML parsing completes). The execution will not necessarily be executed in the order in which it appears in the HTML document. This can improve perceived performance because the browser doesn't wait for the script to download before continuing to render the page.

Use <script async> when the script is independent of any other scripts on the page, for example, analytics and ads scripts.

<!doctype html>
<html>
<head>
<title>Async Script</title>
</head>
<body>
<!-- Content before the script -->
<h1>Async Script Example</h1>
<p>This content will be rendered before the async script executes.</p>
<!-- Async script -->
<script async src="async.js"></script>
<!-- Content after the script -->
<p>
This content may be rendered before or after the async script executes.
</p>
</body>
</html>

<script defer>

Similar to <script async>, <script defer> also downloads the script in parallel to HTML parsing, but the script is only executed when the document has been fully parsed and before firing DOMContentLoaded. If there are multiple of them, each deferred script is executed in the order they appear in the HTML document.

If a script relies on a fully-parsed DOM, the defer attribute will be useful in ensuring that the HTML is fully parsed before executing.

<!doctype html>
<html>
<head>
<title>Deferred Script</title>
</head>
<body>
<!-- Content before the script -->
<h1>Deferred Script Example</h1>
<p>This content will be rendered before the deferred script executes.</p>
<!-- Deferred script -->
<script defer src="deferred.js"></script>
<!-- Content after the script -->
<p>This content will be rendered before the deferred script executes.</p>
</body>
</html>

<script type="module">

Module scripts are the standard entry point for projects built with Vite (and many other modern bundlers). Next.js doesn't always emit type="module" for its own runtime, but most application code authored as ES modules ends up running through one of these tags. They behave like defer scripts with two important additions: dependencies declared with import are loaded and executed in the right order, and module code is strict by default.

<script type="module" src="/src/main.js"></script>

Behavior:

  • Parsing: deferred. HTML parsing continues; the script does not block.
  • Execution: runs after the document has finished parsing. Across multiple <script type="module"> tags in the same document, execution follows document order, but each script's import dependencies are resolved and executed first.
  • DOM ready: the DOM is parsed before the module runs.
  • Strict mode: enforced automatically; no 'use strict' directive needed.
  • CORS: module scripts are always fetched with CORS, so the server must send Access-Control-Allow-Origin for cross-origin loads. The crossorigin attribute itself is not required for the module to load — it only controls whether credentials (cookies, HTTP auth) are sent on cross-origin requests (crossorigin or crossorigin="anonymous" omits them; crossorigin="use-credentials" sends them). Adding it is still recommended for explicit credentials handling and for full error details in error event handlers.

Use <script type="module"> for new front-end code. If you have an independent module-script entry point and document order does not matter, combine with async:

<script async type="module" src="/analytics-module.js"></script>

Which to use: a decision matrix

Script type | Use for
<script> (no attrs) | Critical inline scripts that must run synchronously before the next HTML element parses.
<script async> | Independent third-party scripts where order does not matter (analytics, ads, monitoring beacons).
<script defer> | Classic (non-module) app scripts where order matters and the DOM should be ready.
<script type="module"> | ES module entry points. The default for Vite, Next.js client code, and any new project.
<script async type="module"> | Module scripts where order does not matter. Most apps want default module behavior instead.

How modern frameworks load scripts

It also helps to know what the tools you use actually generate.

  • Vite: emits <script type="module" crossorigin src="..."> for the entry chunk in production. Dynamic import() calls become <link rel="modulepreload"> hints plus lazy module fetches.
  • Next.js: provides a built-in <Script> component with strategy props such as beforeInteractive (loaded and executed before page hydration), afterInteractive (default, like defer), lazyOnload (after the page is idle), and worker (offloaded to Partytown). A usage sketch follows this list.
  • CDN-injected scripts (Google Analytics, Plausible, Sentry, etc.) are almost always recommended as async, because they are independent. Loading them with defer or no attribute slows down the page for no reason.
  • CRA and older webpack apps typically emit <script defer src="..."></script> for the runtime and chunk entries. CRA itself is deprecated, and new projects should use Vite or Next.js.
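
For illustration, the Next.js <Script> strategies mentioned above might be used like this in a page component (the URLs are placeholders and this is only a sketch, not a complete app):

import Script from 'next/script';

export default function Page() {
  return (
    <>
      <h1>Dashboard</h1>
      {/* Independent analytics: load after hydration without blocking it */}
      <Script src="https://example.com/analytics.js" strategy="afterInteractive" />
      {/* Low-priority widget: wait until the browser is idle */}
      <Script src="https://example.com/chat-widget.js" strategy="lazyOnload" />
    </>
  );
}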

Common bugs from the wrong attribute choice

  • Analytics with defer instead of async. defer waits for HTML parsing, so a third-party tag at the top of the page artificially extends DOMContentLoaded. Use async for any independent third-party tag.
  • App entry as async script. If app.js and vendor.js are loaded with async, they can execute in any order. app.js may run before vendor.js finishes, which throws ReferenceError for the missing globals. Use defer (or modules) for app scripts.
  • document.write inside a defer or async script. Browsers ignore document.write() calls from async or deferred scripts with a console warning: "A call to document.write() from an asynchronously-loaded external script was ignored." The script would need to be a regular blocking <script> for the call to take effect, though document.write should be avoided in any new code.
  • Module script in a <script> tag without type="module". Top-level import statements throw SyntaxError. Either set type="module" or use a bundler.
  • Cross-origin module served without CORS headers. Module scripts are always fetched with CORS, so a cross-origin URL like <script type="module" src="https://cdn.example.com/lib.js"> will fail to load if the server doesn't send Access-Control-Allow-Origin. The fix is on the server side — adding crossorigin to the tag does not bypass this requirement (though it is still recommended for better error reporting and explicit credentials handling).

Notes

  • The async attribute should be used for scripts that are not critical to the initial rendering of the page and do not depend on each other, while the defer attribute should be used for scripts that depend on, or are depended upon by, other scripts.
  • The async and defer attributes are ignored for inline scripts (scripts with no src attribute).
  • <script>s with defer or async that contain document.write() will be ignored with a message like "A call to document.write() from an asynchronously-loaded external script was ignored".
  • Even though async and defer help to make script downloading asynchronous, the scripts are still eventually executed on the main thread. If these scripts are computationally intensive, it can result in laggy/frozen UI. Partytown is a library that helps relocate script executions into a web worker and off the main thread, which is great for third-party scripts where you do not have control over the code.

Further reading

What's the difference between a JavaScript variable that is: `null`, `undefined` or undeclared?

How would you go about checking for any of these states?
Topics
JavaScript

TL;DR

Trait | null | undefined | Undeclared
Meaning | Explicitly set by the developer to indicate that a variable has no value | Variable has been declared but not assigned a value | Variable has not been declared at all
Type (via typeof operator) | 'object' | 'undefined' | 'undefined'
Equality comparison | null == undefined is true | undefined == null is true | Throws a ReferenceError

Undeclared

Undeclared variables are created when you assign a value to an identifier that was not previously declared using var, let or const. Undeclared variables will be defined globally, outside of the current scope. In strict mode, a ReferenceError will be thrown when you try to assign to an undeclared variable. Undeclared variables are bad in the same way that global variables are bad. Avoid them at all costs! To check whether an identifier has been declared, wrap its usage in a try/catch block.

function foo() {
x = 1; // Throws a ReferenceError in strict mode
}
foo();
console.log(x); // 1 (if not in strict mode)
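
The try/catch check mentioned above can look like the following sketch, where maybeUndeclared is a hypothetical identifier that was never declared anywhere:

let value;
try {
  value = maybeUndeclared; // throws because the identifier was never declared
} catch (err) {
  console.log(err instanceof ReferenceError); // true
  value = 'fallback';
}
console.log(value); // "fallback"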

Using the typeof operator on undeclared variables will give 'undefined'.

console.log(typeof y === 'undefined'); // true

undefined

A variable that is undefined is a variable that has been declared, but not assigned a value. It is of type undefined. If a function does not return a value, and its result is assigned to a variable, that variable will also have the value undefined. To check for it, compare using the strict equality (===) operator or typeof which will give the 'undefined' string. Note that you should not be using the loose equality operator (==) to check, as it will also return true if the value is null.

let foo;
console.log(foo); // undefined
console.log(foo === undefined); // true
console.log(typeof foo === 'undefined'); // true
console.log(foo == null); // true. Wrong, don't use this to check if a value is undefined!
function bar() {} // Returns undefined if there is nothing returned.
let baz = bar();
console.log(baz); // undefined

null

A variable that is null will have been explicitly assigned to the null value. It represents no value and is different from undefined in the sense that it has been explicitly assigned. To check for null, simply compare using the strict equality operator. Note that like the above, you should not be using the loose equality operator (==) to check, as it will also return true if the value is undefined.

const foo = null;
console.log(foo === null); // true
console.log(typeof foo === 'object'); // true
console.log(foo == undefined); // true. Wrong, don't use this to check if a value is null!

Notes

  • As a good habit, never leave your variables undeclared or unassigned. Explicitly assign null to them after declaring if you don't intend to use them yet.
  • Always explicitly declare variables before using them to prevent errors.
  • Using some static analysis tooling in your workflow (e.g. ESLint, TypeScript Compiler), will enable checks that you are not referencing undeclared variables.

Practice

Practice implementing type utilities that check for null and undefined on GreatFrontEnd.

Further Reading

What's the difference between `.call` and `.apply` in JavaScript?

Topics
JavaScript

TL;DR

.call and .apply are both used to invoke functions with a specific this context and arguments. The primary difference lies in how they accept arguments:

  • .call(thisArg, arg1, arg2, ...): Takes arguments individually.
  • .apply(thisArg, [argsArray]): Takes arguments as an array.

Assuming we have a function add, the function can be invoked using .call and .apply in the following manner:

function add(a, b) {
return a + b;
}
console.log(add.call(null, 1, 2)); // 3
console.log(add.apply(null, [1, 2])); // 3

Call vs Apply

Both .call and .apply are used to invoke functions, and the first parameter will be used as the value of this within the function. However, .call takes in comma-separated arguments as the next arguments, while .apply takes in an array of arguments as the next argument.

An easy way to remember this is C for call and comma-separated and A for apply and an array of arguments.

function add(a, b) {
return a + b;
}
console.log(add.call(null, 1, 2)); // 3
console.log(add.apply(null, [1, 2])); // 3

With ES6 syntax, we can invoke call using an array along with the spread operator for the arguments.

function add(a, b) {
return a + b;
}
console.log(add.call(null, ...[1, 2])); // 3

Use cases

Context management

.call and .apply can set the this context explicitly when invoking methods on different objects.

const person = {
name: 'John',
greet() {
console.log(`Hello, my name is ${this.name}`);
},
};
const anotherPerson = { name: 'Alice' };
person.greet.call(anotherPerson); // Hello, my name is Alice
person.greet.apply(anotherPerson); // Hello, my name is Alice

Function borrowing

Both .call and .apply allow borrowing methods from one object and using them in the context of another. This is useful when passing functions as arguments (callbacks) and the original this context is lost. .call and .apply allow the function to be invoked with the intended this value.

function greet() {
console.log(`Hello, my name is ${this.name}`);
}
const person1 = { name: 'John' };
const person2 = { name: 'Alice' };
greet.call(person1); // Hello, my name is John
greet.apply(person2); // Hello, my name is Alice

Alternative syntax to call methods on objects

.apply can also be used to call built-in methods on objects by passing the target object as the first argument, followed by the remaining arguments as an array.

const arr1 = [1, 2, 3];
const arr2 = [4, 5, 6];
Array.prototype.push.apply(arr1, arr2); // Same as arr1.push(4, 5, 6)
console.log(arr1); // [1, 2, 3, 4, 5, 6]

Deconstructing the above:

  1. The first object, arr1 will be used as the this value.
  2. .push() is called on arr1 using arr2 as an array of arguments because it's using .apply().
  3. Array.prototype.push.apply(arr1, arr2) is equivalent to arr1.push(...arr2).

It may not be obvious, but Array.prototype.push.apply(arr1, arr2) mutates arr1. It's clearer to call methods using the OOP-centric way instead where possible.

Follow-Up Questions

  • How do .call and .apply differ from Function.prototype.bind?
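
A quick sketch of the distinction: .call and .apply invoke the function immediately, while .bind returns a new function to be invoked later.

function greet(greeting) {
  console.log(`${greeting}, I am ${this.name}`);
}
const person = { name: 'John' };
greet.call(person, 'Hello'); // invoked immediately: "Hello, I am John"
greet.apply(person, ['Hi']); // invoked immediately: "Hi, I am John"
const greetJohn = greet.bind(person, 'Hey'); // nothing invoked yet
greetJohn(); // invoked later: "Hey, I am John"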

Practice

Practice implementing your own Function.prototype.call method and Function.prototype.apply method on GreatFrontEnd.

Further Reading

Explain `Function.prototype.bind` in JavaScript

Topics
JavaScript, OOP

TL;DR

Function.prototype.bind is a method in JavaScript that allows you to create a new function with a specific this value and optional initial arguments. Its primary purpose is to:

  • Binding this value to preserve context: The primary purpose of bind is to bind the this value of a function to a specific object. When you call func.bind(thisArg), it creates a new function with the same body as func, but with this permanently bound to thisArg.
  • Partial application of arguments: bind also allows you to pre-specify arguments for the new function. Any arguments passed to bind after thisArg will be prepended to the arguments list when the new function is called.
  • Method borrowing: bind allows you to borrow methods from one object and apply them to another object, even if they were not originally designed to work with that object.

The bind method is particularly useful in scenarios where you need to ensure that a function is called with a specific this context, such as in event handlers, callbacks, or method borrowing.


Function.prototype.bind

Function.prototype.bind allows you to create a new function with a specific this context and, optionally, preset arguments. bind() is most useful for preserving the value of this in methods of classes that you want to pass into other functions.

bind was frequently used on legacy React class component methods which were not defined using arrow functions.

const john = {
age: 42,
getAge: function () {
return this.age;
},
};
console.log(john.getAge()); // 42
const unboundGetAge = john.getAge;
console.log(unboundGetAge()); // undefined
const boundGetAge = john.getAge.bind(john);
console.log(boundGetAge()); // 42
const mary = { age: 21 };
const boundGetAgeMary = john.getAge.bind(mary);
console.log(boundGetAgeMary()); // 21

In the example above, when the getAge method is called without a calling object (as unboundGetAge), the value is undefined because the value of this within getAge() becomes the global object. boundGetAge() has its this bound to john, hence it is able to obtain the age of john.

We can even use getAge on another object which is not john! boundGetAgeMary returns the age of mary.

Use cases

Here are some common scenarios where bind is frequently used:

Preserving context and fixing the this value in callbacks

When you pass a function as a callback, the this value inside the function can be unpredictable because it is determined by the execution context. Using bind() helps ensure that the correct this value is maintained.

class Person {
constructor(firstName) {
this.firstName = firstName;
}
greet() {
console.log(`Hello, my name is ${this.firstName}`);
}
}
const john = new Person('John');
// Without bind(), `this` inside the callback will be the global object
setTimeout(john.greet, 1000); // Output: "Hello, my name is undefined"
// Using bind() to fix the `this` value
setTimeout(john.greet.bind(john), 2000); // Output: "Hello, my name is John"

You can also use arrow functions to define class methods for this purpose instead of using bind. Arrow functions have their this value bound to the lexical context.

class Person {
constructor(name) {
this.name = name;
}
greet = () => {
console.log(`Hello, my name is ${this.name}`);
};
}
const john = new Person('John Doe');
setTimeout(john.greet, 1000); // Output: "Hello, my name is John Doe"

Partial application of functions (currying)

bind can be used to create a new function with some arguments pre-set. This is known as partial application or currying.

function multiply(a, b) {
return a * b;
}
// Using bind() to create a new function with some arguments pre-set
const multiplyBy5 = multiply.bind(null, 5);
console.log(multiplyBy5(3)); // Output: 15

Method borrowing

bind allows you to borrow methods from one object and apply them to another object, even if they were not originally designed to work with that object. This can be handy when you need to reuse functionality across different objects.

const person = {
name: 'John',
greet: function () {
console.log(`Hello, ${this.name}!`);
},
};
const greetPerson = person.greet.bind({ name: 'Alice' });
greetPerson(); // Output: Hello, Alice!

Practice

Try implementing your own Function.prototype.bind() method on GreatFrontEnd.

Further Reading

What advantage is there for using the JavaScript arrow syntax for a method in a constructor?

Topics
JavaScript

TL;DR

The main advantage of using an arrow function as a method inside a constructor is that the value of this gets set at the time of the function creation and can't change after that. When the constructor is used to create a new object, this will always refer to that object.

For example, let's say we have a Person constructor that takes a first name as an argument and has two methods to console.log() that name, one as a regular function and one as an arrow function:

const Person = function (name) {
this.firstName = name;
this.sayName1 = function () {
console.log(this.firstName);
};
this.sayName2 = () => {
console.log(this.firstName);
};
};
const john = new Person('John');
const dave = new Person('Dave');
john.sayName1(); // John
john.sayName2(); // John
// The regular function can have its `this` value changed, but the arrow function cannot
john.sayName1.call(dave); // Dave (because `this` is now the dave object)
john.sayName2.call(dave); // John
john.sayName1.apply(dave); // Dave (because `this` is now the dave object)
john.sayName2.apply(dave); // John
john.sayName1.bind(dave)(); // Dave (because `this` is now the dave object)
john.sayName2.bind(dave)(); // John
const sayNameFromWindow1 = john.sayName1;
sayNameFromWindow1(); // undefined (because `this` is now the window object)
const sayNameFromWindow2 = john.sayName2;
sayNameFromWindow2(); // John

The main takeaway here is that this can be changed for a normal function, but this always stays the same for an arrow function. So even if you are passing around your arrow function to different parts of your application, you wouldn't have to worry about the value of this changing.


Arrow functions

Arrow functions were introduced in ES2015 and provide a concise way to write functions in JavaScript. One of the key features of an arrow function is that it lexically binds the this value, which means that it takes the this value from the enclosing scope.

Syntax

Arrow functions use the => syntax instead of the function keyword. The basic syntax is:

const myFunction = (arg1, arg2, ...argN) => {
// function body
};

If the function body has only one expression, you can omit the curly braces and the return keyword:

const myFunction = (arg1, arg2, ...argN) => expression;

Examples

// Arrow function with parameters
const multiply = (x, y) => x * y;
console.log(multiply(2, 3)); // Output: 6
// Arrow function with no parameters
const sayHello = () => 'Hello, World!';
console.log(sayHello()); // Output: 'Hello, World!'

Advantages

  • Concise: Arrow functions provide a more concise syntax, especially for short functions.
  • Implicit return: They have an implicit return for single-line functions.
  • Value of this is predictable: Arrow functions lexically bind the this value, inheriting it from the enclosing scope.

Limitations

Arrow functions cannot be used as constructors and will throw an error when used with the new keyword.

const Foo = () => {};
const foo = new Foo(); // TypeError: Foo is not a constructor

They also do not have their own arguments object; the arguments have to be obtained using rest parameter syntax (...) in the parameter list.

const arrowFunction = (...args) => {
console.log(arguments); // Throws a ReferenceError
console.log(args); // [1, 2, 3]
};
arrowFunction(1, 2, 3);

Since arrow functions do not have their own this, they are not suitable for defining methods in an object. Traditional function expressions or function declarations should be used instead.

const obj = {
value: 42,
getValue: () => this.value, // `this` does not refer to `obj`
};
console.log(obj.getValue()); // undefined

Why arrow functions are useful

One of the most notable features of arrow functions is their behavior with this. Unlike regular functions, arrow functions do not have their own this. Instead, they inherit this from the parent scope at the time they are defined. This makes arrow functions particularly useful for scenarios like event handlers, callbacks, and methods in classes.

Arrow functions inside function constructors

const Person = function (name) {
this.firstName = name;
this.sayName1 = function () {
console.log(this.firstName);
};
this.sayName2 = () => {
console.log(this.firstName);
};
};
const john = new Person('John');
const dave = new Person('Dave');
john.sayName1(); // John
john.sayName2(); // John
// The regular function can have its `this` value changed, but the arrow function cannot
john.sayName1.call(dave); // Dave (because `this` is now the dave object)
john.sayName2.call(dave); // John
john.sayName1.apply(dave); // Dave (because `this` is now the dave object)
john.sayName2.apply(dave); // John
john.sayName1.bind(dave)(); // Dave (because `this` is now the dave object)
john.sayName2.bind(dave)(); // John
const sayNameFromWindow1 = john.sayName1;
sayNameFromWindow1(); // undefined (because `this` is now the window object)
const sayNameFromWindow2 = john.sayName2;
sayNameFromWindow2(); // John

Arrow functions in event handlers

const button = document.getElementById('myButton');
button.addEventListener('click', function () {
console.log(this); // Output: Button
console.log(this === button); // Output: true
});
button.addEventListener('click', () => {
console.log(this); // Output: Window
console.log(this === window); // Output: true
});

This can be particularly helpful in React class components. If you define a class method for something such as a click handler using a normal function, and then you pass that click handler down into a child component as a prop, you will need to also bind this in the constructor of the parent component. If you instead use an arrow function, there is no need to bind this, as the method will automatically get its this value from its enclosing lexical context. See this article for an excellent demonstration and sample code.
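
As a minimal sketch (assuming a React class component, with hypothetical handler names), the two approaches compare like this:

import React from 'react';

class Counter extends React.Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 };
    // Regular method: must be bound here, or `this` is undefined when it is
    // passed down as a prop or callback.
    this.incrementBound = this.incrementBound.bind(this);
  }
  incrementBound() {
    this.setState((state) => ({ count: state.count + 1 }));
  }
  // Arrow class field: `this` is taken from the enclosing instance, so no
  // constructor binding is needed.
  incrementArrow = () => {
    this.setState((state) => ({ count: state.count + 1 }));
  };
  render() {
    return (
      <>
        <button onClick={this.incrementBound}>Bound: {this.state.count}</button>
        <button onClick={this.incrementArrow}>Arrow: {this.state.count}</button>
      </>
    );
  }
}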

Further reading

Explain how prototypal inheritance works in JavaScript

Topics
JavaScript, OOP

TL;DR

Prototypal inheritance in JavaScript is a way for objects to inherit properties and methods from other objects. Every JavaScript object has a special hidden property called [[Prototype]] (commonly accessed via __proto__ or using Object.getPrototypeOf()) that is a reference to another object, which is called the object's "prototype".

When a property is accessed on an object and if the property is not found on that object, the JavaScript engine looks at the object's __proto__, and the __proto__'s __proto__ and so on, until it finds the property defined on one of the __proto__s or until it reaches the end of the prototype chain.

This behavior simulates classical inheritance, but it is really more of delegation than inheritance.

Here's an example of prototypal inheritance:

// Parent object constructor.
function Animal(name) {
this.name = name;
}
// Add a method to the parent object's prototype.
Animal.prototype.makeSound = function () {
console.log('The ' + this.constructor.name + ' makes a sound.');
};
// Child object constructor.
function Dog(name) {
Animal.call(this, name); // Call the parent constructor.
}
// Set the child object's prototype to be the parent's prototype.
Object.setPrototypeOf(Dog.prototype, Animal.prototype);
// Add a method to the child object's prototype.
Dog.prototype.bark = function () {
console.log('Woof!');
};
// Create a new instance of Dog.
const bolt = new Dog('Bolt');
// Call methods on the child object.
console.log(bolt.name); // "Bolt"
bolt.makeSound(); // "The Dog makes a sound."
bolt.bark(); // "Woof!"

Things to note are:

  • .makeSound is not defined on Dog, so the JavaScript engine goes up the prototype chain and finds .makeSound on the inherited Animal.
  • Using Object.create() to build the inheritance chain is no longer recommended. Use Object.setPrototypeOf() instead.

Prototypal Inheritance in JavaScript

Prototypal inheritance is a feature in JavaScript used to create objects that inherit properties and methods from other objects. Instead of a class-based inheritance model, JavaScript uses a prototype-based model, where objects can directly inherit from other objects.

Key Concepts

  1. Prototypes: Every object in JavaScript has a prototype, which is another object. When you create an object using an object literal or a constructor function, the new object is linked to the prototype of its constructor function, or to Object.prototype if no prototype is specified. This is commonly referenced using __proto__ or [[Prototype]]. You can also get the prototype by using the built-in method Object.getPrototypeOf(), and you can set the prototype of an object via Object.setPrototypeOf().
// Define a constructor function
function Person(name, age) {
this.name = name;
this.age = age;
}
// Add a method to the prototype
Person.prototype.sayHello = function () {
console.log(`Hello, my name is ${this.name} and I am ${this.age} years old.`);
};
// Create a new object using the constructor function
let john = new Person('John', 30);
// The new object has access to the methods defined on the prototype
john.sayHello(); // "Hello, my name is John and I am 30 years old."
// The prototype of the new object is the prototype of the constructor function
console.log(john.__proto__ === Person.prototype); // true
// You can also get the prototype using Object.getPrototypeOf()
console.log(Object.getPrototypeOf(john) === Person.prototype); // true
// You can set the prototype of an object using Object.setPrototypeOf()
let newProto = {
sayGoodbye: function () {
console.log(`Goodbye, my name is ${this.name}`);
},
};
Object.setPrototypeOf(john, newProto);
// Now john has access to the methods defined on the new prototype
john.sayGoodbye(); // "Goodbye, my name is John"
// But no longer has access to the methods defined on the old prototype
console.log(john.sayHello); // undefined
  2. Prototype chain: When a property or method is accessed on an object, JavaScript first looks for it on the object itself. If it doesn't find it there, it looks at the object's prototype, and then the prototype's prototype, and so on, until it either finds the property or reaches the end of the chain (i.e., null).

  3. Constructor functions: JavaScript provides constructor functions to create objects. When a function is used as a constructor with the new keyword, the new object's prototype ([[Prototype]]) is set to the constructor's prototype property.

// Define a constructor function
function Animal(name) {
this.name = name;
}
// Add a method to the prototype
Animal.prototype.sayName = function () {
console.log(`My name is ${this.name}`);
};
// Define a new constructor function
function Dog(name, breed) {
Animal.call(this, name);
this.breed = breed;
}
// Set the prototype of Dog to inherit from Animal's prototype
Object.setPrototypeOf(Dog.prototype, Animal.prototype);
// Add a method to the Dog prototype
Dog.prototype.bark = function () {
console.log('Woof!');
};
// Create a new object using the Dog constructor function
let fido = new Dog('Fido', 'Labrador');
// The new object has access to the methods defined on its own prototype and the Animal prototype
fido.bark(); // "Woof!"
fido.sayName(); // "My name is Fido"
// If we try to access a method that doesn't exist on the Dog prototype or the Animal prototype, JavaScript will return undefined
console.log(fido.fly); // undefined
  4. Object.create(): This method creates a new object whose internal [[Prototype]] is directly linked to the specified prototype object. It's the most direct way to create an object that inherits from another specific object, without involving constructor functions. If you create an object via Object.create(null), it will not inherit any properties from Object.prototype. This means the object will not have any built-in properties or methods like toString() or hasOwnProperty().
// Define a prototype object
let proto = {
greet: function () {
console.log(`Hello, my name is ${this.name}`);
},
};
// Use `Object.create()` to create a new object with the specified prototype
let person = Object.create(proto);
person.name = 'John';
// The new object has access to the methods defined on the prototype
person.greet(); // "Hello, my name is John"
// Check if the object has a property
console.log(person.hasOwnProperty('name')); // true
// Create an object that does not inherit from Object.prototype
let animal = Object.create(null);
animal.name = 'Rocky';
// The new object does not have any built-in properties or methods
console.log(animal.toString); // undefined
console.log(animal.hasOwnProperty); // undefined
// But you can still add and access custom properties
animal.describe = function () {
console.log(`Name of the animal is ${this.name}`);
};
animal.describe(); // "Name of the animal is Rocky"

Difference between: `function Person(){}`, `const person = Person()`, and `const person = new Person()` in JavaScript?

Topics
JavaScript, OOP

TL;DR

  • function Person(){}: A function declaration in JavaScript. It can be used as a regular function or as a constructor.
  • const person = Person(): Calls Person as a regular function, not a constructor. If Person is intended to be a constructor, this will lead to unexpected behavior.
  • const person = new Person(): Creates a new instance of Person, correctly utilizing the constructor function to initialize the new object.

Aspect | function Person(){} | const person = Person() | const person = new Person()
Type | Function declaration | Function call | Constructor call
Usage | Defines a function | Invokes Person as a regular function | Creates a new instance of Person
Instance Creation | No instance created | No instance created | New instance created
Common Mistake | N/A | Misusing as constructor, leading to undefined | None (when used correctly)

Function declaration

function Person(){} is a standard function declaration in JavaScript. When written in PascalCase, it follows the convention for functions intended to be used as constructors.

function Person(name) {
this.name = name;
}

This code defines a function named Person that takes a parameter name and assigns it to the name property of the object created from this constructor function. When the this keyword is used in a constructor, it refers to the newly created object.

Function call

const person = Person() simply invokes the function's code. When you invoke Person as a regular function (i.e., without the new keyword), the function does not behave as a constructor. Instead, it executes its code and returns undefined if no return value is specified, and that gets assigned to the variable intended as the instance. Invoking it this way is a common mistake if the function is intended to be used as a constructor.

function Person(name) {
this.name = name;
}
const person = Person('John'); // Throws error in strict mode
console.log(person); // undefined
console.log(person.name); // Uncaught TypeError: Cannot read property 'name' of undefined

In this case, Person('John') does not create a new object. The person variable is assigned undefined because the Person function does not explicitly return a value. Attempting to access person.name throws an error because person is undefined.

Constructor call

const person = new Person() creates an instance of the Person object using the new operator, which inherits from Person.prototype. An alternative would be to use Object.create, such as: Object.create(Person.prototype) to create the object and Person.call(person, 'John') to initialize it.

function Person(name) {
this.name = name;
}
const person = new Person('John');
console.log(person); // Person { name: 'John' }
console.log(person.name); // 'John'
// Alternative
const person1 = Object.create(Person.prototype);
Person.call(person1, 'John');
console.log(person1); // Person { name: 'John' }
console.log(person1.name); // 'John'

In this case, new Person('John') creates a new object, and this within Person refers to this new object. The name property is correctly set on the new object. The person variable is assigned the new instance of Person with the name property set to 'John'. For the alternative object creation, Object.create(Person.prototype) creates a new object with Person.prototype as its prototype, and Person.call(person, 'John') initializes the object, setting the name property.

Follow-Up Questions

  • What are the differences between function constructors and ES6 class syntax?
  • What are some common use cases for Object.create?

Explain the differences on the usage of `foo` between `function foo() {}` and `var foo = function() {}` in JavaScript

Topics
JavaScript

TL;DR

function foo() {} is a function declaration while var foo = function() {} is a function expression. The key difference is that function declarations have their bodies hoisted but the bodies of function expressions are not (they have the same hoisting behavior as var-declared variables).

If you try to invoke a function expression before it is declared, you will get an Uncaught TypeError: XXX is not a function error.

Function declarations can be called in the enclosing scope even before they are declared.

foo(); // 'FOOOOO'
function foo() {
console.log('FOOOOO');
}

Function expressions if called before they are declared will result in an error.

foo(); // Uncaught TypeError: foo is not a function
var foo = function () {
console.log('FOOOOO');
};

Another key difference is in the scope of the function name. Function expressions can be named by defining a name after the function keyword and before the parentheses. However, when using named function expressions, the function name is only accessible within the function itself; trying to access it outside results in a ReferenceError.

const myFunc = function namedFunc() {
console.log(namedFunc); // Works
};
myFunc(); // Runs the function and logs the function reference
console.log(namedFunc); // ReferenceError: namedFunc is not defined

Note: The examples use var due to legacy reasons. Function expressions can be defined using let and const, and the key difference is in the hoisting behavior of those keywords.


Function declarations

A function declaration is a statement that defines a function with a name. It is typically used to declare a function that can be called multiple times throughout the enclosing scope.

function foo() {
console.log('FOOOOO');
}
foo(); // 'FOOOOO'

Function expressions

A function expression is an expression that defines a function and assigns it to a variable. It is often used when a function is needed only once or in a specific context.

var foo = function () {
console.log('FOOOOO');
};
foo(); // 'FOOOOO'

Note: The examples use var due to legacy reasons. Function expressions can be defined using let and const, and the key difference is in the hoisting behavior of those keywords.

Key differences

Hoisting

The key difference is that function declarations have their bodies hoisted but the bodies of function expressions are not (they have the same hoisting behavior as var-declared variables). For more explanation on hoisting, refer to the quiz question on hoisting. If you try to invoke a function expression before it is defined, you will get an Uncaught TypeError: XXX is not a function error.

Function declarations:

foo(); // 'FOOOOO'
function foo() {
console.log('FOOOOO');
}

Function expressions:

foo(); // Uncaught TypeError: foo is not a function
var foo = function () {
console.log('FOOOOO');
};

Name scope

Function expressions can be named by defining a name after the function keyword and before the parentheses. However, when using named function expressions, the function name is only accessible within the function itself; trying to access it outside results in a ReferenceError.

const myFunc = function namedFunc() {
console.log(namedFunc); // Works
};
myFunc(); // Runs the function and logs the function reference
console.log(namedFunc); // ReferenceError: namedFunc is not defined

When to use each

  • Function declarations:
    • When you want a function to be available throughout its enclosing scope, including code that runs before the declaration.
    • If a function is reusable and needs to be called multiple times.
  • Function expressions:
    • If a function is only needed once or in a specific context.
    • Use to limit the function's availability to subsequent code and keep the enclosing scope clean.

In general, function declarations are preferable because they avoid the mental overhead of working out whether the function has already been assigned at the point of the call. Cases where the function expression form is specifically required are relatively rare.

What's a typical use case for anonymous functions in JavaScript?

Topics
JavaScript

TL;DR

An anonymous function in JavaScript is a function that does not have any name associated with it. They are typically used as arguments to other functions or assigned to variables.

const arr = [-1, 0, 5, 6];
// The filter method is passed an anonymous function.
arr.filter((x) => x > 1); // [5, 6]

They are often used as arguments to other functions, known as higher-order functions, which can take functions as input and return a function as output. Anonymous functions can access variables from the outer scope, a concept known as closures, allowing them to "close over" and remember the environment in which they were created.

// Encapsulating Code
(function () {
// Some code here.
})();
// Callbacks
setTimeout(function () {
console.log('Hello world!');
}, 1000);
// Functional programming constructs
const arr = [1, 2, 3];
const double = arr.map(function (el) {
return el * 2;
});
console.log(double); // [2, 4, 6]

Anonymous functions

Anonymous functions provide a more concise way to define functions, especially for simple operations or callbacks. Besides that, they can also be used in the following scenarios.

Immediate execution

Anonymous functions are commonly used in Immediately Invoked Function Expressions (IIFEs) to encapsulate code within a local scope. This prevents variables declared within the function from leaking to the global scope and polluting the global namespace.

// This is an IIFE
(function () {
var x = 10;
console.log(x); // 10
})();
// x is not accessible here
console.log(typeof x); // undefined

In the above example, the IIFE creates a local scope for the variable x. As a result, x is not accessible outside the IIFE, thus preventing it from leaking into the global scope.

Callbacks

Anonymous functions can be used as callbacks that are used once and do not need to be used anywhere else. The code will seem more self-contained and readable when handlers are defined right inside the code calling them, rather than having to search elsewhere to find the function body.

setTimeout(() => {
console.log('Hello world!');
}, 1000);

Higher-order functions

They are commonly passed as arguments to functional programming constructs, whether built-in higher-order functions or utilities like Lodash (similar to callbacks). Higher-order functions take other functions as arguments or return them as results. Anonymous functions are often used with higher-order functions like map(), filter(), and reduce().

const arr = [1, 2, 3];
const double = arr.map((el) => {
return el * 2;
});
console.log(double); // [2, 4, 6]

Event handling

In React, anonymous functions are widely used for defining callback functions inline for handling events and passing callbacks as props.

function App() {
return <button onClick={() => console.log('Clicked!')}>Click Me</button>;
}

Follow-Up Questions

  • How do anonymous functions differ from named functions?
  • Can you explain the difference between arrow functions and anonymous functions?

What are the various ways to create objects in JavaScript?

Topics
JavaScript

TL;DR

Creating objects in JavaScript offers several methods:

  • Object literals ({}): Simplest and most popular approach. Define key-value pairs within curly braces.
  • Object() constructor: Use new Object() with dot notation to add properties.
  • Object.create(): Create new objects using existing objects as prototypes, inheriting properties and methods.
  • Constructor functions: Define blueprints for objects using functions, creating instances with new.
  • ES2015 classes: Structured syntax similar to other languages, using class and constructor keywords.

Objects in JavaScript

There are several methods for creating objects in JavaScript. Here are the various ways to do so:

Object literals ({})

This is the simplest and most popular way to create objects in JavaScript. It involves defining a collection of key-value pairs within curly braces ({}). It can be used when you need to create a single object with a fixed set of properties.

const person = {
firstName: 'John',
lastName: 'Doe',
age: 50,
eyeColor: 'blue',
};
console.log(person); // {firstName: "John", lastName: "Doe", age: 50, eyeColor: "blue"}

Object() constructor

This method involves using the new keyword with the built-in Object constructor to create an object. You can then add properties to the object using dot notation. It can be used when you need to create an object from a primitive value or to create an empty object.

const person = new Object();
person.firstName = 'John';
person.lastName = 'Doe';
console.log(person); // {firstName: "John", lastName: "Doe"}
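
To illustrate the point above about creating objects from primitive values, here is a short sketch of how the Object() constructor wraps primitives in their corresponding wrapper objects:

const wrappedNumber = new Object(42); // Number wrapper object
console.log(typeof wrappedNumber); // "object"
console.log(wrappedNumber.valueOf()); // 42
const wrappedString = new Object('hello'); // String wrapper object
console.log(wrappedString instanceof String); // true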

Object.create() method

This method allows you to create a new object using an existing object as a prototype. The new object inherits properties and methods from the prototype object. It can be used when you need to create a new object with a specific prototype.

// Object.create() Method
const personPrototype = {
greet() {
console.log(
`Hello, my name is ${this.name} and I'm ${this.age} years old.`,
);
},
};
const person = Object.create(personPrototype);
person.name = 'John';
person.age = 30;
person.greet(); // Output: Hello, my name is John and I'm 30 years old.

An object without a prototype can be created by doing Object.create(null).
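
A short sketch of this null-prototype pattern, which is handy for dictionary-like objects because nothing is inherited:

const dict = Object.create(null);
dict.language = 'JavaScript';
console.log(Object.getPrototypeOf(dict)); // null
console.log(dict.toString); // undefined (nothing inherited from Object.prototype)
console.log('language' in dict); // true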

ES2015 classes

Classes provide a more structured and familiar syntax (similar to other programming languages) for creating objects. They define a blueprint and use methods to interact with the object's properties. It can be used when you need to create complex objects with inheritance and encapsulation.

class Person {
constructor(name, age) {
this.name = name;
this.age = age;
}
greet() {
console.log(
`Hello, my name is ${this.name} and I'm ${this.age} years old.`,
);
}
}
const person1 = new Person('John', 30);
const person2 = new Person('Alice', 25);
person1.greet(); // Output: Hello, my name is John and I'm 30 years old.
person2.greet(); // Output: Hello, my name is Alice and I'm 25 years old.

Constructor functions

Constructor functions are used to create reusable blueprints for objects. They define the properties and behaviors shared by all objects of that type. You use the new keyword to create instances of the object. It can be used when you need to create multiple objects with similar properties and methods.

However, now that ES2015 classes are readily supported in modern browsers, there's little reason to use constructor functions to create objects.

// Constructor function
function Person(name, age) {
this.name = name;
this.age = age;
this.greet = function () {
console.log(
`Hello, my name is ${this.name} and I'm ${this.age} years old.`,
);
};
}
const person1 = new Person('John', 30);
const person2 = new Person('Alice', 25);
person1.greet(); // Output: Hello, my name is John and I'm 30 years old.
person2.greet(); // Output: Hello, my name is Alice and I'm 25 years old.

What is a closure in JavaScript, and how/why would you use one?

Topics
Closure, JavaScript

TL;DR

In the book "You Don't Know JS" (YDKJS) by Kyle Simpson, a closure is defined as follows:

Closure is when a function is able to remember and access its lexical scope even when that function is executing outside its lexical scope

In simple terms, functions have access to variables that were in their scope at the time of their creation. This is what we call the function's lexical scope. A closure is a function that retains access to these variables even after the outer function has finished executing. It is as if the function has a memory of its original environment.

function outerFunction() {
const outerVar = 'I am outside of innerFunction';
function innerFunction() {
console.log(outerVar); // `innerFunction` can still access `outerVar`.
}
return innerFunction;
}
const inner = outerFunction(); // `inner` now holds a reference to `innerFunction`.
inner(); // "I am outside of innerFunction"
// Even though `outerFunction` has completed execution, `inner` still has access to variables defined inside `outerFunction`.

Key points to remember:

  • Closure occurs when an inner function has access to variables in its outer (lexical) scope, even when the outer function has finished executing.
  • Closure allows a function to remember the environment in which it was created, even if that environment is no longer present.
  • Closures are used extensively in JavaScript, such as in callbacks, event handlers, and asynchronous functions.

Understanding JavaScript closures

In JavaScript, a closure is a function that captures the lexical scope in which it was declared, allowing it to access and manipulate variables from an outer scope even after that scope has been closed.

Here's how closures work:

  1. Lexical scoping: JavaScript uses lexical scoping, meaning a function's access to variables is determined by its actual location within the source code.
  2. Function creation: When a function is created, it keeps a reference to its lexical scope. This scope contains all the local variables that were in-scope at the time the closure was created.
  3. Maintaining state: Closures are often used to maintain state in a secure way because the variables captured by the closure are not accessible outside the function.

ES6 syntax and closures

With ES6, closures can be created using arrow functions, which provide a more concise syntax and lexically bind the this value. Here's an example:

const createCounter = () => {
let count = 0;
return () => {
count += 1;
return count;
};
};
const counter = createCounter();
console.log(counter()); // Outputs: 1
console.log(counter()); // Outputs: 2

Closures compared with classes

Closures and classes can both encapsulate state and expose operations on it. The two approaches differ in privacy mechanism, memory characteristics, and idiomatic fit.

// Closure-based implementation
function makeCounter() {
let count = 0;
return {
inc: () => ++count,
get: () => count,
};
}
// Class-based implementation with private fields
class Counter {
#count = 0;
inc() {
return ++this.#count;
}
get() {
return this.#count;
}
}
const a = makeCounter();
const b = new Counter();
console.log(a.inc(), a.inc()); // 1 2
console.log(b.inc(), b.inc()); // 1 2

Concern | Closure | Class with #private fields
Privacy | Lexical scope; inaccessible from outside the closure | Private slot; access outside the class throws TypeError
Memory per instance | New closure scope and new function objects per call | Instance state per new; methods shared via the prototype
this binding | Not required; methods close over outer variables | Methods use this; additional care required for callbacks
Prototype sharing | Not supported; each instance has its own methods | Supported; instance methods share the prototype
Typical use | Factories, event handlers, partial application, FP | Long-lived domain objects, inheritance hierarchies

General guidance:

  • A closure is appropriate when a small number of instances with encapsulated state are needed and inheritance is not a concern.
  • A class is appropriate when many instances share the same behavior (prototype sharing avoids duplicating methods per instance), when instanceof checks or inheritance are needed, or when the API is consumed through type annotations in TypeScript.

Closures in React

Closures are everywhere. The code below shows a simple example of increasing a counter on a button click. In this code, handleClick forms a closure. It has access to its outer scope variables count and setCount.

import React, { useState } from 'react';
function Counter() {
const [count, setCount] = useState(0);
// `handleClick` is a closure over `count` and `setCount`.
function handleClick() {
setCount(count + 1);
}
return (
<div>
<p>Count: {count}</p>
<button onClick={handleClick}>Increment</button>
</div>
);
}

Stale closures in useEffect

A closure inside a useEffect callback captures the values of the variables it references at the time the effect runs. When those values change on subsequent renders, the closure continues to reference the originally captured values unless the effect re-runs or a different mechanism is used to read live state. This is a common cause of hooks-related bugs.

function Chat() {
const [count, setCount] = useState(0);
useEffect(() => {
const interval = setInterval(() => {
// `count` here refers to the value captured when the effect ran.
// With an empty dependency array, the effect runs only once, so this
// value is always 0.
console.log('current count:', count);
setCount(count + 1);
}, 1000);
return () => clearInterval(interval);
}, []);
return <div>{count}</div>;
}

There are three ways to correct this:

Declare the dependency. The effect re-runs whenever count changes, and a new closure captures the current value:

useEffect(() => {
const interval = setInterval(() => {
setCount(count + 1);
}, 1000);
return () => clearInterval(interval);
}, [count]);

This is correct, but creates a new interval every second.

Use the functional updater form of setState. The updater receives the current state as an argument, so no value needs to be captured:

useEffect(() => {
const interval = setInterval(() => {
setCount((prev) => prev + 1);
}, 1000);
return () => clearInterval(interval);
}, []);

Use a ref. Useful when the callback should read live state but not re-run when the state changes:

const countRef = useRef(0);
useEffect(() => {
countRef.current = count;
});
useEffect(() => {
const interval = setInterval(() => {
console.log('current count:', countRef.current);
}, 1000);
return () => clearInterval(interval);
}, []);

The functional updater is usually the simplest correct option when the callback only needs to update state. The ref approach is appropriate when live state must be read without re-running the subscription.

Memoization with closures

Function memoization caches results of a computation against the arguments used to produce them. A closure holds the cache, keeping it scoped to the memoized wrapper.

function memoize(fn) {
const cache = new Map();
return function (...args) {
const key = JSON.stringify(args);
if (cache.has(key)) return cache.get(key);
const result = fn.apply(this, args);
cache.set(key, result);
return result;
};
}
const slowSquare = (n) => {
console.log('computing', n);
return n * n;
};
const fastSquare = memoize(slowSquare);
console.log(fastSquare(4)); // 'computing 4' then 16
console.log(fastSquare(4)); // 16 (cache hit; no 'computing' log)
console.log(fastSquare(5)); // 'computing 5' then 25

Observations:

  1. The cache variable is accessible only through the returned function. This is the "private state" property of closures applied to a practical utility.
  2. The cache grows unbounded by default, which can cause memory issues in long-running applications. Production memoization implementations typically use an LRU cache or a WeakMap keyed on object identity; a bounded-cache variant is sketched after this list. Further discussion of this and related issues is on the closure pitfalls page.
  3. The same pattern appears in widely used utilities, including React's useMemo and useCallback, Reselect's createSelector, and Lodash's _.memoize.
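
A minimal sketch of point 2 above, assuming a simple size cap (a hypothetical maxSize) rather than a true LRU policy:

function memoizeBounded(fn, maxSize = 100) {
  const cache = new Map();
  return function (...args) {
    const key = JSON.stringify(args);
    if (cache.has(key)) return cache.get(key);
    const result = fn.apply(this, args);
    if (cache.size >= maxSize) {
      // A Map iterates in insertion order, so the first key is the oldest entry.
      cache.delete(cache.keys().next().value);
    }
    cache.set(key, result);
    return result;
  };
}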

Common example: var in a loop

for (var i = 0; i < 3; i++) {
setTimeout(() => console.log(i), 0);
}
// Output: 3, 3, 3

var i is function-scoped, so all three callbacks close over the same binding. The loop completes and sets i to 3 before any setTimeout callback runs, because setTimeout(fn, 0) still queues the callback as a macrotask that runs after the current synchronous code.

Two corrections:

  • Replace var with let. let is block-scoped, so each iteration creates a fresh binding, and each callback closes over a different value.
  • Use an IIFE to create a new function scope per iteration: (i => setTimeout(() => console.log(i), 0))(i). This is the pre-ES6 alternative. Both corrections are shown below.
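
A runnable sketch of both corrections:

// Correction 1: `let` creates a fresh binding per iteration.
for (let i = 0; i < 3; i++) {
  setTimeout(() => console.log(i), 0);
}
// Output: 0, 1, 2

// Correction 2 (pre-ES6): an IIFE copies the current value into a new scope.
for (var j = 0; j < 3; j++) {
  ((i) => setTimeout(() => console.log(i), 0))(j);
}
// Output: 0, 1, 2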

Common questions

When is a closure preferable to a class?

A closure is generally preferable when the expected number of instances is small, when inheritance and instanceof are not needed, and when avoiding this is desirable. A class is preferable when many instances will share the same behavior (prototype sharing), when inheritance is used, or when the API is consumed through TypeScript type annotations.

Can closures cause memory leaks?

Yes. A closure that references a large object and is itself reachable from long-lived state (module scope, event listener registrations, or a Redux store, for example) will keep the referenced object alive. The closure pitfalls page covers specific patterns and how to detect them.

Are closures synchronous or asynchronous?

A closure is simply a function that captures its lexical scope. The function may be invoked synchronously or asynchronously; the closure mechanism itself is independent of the invocation timing. The var-in-loop example above is a common source of confusion because the variable referenced by the closure changes between the closure's creation and its asynchronous invocation.

Why use closures?

Using closures provides the following benefits:

  1. Data encapsulation: Closures provide a way to create private variables and functions that can't be accessed from outside the closure. This is useful for hiding implementation details and maintaining state in an encapsulated way.
  2. Functional programming: Closures are fundamental in functional programming paradigms, where they are used to create functions that can be passed around and invoked later, retaining access to the scope in which they were created, e.g. partial applications or currying.
  3. Event handlers and callbacks: In JavaScript, closures are often used in event handlers and callbacks to maintain state or access variables that were in scope when the handler or callback was defined.
  4. Module patterns: Closures enable the module pattern in JavaScript, allowing the creation of modules with private and public parts; a minimal sketch follows below.
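
A minimal sketch of the module pattern from point 4 (the counter module name is purely illustrative):

const counterModule = (function () {
  let count = 0; // private state, only reachable through the closure
  return {
    increment() {
      count += 1;
      return count;
    },
    get() {
      return count;
    },
  };
})();

console.log(counterModule.increment()); // 1
console.log(counterModule.get()); // 1
console.log(counterModule.count); // undefined (the variable is not exposed)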

Related questions

You can read more about memory leaks and other pitfalls of closures and the module pattern and private state on their dedicated pages.

What is the definition of a higher-order function in JavaScript?

Topics
JavaScript

TL;DR

A higher-order function is any function that takes one or more functions as arguments, which it uses to operate on some data, and/or returns a function as a result.

Higher-order functions are meant to abstract some operation that is performed repeatedly. The classic example is Array.prototype.map(), which is called on an array and takes a callback function as an argument; it uses this callback to transform each item in the array, returning a new array with the transformed data. Other popular examples in JavaScript are Array.prototype.forEach(), Array.prototype.filter(), and Array.prototype.reduce(). A higher-order function doesn't need to manipulate arrays, though, as there are many use cases for returning a function from another function; Function.prototype.bind() is an example that returns another function.
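
Since Function.prototype.bind() is mentioned above but not shown, here is a short sketch of it acting as a higher-order function (the names are illustrative):

function greet(greeting) {
  return `${greeting}, ${this.name}!`;
}
// `bind` takes a function (via its receiver) and returns a new function with
// `this` and any leading arguments fixed.
const greetAlice = greet.bind({ name: 'Alice' }, 'Hello');
console.log(greetAlice()); // "Hello, Alice!"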

Imagine a scenario where we have an array of names that we need to transform to uppercase. The imperative way will be as such:

const names = ['irish', 'daisy', 'anna'];
function transformNamesToUppercase(names) {
const results = [];
for (let i = 0; i < names.length; i++) {
results.push(names[i].toUpperCase());
}
return results;
}
console.log(transformNamesToUppercase(names)); // ['IRISH', 'DAISY', 'ANNA']

Using Array.prototype.map(transformerFn) makes the code shorter and more declarative.

const names = ['irish', 'daisy', 'anna'];
function transformNamesToUppercase(names) {
return names.map((name) => name.toUpperCase());
}
console.log(transformNamesToUppercase(names)); // ['IRISH', 'DAISY', 'ANNA']

Higher order functions

A higher-order function is a function that takes another function as an argument or returns a function as its result.

Functions as arguments

A higher-order function can take another function as an argument and execute it.

function greet(name) {
return `Hello, ${name}!`;
}
function greetName(greeter, name) {
console.log(greeter(name));
}
greetName(greet, 'Alice'); // Output: Hello, Alice!

In this example, the greetName function is a higher-order function because it takes another function (greet) as an argument and uses it to generate a greeting for the given name.

Functions as return values

A higher-order function can return another function.

function multiplier(factor) {
return function (num) {
return num * factor;
};
}
const double = multiplier(2);
const triple = multiplier(3);
console.log(double(5)); // Output: 10
console.log(triple(5)); // Output: 15

In this example, the multiplier function returns a new function that multiplies any number by the specified factor. The returned function is a closure that remembers the factor value from the outer function. The multiplier function is a higher-order function because it returns another function.

Practical examples

  1. Logging decorator: A higher-order function that adds logging functionality to another function:
function withLogging(fn) {
return function (...args) {
console.log(`Calling ${fn.name} with arguments`, args);
return fn.apply(this, args);
};
}
function add(a, b) {
return a + b;
}
const loggedAdd = withLogging(add);
console.log(loggedAdd(2, 3));
// Output:
// Calling add with arguments [2, 3]
// 5

The withLogging function is a higher-order function that takes a function fn as an argument and returns a new function that logs the function call before executing the original function.

  2. Memoization: A higher-order function that caches the results of a function to avoid redundant computations:
function memoize(fn) {
const cache = new Map();
return function (...args) {
const key = JSON.stringify(args);
if (cache.has(key)) {
return cache.get(key);
}
const result = fn.apply(this, args);
cache.set(key, result);
return result;
};
}
const memoizedFibonacci = memoize(function (n) {
if (n <= 1) return n;
return memoizedFibonacci(n - 1) + memoizedFibonacci(n - 2);
});
console.log(memoizedFibonacci(10)); // Output: 55
console.log(memoizedFibonacci(100)); // Output: 354224848179262000000

The memoize function is a higher-order function that takes a function fn as an argument and returns a new function that caches the results of the original function based on its arguments.

  3. Lodash: Lodash is a utility library that provides a wide range of functions for working with arrays, objects, strings, and more, most of which are higher-order functions.
import _ from 'lodash';
const numbers = [1, 2, 3, 4, 5];
// Filter array
const evenNumbers = _.filter(numbers, (n) => n % 2 === 0); // [2, 4]
// Map array
const doubledNumbers = _.map(numbers, (n) => n * 2); // [2, 4, 6, 8, 10]
// Find the maximum value
const maxValue = _.max(numbers); // 5
// Sum all values
const sum = _.sum(numbers); // 15

What are the differences between JavaScript ES2015 classes and ES5 function constructors?

Topics
JavaScript, OOP

TL;DR

ES2015 introduces a new way of creating classes, which provides a more intuitive and concise way to define and work with objects and inheritance compared to the ES5 function constructor syntax. Here's an example of each:

// ES5 function constructor
function Person(name) {
this.name = name;
}
// ES2015 Class
class Person {
constructor(name) {
this.name = name;
}
}

For simple constructors, they look pretty similar. The main difference in the constructor comes when using inheritance. If we want to create a Student class that subclasses Person and adds a studentId field, this is what we have to do.

// ES5 inheritance
// Superclass
function Person1(name) {
this.name = name;
}
// Subclass
function Student1(name, studentId) {
// Call constructor of superclass to initialize superclass-derived members.
Person1.call(this, name);
// Initialize subclass's own members.
this.studentId = studentId;
}
Student1.prototype = Object.create(Person1.prototype);
Student1.prototype.constructor = Student1;
const student1 = new Student1('John', 1234);
console.log(student1.name, student1.studentId); // "John" 1234
// ES2015 inheritance
// Superclass
class Person2 {
constructor(name) {
this.name = name;
}
}
// Subclass
class Student2 extends Person2 {
constructor(name, studentId) {
super(name);
this.studentId = studentId;
}
}
const student2 = new Student2('Alice', 5678);
console.log(student2.name, student2.studentId); // "Alice" 5678

It's much more verbose to use inheritance in ES5, and the ES2015 version is easier to understand and remember.

Comparison of ES5 function constructors vs ES2015 classes

Feature | ES5 Function Constructor | ES2015 Class
Syntax | Uses function constructors and prototypes | Uses class keyword
Constructor | Function with properties assigned using this | constructor method inside the class
Method Definition | Defined on the prototype | Defined inside the class body
Static Methods | Added directly to the constructor function | Defined using the static keyword
Inheritance | Uses Object.create() and manually sets prototype chain | Uses extends keyword and super
Readability | Less intuitive and more verbose | More concise and intuitive

ES5 function constructor vs ES2015 classes

ES5 function constructors and ES2015 classes are two different ways of defining classes in JavaScript. They both serve the same purpose, but they have different syntax and behavior.

ES5 function constructors

In ES5, you define a class-like structure using a function constructor and prototypes. Here's an example:

// ES5 function constructor
function Person(name, age) {
this.name = name;
this.age = age;
}
Person.prototype.greet = function () {
console.log(
'Hello, my name is ' + this.name + ' and I am ' + this.age + ' years old.',
);
};
// Creating an instance
var person1 = new Person('John', 30);
person1.greet(); // Hello, my name is John and I am 30 years old.

ES2015 classes

ES2015 introduced the class syntax, which simplifies the definition of classes and supports more features such as static methods and subclassing. Here's the same example using ES2015:

// ES2015 Class
class Person {
constructor(name, age) {
this.name = name;
this.age = age;
}
greet() {
console.log(
`Hello, my name is ${this.name} and I am ${this.age} years old.`,
);
}
}
// Creating an instance
const person1 = new Person('John', 30);
person1.greet(); // Hello, my name is John and I am 30 years old.

Key Differences

  1. Syntax and Readability:

    • ES5: Uses function constructors and prototypes, which can be less intuitive and harder to read.
    • ES2015: Uses the class keyword, making the code more concise and easier to understand.
  2. Static Methods:

    • ES5: Static methods are added directly to the constructor function.
    • ES2015: Static methods are defined within the class using the static keyword.
    // ES5
    function Person1(name, age) {
    this.name = name;
    this.age = age;
    }
    Person1.sayHi = function () {
    console.log('Hi from ES5!');
    };
    Person1.sayHi(); // Hi from ES5!
    // ES2015
    class Person2 {
    static sayHi() {
    console.log('Hi from ES2015!');
    }
    }
    Person2.sayHi(); // Hi from ES2015!
  3. Inheritance

    • ES5: Inheritance is achieved using Object.create() and manually setting the prototype chain.
    • ES2015: Inheritance is much simpler and more intuitive with the extends keyword.
    // ES5 Inheritance
    // ES5 function constructor
    function Person1(name, age) {
    this.name = name;
    this.age = age;
    }
    Person1.prototype.greet = function () {
    console.log(
    `Hello, my name is ${this.name} and I am ${this.age} years old.`,
    );
    };
    function Student1(name, age, grade) {
    Person1.call(this, name, age);
    this.grade = grade;
    }
    Student1.prototype = Object.create(Person1.prototype);
    Student1.prototype.constructor = Student1;
    Student1.prototype.study = function () {
    console.log(this.name + ' is studying.');
    };
    var student1 = new Student1('John', 22, 'B+');
    student1.greet(); // Hello, my name is John and I am 22 years old.
    student1.study(); // John is studying.
    // ES2015 Inheritance
    // ES2015 Class
    class Person2 {
    constructor(name, age) {
    this.name = name;
    this.age = age;
    }
    greet() {
    console.log(
    `Hello, my name is ${this.name} and I am ${this.age} years old.`,
    );
    }
    }
    class Student2 extends Person2 {
    constructor(name, age, grade) {
    super(name, age);
    this.grade = grade;
    }
    study() {
    console.log(`${this.name} is studying.`);
    }
    }
    const student2 = new Student2('Alice', 20, 'A');
    student2.greet(); // Hello, my name is Alice and I am 20 years old.
    student2.study(); // Alice is studying.
  4. super calls:

    • ES5: Manually call the parent constructor function.
    • ES2015: Use the super keyword to call the parent class's constructor and methods.
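    A brief sketch contrasting the two, including an overriding method that calls the parent's method (the class and function names here are illustrative):
    // ES5: call the parent constructor and parent methods manually.
    function Base1() {}
    Base1.prototype.describe = function () {
      return 'base';
    };
    function Derived1() {
      Base1.call(this);
    }
    Derived1.prototype = Object.create(Base1.prototype);
    Derived1.prototype.constructor = Derived1;
    Derived1.prototype.describe = function () {
      return Base1.prototype.describe.call(this) + ' + derived';
    };
    // ES2015: `super` handles both the constructor call and parent methods.
    class Base2 {
      describe() {
        return 'base';
      }
    }
    class Derived2 extends Base2 {
      describe() {
        return super.describe() + ' + derived';
      }
    }
    console.log(new Derived1().describe()); // "base + derived"
    console.log(new Derived2().describe()); // "base + derived"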

Conclusion

While both ES5 and ES2015 approaches can achieve the same functionality, ES2015 classes provide a clearer and more concise way to define and work with object-oriented constructs in JavaScript, which makes the code easier to write, read, and maintain. If you are working with modern JavaScript, it is generally recommended to use ES2015 classes over ES5 function constructors.

Describe event bubbling in JavaScript and browsers

Topics
Web APIs, JavaScript

TL;DR

Event bubbling is a DOM event propagation mechanism where an event (e.g. a click) starts at the target element and bubbles up to the root of the document. This allows ancestor elements to also respond to the event.

Event bubbling is essential for event delegation, where a single event handler manages events for multiple child elements, enhancing performance and code simplicity. While convenient, failing to manage event propagation properly can lead to unintended behavior, such as multiple handlers firing for a single event.


What is event bubbling?

Event bubbling is a propagation mechanism in the DOM (Document Object Model) where an event, such as a click or a keyboard event, is first triggered on the target element that initiated the event and then propagates upward (bubbles) through the DOM tree to the root of the document.

Note: before the event bubbling phase, there is the event capturing phase, which is the opposite of bubbling, where the event travels down from the document root to the target element.

Bubbling phase

During the bubbling phase, the event starts at the target element and bubbles up through its ancestors in the DOM hierarchy. This means that the event handlers attached to the target element and its ancestors can all potentially receive and respond to the event.

Here's an example using modern ES6 syntax to demonstrate event bubbling:

// HTML:
// <div id="parent">
// <button id="child">Click me!</button>
// </div>
const parentDiv = document.createElement('div');
parentDiv.id = 'parent';
const button = document.createElement('button');
button.id = 'child';
parentDiv.appendChild(button);
document.body.appendChild(parentDiv);
const parent = document.getElementById('parent');
const child = document.getElementById('child');
parent.addEventListener('click', () => {
console.log('Parent element clicked');
});
child.addEventListener('click', () => {
console.log('Child element clicked');
});
// Simulate clicking the button:
child.click();

When you click the "Click me!" button, both the child and parent event handlers will be triggered due to event bubbling.

Stopping the bubbling

Event bubbling can be stopped during the bubbling phase using the stopPropagation() method. If an event handler calls stopPropagation(), it prevents the event from further bubbling up the DOM tree, ensuring that only the handlers of the elements up to that point in the hierarchy are executed.

// HTML:
// <div id="parent">
// <button id="child">Click me!</button>
// </div>
const parentDiv = document.createElement('div');
parentDiv.id = 'parent';
const button = document.createElement('button');
button.id = 'child';
parentDiv.appendChild(button);
document.body.appendChild(parentDiv);
const parent = document.getElementById('parent');
const child = document.getElementById('child');
parent.addEventListener('click', () => {
console.log('Parent element clicked');
});
child.addEventListener('click', (event) => {
console.log('Child element clicked');
event.stopPropagation(); // Stops propagation to parent
});
// Simulate clicking the button:
child.click();

Event delegation

Event bubbling is the basis for a technique called event delegation, where you attach a single event handler to a common ancestor of multiple elements to handle events for those elements efficiently. This is particularly useful when you have a large number of similar elements, like a list of items, and you want to avoid attaching individual event handlers to each item.

parent.addEventListener('click', (event) => {
if (event.target && event.target.id === 'child') {
console.log('Child element clicked');
}
});

Benefits

  • Cleaner code: Reduced number of event listeners improves code readability and maintainability.
  • Efficient event handling: Minimizes performance overhead by attaching fewer listeners.
  • Flexibility: Allows handling events happening on child elements without directly attaching listeners to them.

Pitfalls

  • Accidental event handling: Be mindful that parent elements might unintentionally capture events meant for children. Use event.target to identify the specific element that triggered the event.
  • Event order: Events bubble up in a specific order. If multiple parents have event listeners, their order of execution depends on the DOM hierarchy.
  • Over-delegation: While delegating events to a common ancestor is efficient, attaching a listener too high in the DOM tree might capture unintended events.

Use cases

Here are some practical ways to use event bubbling to write better code.

Reducing code with event delegation

Imagine you have a product list with numerous items, each with a "Buy Now" button. Traditionally, you might attach a separate click event listener to each button:

// HTML:
// <ul id="product-list">
// <li><button id="item1-buy">Buy Now</button></li>
// <li><button id="item2-buy">Buy Now</button></li>
// </ul>
const item1Buy = document.getElementById('item1-buy');
const item2Buy = document.getElementById('item2-buy');
item1Buy.addEventListener('click', handleBuyClick);
item2Buy.addEventListener('click', handleBuyClick);
// ... repeat for each item ...
function handleBuyClick(event) {
console.log('Buy button clicked for item:', event.target.id);
}

This approach becomes cumbersome as the number of items grows. Here's how event bubbling can simplify things:

// HTML:
// <ul id="product-list">
// <li><button id="item1-buy">Buy Now</button></li>
// <li><button id="item2-buy">Buy Now</button></li>
// </ul>
const productList = document.getElementById('product-list');
productList.addEventListener('click', handleBuyClick);
function handleBuyClick(event) {
// Check if the clicked element is a button within the list
if (event.target.tagName.toLowerCase() === 'button') {
console.log('Buy button clicked for item:', event.target.id);
}
}

By attaching the listener to the parent (productList) and checking the clicked element (event.target) within the handler, you achieve the same functionality with less code. This approach scales well when the items are dynamic, as no new event handlers have to be added or removed when the list of items changes.

Dropdown menus

Consider a dropdown menu where clicking anywhere on the menu element (parent) should close it. With event bubbling, you can achieve this with a single listener:

// HTML:
// <div id="dropdown">
// <button>Open Menu</button>
// <ul>
// <li>Item 1</li>
// <li>Item 2</li>
// </ul>
// </div>
const dropdown = document.getElementById('dropdown');
dropdown.addEventListener('click', handleDropdownClick);
function handleDropdownClick(event) {
// Close the dropdown if clicked outside the button
if (event.target !== dropdown.querySelector('button')) {
console.log('Dropdown closed');
// Your logic to hide the dropdown content
}
}

Here, the click event bubbles up from the clicked element (button or list item) to the dropdown element. The handler checks if the clicked element is not the <button> and closes the menu accordingly.

Accordion menus

Imagine an accordion menu where clicking a section header (parent) expands or collapses the content section (child) below it. Event bubbling makes this straightforward:

// HTML:
// <div class="accordion">
// <div class="header">Section 1</div>
// <div class="content">Content for Section 1</div>
// <div class="header">Section 2</div>
// <div class="content">Content for Section 2</div>
// </div>
const accordion = document.querySelector('.accordion');
accordion.addEventListener('click', handleAccordionClick);
function handleAccordionClick(event) {
// Check if clicked element is a header
if (event.target.classList.contains('header')) {
const content = event.target.nextElementSibling;
content.classList.toggle('active'); // Toggle display of content
}
}

By attaching the listener to the accordion element, clicking on any header triggers the event. The handler checks if the clicked element is a header and toggles the visibility of the corresponding content section.

Describe event capturing in JavaScript and browsers

Topics
Web APIs, JavaScript

TL;DR

Event capturing is a lesser-used counterpart to event bubbling in the DOM event propagation mechanism. It follows the opposite order, where an event triggers first on the ancestor element and then travels down to the target element.

Event capturing is rarely used as compared to event bubbling, but it can be used in specific scenarios where you need to intercept events at a higher level before they reach the target element. It is disabled by default but can be enabled through an option on addEventListener().


What is event capturing?

Event capturing is a propagation mechanism in the DOM (Document Object Model) where an event, such as a click or a keyboard event, is first triggered at the root of the document and then flows down through the DOM tree to the target element.

Capturing has a higher priority than bubbling, meaning that capturing event handlers are executed before bubbling event handlers, as shown by the phases of event propagation:

  • Capturing phase: The event moves down towards the target element
  • Target phase: The event reaches the target element
  • Bubbling phase: The event bubbles up from the target element

Note that event capturing is disabled by default. To enable it, you have to pass the capture option into addEventListener().

Capturing phase

During the capturing phase, the event starts at the document root and propagates down to the target element. Any event listeners on ancestor elements in this path will be triggered before the target element's handler. Note that event capturing will not happen unless the third argument of addEventListener() is set to true as shown below (the default value is false).

Here's an example using modern ES2015 syntax to demonstrate event capturing:

// HTML:
// <div id="parent">
// <button id="child">Click me!</button>
// </div>
const parent = document.getElementById('parent');
const child = document.getElementById('child');
parent.addEventListener(
'click',
() => {
console.log('Parent element clicked (capturing)');
},
true, // Set third argument to true for capturing
);
child.addEventListener('click', () => {
console.log('Child element clicked');
});

When you click the "Click me!" button, it will trigger the parent element's capturing handler first, followed by the child element's handler.

Stopping propagation

Event propagation can be stopped during the capturing phase using the stopPropagation() method. This prevents the event from traveling further down the DOM tree.

// HTML:
// <div id="parent">
// <button id="child">Click me!</button>
// </div>
const parent = document.getElementById('parent');
const child = document.getElementById('child');
parent.addEventListener(
'click',
(event) => {
console.log('Parent element clicked (capturing)');
event.stopPropagation(); // Stop event propagation
},
true,
);
child.addEventListener('click', () => {
console.log('Child element clicked');
});

As a result of stopping event propagation, only the parent event listener will be called when you click the "Click me!" button, and the child event listener will never be called because event propagation has stopped at the parent element.

Predict the output: capture, target, bubble in order

The complete event flow is the part candidates most often get wrong. The same click event passes through every ancestor on the way down (capture), arrives at the target, then walks back up (bubble). Here is the full sequence in one runnable example:

const grandparent = document.createElement('div');
const parent = document.createElement('div');
const child = document.createElement('button');
grandparent.appendChild(parent);
parent.appendChild(child);
document.body.appendChild(grandparent);
// Capture handlers (third arg = true)
grandparent.addEventListener(
'click',
() => console.log('1. grandparent capture'),
true,
);
parent.addEventListener('click', () => console.log('2. parent capture'), true);
// Target handler (default: bubble phase)
child.addEventListener('click', () => console.log('3. target'));
// Bubble handlers
parent.addEventListener('click', () => console.log('4. parent bubble'));
grandparent.addEventListener('click', () =>
console.log('5. grandparent bubble'),
);
child.click();
// Output (in this exact order):
// 1. grandparent capture
// 2. parent capture
// 3. target
// 4. parent bubble
// 5. grandparent bubble

The full picture: events go down, then up. Capture handlers fire from the root toward the target; bubble handlers fire from the target back toward the root. The target's own handler runs in the middle. (Listeners on the target itself fire in registration order regardless of the capture argument; the capture/bubble distinction only applies to ancestors.)

Bubbling vs capturing comparison

Aspect | Capturing | Bubbling
Phase order | First (down from root) | Last (up to root)
useCapture argument | true (or { capture: true }) | false (the default)
Default behavior | Off; must opt in | On; every listener bubbles by default
Effect of event.stopPropagation() | Stops the event before it reaches the target | Stops the event before higher ancestors see it
Common use cases | Intercepting non-bubbling events; pre-empting child handlers; analytics | Most click, input, and change handlers; event delegation

When to use the capture phase in real apps

In practice, the capture phase is the right tool for three specific situations:

  1. Delegating non-bubbling events. focus, blur, scroll, and mouseenter/mouseleave do not bubble, but they are visible to ancestors during the capture phase. Adding addEventListener('focus', handler, true) to a form gives you a delegated focus listener for every input inside it.

    form.addEventListener(
    'focus',
    (event) => {
    highlightField(event.target);
    },
    true, // capture: catches focus events before they stop at the input
    );
  2. Pre-empting child handlers for analytics or feature gates. The capture phase runs before any child's bubble handler, so a region-wide "intercept clicks" listener can record the click (or block the action with stopPropagation()) before component code sees it.

  3. Modal libraries that need first-look at clicks. A modal dialog often listens at the document level with capture: true for outside-click dismissal. Using the bubble phase would let inner handlers call stopPropagation() and accidentally prevent the modal from closing. A sketch of this pattern follows below.
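
A rough sketch of the outside-click dismissal pattern from point 3, assuming a modal root element with id "modal" and a hypothetical closeModal() helper:

const modal = document.getElementById('modal'); // assumed modal root element
document.addEventListener(
  'click',
  (event) => {
    // Runs in the capture phase, before any handler inside the modal can call
    // stopPropagation() and swallow the click.
    if (modal && !modal.contains(event.target)) {
      closeModal(); // hypothetical function that hides the modal
    }
  },
  true, // capture: true
);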

stopPropagation() in capture vs bubble

stopPropagation() blocks all subsequent phases, not just the next ancestor.

const outer = document.createElement('div');
const inner = document.createElement('button');
outer.appendChild(inner);
document.body.appendChild(outer);
outer.addEventListener(
'click',
(event) => {
console.log('outer capture: stopping here');
event.stopPropagation();
},
true,
);
inner.addEventListener('click', () => console.log('inner target'));
outer.addEventListener('click', () => console.log('outer bubble'));
inner.click();
// Output: 'outer capture: stopping here'
// The target handler and bubble handler are both skipped.

Calling stopPropagation() during the capture phase prevents the target's own handlers and every bubble-phase ancestor from running. This is useful for an "intercept and replace" pattern, but if the target needs to keep working, listen during the bubble phase instead.

There is also event.stopImmediatePropagation(), which additionally prevents other listeners on the same element (registered in the same phase) from firing. Use it when multiple scripts add listeners to the same element and only one of them should run.
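
A short sketch of the difference:

const btn = document.createElement('button');
document.body.appendChild(btn);
btn.addEventListener('click', (event) => {
  console.log('first listener');
  event.stopImmediatePropagation(); // later listeners on `btn` will not run
});
btn.addEventListener('click', () => console.log('second listener'));
btn.click();
// Output: 'first listener' (the second listener is skipped)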

What is the difference between `mouseenter` and `mouseover` event in JavaScript and browsers?

Topics
Web APIs, HTML, JavaScript

TL;DR

The main difference lies in the bubbling behavior of mouseenter and mouseover events. mouseenter does not bubble while mouseover bubbles.

mouseenter events do not bubble. The mouseenter event is triggered only when the mouse pointer enters the element itself, not its descendants. If a parent element has child elements, and the mouse pointer enters child elements, the mouseenter event will not be triggered on the parent element again; it is only triggered once upon entry of the parent element, without regard for its contents. If both parent and child have mouseenter listeners attached and the mouse pointer moves from the parent element to the child element, mouseenter will only fire for the child.

mouseover events bubble up the DOM tree. The mouseover event is triggered when the mouse pointer enters the element or one of its descendants. If a parent element has child elements, and the mouse pointer enters child elements, the mouseover event will be triggered on the parent element again as well. If the parent element has multiple child elements, this can result in multiple event callbacks fired. If there are child elements, and the mouse pointer moves from the parent element to the child element, mouseover will fire for both the parent and the child.

Property | mouseenter | mouseover
Bubbling | No | Yes
Trigger | Only when entering itself | When entering itself and when entering descendants

mouseenter event:

  • Does not bubble: The mouseenter event does not bubble. It is only triggered when the mouse pointer enters the element to which the event listener is attached, not when it enters any child elements.
  • Triggered once: The mouseenter event is triggered only once when the mouse pointer enters the element, making it more predictable and easier to manage in certain scenarios.

A use case for mouseenter is when you want to detect the mouse entering an element without worrying about child elements triggering the event multiple times.

mouseover event:

  • Bubbles up the DOM: The mouseover event bubbles up through the DOM. This means that if you have an event listener on a parent element, it will also trigger when the mouse pointer moves over any child elements.
  • Triggered multiple times: The mouseover event is triggered every time the mouse pointer moves over an element or any of its child elements. This can lead to multiple triggers if you have nested elements.

A use case for mouseover is when you want to detect when the mouse enters an element or any of its children and are okay with the events triggering multiple times.

Example

Here's an example demonstrating the difference between mouseover and mouseenter events:

<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Mouse Events Example</title>
<style>
.parent {
width: 200px;
height: 200px;
background-color: lightblue;
padding: 20px;
}
.child {
width: 100px;
height: 100px;
background-color: lightcoral;
}
</style>
</head>
<body>
<div class="parent">
Parent Element
<div class="child">Child Element</div>
</div>
<script>
const parent = document.querySelector('.parent');
const child = document.querySelector('.child');
// Mouseover event on parent.
parent.addEventListener('mouseover', () => {
console.log('Mouseover on parent');
});
// Mouseenter event on parent.
parent.addEventListener('mouseenter', () => {
console.log('Mouseenter on parent');
});
// Mouseover event on child.
child.addEventListener('mouseover', () => {
console.log('Mouseover on child');
});
// Mouseenter event on child.
child.addEventListener('mouseenter', () => {
console.log('Mouseenter on child');
});
</script>
</body>
</html>

Expected behavior

  • When the mouse enters the parent element:
    • The mouseover event on the parent will trigger.
    • The mouseenter event on the parent will trigger.
  • When the mouse enters the child element:
    • The mouseover event on the parent will trigger again because mouseover bubbles up from the child.
    • The mouseover event on the child will trigger.
    • The mouseenter event on the child will trigger.
    • The mouseenter event on the parent will not trigger again because mouseenter does not bubble.

Further reading

What is `'use strict';` (strict mode) in JavaScript for?

What are the advantages and disadvantages of using it?
Topics
JavaScript

TL;DR

'use strict' is a directive used to enable strict mode for entire scripts or individual functions. Strict mode is a way to opt into a restricted variant of JavaScript.

Advantages

  • Makes it impossible to accidentally create global variables.
  • Makes assignments that would otherwise silently fail throw an exception instead.
  • Makes attempts to delete undeletable properties throw an exception (where before the attempt would simply have no effect).
  • Requires that function parameter names be unique.
  • this is undefined in plain function calls instead of defaulting to the global object.
  • It catches some common coding bloopers, throwing exceptions.
  • It disables features that are confusing or poorly thought out.

Disadvantages

  • Many missing features that some developers might be used to.
  • No more access to function.caller and function.arguments.
  • Concatenation of scripts written in different strict modes might cause issues.

Overall, the benefits outweigh the disadvantages and there is not really a need to rely on the features that strict mode prohibits. We should all be using strict mode by default.


What is "use strict" in JavaScript?

In essence, "use strict" is a directive introduced in ECMAScript 5 (ES5) that signals to the JavaScript engine that the code it surrounds should be executed in "strict mode". Strict mode imposes stricter parsing and error handling rules, essentially making your code more secure and less error-prone.

When you use "use strict", it helps you write cleaner code, such as preventing you from using undeclared variables. It can also make your code more secure because it disallows some potentially insecure actions.

How to use strict mode

  1. Global Scope: To enable strict mode globally, add the directive at the beginning of the JavaScript file:

    'use strict';
    // any code in this file will be run in strict mode
    function add(a, b) {
    return a + b;
    }
  2. Local Scope: To enable strict mode within a function, add the directive at the beginning of the function:

    function myFunction() {
    'use strict';
    // this will tell JavaScript engine to use strict mode only for the `myFunction`
    // Anything that is outside of the scope of this function will be treated as non-strict mode unless specified to use strict mode
    }

Key features of strict mode

  1. Error prevention: Strict mode prevents common errors such as:
    • Using undeclared variables.
    • Assigning values to non-writable properties.
    • Assigning to properties of non-extensible objects or of primitive values.
    • Deleting undeletable properties.
    • Using reserved keywords as identifiers.
    • Duplicating parameter names in functions.
  2. Improved security: Strict mode helps in writing more secure code by:
    • Preventing the use of deprecated features like arguments.caller and arguments.callee.
    • Restricting the use of eval() to prevent variable declarations in the calling scope.
  3. Compatibility: Strict mode ensures compatibility with future versions of JavaScript by preventing the use of reserved keywords as identifiers.

Examples

  1. Preventing accidental creation of global variables:

    // Without strict mode
    function defineNumber() {
    count = 123;
    }
    defineNumber();
    console.log(count); // logs: 123
    // With strict mode
    function strictFunc() {
    'use strict';
    strictVar = 123; // ReferenceError: strictVar is not defined
    }
    strictFunc(); // Throws when the assignment runs
    console.log(strictVar); // Never reached; would also be a ReferenceError
  2. Making assignments that would otherwise silently fail throw an exception:

    // Without strict mode
    NaN = 'foo'; // This fails silently
    console.log(NaN); // logs: NaN
    // With strict mode ('use strict' must be at the top of its own script)
    'use strict';
    NaN = 'foo'; // Uncaught TypeError: Cannot assign to read only property 'NaN' of object '#<Window>'
  3. Making attempts to delete undeletable properties throw an error in strict mode:

    // Without strict mode
    delete Object.prototype; // This fails silently
    // With strict mode ('use strict' must be at the top of its own script)
    'use strict';
    delete Object.prototype; // TypeError: Cannot delete property 'prototype' of function Object() { [native code] }

Predict the output: common strict mode gotchas

These are the four most common interview "gotchas" around strict mode. Try predicting the output before running each.

1. Accidental globals

function f() {
x = 5; // no var/let/const, assignment to undeclared variable
}
f();
console.log(x); // 5: sloppy mode silently created a global

In strict mode, that same code throws a ReferenceError because creating implicit globals is forbidden. This is the single most-cited reason to use strict mode.

'use strict';
function f() {
x = 5; // ReferenceError: x is not defined
}
try {
f();
} catch (e) {
console.log(e.message);
}

2. this in a plain function call

function whatIsThis() {
return this;
}
console.log(whatIsThis() === globalThis); // true in sloppy mode

In strict mode, this inside a function called as a plain function is undefined:

'use strict';
function whatIsThis() {
return this;
}
console.log(whatIsThis()); // undefined

This is a common source of bugs in class methods that get extracted as callbacks. const fn = obj.method; fn(); calls method with this === undefined, which usually crashes immediately. In sloppy mode, the same call would silently bind this to the global object and continue with broken behavior.
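
A small sketch of that bug (the class and method names here are made up for illustration):

class Counter {
  count = 0;
  increment() {
    this.count += 1; // class bodies are always strict, so `this` stays undefined if detached
  }
}
const counter = new Counter();
const fn = counter.increment; // extracted without binding its receiver
// fn(); // TypeError: Cannot read properties of undefined (reading 'count')
const bound = counter.increment.bind(counter); // fix: bind (or use an arrow wrapper)
bound();
console.log(counter.count); // 1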

3. Duplicate parameter names

'use strict';
try {
// Using eval to force a parse error to be thrown at runtime in strict mode
eval('function dup(a, a) {}');
} catch (e) {
console.log(e.message); // "Duplicate parameter name not allowed in this context"
}

In sloppy mode, function f(a, a) {} is silently allowed and the second a shadows the first.

4. Octal literals

console.log(010); // 8 in sloppy mode (leading 0 means octal)

In strict mode (and in ES modules), the legacy 0-prefix octal literal is rejected at parse time. Use the explicit 0o prefix instead:

'use strict';
console.log(0o10); // 8

Is 'use strict' still necessary?

In most modern code, no. Strict mode is now the default in several places:

  • ES modules are automatically strict. Anything imported with import/export, served as <script type="module">, or compiled by Vite, webpack, or Rollup runs in strict mode without the directive.
  • Class bodies (and methods inside them) are strict, even when the surrounding code is not (see the sketch after this list).
  • Most build tools (Babel, TypeScript with target: ES2015+) emit code that is either modules or class-wrapped, so it is strict by default.
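
For instance, even in a plain script with no directive anywhere, code inside a class body behaves strictly (a minimal sketch; the names are illustrative):

// Plain <script>, no 'use strict' anywhere
function sloppyLeak() {
  leaked = 1; // silently creates a global in sloppy mode
}
class Demo {
  leak() {
    oops = 1; // ReferenceError: class bodies are always strict
  }
}
sloppyLeak(); // no error
new Demo().leak(); // throws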

The directive still matters in a few places:

  • Legacy plain <script> tags without type="module". The directive is the only way to opt in.
  • IIFEs and old library bundles that ship as scripts, including some CDN copies of libraries.
  • Node.js CommonJS files (.cjs), which are not automatically strict.
  • Quick <script> snippets in HTML pages and CodePens.

If you are writing an ES module or a React/Vue component, you are already in strict mode and the directive at the top of the file is harmless but redundant. If you are writing a plain <script> tag without type="module", you should still add it.

Notes

  1. Placement: The 'use strict' directive must be placed at the beginning of the file or function. Placing it anywhere else will not have any effect.
  2. Compatibility: Strict mode is supported by all modern browsers; only Internet Explorer 9 and below lack support.
  3. Irreversible: There is no way to cancel 'use strict' after it has been set.

Further reading

Explain the difference between synchronous and asynchronous functions in JavaScript

Topics
AsyncJavaScript

TL;DR

Synchronous functions are blocking while asynchronous functions are not. In synchronous functions, statements complete before the next statement is run. As a result, programs containing only synchronous code are evaluated exactly in order of the statements. The execution of the program is paused if one of the statements takes a very long time.

function sum(a, b) {
console.log('Inside sum function');
return a + b;
}
const result = sum(2, 3); // The program waits for sum() to complete before assigning the result
console.log('Result: ', result); // Output: 5

Asynchronous functions usually accept a callback as a parameter and execution continues on to the next line immediately after the asynchronous function is invoked. The callback is only invoked when the asynchronous operation is complete and the call stack is empty. Heavy duty operations such as loading data from a web server or querying a database should be done asynchronously so that the main thread can continue executing other operations instead of blocking until that long operation completes (in the case of browsers, the UI will freeze).

function fetchData(callback) {
setTimeout(() => {
const data = { name: 'John', age: 30 };
callback(data); // Calling the callback function with data
}, 2000); // Simulating a 2-second delay
}
console.log('Fetching data...');
fetchData((data) => {
console.log(data); // Output: { name: 'John', age: 30 } (after 2 seconds)
});
console.log('Call made to fetch data'); // This will print before the data is fetched

Synchronous vs asynchronous functions

In JavaScript, the concepts of synchronous and asynchronous functions are fundamental to understanding how code execution is managed, particularly in the context of handling operations like I/O tasks, API calls, and other time-consuming processes.

Synchronous functions

Synchronous functions execute in a sequential order, one after the other. Each operation must wait for the previous one to complete before moving on to the next.

  • Synchronous code is blocking, meaning the program execution halts until the current operation finishes.
  • It follows a strict sequence, executing instructions line by line.
  • Synchronous functions are easier to understand and debug since the flow is predictable.

Synchronous function examples

  1. Reading files synchronously: When reading a file from the file system using the synchronous readFileSync method from the fs module in Node.js, the program execution is blocked until the entire file is read. This can cause performance issues, especially for large files or when reading multiple files sequentially.

    const fs = require('fs');
    const data = fs.readFileSync('large-file.txt', 'utf8');
    console.log(data); // Execution is blocked until the file is read.
    console.log('End of the program');
  2. Looping over large datasets: Iterating over a large array or dataset synchronously can freeze the user interface or browser tab until the operation completes, leading to an unresponsive application.

    const largeArray = new Array(1_000_000).fill(0);
    // Blocks the main thread until the million operations are completed.
    const result = largeArray.map((num) => num * 2);
    console.log(result);

Asynchronous functions

Asynchronous functions do not block the execution of the program. They allow other operations to continue while waiting for a response or completion of a time-consuming task.

  • Asynchronous code is non-blocking, allowing the program to keep running without waiting for a specific operation to finish.
  • It enables concurrent execution, improving performance and responsiveness.
  • Asynchronous functions are commonly used for tasks like network requests, file I/O, and timers.

Asynchronous function examples

  1. Network requests: Making network requests, such as fetching data from an API or sending data to a server, is typically done asynchronously. This allows the application to remain responsive while waiting for the response, preventing the user interface from freezing.

    console.log('Start of the program'); // This will be printed first as program starts here
    fetch('https://jsonplaceholder.typicode.com/todos/1')
    .then((response) => response.json())
    .then((data) => {
    console.log(data);
    /** Process the data without blocking the main thread
    * and printed at the end if fetch call succeeds
    */
    })
    .catch((error) => console.error(error));
    console.log('End of program'); // This will be printed before the fetch callback
  2. User input and events: Handling user input events, such as clicks, key presses, or mouse movements, is inherently asynchronous. The application needs to respond to these events without blocking the main thread, ensuring a smooth user experience.

    const button = document.getElementById('myButton');
    button.addEventListener('click', () => {
    // Handle the click event asynchronously
    console.log('Button clicked');
    });
  3. Timers and Animations: Timers (setTimeout(), setInterval()) and animations (e.g., requestAnimationFrame()) are asynchronous operations that allow the application to schedule tasks or update animations without blocking the main thread.

    setTimeout(() => {
    console.log('This message is delayed by 2 seconds');
    }, 2000);
    setInterval(() => {
    console.log('Current time:', new Date().toLocaleString());
    }, 2000); // Interval runs every 2 seconds

By using asynchronous functions and operations, JavaScript can handle time-consuming tasks without freezing the user interface or blocking the main thread.

It is important to note that async functions do not run on a different thread. They still run on the main thread. However, it is possible to achieve parallelism in JavaScript by using Web workers.

Achieving parallelism in JavaScript via web workers

Web workers allow you to spawn separate background threads that can perform CPU-intensive tasks in parallel with the main thread. These worker threads can communicate with the main thread via message passing, but they do not have direct access to the DOM or other browser APIs.

// main.js
const worker = new Worker('worker.js');
worker.onmessage = function (event) {
console.log('Result from worker:', event.data);
};
worker.postMessage('Start computation');
// worker.js
self.onmessage = function (event) {
const result = performHeavyComputation();
self.postMessage(result);
};
function performHeavyComputation() {
// CPU-intensive computation
return 'Computation result';
}

In this example, the main thread creates a new web worker and sends it a message to start a computation. The worker performs the heavy computation in parallel with the main thread and sends the result back via postMessage().

Event loop

The async nature of JavaScript is powered by a JavaScript engine's event loop allowing concurrent operations even though JavaScript is single-threaded. It's an important concept to understand so we highly recommend going through that topic as well.
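
As a quick illustration of the ordering the event loop produces (a minimal sketch):

console.log('1: synchronous');
setTimeout(() => console.log('4: macrotask (setTimeout callback)'), 0);
Promise.resolve().then(() => console.log('3: microtask (promise callback)'));
console.log('2: synchronous');
// Logs in the order 1, 2, 3, 4: synchronous code first,
// then the microtask queue, then the macrotask (task) queue.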

Further reading

What are the pros and cons of using Promises instead of callbacks in JavaScript?

Topics
AsyncJavaScript

TL;DR

Promises offer a cleaner alternative to callbacks, helping to avoid callback hell and making asynchronous code more readable. They facilitate writing sequential and parallel asynchronous operations with ease. However, using promises may introduce slightly more complex code.


Pros

Avoid callback hell which can be unreadable.

Callback hell, also known as the "pyramid of doom," is a phenomenon that occurs when you have multiple nested callbacks in your code. This can lead to code that is difficult to read, maintain, and debug. Here's an example of callback hell:

function getFirstData(callback) {
setTimeout(() => {
callback({ id: 1, title: 'First Data' });
}, 1000);
}
function getSecondData(data, callback) {
setTimeout(() => {
callback({ id: data.id, title: data.title + ' Second Data' });
}, 1000);
}
function getThirdData(data, callback) {
setTimeout(() => {
callback({ id: data.id, title: data.title + ' Third Data' });
}, 1000);
}
// Callback hell
getFirstData((data) => {
getSecondData(data, (data) => {
getThirdData(data, (result) => {
console.log(result); // Output: {id: 1, title: "First Data Second Data Third Data"}
});
});
});

Promises address the problem of callback hell by providing a more linear and readable structure for your code.

// Example of sequential asynchronous code using setTimeout and Promises
function getFirstData() {
return new Promise((resolve, reject) => {
setTimeout(() => {
resolve({ id: 1, title: 'First Data' });
}, 1000);
});
}
function getSecondData(data) {
return new Promise((resolve, reject) => {
setTimeout(() => {
resolve({ id: data.id, title: data.title + ' Second Data' });
}, 1000);
});
}
function getThirdData(data) {
return new Promise((resolve, reject) => {
setTimeout(() => {
resolve({ id: data.id, title: data.title + ' Third Data' });
}, 1000);
});
}
getFirstData()
.then(getSecondData)
.then(getThirdData)
.then((data) => {
console.log(data); // Output: {id: 1, title: "First Data Second Data Third Data"}
})
.catch((error) => console.error('Error:', error));

Makes it easy to write sequential asynchronous code that is readable with .then().

In the above code example, we use the .then() method to chain these Promises together, allowing the code to execute sequentially. It provides a cleaner and more manageable way to handle asynchronous operations in JavaScript.
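
For reference, the same sequential flow can also be written with async/await, which is syntax built on top of Promises (a minimal sketch reusing the functions above):

async function getAllData() {
  try {
    const first = await getFirstData();
    const second = await getSecondData(first);
    const third = await getThirdData(second);
    console.log(third); // Output: {id: 1, title: "First Data Second Data Third Data"}
  } catch (error) {
    console.error('Error:', error);
  }
}
getAllData();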

Makes it easy to write parallel asynchronous code with Promise.all().

Both Promise.all() and callbacks can be used to write parallel asynchronous code. However, Promise.all() provides a more concise and readable way to handle multiple Promises, especially when dealing with complex asynchronous workflows.

function getData1() {
return new Promise((resolve, reject) => {
setTimeout(() => {
resolve({ id: 1, title: 'Data 1' });
}, 1000);
});
}
function getData2() {
return new Promise((resolve, reject) => {
setTimeout(() => {
resolve({ id: 2, title: 'Data 2' });
}, 1000);
});
}
function getData3() {
return new Promise((resolve, reject) => {
setTimeout(() => {
resolve({ id: 3, title: 'Data 3' });
}, 1000);
});
}
Promise.all([getData1(), getData2(), getData3()])
.then((results) => {
console.log(results); // Output: [{ id: 1, title: 'Data 1' }, { id: 2, title: 'Data 2' }, { id: 3, title: 'Data 3' }]
})
.catch((error) => {
console.error('Error:', error);
});

Easier error handling with .catch() and guaranteed cleanup with .finally()

Promises make error handling more straightforward by allowing you to catch errors at the end of a chain using .catch(), instead of manually checking for errors in every callback. This leads to cleaner and more maintainable code.

Additionally, .finally() lets you run code after the Promise settles, whether it succeeded or failed, which is great for cleanup tasks like hiding spinners or resetting UI states.

function getFirstData() {
return new Promise((resolve) => {
setTimeout(() => {
resolve({ id: 1, title: 'First Data' });
}, 1000);
});
}
function getSecondData(data) {
return new Promise((resolve) => {
setTimeout(() => {
resolve({ id: data.id, title: data.title + ' -> Second Data' });
}, 1000);
});
}
getFirstData()
.then(getSecondData)
.then((data) => {
console.log('Success:', data);
})
.catch((error) => {
console.error('Error:', error);
})
.finally(() => {
console.log('This runs no matter what');
});

With promises, these failure modes that can occur in callbacks-only code cannot happen:

  • Call the callback too early
  • Call the callback too late (or never)
  • Call the callback too few or too many times
  • Fail to pass along any necessary environment/parameters
  • Swallow any errors/exceptions that may happen

Cons

  • Slightly more complex code (debatable).

Practice

Further reading

Explain AJAX in as much detail as possible

Topics
JavaScriptNetworking

TL;DR

AJAX (Asynchronous JavaScript and XML) facilitates asynchronous communication between the client and server, enabling dynamic updates to web pages without reloading. It uses techniques like XMLHttpRequest or the fetch() API to send and receive data in the background. In modern web applications, the fetch() API is more commonly used to implement AJAX.

Using XMLHttpRequest

let xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
if (xhr.readyState === XMLHttpRequest.DONE) {
if (xhr.status === 200) {
console.log(xhr.responseText);
} else {
console.error('Request failed: ' + xhr.status);
}
}
};
xhr.open('GET', 'https://jsonplaceholder.typicode.com/todos/1', true);
xhr.send();

Using fetch()

fetch('https://jsonplaceholder.typicode.com/todos/1')
.then((response) => {
if (!response.ok) {
throw new Error('Network response was not ok');
}
return response.json();
})
.then((data) => console.log(data))
.catch((error) => console.error('Fetch error:', error));

AJAX (Asynchronous JavaScript and XML)

AJAX (asynchronous JavaScript and XML) is a set of web development techniques using many web technologies on the client side to create asynchronous web applications. Unlike traditional web applications where every user interaction triggers a full page reload, with AJAX, web applications can send data to and retrieve it from a server asynchronously (in the background) without interfering with the display and behavior of the existing page. By decoupling the data interchange layer from the presentation layer, AJAX allows for web pages, and by extension web applications, to change content dynamically without the need to reload the entire page. In practice, modern implementations commonly use JSON instead of XML, due to the advantages of JSON being native to JavaScript.

Traditionally, AJAX was implemented using the XMLHttpRequest API, but the fetch() API is more suitable and easier to use for modern web applications.

XMLHttpRequest API

Here's a basic example of how it can be used:

let xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
if (xhr.readyState === XMLHttpRequest.DONE) {
if (xhr.status === 200) {
console.log(xhr.responseText);
} else {
console.error('Request failed: ' + xhr.status);
}
}
};
xhr.open('GET', 'https://jsonplaceholder.typicode.com/todos/1', true);
xhr.send();

fetch() API

Alternatively, the fetch() API provides a modern, promise-based approach to making AJAX requests. It is more commonly used in modern web applications.

Here's how you can use it:

fetch('https://jsonplaceholder.typicode.com/todos/1')
.then((response) => {
if (!response.ok) {
throw new Error('Network response was not ok');
}
return response.json();
})
.then((data) => console.log(data))
.catch((error) => console.error('Fetch error:', error));

How does AJAX work?

In modern browsers, AJAX is done using the fetch() API instead of XMLHttpRequest, so we will explain how the fetch() API works instead:

  1. Making a request: The fetch() function initiates an asynchronous request to fetch a resource from a URL. It takes one mandatory argument – the URL of the resource to fetch, and optionally accepts a second argument - an options object that allows configuring the HTTP request with options like the HTTP method, headers, body, etc.

    fetch('https://api.example.com/data', {
    method: 'GET', // or 'POST', 'PUT', 'DELETE', etc.
    headers: {
    'Content-Type': 'application/json',
    },
    });
  2. Returning a promise: The fetch() function returns a Promise that resolves to a Response object representing the response from the server. This Promise needs to be handled using .then() or async/await.

  3. Handling the response: The Response object provides methods to define how the body content should be handled, such as .json() for parsing JSON data, .text() for plain text, .blob() for binary data, etc.

    fetch('https://jsonplaceholder.typicode.com/todos/1')
    .then((response) => response.json())
    .then((data) => console.log(data))
    .catch((error) => console.error('Error:', error));
  4. Asynchronous nature: The fetch API is asynchronous, allowing the browser to continue executing other tasks while waiting for the server response. This prevents blocking the main thread and provides a better user experience. The then() and catch() callbacks are put onto the microtask queue when executed as part of the event loop.

  5. Request options: The optional second argument to fetch() allows configuring various aspects of the request, such as the HTTP method, headers, body, credentials, caching behavior, and more.

  6. Error handling: Errors during the request, such as network failures or invalid responses, are caught and propagated through the Promise chain using the .catch() method or try/catch blocks with async/await.

The fetch() API provides a modern, Promise-based approach to making HTTP requests in JavaScript, replacing the older XMLHttpRequest API. It offers a simpler and more flexible way to interact with APIs and fetch resources from servers, while integrating advanced HTTP concepts like CORS and other extensions.

Advantages and disadvantages of AJAX

While useful, using AJAX also comes with some considerations. Read more about the advantages and disadvantages of AJAX.

Further reading

What are the advantages and disadvantages of using AJAX?

Topics
JavaScriptNetworking

TL;DR

AJAX (Asynchronous JavaScript and XML) is a technique in JavaScript that allows web pages to send and retrieve data asynchronously from servers without refreshing or reloading the entire page.

Advantages

  • Smoother user experience: Updates happen without full page reloads, like in mail and chat applications.
  • Lighter server load: Only necessary data is fetched via AJAX, reducing server load and improving perceived performance of webpages.
  • Maintains client state: User interactions and any client states are persisted within the page.

Disadvantages

  • Reliance on JavaScript: If disabled, AJAX functionality breaks.
  • Bookmarking issues: Dynamic content makes bookmarking specific page states difficult.
  • SEO challenges: Search engines may struggle to index dynamic content.
  • Performance concerns: Processing AJAX data on low-end devices can be slow.

AJAX (Asynchronous JavaScript and XML)

AJAX (Asynchronous JavaScript and XML) is a technique in JavaScript that allows web pages to send and retrieve data asynchronously from servers without refreshing or reloading the entire page. When it was first created, it revolutionized web development and resulted in a smoother and more responsive user experience. AJAX is explained in detail in this question.

Here's a breakdown of AJAX's pros and cons:

Advantages

  • Enhanced user experience: AJAX allows for partial page updates without full reloads. This creates a smoother and more responsive feel for users, as they don't have to wait for the entire page to refresh for every interaction.
  • Reduced server load and bandwidth usage: By exchanging only specific data with the server, AJAX minimizes the amount of data transferred. This leads to faster loading times and reduced server strain, especially for frequently updated content.
  • Improved performance: Faster data exchange and partial page updates contribute to a quicker web application overall. Users perceive the application as more responsive and efficient.
  • Dynamic content updates, preserving client-only state: AJAX enables real-time data updates without full page reloads, preserving client-only state like form inputs and scroll positions. This is ideal for features like live chat, stock tickers, or collaborative editing.
  • Form validation: AJAX can be used for client-side form validation that requires back end interactions (e.g. checking for duplicate usernames), providing immediate feedback to users without requiring a form submission request. This improves the user experience and avoids unnecessary full page reloads for invalid submissions.

Disadvantages

  • Increased complexity: Developing AJAX-powered applications can be more complex than traditional web development. It requires handling asynchronous communication and potential race conditions between requests and responses. Since pages are not reloaded, parts of the page can become stale over time, which can be confusing.
  • Dependency on JavaScript: AJAX relies on JavaScript to function. Users with JavaScript disabled or unsupported browsers won't experience the full functionality of the application. A fallback mechanism (graceful degradation) is necessary to ensure basic functionality for these users.
  • Security concerns: AJAX introduces new security considerations like Cross-Site Scripting (XSS) vulnerabilities (if servers directly return HTML markup) if not implemented carefully. Proper data validation and sanitization are crucial to prevent security risks.
  • Browser support: Older browsers might not fully support AJAX features. Developers need to consider compatibility when building with AJAX to ensure a good user experience across different browsers.
  • SEO challenges: Search engines might have difficulty indexing content dynamically loaded through AJAX. Developers need to employ techniques like server-side rendering or proper content embedding to ensure search engine visibility.
  • Navigation problems: AJAX can interfere with the browser's back and forward navigation buttons, as well as bookmarking, since the URL may not change with asynchronous updates.
  • State management: Maintaining the application state and ensuring proper navigation can be challenging with AJAX, requiring additional techniques such as the History API or URL hash fragments.

While AJAX offers significant advantages in terms of user experience, performance, and functionality, it also introduces complexities and potential drawbacks related to development, SEO, browser compatibility, security, and navigation.

Is AJAX still relevant today?

Mostly as a historical term. The technique it described, fetching data without a page reload, is now the universal default. The original technology stack (XMLHttpRequest, XML payloads, callback-based code) has been replaced almost everywhere.

Here is what changed:

Aspect | Original AJAX (~2005) | Modern equivalent
Transport API | XMLHttpRequest | fetch()
Async style | Callbacks | async/await over Promises
Payload format | XML (responseXML) | JSON (almost always)
Cross-origin | Forbidden by same-origin policy; required JSONP hacks | Solved by CORS
SEO of dynamic content | A real problem (Google couldn't index) | Addressed by SSR, SSG, RSC; Googlebot also runs JavaScript
Browser inconsistency | Required wrapper libraries (jQuery $.ajax) | All evergreen browsers ship the same fetch API

The pattern AJAX introduced is alive: async data loading without a full page reload is now the default expectation. The underlying technology has moved on.

Using "AJAX" today to mean "we make a fetch call from JavaScript" is loose but generally fine. Reaching for XMLHttpRequest for new code, on the other hand, is a red flag, since fetch is the modern API.

The modern equivalent: fetch() with async/await

Side-by-side, classic AJAX vs current best practice:

// Classic XHR (~2005-style)
const xhr = new XMLHttpRequest();
xhr.open('GET', '/api/users');
xhr.onreadystatechange = function () {
if (xhr.readyState === 4 && xhr.status === 200) {
const users = JSON.parse(xhr.responseText);
render(users);
} else if (xhr.readyState === 4) {
showError();
}
};
xhr.onerror = function () {
showError();
};
xhr.send();
// Modern fetch + async/await
async function loadUsers() {
try {
const res = await fetch('/api/users');
if (!res.ok) throw new Error(`HTTP ${res.status}`);
const users = await res.json();
render(users);
} catch (err) {
showError(err);
}
}

The modern version is shorter, easier to read, and composes naturally with Promise.all, AbortController for cancellation, and libraries like React Query and SWR for caching and deduplication. The remaining reason some teams still use XMLHttpRequest is to track upload progress with the xhr.upload.onprogress event, since fetch upload-progress reporting is more limited. Streaming a ReadableStream request body in fetch (which would let you build progress on top) is supported in Chromium (since Chrome 105) and Safari (since 18.2), but not yet in Firefox; it also requires the duplex: 'half' option on the request.

Re-examining the disadvantages

Several disadvantages historically listed for AJAX are no longer real concerns in modern apps:

  • SEO challenges: Googlebot has rendered JavaScript since around 2015, and modern frameworks ship SSR (Next.js, Nuxt, Remix), SSG (Astro, 11ty), or React Server Components for content that needs to be indexable. The AJAX-era SEO problem is largely solved if you choose your rendering strategy intentionally.
  • Bookmarking and back-button issues: the History API (pushState, replaceState, and the popstate event) and modern routers (Next.js, React Router, Vue Router) handle URL state automatically. AJAX-era apps that broke the back button were generally not using these.
  • Browser support: fetch is in every evergreen browser (Chrome, Firefox, Safari, Edge). Internet Explorer is end-of-life, and the browser-compatibility argument no longer applies.
  • Reliance on JavaScript: a tiny minority of users have JavaScript disabled, and modern apps generally assume JavaScript anyway. If you need a no-JS fallback, you build a server-rendered version (which RSC and progressive enhancement support directly).

The disadvantages that genuinely remain are about complexity and security: race conditions and stale state, error handling in async code, XSS via innerHTML from API responses, and the ongoing complexity of state management. These are real, but they are problems of any client-side data fetching, not specifically of AJAX.

Further reading

What are the differences between `XMLHttpRequest` and `fetch()` in JavaScript and browsers?

Topics
JavaScriptNetworking

TL;DR

XMLHttpRequest (XHR) and fetch() API are both used for asynchronous HTTP requests in JavaScript (AJAX). fetch() offers a cleaner syntax, promise-based approach, and more modern feature set compared to XHR. However, there are some differences:

  • XMLHttpRequest uses event callbacks, while fetch() utilizes promise chaining.
  • fetch() provides more flexibility in headers and request bodies.
  • fetch() supports cleaner error handling with catch().
  • Handling caching with XMLHttpRequest is difficult, but caching is supported by fetch() by default via the cache value of the second parameter to fetch() or Request().
  • fetch() requires an AbortController for cancelation, while XMLHttpRequest provides an abort() method.
  • XMLHttpRequest has good support for progress tracking, which fetch() lacks.
  • XMLHttpRequest is only available in browsers and is not natively supported in Node.js environments. On the other hand, fetch() is a web standard API that is also available in modern server-side runtimes such as Node.js (v18+), Deno, and Bun.

These days fetch() is preferred for its cleaner syntax and modern features.


XMLHttpRequest vs fetch()

Both XMLHttpRequest (XHR) and fetch() are ways to make asynchronous HTTP requests in JavaScript. However, they differ significantly in syntax, promise handling, and feature set.

Syntax and usage

XMLHttpRequest is event-driven and requires attaching event listeners to handle response/error states. The basic syntax for creating an XMLHttpRequest object and sending a request is as follows:

const xhr = new XMLHttpRequest();
xhr.open('GET', 'https://jsonplaceholder.typicode.com/todos/1', true);
xhr.responseType = 'json';
xhr.onload = function () {
if (xhr.status === 200) {
console.log(xhr.response);
}
};
xhr.send();

xhr is an instance of the XMLHttpRequest class. The open method is used to specify the request method, URL, and whether the request should be asynchronous. The onload event is used to handle the response, and the send method is used to send the request.

fetch() provides a more straightforward and intuitive way of making HTTP requests. It is Promise-based and returns a promise that resolves with the response or rejects with an error. The basic syntax for making a GET request using fetch() is as follows:

fetch('https://jsonplaceholder.typicode.com/todos/1')
.then((response) => response.text())
.then((data) => console.log(data));

Request headers

Both XMLHttpRequest and fetch() support setting request headers. However, fetch() provides more flexibility in terms of setting headers, as it supports custom headers and allows for more complex header configurations.

XMLHttpRequest supports setting request headers using the setRequestHeader method:

xhr.setRequestHeader('Content-Type', 'application/json');
xhr.setRequestHeader('Authorization', 'Bearer YOUR_TOKEN');

For fetch(), headers are passed as an object in the second argument to fetch():

fetch('https://jsonplaceholder.typicode.com/todos/1', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: 'Bearer YOUR_TOKEN',
},
body: JSON.stringify({
name: 'John Doe',
age: 30,
}),
});

Request body

Both XMLHttpRequest and fetch() support sending request bodies. However, fetch() provides more flexibility in terms of sending request bodies, as it supports sending JSON data, form data, and more.

XMLHttpRequest supports sending request bodies using the send method:

const xhr = new XMLHttpRequest();
xhr.open('POST', 'https://jsonplaceholder.typicode.com/todos/1', true);
xhr.send(
JSON.stringify({
name: 'John Doe',
age: 30,
}),
);

fetch() supports sending request bodies using the body property in the second argument to fetch():

fetch('https://jsonplaceholder.typicode.com/todos/1', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
name: 'John Doe',
age: 30,
}),
});

Response handling

XMLHttpRequest provides a responseType property to set the response format that we are expecting. responseType is 'text' by default but it supports types like 'text', 'arraybuffer', 'blob', 'document' and 'json'.

const xhr = new XMLHttpRequest();
xhr.open('GET', 'https://jsonplaceholder.typicode.com/todos/1', true);
xhr.responseType = 'json'; // or 'text', 'blob', 'arraybuffer'
xhr.onload = function () {
if (xhr.status === 200) {
console.log(xhr.response);
}
};
xhr.send();

On the other hand, fetch() provides a unified Response object with methods like .json() and .text() for accessing data.

// JSON data
fetch('https://jsonplaceholder.typicode.com/todos/1')
.then((response) => response.json())
.then((data) => console.log(data));
// Text data
fetch('https://jsonplaceholder.typicode.com/todos/1')
.then((response) => response.text())
.then((data) => console.log(data));

Error handling

Both support error handling but fetch() provides more flexibility in terms of error handling, as it supports handling errors using the .catch() method.

XMLHttpRequest supports error handling using the onerror event:

const xhr = new XMLHttpRequest();
xhr.open('GET', 'https://jsonplaceholder.typicod.com/todos/1', true); // Typo in URL
xhr.responseType = 'json';
xhr.onload = function () {
if (xhr.status === 200) {
console.log(xhr.response);
}
};
xhr.onerror = function () {
console.error('Error occurred');
};
xhr.send();

fetch() supports error handling using the catch() method on the returned Promise:

fetch('https://jsonplaceholder.typicod.com/todos/1') // Typo in URL
.then((response) => response.json())
.then((data) => console.log(data))
.catch((error) => console.error('Error occurred: ' + error));

Caching control

Handling caching with XMLHttpRequest is difficult; you might need to add a random value to the query string to bypass the browser cache. With fetch(), caching behavior can be controlled via the cache option in the second argument:

const res = await fetch('https://jsonplaceholder.typicode.com/todos/1', {
method: 'GET',
cache: 'default',
});

Possible values for the cache option are default, no-store, reload, no-cache, force-cache, and only-if-cached.

Cancelation

In-flight XMLHttpRequests can be canceled by running the XMLHttpRequest's abort() method. An abort handler can be attached by assigning to the .onabort property if necessary:

const xhr = new XMLHttpRequest();
xhr.open('GET', 'https://jsonplaceholder.typicode.com/todos/1');
xhr.send();
// ...
xhr.onabort = () => console.log('aborted');
xhr.abort();

Aborting a fetch() requires creating an AbortController object and passing its signal as the signal property of the options object when calling fetch().

const controller = new AbortController();
const signal = controller.signal;
fetch('https://jsonplaceholder.typicode.com/todos/1', { signal })
.then((response) => response.json())
.then((data) => console.log(data))
.catch((error) => console.error('Error occurred: ' + error));
// Abort request.
controller.abort();

Progress support

XMLHttpRequest supports tracking the progress of requests by attaching a handler to the XMLHttpRequest object's progress event. This is especially useful when uploading large files such as videos to track the progress of the upload.

const xhr = new XMLHttpRequest();
// The callback is passed a `ProgressEvent`.
xhr.upload.onprogress = (event) => {
console.log(Math.round((event.loaded / event.total) * 100) + '%');
};

The callback assigned to onprogress is passed a ProgressEvent:

  • The loaded field on the ProgressEvent is a 64-bit integer indicating the amount of work already performed (bytes uploaded/downloaded) by the underlying process.
  • The total field on the ProgressEvent is a 64-bit integer representing the total amount of work that the underlying process is in the progress of performing. When downloading resources, this is the Content-Length value of the HTTP response.

On the other hand, the fetch() API does not offer any convenient way to track upload progress. Download progress can be approximated by reading the Response body stream and comparing the bytes received against the Content-Length header, but it is more involved, and there is no equally simple equivalent for uploads.
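
A sketch of what that download-progress workaround looks like, assuming the server sends a Content-Length header (the URL and function name are illustrative):

async function downloadWithProgress(url) {
  const response = await fetch(url);
  const total = Number(response.headers.get('Content-Length')); // may be missing or 0
  const reader = response.body.getReader();
  let received = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    received += value.length;
    if (total) {
      console.log(Math.round((received / total) * 100) + '%');
    }
  }
}
downloadWithProgress('https://jsonplaceholder.typicode.com/todos/1');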

Choosing between XMLHttpRequest and fetch()

In modern development scenarios, fetch() is the preferred choice due to its cleaner syntax, promise-based approach, and improved handling of features like error handling, headers, and CORS.

Further reading

How do you abort a web request using `AbortController` in JavaScript?

Topics
JavaScriptNetworking

TL;DR

AbortController is used to cancel ongoing asynchronous operations like fetch requests.

const controller = new AbortController();
const signal = controller.signal;
fetch('https://jsonplaceholder.typicode.com/todos/1', { signal })
.then((response) => {
// Handle response
})
.catch((error) => {
if (error.name === 'AbortError') {
console.log('Request aborted');
} else {
console.error('Error:', error);
}
});
// Call abort() to abort the request
controller.abort();

Aborting web requests is useful for:

  • Canceling requests based on user actions.
  • Prioritizing the latest requests in scenarios with multiple simultaneous requests.
  • Canceling requests that are no longer needed, e.g. after the user has navigated away from the page.

AbortControllers

AbortController allows graceful cancelation of ongoing asynchronous operations like fetch requests. It offers a mechanism to signal to the underlying network layer that the request is no longer required, preventing unnecessary resource consumption and improving user experience.

Using AbortControllers

Using AbortControllers involves the following steps:

  1. Create an AbortController instance: Initialize an AbortController instance, which creates a signal that can be used to abort requests.
  2. Pass the signal to the request: Pass the signal to the request, typically through the signal property in the request options.
  3. Abort the request: Call the abort() method on the AbortController instance to cancel the ongoing request.

Here is an example of how to use AbortControllers with the fetch() API:

const controller = new AbortController();
const signal = controller.signal;
fetch('https://jsonplaceholder.typicode.com/todos/1', { signal })
.then((response) => {
// Handle response
})
.catch((error) => {
if (error.name === 'AbortError') {
console.log('Request aborted');
} else {
console.error('Error:', error);
}
});
// Call abort() to abort the request
controller.abort();

Use cases

Canceling a fetch() request on a user action

Cancel requests that take too long or are no longer relevant due to user interactions (e.g., user cancels uploading of a huge file).

// HTML: <button id='cancel-button'>Cancel upload</button>
const btn = document.createElement('button');
btn.id = 'cancel-button';
btn.innerHTML = 'Cancel upload';
document.body.appendChild(btn);
const controller = new AbortController();
const signal = controller.signal;
fetch('https://jsonplaceholder.typicode.com/todos/1', { signal })
.then((response) => {
// Handle successful response
})
.catch((error) => {
if (error.name === 'AbortError') {
console.log('Request canceled');
} else {
console.error('Network or other error:', error);
}
});
document.getElementById('cancel-button').addEventListener('click', () => {
controller.abort();
});
document.getElementById('cancel-button').click(); // Simulate clicking the cancel button

When you click the "Cancel upload" button, the in-flight request will be aborted.

Prioritizing latest requests in a race condition

In scenarios where multiple requests are initiated for the same data, use AbortController to prioritize the latest request and abort earlier ones.

let latestController = null; // Keeps track of the latest controller
function fetchData(url) {
if (latestController) {
latestController.abort(); // Abort any previous request
}
const controller = new AbortController();
latestController = controller;
const signal = controller.signal;
fetch(url, { signal })
.then((response) => response.json())
.then((data) => console.log('Fetched data:', data))
.catch((error) => {
if (error.name === 'AbortError') {
console.log('Request canceled');
} else {
console.error('Network or other error:', error);
}
});
}
fetchData('https://jsonplaceholder.typicode.com/posts/1');
// Simulate race conditions with new requests that quickly cancel the previous one
setTimeout(() => {
fetchData('https://jsonplaceholder.typicode.com/posts/2');
}, 5);
setTimeout(() => {
fetchData('https://jsonplaceholder.typicode.com/posts/3');
}, 5);
// Only the last request (posts/3) will be allowed to complete

In this example, when the fetchData() function is called multiple times triggering multiple fetch requests, AbortControllers will cancel all the previous requests except the latest request. This is common in scenarios like type-ahead search or infinite scrolling, where new requests are triggered frequently.

Canceling requests that are no longer needed

In situations where the user has navigated away from the page, aborting the request can prevent unnecessary operations (e.g. success callback handling) and free up resources by lowering the likelihood of memory leaks.
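
A minimal sketch of that pattern, aborting an in-flight request when the page is being hidden or unloaded (the URL is a placeholder):

const controller = new AbortController();
fetch('https://jsonplaceholder.typicode.com/todos/1', { signal: controller.signal })
  .then((response) => response.json())
  .then((data) => console.log(data))
  .catch((error) => {
    if (error.name !== 'AbortError') {
      console.error('Error:', error);
    }
  });
// Abort the request if the user leaves the page before it completes
window.addEventListener('pagehide', () => controller.abort());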

Notes

  • AbortControllers are not fetch()-specific, they can be used to abort other asynchronous tasks as well.
  • A single AbortController instance can be shared across multiple async tasks; one abort() call cancels all of them at once (see the sketch after this list).
  • Calling abort() on AbortControllers does not send any notification or signal to the server. The server is unaware of the cancelation and will continue processing the request until it completes or times out.
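
A short sketch combining these notes: the same signal cancels multiple fetch() calls and removes an event listener with a single abort() call (the URLs and handler are placeholders):

const controller = new AbortController();
const { signal } = controller;
// The same signal can be passed to several async tasks at once
fetch('https://jsonplaceholder.typicode.com/todos/1', { signal }).catch(() => {});
fetch('https://jsonplaceholder.typicode.com/todos/2', { signal }).catch(() => {});
// addEventListener also accepts a signal; aborting removes the listener
window.addEventListener('resize', () => console.log('resized'), { signal });
// One call tears everything down
controller.abort();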

Further reading

What are JavaScript polyfills for?

Topics
JavaScript

TL;DR

Polyfills in JavaScript are pieces of code that provide modern functionality to older browsers that lack native support for those features. They bridge the gap between the JavaScript language features and APIs available in modern browsers and the limited capabilities of older browser versions.

They can be implemented manually or included through libraries and are often used in conjunction with feature detection.

Common use cases include:

  • New JavaScript methods: For example, Array.prototype.includes(), Object.assign(), etc.
  • New APIs: Such as fetch(), Promise, IntersectionObserver, etc. Modern browsers support these now, but for a long time they had to be polyfilled.

Libraries and services for polyfills:

  • core-js: A modular standard library for JavaScript which includes polyfills for a wide range of ECMAScript features.

    import 'core-js/actual/array/flat-map'; // With this, Array.prototype.flatMap is available to be used.
    [1, 2].flatMap((it) => [it, it]); // => [1, 1, 2, 2]
  • Polyfill.io: A service that provides polyfills based on the features and user agents specified in the request.

    <script src="https://polyfill.io/v3/polyfill.min.js"></script>

Polyfills in JavaScript

Polyfills in JavaScript are pieces of code (usually JavaScript) that provide modern functionality on older browsers that do not natively support it. They enable developers to use newer features of the language and APIs while maintaining compatibility with older environments.

How polyfills work

Polyfills detect if a feature or API is missing in a browser and provide a custom implementation of that feature using existing JavaScript capabilities. This allows developers to write code using the latest JavaScript features and APIs without worrying about browser compatibility issues.

For example, let's consider the Array.prototype.includes() method, which determines if an array includes a specific element. This method is not supported in older browsers like Internet Explorer 11. To address this, we can use a polyfill:

// Polyfill for Array.prototype.includes()
// (simplified: the native method also accepts a fromIndex argument and treats NaN as equal to NaN)
if (!Array.prototype.includes) {
Array.prototype.includes = function (searchElement) {
for (var i = 0; i < this.length; i++) {
if (this[i] === searchElement) {
return true;
}
}
return false;
};
}
console.log([1, 2, 3].includes(2)); // true
console.log([1, 2, 3].includes(4)); // false

By including this polyfill, we can safely use Array.prototype.includes() even in browsers that don't support it natively.

Implementing polyfills

  1. Identify the missing feature: Determine whether the target browsers support the feature, or detect its presence at runtime using feature-detection checks such as typeof or the in operator (e.g. 'fetch' in window).
  2. Write the fallback implementation: Develop the fallback implementation that provides similar functionality, either using a pre-existing polyfill library or pure JavaScript code.
  3. Test the polyfill: Thoroughly test the polyfill to ensure it functions as intended across different contexts and browsers.
  4. Implement the polyfill: Enclose the code that uses the missing feature in an if statement that checks for feature support. If not supported, run the polyfill code instead.

Considerations

  • Selective loading: Polyfills should only be loaded for browsers that need them to optimize performance (see the sketch after this list).
  • Feature detection: Perform feature detection before applying a polyfill to avoid overwriting native implementations or applying unnecessary polyfills.
  • Size and performance: Polyfills can increase the JavaScript bundle size, so minification and compression techniques should be used to mitigate this impact.
  • Existing libraries: Consider using existing libraries and tools that offer comprehensive polyfill solutions for multiple features, handling feature detection, conditional loading, and fallbacks efficiently.
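
A sketch of selective loading combined with feature detection, using a dynamic import() so the polyfill is only downloaded when the feature is missing (the package name is illustrative):

async function ensureIntersectionObserver() {
  // Feature-detect first so capable browsers skip the extra download
  if (!('IntersectionObserver' in window)) {
    await import('intersection-observer'); // illustrative polyfill module
  }
}
ensureIntersectionObserver().then(() => {
  // Safe to use IntersectionObserver from this point on
  const observer = new IntersectionObserver(() => {});
  observer.disconnect();
});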

Libraries and services for polyfills

  • core-js: A modular standard library for JavaScript which includes polyfills for a wide range of ECMAScript features.

    import 'core-js/actual/array/flat-map'; // With this, Array.prototype.flatMap is available to be used.
    [1, 2].flatMap((it) => [it, it]); // => [1, 1, 2, 2]
  • Polyfill.io: A service that provides polyfills based on the features and user agents specified in the request.

    <script src="https://polyfill.io/v3/polyfill.min.js"></script>

Further reading

Why is extending built-in JavaScript objects not a good idea?

Topics
JavaScriptOOP

TL;DR

Extending a built-in/native JavaScript object means adding properties/functions to its prototype. While this may seem like a good idea at first, it is dangerous in practice. Imagine your code uses two libraries that both extend the Array.prototype by adding the same contains method; the implementations will overwrite each other and your code will have unpredictable behavior if these two methods do not work the same way.

The only time you may want to extend a native object is when you want to create a polyfill, essentially providing your own implementation for a method that is part of the JavaScript specification but might not exist in the user's browser due to it being an older browser.


Extending JavaScript

In JavaScript it's very easy to extend a built-in/native object. You can simply extend a built-in object by adding properties and functions to its prototype.

String.prototype.reverseString = function () {
return this.split('').reverse().join('');
};
console.log('hello world'.reverseString()); // Outputs 'dlrow olleh'
// Instead of extending the built-in object, write a pure utility function to do it.
function reverseString(str) {
return str.split('').reverse().join('');
}
console.log(reverseString('hello world')); // Outputs 'dlrow olleh'

Disadvantages

Extending built-in JavaScript objects is essentially modifying the global scope and it's not a good idea because:

  1. Future-proofing: If a browser decides to implement its own version of a method, your custom extension might get overridden silently, leading to unexpected behavior or conflicts.
  2. Collisions: Adding custom methods to built-in objects can lead to collisions with future browser implementations or other libraries, causing unexpected behavior or errors (see the sketch after this list).
  3. Maintenance and debugging: When extending built-in objects, it can be difficult for other developers to understand the changes made, making maintenance and debugging more challenging.
  4. Performance: Extending built-in objects can potentially impact performance, especially if the extensions are not optimized for the specific use case.
  5. Security: In some cases, extending built-in objects can introduce security vulnerabilities if not done correctly, such as adding enumerable properties that can be exploited by malicious code.
  6. Compatibility: Custom extensions to built-in objects may not be compatible with all browsers or environments, leading to issues with cross-browser compatibility.
  7. Namespace clashes: Extending built-in objects can lead to namespace clashes if multiple libraries or scripts extend the same object in different ways, causing conflicts and unexpected behavior.
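
To make the collision risk concrete, here is a sketch of two scripts that both patch Array.prototype with a contains method but with different semantics; whichever loads last silently wins (the library names are made up):

// library-a.js
Array.prototype.contains = function (item) {
  return this.indexOf(item) !== -1; // strict equality only
};
// library-b.js (loaded later, silently overwrites library A's version)
Array.prototype.contains = function (item) {
  return this.some((el) => el == item); // loose equality: '1' matches 1
};
// Application code now gets library B's behavior everywhere
console.log([1, 2, 3].contains('1')); // true, which code written against library A would not expect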

We dive deeper into why it is a bad idea to modify the global scope.

It is not recommended to extend built-in objects due to these potential issues. Instead, use composition or create custom classes and utility functions to achieve the desired functionality.

Alternatives to extending built-in objects

Instead of extending built-in objects, do the following instead:

  1. Create custom utility functions: For simple tasks, creating small utility functions specific to your needs can be a cleaner and more maintainable solution.
  2. Use libraries and frameworks: Many libraries and frameworks provide their own helper methods and extensions, eliminating the need to modify built-in objects directly.

Polyfilling as a valid reason

One valid reason to extend built-in objects is to implement polyfills for the latest ECMAScript standard and proposals. core-js is a popular library that is present on most popular websites. It not only polyfills missing features but also fixes incorrect or non-compliant implementations of JavaScript features in various browsers and runtimes.

import 'core-js/actual/array/flat-map'; // With this, Array.prototype.flatMap is available to be used.
[1, 2].flatMap((it) => [it, it]); // => [1, 1, 2, 2]

Further reading

Why is it, in general, a good idea to leave the global JavaScript scope of a website as-is and never touch it?

Topics
JavaScript

TL;DR

JavaScript executed in the browser has access to the global scope (the window object). In general, it's good software engineering practice not to pollute the global namespace unless you are working on a feature that truly needs to be global, i.e. one that is needed by the entire page. Several reasons to avoid touching the global scope:

  • Naming conflicts: Sharing the global scope across scripts can cause conflicts and bugs when new global variables or changes are introduced.
  • Cluttered global namespace: Keeping the global namespace minimal avoids making the codebase hard to manage and maintain.
  • Scope leaks: Unintentional references to global variables in closures or event handlers can cause memory leaks and performance issues.
  • Modularity and encapsulation: Good design promotes keeping variables and functions within their specific scopes, enhancing organization, reusability, and maintainability.
  • Security concerns: Global variables are accessible by all scripts, including potentially malicious ones, posing security risks, especially if sensitive data is stored there.
  • Compatibility and portability: Heavy reliance on global variables reduces code portability and integration ease with other libraries or frameworks.

Follow these best practices to avoid global scope pollution:

  • Use local variables: Declare variables within functions or blocks using var, let, or const to limit their scope.
  • Pass variables as function parameters: Maintain encapsulation by passing variables as parameters instead of accessing them globally.
  • Use immediately invoked function expressions (IIFE): Create new scopes with IIFEs to prevent adding variables to the global scope.
  • Use modules: Encapsulate code with module systems to maintain separate scopes and manageability.

What is the global scope?

In the browser, the global scope is the top-level context where variables, functions, and objects are accessible from anywhere in the code. The global scope is represented by the window object. Any variables or functions declared outside of any function or block (that is not within any module) are added to the window object and can be accessed globally.

For example:

// Assuming this is run in the global scope and not within a module.
var globalVariable = 'I am global';
function globalFunction() {
console.log('I am a global function');
}
console.log(window.globalVariable); // 'I am global'
window.globalFunction(); // 'I am a global function'

In this example, globalVariable and globalFunction are added to the window object and can be accessed from anywhere in the global context.

Pitfalls of global scope

In general, it's good software engineering practice not to pollute the global namespace unless you are working on a feature that truly needs to be global, i.e. one that is needed by the entire page. There are many reasons to avoid touching the global scope:

  • Naming conflicts: The global scope is shared across all scripts on a web page. If you introduce new global variables or modify existing ones, you risk causing naming conflicts with other scripts or libraries used on the same page. This can lead to unexpected behavior and difficult-to-debug issues.
  • Cluttered global namespace: The global namespace should be kept as clean and minimal as possible. Adding unnecessary global variables or functions can clutter the namespace and make it harder to manage and maintain the codebase over time.
  • Scope leaks: When working with closures or event handlers, it's easy to accidentally create unintended references to global variables, leading to memory leaks and performance issues. By avoiding global variables altogether, you can prevent these types of scope leaks.
  • Modularity and encapsulation: One of the principles of good software design is modularity and encapsulation. By keeping variables and functions within their respective scopes (e.g., module, function, or block scope), you promote better code organization, reusability, and maintainability.
  • Security concerns: Global variables can be accessed and modified by any script running on the page, including potentially malicious scripts. It is quite common for websites to load third-party scripts and in the event someone's network is compromised, it can pose security risks, especially if sensitive data is stored in global variables. However, in the first place you should not expose any sensitive data on the client.
  • Compatibility and portability: By relying heavily on global variables, your code becomes less portable and more dependent on the specific environment it was written for. This can make it harder to integrate with other libraries or frameworks, or to run the code in different environments (e.g., server-side vs browser).

Here's an example of global scope being used.

// Assuming this is run in the global scope, not within a module.
let count = 0;
function incrementCount() {
count++;
console.log(count);
}
function decrementCount() {
count--;
console.log(count);
}
incrementCount(); // Output: 1
decrementCount(); // Output: 0

In this example, count, incrementCount, and decrementCount are defined in the global scope. Any other classic script on the page can access and modify count, and the two function declarations are also exposed as properties of window.

Avoiding global scope pollution

By now we hope that you're convinced that it's not a good idea to define variables on the global scope. To avoid polluting the global scope, it is recommended to follow best practices such as:

  • Use local variables: Declare variables within functions or blocks to limit their scope and prevent them from being accessed globally. Use var, let, or const to declare variables within a specific scope, ensuring they are not accidentally made global.
  • Pass variables as function parameters: Instead of accessing variables directly from the outer scope, pass them as parameters to functions to maintain encapsulation and avoid global scope pollution.
  • Use modules: Utilize module systems to encapsulate your code and prevent global scope pollution. Each module has its own scope, making it easier to manage and maintain your code.
  • Use immediately invoked function expressions (IIFE): If modules are not available, wrap your code in an IIFE to create a new scope, preventing variables from being added to the global scope unless you explicitly expose them.
// Assuming this is run in the global scope, not within a module.
(function () {
  let count = 0;
  window.incrementCount = function () {
    count++;
    console.log(count);
  };
  window.decrementCount = function () {
    count--;
    console.log(count);
  };
})();
incrementCount(); // Output: 1
decrementCount(); // Output: 0

In this example, count is not accessible in the global scope. It can only be accessed and modified by the incrementCount and decrementCount functions. These functions are exposed to the global scope by attaching them to the window object, but they still have access to the count variable in their parent scope. This provides a way to encapsulate the underlying data and only expose the necessary operations – no direct manipulation of the value is allowed.
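ES modules achieve the same encapsulation without attaching anything to window. A minimal sketch, assuming hypothetical file names counter.js and main.js:

// counter.js
let count = 0; // Module-scoped; not visible to other scripts

export function incrementCount() {
  count++;
  console.log(count);
}

export function decrementCount() {
  count--;
  console.log(count);
}

// main.js
import { incrementCount, decrementCount } from './counter.js';

incrementCount(); // Output: 1
decrementCount(); // Output: 0

Here count stays private to the module; only the two functions are exported, and nothing is added to the global scope at all.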


Further reading

Explain the differences between CommonJS modules and ES modules in JavaScript

Topics
JavaScript

TL;DR

In JavaScript, modules are reusable pieces of code that encapsulate functionality, making it easier to manage, maintain, and structure your applications. Modules allow you to break down your code into smaller, manageable parts, each with its own scope.

CommonJS is an older module system that was initially designed for server-side JavaScript development with Node.js. It uses the require() function to load modules and the module.exports or exports object to define the exports of a module.

// my-module.js
const value = 42;
module.exports = { value };
// main.js
const myModule = require('./my-module.js');
console.log(myModule.value); // 42

ES Modules (ECMAScript Modules) are the standardized module system introduced in ES6 (ECMAScript 2015). They use the import and export statements to handle module dependencies.

// my-module.js
export const value = 42;
// main.js
import { value } from './my-module.js';
console.log(value); // 42

CommonJS vs ES modules

Feature | CommonJS | ES modules
Module syntax | require() for importing, module.exports for exporting | import for importing, export for exporting
Environment | Primarily used in Node.js for server-side development | Designed for both browser and server-side JavaScript (Node.js)
Loading | Synchronous loading of modules | Asynchronous loading of modules
Structure | Dynamic imports that can be conditionally called | Static top-level import/export statements; dynamic loading supported separately via import()
File extensions | .js (default) | .mjs or .js (with type: "module" in package.json)
Browser support | Not natively supported in browsers | Natively supported in modern browsers
Optimization | Limited optimization due to dynamic nature | Allows optimizations like tree-shaking due to static structure
Compatibility | Widely used in existing Node.js codebases and libraries | Newer standard, gaining adoption in modern projects
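The "Structure" row above refers to dynamic loading via import(). A minimal sketch of conditionally loading a module at runtime (the module path and its init export are hypothetical):

// import() returns a promise, so it can be awaited or chained,
// and it can be called conditionally at runtime.
async function loadAnalytics(enabled) {
  if (!enabled) return;
  const analytics = await import('./analytics.js'); // hypothetical module
  analytics.init(); // assumes the module exports an init() function
}

loadAnalytics(true);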

Modules in JavaScript

Modules in JavaScript are a way to organize and encapsulate code into reusable and maintainable units. They allow developers to break down their codebase into smaller, self-contained pieces, promoting code reuse, separation of concerns, and better organization. There are two main module systems in JavaScript: CommonJS and ES modules.

CommonJS

CommonJS is an older module system that was initially designed for server-side JavaScript development with Node.js. It uses the require function to load modules and the module.exports or exports object to define the exports of a module.

  • Syntax: Modules are included using require() and exported using module.exports.
  • Environment: Primarily used in Node.js.
  • Execution: Modules are loaded synchronously.
  • Modules are loaded dynamically at runtime.
// my-module.js
const value = 42;
module.exports = { value };
// main.js
const myModule = require('./my-module.js');
console.log(myModule.value); // 42

ES Modules

ES Modules (ECMAScript Modules) are the standardized module system introduced in ES6 (ECMAScript 2015). They use the import and export statements to handle module dependencies.

  • Syntax: Modules are imported using import and exported using export.
  • Environment: Can be used in both browser environments and Node.js (with certain configurations).
  • Execution: Modules are loaded asynchronously.
  • Support: Introduced in ES2015, now widely supported in modern browsers and Node.js.
  • Modules are loaded statically at compile-time.
  • Enables better performance due to static analysis and tree-shaking.
// my-module.js
export const value = 42;
// main.js
import { value } from './my-module.js';
console.log(value); // 42

Summary

While CommonJS was the default module system in Node.js initially, ES modules are now the recommended approach for new projects, as they provide better tooling, performance, and ecosystem compatibility. However, CommonJS modules are still widely used in existing codebases and libraries, especially for legacy dependencies.

Further reading

What are the various data types in JavaScript?

Topics
JavaScript

TL;DR

In JavaScript, data types can be categorized into primitive and non-primitive types:

Primitive data types

  • Number: Represents both integers and floating-point numbers.
  • String: Represents sequences of characters.
  • Boolean: Represents true or false values.
  • Undefined: A variable that has been declared but not assigned a value.
  • Null: Represents the intentional absence of any object value.
  • Symbol: A unique and immutable value used as object property keys. Read more in our deep dive on Symbols.
  • BigInt: Represents integers with arbitrary precision.

Non-primitive (Reference) data types

  • Object: Used to store collections of data.
  • Array: An ordered collection of data.
  • Function: A callable object.
  • Date: Represents dates and times.
  • RegExp: Represents regular expressions.
  • Map: A collection of keyed data items.
  • Set: A collection of unique values.

The primitive types store a single value, while non-primitive types can store collections of data or complex entities.


Data types in JavaScript

JavaScript, like many programming languages, has a variety of data types to represent different kinds of data. The main data types in JavaScript can be divided into two categories: primitive and non-primitive (reference) types.

Primitive data types

  1. Number: Represents both integer and floating-point numbers. JavaScript only has one type of number.
let age = 25;
let price = 99.99;
console.log(price); // 99.99
  2. String: Represents sequences of characters. Strings can be enclosed in single quotes, double quotes, or backticks (for template literals).
let myName = 'John Doe';
let greeting = 'Hello, world!';
let message = `Welcome, ${myName}!`;
console.log(message); // "Welcome, John Doe!"
  3. Boolean: Represents logical entities and can have two values: true or false.
let isActive = true;
let isOver18 = false;
console.log(isOver18); // false
  4. Undefined: A variable that has been declared but not assigned a value is of type undefined.
let user;
console.log(user); // undefined
  5. Null: Represents the intentional absence of any object value. It is a primitive value and is treated as a falsy value.
let user = null;
console.log(user); // null
if (!user) {
console.log('user is a falsy value');
}
  6. Symbol: A unique and immutable primitive value, typically used as the key of an object property.
let sym1 = Symbol();
let sym2 = Symbol('description');
console.log(sym1); // Symbol()
console.log(sym2); // Symbol(description)
  7. BigInt: Used for representing integers with arbitrary precision, useful for working with very large numbers.
let bigNumber = BigInt(9007199254740991);
let anotherBigNumber = 1234567890123456789012345678901234567890n;
console.log(bigNumber); // 9007199254740991n
console.log(anotherBigNumber); // 1234567890123456789012345678901234567890n

Non-primitive (reference) data types

  1. Object: It is used to store collections of data and more complex entities. Objects are created using curly braces {}.
let person = {
name: 'Alice',
age: 30,
};
console.log(person); // {name: "Alice", age: 30}
  2. Array: A special type of object used for storing ordered collections of data. Arrays are created using square brackets [].
let numbers = [1, 2, 3, 4, 5];
console.log(numbers);
  3. Function: Functions in JavaScript are objects. They can be defined using function declarations or expressions.
function greet() {
console.log('Hello!');
}
let add = function (a, b) {
return a + b;
};
greet(); // "Hello!"
console.log(add(2, 3)); // 5
  4. Date: Represents dates and times. The Date object is used to work with dates.
let today = new Date().toLocaleTimeString();
console.log(today);
  5. RegExp: Represents regular expressions, which are patterns used to match character combinations in strings.
let pattern = /abc/;
let str = '123abc456';
console.log(pattern.test(str)); // true
  6. Map: A collection of keyed data items, similar to an object but allows keys of any type.
let map = new Map();
map.set('key1', 'value1');
console.log(map);
  7. Set: A collection of unique values.
let set = new Set();
set.add(1);
set.add(2);
console.log(set); // { 1, 2 }

Determining data types

JavaScript is a dynamically-typed language, which means variables can hold values of different data types over time. The typeof operator can be used to determine the data type of a value or variable.

console.log(typeof 42); // "number"
console.log(typeof 'hello'); // "string"
console.log(typeof true); // "boolean"
console.log(typeof undefined); // "undefined"
console.log(typeof null); // "object" (this is a historical bug in JavaScript)
console.log(typeof Symbol()); // "symbol"
console.log(typeof BigInt(123)); // "bigint"
console.log(typeof {}); // "object"
console.log(typeof []); // "object"
console.log(typeof function () {}); // "function"

Pitfalls

Type coercion

JavaScript often performs type coercion, converting values from one type to another, which can lead to unexpected results.

let result = '5' + 2;
console.log(result, typeof result); // "52 string" (string concatenation)
let difference = '5' - 2;
console.log(difference, typeof difference); // 3 "number" (numeric subtraction)

In the first example, since strings can be concatenated with the + operator, the number is converted into a string and the two strings are concatenated together. In the second example, strings cannot work with the minus operator (-), but two numbers can be subtracted, so the string is first converted into a number and the result is the difference.

Further reading

What language constructs do you use for iterating over object properties and array items in JavaScript?

Topics
JavaScript

TL;DR

There are multiple ways to iterate over object properties as well as arrays in JavaScript:

for...in loop

The for...in loop iterates over all enumerable properties of an object, including inherited enumerable properties. So it is important to have a check if you only want to iterate over the object's own properties.

const obj = {
a: 1,
b: 2,
c: 3,
};
for (const key in obj) {
// To avoid iterating over inherited properties
if (Object.hasOwn(obj, key)) {
console.log(`${key}: ${obj[key]}`);
}
}

Object.keys()

Object.keys() returns an array of the object's own enumerable property names. You can then use a for...of loop or forEach to iterate over this array.

const obj = {
a: 1,
b: 2,
c: 3,
};
Object.keys(obj).forEach((key) => {
console.log(`${key}: ${obj[key]}`);
});

The most common ways to iterate over an array are using a for loop and the Array.prototype.forEach method.

Using for loop

let array = [1, 2, 3, 4, 5, 6];
for (let index = 0; index < array.length; index++) {
console.log(array[index]);
}

Using Array.prototype.forEach method

let array = [1, 2, 3, 4, 5, 6];
array.forEach((number, index) => {
console.log(`${number} at index ${index}`);
});

Using for...of

This method is the newest and most convenient way to iterate over arrays. It automatically iterates over each element without requiring you to manage the index.

const numbers = [1, 2, 3, 4, 5];
for (const number of numbers) {
console.log(number);
}

There are also other built-in methods available which are suitable for specific scenarios, for example (a combined sketch follows this list):

  • Array.prototype.filter: You can use the filter method to create a new array containing only the elements that satisfy a certain condition.
  • Array.prototype.map: You can use the map method to create a new array based on the existing one, transforming each element with a provided function.
  • Array.prototype.reduce: You can use the reduce method to combine all elements into a single value by repeatedly calling a function that takes two arguments: the accumulated value and the current element.
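For instance, applying all three to a small array of numbers:

const numbers = [1, 2, 3, 4, 5, 6];

const evens = numbers.filter((n) => n % 2 === 0); // [2, 4, 6]
const doubled = numbers.map((n) => n * 2); // [2, 4, 6, 8, 10, 12]
const total = numbers.reduce((sum, n) => sum + n, 0); // 21

console.log(evens, doubled, total);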

Iterating over objects

Iterating over object properties and arrays is very common in JavaScript and we have various ways to achieve this. Here are some of the ways to do it:

for...in statement

This loop iterates over all enumerable properties of an object, including those inherited from its prototype chain.

const obj = {
status: 'working',
hoursWorked: 3,
};
for (const property in obj) {
console.log(property);
}

Since the for...in statement iterates over all the object's enumerable properties (including inherited enumerable properties), most of the time you should check whether the property exists directly on the object via Object.hasOwn(object, property) before using it.

const obj = {
status: 'working',
hoursWorked: 3,
};
for (const property in obj) {
if (Object.hasOwn(obj, property)) {
console.log(property);
}
}

Note that obj.hasOwnProperty() is not recommended because it doesn't work for objects created using Object.create(null). It is recommended to use Object.hasOwn() in newer browsers, or use the good old Object.prototype.hasOwnProperty.call(object, key).
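A short sketch of why Object.hasOwn() (or the .call() form) is safer than calling hasOwnProperty directly:

const dict = Object.create(null); // No prototype, so no hasOwnProperty method
dict.a = 1;

// dict.hasOwnProperty('a'); // TypeError: dict.hasOwnProperty is not a function
console.log(Object.hasOwn(dict, 'a')); // true
console.log(Object.prototype.hasOwnProperty.call(dict, 'a')); // true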

Object.keys()

Object.keys() is a static method that returns an array of all the enumerable property names of the object that you pass it. Since Object.keys() returns an array, you can also use the array iteration approaches listed below to iterate through it.

const obj = {
status: 'working',
hoursWorked: 3,
};
Object.keys(obj).forEach((property) => {
console.log(property);
});

Object.entries()

This method returns an array of an object's enumerable properties in [key, value] pairs.

const obj = { a: 1, b: 2, c: 3 };
Object.entries(obj).forEach(([key, value]) => {
console.log(`${key}: ${value}`);
});

Object.getOwnPropertyNames()

const obj = { a: 1, b: 2, c: 3 };
Object.getOwnPropertyNames(obj).forEach((property) => {
console.log(property);
});

Object.getOwnPropertyNames() is a static method that returns an array of all of the object's own string-keyed properties, both enumerable and non-enumerable. Since it returns an array, you can also use the array iteration approaches listed below to iterate through it.
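For example, a non-enumerable property shows up in Object.getOwnPropertyNames() but not in Object.keys() or for...in:

const obj = { a: 1 };
Object.defineProperty(obj, 'hidden', { value: 42, enumerable: false });

console.log(Object.keys(obj)); // ['a']
console.log(Object.getOwnPropertyNames(obj)); // ['a', 'hidden']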

Arrays

for loop

const arr = [1, 2, 3, 4, 5];
for (var i = 0; i < arr.length; i++) {
console.log(arr[i]);
}

A common pitfall here is that var is function-scoped rather than block-scoped, and most of the time you want a block-scoped iterator variable. ES2015 introduced let, which is block-scoped, and it is recommended to use let over var.

const arr = [1, 2, 3, 4, 5];
for (let i = 0; i < arr.length; i++) {
console.log(arr[i]);
}
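The classic illustration of the function-scoped var pitfall involves asynchronous callbacks created inside the loop:

// With var, all callbacks close over the same function-scoped i,
// which is 3 by the time the callbacks run.
for (var i = 0; i < 3; i++) {
  setTimeout(() => console.log('var:', i), 0); // Logs "var: 3" three times
}

// With let, each iteration gets its own block-scoped binding of i.
for (let i = 0; i < 3; i++) {
  setTimeout(() => console.log('let:', i), 0); // Logs "let: 0", "let: 1", "let: 2"
}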

Array.prototype.forEach()

const arr = [1, 2, 3, 4, 5];
arr.forEach((element, index) => {
console.log(`${element} at index ${index}`);
});

The Array.prototype.forEach() method can be more convenient at times if you do not need to use the index and all you need are the individual array elements. However, the downside is that you cannot stop the iteration halfway and the provided function will be executed on each element once. A for loop or for...of statement is more relevant if you need finer control over the iteration.

for...of statement

const arr = [1, 2, 3, 4, 5];
for (let element of arr) {
console.log(element);
}

ES2015 introduces a new way to iterate, the for...of loop, that allows you to loop over objects that conform to the iterable protocol such as String, Array, Map, Set, etc. It combines the advantages of the for loop and the forEach() method. The advantage of the for loop is that you can break from it, and the advantage of forEach() is that it is more concise than the for loop because you don't need a counter variable. With the for...of statement, you get both the ability to break from a loop and a more concise syntax.

Most of the time, prefer the forEach() method, but it really depends on what you are trying to do. Before ES2015, we used for loops when we needed to terminate the loop prematurely using break; with ES2015, the for...of statement supports that as well. Use a plain for loop when you need more flexibility, such as incrementing the index by more than one per iteration.

Also, when using the for...of statement, if you need to access both the index and value of each array element, you can do so with ES2015 Array.prototype.entries() method:

const arr = ['a', 'b', 'c'];
for (let [index, elem] of arr.entries()) {
console.log(index, elem);
}

Further reading

What are the benefits of using spread syntax in JavaScript and how is it different from rest syntax?

Topics
JavaScript

TL;DR

Spread syntax (...) allows an iterable (like an array or string) to be expanded into individual elements. This is often used as a convenient and modern way to create new arrays or objects by combining existing ones.

Operation | Traditional | Spread
Array cloning | arr.slice() | [...arr]
Array merging | arr1.concat(arr2) | [...arr1, ...arr2]
Object cloning | Object.assign({}, obj) | { ...obj }
Object merging | Object.assign({}, obj1, obj2) | { ...obj1, ...obj2 }

Rest syntax is the opposite of what spread syntax does. It collects a variable number of arguments into an array. This is often used in function parameters to handle a dynamic number of arguments.

// Using rest syntax in a function
function sum(...numbers) {
return numbers.reduce((total, num) => total + num, 0);
}
console.log(sum(1, 2, 3)); // Output: 6

Spread syntax

ES2015's spread syntax is very useful when coding in a functional paradigm, as we can easily copy or merge arrays and objects without resorting to Object.create, Object.assign, Array.prototype.slice, or a library function. This language feature is used often in Redux and RxJS projects.

Copying arrays/objects

The spread syntax provides a concise way to create copies of arrays or objects without modifying the originals. This is useful for creating immutable data structures. However, do note that copies made via the spread syntax are shallow.

// Copying arrays
const array = [1, 2, 3];
const newArray = [...array];
console.log(newArray); // Output: [1, 2, 3]
// Copying objects
const person = { name: 'John', age: 30 };
const newObj = { ...person, city: 'New York' };
console.log(newObj); // Output: { name: 'John', age: 30, city: 'New York' }
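As noted above, spread copies are shallow; nested objects are still shared by reference:

const original = { name: 'John', address: { city: 'London' } };
const copy = { ...original };

copy.address.city = 'Paris'; // Mutates the nested object shared by both
console.log(original.address.city); // 'Paris', not 'London'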

Merging arrays/objects

The spread syntax allows you to merge arrays or objects by spreading their elements/properties into a new array or object.

// Merging arrays
const arr1 = [1, 2, 3];
const arr2 = [4, 5, 6];
const mergedArray = [...arr1, ...arr2];
console.log(mergedArray); // Output: [1, 2, 3, 4, 5, 6]
// Merging objects
const obj1 = {
foo: 'bar',
};
const obj2 = {
qux: 'baz',
};
const mergedObj = { ...obj1, ...obj2 };
console.log(mergedObj); // Output: { foo: "bar", qux: "baz" }

Passing arguments to functions

Use the spread syntax to pass an array of values as individual arguments to a function, avoiding the need for apply().

const numbers = [1, 2, 3];
const max = Math.max(...numbers); // Same as Math.max(1, 2, 3)
console.log(max); // Output: 3

Array vs object spreads

Only iterable values like Arrays and Strings can be spread in an array. Trying to spread non-iterables will result in a TypeError.

Spreading object into array:

const person = {
name: 'Todd',
age: 29,
};
const array = [...person]; // Error: Uncaught TypeError: person is not iterable

On the other hand, arrays can be spread into objects.

const array = [1, 2, 3];
const obj = { ...array };
console.log(obj); // { 0: 1, 1: 2, 2: 3 }

Rest syntax

The rest syntax (...) in JavaScript allows you to represent an indefinite number of elements as an array or object. It is like an inverse of the spread syntax, taking data and stuffing it into an array rather than unpacking an array of data, and it works in function arguments, as well as in array and object destructuring assignments.

Rest parameters in functions

The rest syntax can be used in function parameters to collect all remaining arguments into an array. This is particularly useful when you don't know how many arguments will be passed to the function.

function addFiveToABunchOfNumbers(...numbers) {
return numbers.map((x) => x + 5);
}
const result = addFiveToABunchOfNumbers(4, 5, 6, 7, 8, 9, 10);
console.log(result); // Output: [9, 10, 11, 12, 13, 14, 15]

Rest parameters provide a cleaner syntax than the arguments object, which is not available in arrow functions and always represents all of the arguments. In the destructuring example below, the rest syntax lets remaining capture only the third element onwards.

const [first, second, ...remaining] = [1, 2, 3, 4, 5];
console.log(first); // Output: 1
console.log(second); // Output: 2
console.log(remaining); // Output: [3, 4, 5]

Note that the rest parameters must be at the end. The rest parameters gather all remaining arguments, so the following does not make sense and causes an error:

function addFiveToABunchOfNumbers(arg1, ...numbers, arg2) {
// Error: Rest parameter must be last formal parameter.
}

Array destructuring

The rest syntax can be used in array destructuring to collect the remaining elements into a new array.

const [a, b, ...rest] = [1, 2, 3, 4];
console.log(a); // Output: 1
console.log(b); // Output: 2
console.log(rest); // Output: [3, 4]

Object destructuring

The rest syntax can be used in object destructuring to collect the remaining properties into a new object.

const { e, f, ...others } = {
e: 1,
f: 2,
g: 3,
h: 4,
};
console.log(e); // Output: 1
console.log(f); // Output: 2
console.log(others); // Output: { g: 3, h: 4 }

Further Reading

What are iterators and generators in JavaScript and what are they used for?

Topics
JavaScript

TL;DR

In JavaScript, iterators and generators are powerful tools for managing sequences of data and controlling the flow of execution in a more flexible way.

Iterators are objects that define a sequence and potentially a return value upon its termination. They adhere to a specific interface:

  • An iterator object must implement a next() method.
  • The next() method returns an object with two properties:
    • value: The next value in the sequence.
    • done: A boolean that is true if the iterator has finished its sequence, otherwise false.

Here's an example of an object implementing the iterator interface.

const iterator = {
  current: 0,
  last: 5,
  next() {
    if (this.current <= this.last) {
      return { value: this.current++, done: false };
    } else {
      return { value: undefined, done: true };
    }
  },
};
let result = iterator.next();
while (!result.done) {
  console.log(result.value); // Logs 0, 1, 2, 3, 4, 5
  result = iterator.next();
}

Generators are special functions that can pause execution and resume at a later point. They use the function* syntax and the yield keyword to control the flow of execution. When you call a generator function, it doesn't execute completely like a regular function. Instead, it returns an iterator object. Calling the next() method on the returned iterator advances the generator to the next yield statement, and the value after yield becomes the return value of next().

function* numberGenerator() {
let num = 0;
while (num <= 5) {
yield num++;
}
}
const gen = numberGenerator();
console.log(gen.next()); // { value: 0, done: false }
console.log(gen.next()); // { value: 1, done: false }
console.log(gen.next()); // { value: 2, done: false }
console.log(gen.next()); // { value: 3, done: false }
console.log(gen.next()); // { value: 4, done: false }
console.log(gen.next()); // { value: 5, done: false }
console.log(gen.next()); // { value: undefined, done: true }

Generators are powerful for creating iterators on-demand, especially for infinite sequences or complex iteration logic. They can be used for:

  • Lazy evaluation – processing elements only when needed, improving memory efficiency for large datasets.
  • Implementing iterators for custom data structures.
  • Creating asynchronous iterators for handling data streams.

Iterators

Iterators are objects that define a sequence and provide a next() method to access the next value in the sequence. They are used to iterate over data structures like arrays, strings, and custom objects. The key use cases of iterators include:

  • Implementing the iterator protocol to make custom objects iterable, allowing them to be used with for...of loops and other language constructs that expect iterables.
  • Providing a standard way to iterate over different data structures, making code more reusable and maintainable.

Creating a custom iterator for a range of numbers

In JavaScript, we can provide a default implementation for an iterator by implementing [Symbol.iterator]() in any custom object.

// Define a class named Range
class Range {
  // The constructor takes two parameters: start and end
  constructor(start, end) {
    // Assign the start and end values to the instance
    this.start = start;
    this.end = end;
  }
  // Define the default iterator for the object
  [Symbol.iterator]() {
    // Initialize the current value to the start value
    let current = this.start;
    const end = this.end;
    // Return an object with a next method
    return {
      // The next method returns the next value in the iteration
      next() {
        // If the current value is less than or equal to the end value...
        if (current <= end) {
          // ...return an object with the current value and done set to false
          return { value: current++, done: false };
        }
        // ...otherwise, return an object with value set to undefined and done set to true
        return { value: undefined, done: true };
      },
    };
  }
}
// Create a new Range object with start = 1 and end = 3
const range = new Range(1, 3);
// Iterate over the range object
for (const number of range) {
  // Log each number to the console
  console.log(number); // 1, 2, 3
}

Built-in objects using the iterator protocol

In JavaScript, several built-in objects implement the iterator protocol, meaning they have a default @@iterator method. This allows them to be used in constructs like for...of loops and with the spread operator. Here are some of the key built-in objects that implement iterators:

  1. Arrays: Arrays have a built-in iterator that allows you to iterate over their elements.

    const array = [1, 2, 3];
    const iterator = array[Symbol.iterator]();
    console.log(iterator.next()); // { value: 1, done: false }
    console.log(iterator.next()); // { value: 2, done: false }
    console.log(iterator.next()); // { value: 3, done: false }
    console.log(iterator.next()); // { value: undefined, done: true }
    for (const value of array) {
    console.log(value); // Logs 1, 2, 3
    }
  2. Strings: Strings have a built-in iterator that allows you to iterate over their characters.

    const string = 'hello';
    const iterator = string[Symbol.iterator]();
    console.log(iterator.next()); // { value: "h", done: false }
    console.log(iterator.next()); // { value: "e", done: false }
    console.log(iterator.next()); // { value: "l", done: false }
    console.log(iterator.next()); // { value: "l", done: false }
    console.log(iterator.next()); // { value: "o", done: false }
    console.log(iterator.next()); // { value: undefined, done: true }
    for (const char of string) {
    console.log(char); // Logs h, e, l, l, o
    }
  3. DOM NodeLists

    // Create a new div and append it to the DOM
    const newDiv = document.createElement('div');
    newDiv.id = 'div1';
    document.body.appendChild(newDiv);
    const nodeList = document.querySelectorAll('div');
    const iterator = nodeList[Symbol.iterator]();
    console.log(iterator.next()); // { value: HTMLDivElement, done: false }
    console.log(iterator.next()); // { value: undefined, done: true }
    for (const node of nodeList) {
    console.log(node); // Logs each <div> element, in this case only div1
    }

Maps and Sets also have built-in iterators.
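For example, a Map can be iterated directly with for...of, yielding [key, value] pairs, and a Set yields its unique values:

const map = new Map([
  ['a', 1],
  ['b', 2],
]);
for (const [key, value] of map) {
  console.log(key, value); // Logs "a 1", then "b 2"
}

const set = new Set([1, 2, 2, 3]);
for (const value of set) {
  console.log(value); // Logs 1, 2, 3 (duplicates removed)
}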

Generators

Generators are a special kind of function that can pause and resume their execution, allowing them to generate a sequence of values on-the-fly. They are commonly used to create iterators but have other applications as well. The key use cases of generators include:

  • Creating iterators in a more concise and readable way compared to manually implementing the iterator protocol.
  • Implementing lazy evaluation, where values are generated only when needed, saving memory and computation time.
  • Simplifying asynchronous programming by allowing code to be written in a synchronous-looking style using yield and await.

Generators provide several benefits:

  • Lazy evaluation: They generate values on the fly and only when required, which is memory efficient.
  • Pause and resume: Generators can pause execution (via yield) and can also receive new data upon resuming.
  • Asynchronous iteration: Async generators (consumed with for await...of) can be used to manage asynchronous data flows.

Creating an iterator using a generator function

We can rewrite our Range example to use a generator function:

// Define a class named Range
class Range {
  // The constructor takes two parameters: start and end
  constructor(start, end) {
    // Assign the start and end values to the instance
    this.start = start;
    this.end = end;
  }
  // Define the default iterator for the object using a generator
  *[Symbol.iterator]() {
    // Initialize the current value to the start value
    let current = this.start;
    // While the current value is less than or equal to the end value...
    while (current <= this.end) {
      // ...yield the current value
      yield current++;
    }
  }
}
// Create a new Range object with start = 1 and end = 3
const range = new Range(1, 3);
// Iterate over the range object
for (const number of range) {
  // Log each number to the console
  console.log(number); // 1, 2, 3
}

Iterating over data streams

Generators are well-suited for iterating over data streams, such as fetching data from an API or reading files. This example demonstrates using a generator to fetch data from an API in batches:

async function* fetchDataInBatches(url, numBatches = 5, batchSize = 10) {
  let startIndex = 0;
  let currBatch = 0;
  while (currBatch < numBatches) {
    const response = await fetch(
      `${url}?_start=${startIndex}&_limit=${batchSize}`,
    );
    const data = await response.json();
    if (data.length === 0) break;
    yield data;
    startIndex += batchSize;
    currBatch += 1;
  }
}
async function fetchAndLogData() {
  const dataGenerator = fetchDataInBatches(
    'https://jsonplaceholder.typicode.com/todos',
  );
  for await (const batch of dataGenerator) {
    console.log(batch);
  }
}
fetchAndLogData();

This generator function fetchDataInBatches fetches data from an API in batches of a specified size. It yields each batch of data, allowing you to process it before fetching the next batch. This approach can be more memory-efficient than fetching all data at once.

Implementing asynchronous iterators

Generators can be used to implement asynchronous iterators, which are useful for working with asynchronous data sources. This example demonstrates an asynchronous iterator for fetching data from an API:

async function* fetchDataAsyncIterator(url, pagesToFetch = 3) {
  let currPage = 1;
  while (currPage <= pagesToFetch) {
    const response = await fetch(`${url}?_page=${currPage}`);
    const data = await response.json();
    if (data.length === 0) break;
    yield data;
    currPage++;
  }
}
async function fetchAndLogData() {
  const asyncIterator = fetchDataAsyncIterator(
    'https://jsonplaceholder.typicode.com/todos',
  );
  for await (const chunk of asyncIterator) {
    console.log(chunk);
  }
}
fetchAndLogData();

The generator function fetchDataAsyncIterator is an asynchronous iterator that fetches data from an API in pages. It yields each page of data, allowing you to process it before fetching the next page. This approach can be useful for handling large datasets or long-running operations.

Generators are also used extensively in JavaScript libraries and frameworks, such as Redux-Saga and RxJS, for handling asynchronous operations and reactive programming.

Summary

Iterators and generators provide a powerful and flexible way to work with collections of data in JavaScript. Iterators define a standardized way to traverse data sequences, while generators offer a more expressive and efficient way to create iterators, handle asynchronous operations, and compose complex data pipelines.

Further reading

Explain the difference between mutable and immutable objects in JavaScript

Topics
JavaScript

TL;DR

Mutable objects allow for modification of properties and values after creation, which is the default behavior for most objects.

const mutableObject = {
name: 'John',
age: 30,
};
// Modify the object
mutableObject.name = 'Jane';
// The object has been modified
console.log(mutableObject); // Output: { name: 'Jane', age: 30 }

Immutable objects cannot be directly modified after creation. Their contents cannot be changed without creating an entirely new value.

const immutableObject = Object.freeze({
name: 'John',
age: 30,
});
// Attempt to modify the object
immutableObject.name = 'Jane';
// The object remains unchanged
console.log(immutableObject); // Output: { name: 'John', age: 30 }

The key difference between mutable and immutable objects is modifiability. Immutable objects cannot be modified after they are created, while mutable objects can be.


Immutability

Immutability is a core principle in functional programming but it has lots to offer to object-oriented programs as well.

Mutable objects

Mutability refers to the ability of an object to have its properties or elements changed after it's created. A mutable object is an object whose state can be modified after it is created. In JavaScript, objects and arrays are mutable by default. They store references to their data in memory. Changing a property or element modifies the original object. Here is an example of a mutable object:

const mutableObject = {
name: 'John',
age: 30,
};
// Modify the object
mutableObject.name = 'Jane';
// The object has been modified
console.log(mutableObject); // Output: { name: 'Jane', age: 30 }

Immutable objects

An immutable object is an object whose state cannot be modified after it is created. Here is an example of an immutable object:

const immutableObject = Object.freeze({
name: 'John',
age: 30,
});
// Attempt to modify the object
immutableObject.name = 'Jane';
// The object remains unchanged
console.log(immutableObject); // Output: { name: 'John', age: 30 }

Primitive data types like numbers, strings, booleans, null, and undefined are inherently immutable. Once assigned a value, you cannot directly modify them.

let name = 'Alice';
name.toUpperCase(); // This won't modify the original name variable
console.log(name); // Still prints "Alice"
// To change the value, you need to reassign a new string
name = name.toUpperCase();
console.log(name); // Now prints "ALICE"

Note that namespace-like built-ins such as Math and JSON only expose static utilities and are typically treated as immutable, but most built-in objects (including Date instances, which can be mutated via their setter methods) and custom objects are mutable by default.

const vs immutable objects

A common confusion / misunderstanding is that declaring a variable using const makes the value immutable, which is not true at all.

const prevents reassignment of the variable itself, but does not make the value it holds immutable. This means:

  • For primitive values (numbers, strings, booleans), const makes the value immutable since primitives are immutable by nature.
  • For non-primitive values like objects and arrays, const only prevents reassigning a new object/array to the variable, but the properties/elements of the existing object/array can still be modified.

On the other hand, an immutable object is an object whose state (properties and values) cannot be modified after it is created. This is achieved by using methods like Object.freeze() which makes the object immutable by preventing any changes to its properties.

// Using const
const person = { name: 'John' };
person = { name: 'Jane' }; // Error: Assignment to constant variable
person.name = 'Jane'; // Allowed, person.name is now 'Jane'
// Using Object.freeze() to create an immutable object
const frozenPerson = Object.freeze({ name: 'John' });
frozenPerson.name = 'Jane'; // Fails silently in non-strict mode (throws TypeError in strict mode)
frozenPerson = { name: 'Jane' }; // Error: Assignment to constant variable

In the first example with const, reassigning a new object to person is not allowed, but modifying the name property is permitted. In the second example, Object.freeze() makes the frozenPerson object immutable, preventing any changes to its properties.

It's important to note that Object.freeze() creates a shallow immutable object. If the object contains nested objects or arrays, those nested data structures are still mutable unless frozen separately.

Therefore, while const provides immutability for primitive values, creating truly immutable objects requires using Object.freeze() or other immutability techniques like deep freezing or using immutable data structures from libraries like Immer or Immutable.js.

Various ways to implement immutability in plain JavaScript objects

Here are a few ways to add/simulate different forms of immutability in plain JavaScript objects.

Immutable object properties

By combining writable: false and configurable: false, you can essentially create a constant (cannot be changed, redefined or deleted) as an object property, like:

const myObject = {};
Object.defineProperty(myObject, 'number', {
value: 42,
writable: false,
configurable: false,
});
console.log(myObject.number); // 42
myObject.number = 43;
console.log(myObject.number); // 42

Preventing extensions on objects

If you want to prevent an object from having new properties added to it, but otherwise leave the rest of the object's properties alone, call Object.preventExtensions(...):

let myObject = {
a: 2,
};
Object.preventExtensions(myObject);
myObject.b = 3;
console.log(myObject.b); // undefined

In non-strict mode, the creation of b fails silently. In strict mode, it throws a TypeError.

Sealing an object

Object.seal() creates a "sealed" object, which means it takes an existing object and essentially calls Object.preventExtensions() on it, but also marks all its existing properties as configurable: false. Therefore, not only can you not add any more properties, but you also cannot reconfigure or delete any existing properties, though you can still modify their values.

// Create an object
const person = {
name: 'John Doe',
age: 30,
};
// Seal the object
Object.seal(person);
// Try to add a new property (this will fail silently)
person.city = 'New York'; // This has no effect
// Try to delete an existing property (this will fail silently)
delete person.age; // This has no effect
// Modify an existing property (this will work)
person.age = 35;
console.log(person); // Output: { name: 'John Doe', age: 35 }
// Try to re-configure an existing property descriptor (this will fail silently)
Object.defineProperty(person, 'name', { writable: false }); // Fails silently in non strict mode
// Check if the object is sealed
console.log(Object.isSealed(person)); // Output: true

Freezing an object

Object.freeze() creates a frozen object, which means it takes an existing object and essentially calls Object.seal() on it, but it also marks all data properties as writable: false, so that their values cannot be changed.

This approach is the highest level of immutability that you can attain for an object itself, as it prevents any changes to the object or to any of its direct properties (though, as mentioned above, the contents of any referenced other objects are unaffected).

let immutableObject = Object.freeze({});

Freezing an object does not allow new properties to be added to an object and prevents users from removing or altering the existing properties. Object.freeze() preserves the enumerability, configurability, writability and the prototype of the object. It returns the passed object and does not create a frozen copy.

Object.freeze() makes the object immutable. However, it is not necessarily constant. While Object.freeze prevents modifications to the object itself and its direct properties, nested objects within the frozen object can still be modified.

let obj = {
user: {},
};
Object.freeze(obj);
obj.user.name = 'John';
console.log(obj.user.name); //Output: 'John'
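If deep immutability is needed, a recursive helper can freeze nested objects as well. A minimal sketch (hypothetical helper; assumes no cyclic references and only walks enumerable string-keyed properties):

function deepFreeze(obj) {
  // Freeze nested objects first, then the object itself
  for (const key of Object.keys(obj)) {
    const value = obj[key];
    if (typeof value === 'object' && value !== null && !Object.isFrozen(value)) {
      deepFreeze(value);
    }
  }
  return Object.freeze(obj);
}

const frozen = deepFreeze({ user: { name: 'John' } });
frozen.user.name = 'Jane'; // Fails silently in non-strict mode (TypeError in strict mode)
console.log(frozen.user.name); // 'John'

Libraries like Immer and Immutable.js offer more efficient approaches for larger structures, as mentioned above.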

What are the pros and cons of immutability?

Pros

  • Easier change detection: Object equality can be determined in a performant and easy manner through referential equality. This is useful for comparing object differences in React and Redux.
  • Less complicated: Programs with immutable objects are less complicated to think about, since you don't need to worry about how an object may evolve over time.
  • Easy sharing via references: One copy of an object is just as good as another, so you can cache objects or reuse the same object multiple times.
  • Thread-safe: Immutable objects can be safely used between threads in a multi-threaded environment since there is no risk of them being modified in other concurrently running threads. In most cases, JavaScript runs in a single-threaded environment.
  • Less memory needed: Using libraries like Immer and Immutable.js, objects are modified using structural sharing and less memory is needed for having multiple objects with similar structures.
  • No need for defensive copying: Defensive copies are no longer necessary when immutable objects are returned from or passed to functions, since there is no possibility an immutable object will be modified by it.

Cons

  • Complex to create yourself: Naive implementations of immutable data structures and their operations can result in extremely poor performance because new objects are created each time. It is recommended to use libraries for efficient immutable data structures and operations that leverage structural sharing.
  • Potential negative performance: Allocation (and deallocation) of many small objects rather than modifying existing ones can cause a performance impact. The complexity of either the allocator or the garbage collector usually depends on the number of objects on the heap.
  • Complexity for cyclic data structures: Cyclic data structures such as graphs are difficult to implement.

Further reading

What is the difference between a `Map` object and a plain object in JavaScript?

Topics
JavaScript

TL;DR

Both Map objects and plain objects in JavaScript can store key-value pairs, but they have several key differences:

Feature | Map | Plain object
Key type | Any data type | String (or Symbol)
Key order | Maintained (insertion order) | Not guaranteed
Size property | Yes (size) | None
Iteration | forEach, keys(), values(), entries() | for...in, Object.keys(), etc.
Inheritance | No | Yes
Performance | Generally better for larger datasets and frequent additions/deletions | Faster for small datasets and simple operations
Serializable | No | Yes

Map vs plain JavaScript objects

In JavaScript, Map objects and plain objects (also known as a "POJO" or "plain old JavaScript object") are both used to store key-value pairs, but they have different characteristics, use cases, and behaviors.

Plain JavaScript objects (POJO)

A plain object is a basic JavaScript object created using the {} syntax. It is a collection of key-value pairs, where each key is a string (or a symbol, in modern JavaScript) and each value can be of any type, including strings, numbers, booleans, arrays, objects, and more.

const person = { name: 'John', age: 30, occupation: 'Developer' };
console.log(person);

Map objects

A Map object, introduced in ECMAScript 2015 (ES6), is a more advanced data structure that allows you to store key-value pairs with additional features. A Map is an iterable, which means you can use it with for...of loops, and it provides methods for common operations like get, set, has, and delete.

const person = new Map([
['name', 'John'],
['age', 30],
['occupation', 'Developer'],
]);
console.log(person);

Key differences

Here are the main differences between a Map object and a plain object:

  1. Key types: In a plain object, keys are always strings (or symbols). In a Map, keys can be any type of value, including objects, arrays, and even other Maps.
  2. Key ordering: In a plain object, the order of keys is not guaranteed. In a Map, the order of keys is preserved, and you can iterate over them in the order they were inserted.
  3. Iteration: A Map is iterable, which means you can use for...of loops to iterate over its key-value pairs. A plain object is not iterable by default, but you can use Object.keys() or Object.entries() to iterate over its properties.
  4. Performance: Map objects are generally faster and more efficient than plain objects, especially when dealing with large datasets.
  5. Methods: A Map object provides additional methods, such as get, set, has, and delete, which make it easier to work with key-value pairs.
  6. Serialization: A Map does not serialize to JSON directly; calling JSON.stringify() on a Map produces '{}' and the entries are lost (see the example below). A plain object, on the other hand, is serialized to a JSON object with the same structure.
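For example:

const map = new Map([['name', 'John']]);
const obj = { name: 'John' };

console.log(JSON.stringify(map)); // '{}' (the entries are lost)
console.log(JSON.stringify(obj)); // '{"name":"John"}'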

When to use which

Use a plain object (POJO) when:

  • You need a simple, lightweight object with string keys.
  • You're working with a small dataset.
  • You need to serialize the object to JSON (e.g. to send over the network).

Use a Map object when:

  • You need to store key-value pairs with non-string keys (e.g., objects, arrays).
  • You need to preserve the order of key-value pairs.
  • You need to iterate over the key-value pairs in a specific order.
  • You're working with a large dataset and need better performance.

In summary, while both plain objects and Map objects can be used to store key-value pairs, Map objects offer more advanced features, better performance, and additional methods, making them a better choice for more complex use cases.

Notes

Map objects cannot be serialized to be sent in HTTP requests, but libraries like superjson allow them to be serialized and deserialized.

Further reading

What are the differences between `Map`/`Set` and `WeakMap`/`WeakSet` in JavaScript?

Topics
JavaScript

TL;DR

The primary difference between Map/Set and WeakMap/WeakSet in JavaScript lies in how they handle keys. Here's a breakdown:

Map vs. WeakMap

Maps allow any data type (strings, numbers, objects) as keys. The key-value pairs remain in memory as long as the Map object itself is referenced. Thus they are suitable for general-purpose key-value storage where you want to maintain references to both keys and values. Common use cases include storing user data, configuration settings, or relationships between objects.

WeakMaps only allow objects as keys. However, these object keys are held weakly. This means the garbage collector can remove them from memory even if the WeakMap itself still exists, as long as there are no other references to those objects. WeakMaps are ideal for scenarios where you want to associate data with objects without preventing those objects from being garbage collected. This can be useful for things like:

  • Caching data based on objects without preventing garbage collection of the objects themselves (see the sketch after this list).
  • Storing private data associated with DOM nodes without affecting their lifecycle.
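Here's a minimal sketch of that caching pattern; the "expensive" computation is a stand-in for whatever work you want to memoize per object:

const cache = new WeakMap();

function computeExpensiveResult(obj) {
  if (cache.has(obj)) {
    return cache.get(obj); // Return the cached result for this object
  }
  const result = JSON.stringify(obj).length; // Stand-in for an expensive computation
  cache.set(obj, result);
  return result;
}

let config = { theme: 'dark', locale: 'en' };
console.log(computeExpensiveResult(config)); // Computed on the first call
console.log(computeExpensiveResult(config)); // Served from the cache

config = null; // Once no other references remain, both the key and its cached value become eligible for garbage collection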

Set vs. WeakSet

Similar to Map, Sets accept values of any data type, and the elements within a Set must be unique. Sets are useful for storing unique values and checking for membership efficiently. Common use cases include removing duplicates from arrays or keeping track of completed tasks.

On the other hand, WeakSet only allows objects as elements, and these object elements are held weakly, similar to WeakMap keys. WeakSets are less commonly used, but applicable when you want a collection of unique objects without affecting their garbage collection. This might be necessary for:

  • Tracking DOM nodes that have been interacted with without affecting their memory management.
  • Implementing custom object weak references for specific use cases.

Here's a table summarizing the key differences:

Feature | Map | WeakMap | Set | WeakSet
Key types | Any data type | Objects (weak references) | Any data type (unique) | Objects (weak references, unique)
Garbage collection | Keys and values are not garbage collected | Keys can be garbage collected if not referenced elsewhere | Elements are not garbage collected | Elements can be garbage collected if not referenced elsewhere
Use cases | General-purpose key-value storage | Caching, private DOM node data | Removing duplicates, membership checks | Object weak references, custom use cases

Choosing between them

  • Use Map and Set for most scenarios where you need to store key-value pairs or unique elements and want to maintain references to both the keys/elements and the values.
  • Use WeakMap and WeakSet cautiously in specific situations where you want to associate data with objects without affecting their garbage collection. Be aware of the implications of weak references and potential memory leaks if not used correctly.

Map/Set vs WeakMap/WeakSet

The key differences between Map/Set and WeakMap/WeakSet in JavaScript are:

  1. Key types: Map and Set can have keys of any type (objects, primitive values, etc.), while WeakMap and WeakSet can only have objects as keys. Primitive values like strings or numbers are not allowed as keys in WeakMap and WeakSet.
  2. Memory management: The main difference lies in how they handle memory. Map and Set have strong references to their keys and values, which means they will prevent garbage collection of those values. On the other hand, WeakMap and WeakSet have weak references to their keys (objects), allowing those objects to be garbage collected if there are no other strong references to them.
  3. Key enumeration: Keys in Map and Set are enumerable (can be iterated over), while keys in WeakMap and WeakSet are not enumerable. This means you cannot get a list of keys or values from a WeakMap or WeakSet.
  4. size property: Map and Set have a size property that returns the number of elements, while WeakMap and WeakSet do not have a size property because their size can change due to garbage collection.
  5. Use cases: Map and Set are useful for general-purpose data structures and caching, while WeakMap and WeakSet are primarily used for storing metadata or additional data related to objects, without preventing those objects from being garbage collected.

Map and Set are regular data structures that maintain strong references to their keys and values, while WeakMap and WeakSet are designed for scenarios where you want to associate data with objects without preventing those objects from being garbage collected when they are no longer needed.
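
A short demonstration of the enumeration and size differences described above (variable names are arbitrary):

const map = new Map([[{ id: 1 }, 'a']]);
console.log(map.size); // 1
console.log([...map.keys()]); // [ { id: 1 } ]

const weakMap = new WeakMap([[{ id: 1 }, 'a']]);
console.log(weakMap.size); // undefined (no size property)
console.log(typeof weakMap.keys); // "undefined" (contents cannot be enumerated)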

Use cases of WeakMap and WeakSet

Tracking active users

In a chat application, you might want to track which user objects are currently active without preventing garbage collection when the user logs out or the session expires. We use a WeakSet to track active user objects. When a user logs out or their session expires, the user object can be garbage-collected if there are no other references to it.

const activeUsers = new WeakSet();

// Function to mark a user as active
function markUserActive(user) {
  activeUsers.add(user);
}

// Function to check if a user is active
function isUserActive(user) {
  return activeUsers.has(user);
}

// Example usage
let user1 = { id: 1, name: 'Alice' };
let user2 = { id: 2, name: 'Bob' };

markUserActive(user1);
markUserActive(user2);

console.log(isUserActive(user1)); // true
console.log(isUserActive(user2)); // true

// Simulate user logging out
user1 = null;
// The original user object is now eligible for garbage collection
console.log(isUserActive(user1)); // false (we now pass null; the old object is no longer reachable)

Detecting circular references

WeakSet provides a way of guarding against circular data structures by tracking which objects have already been processed.

// Create a WeakSet to track visited objects
const visited = new WeakSet();

// Function to traverse an object recursively
function traverse(obj) {
  // Check if the object has already been visited
  if (visited.has(obj)) {
    return;
  }
  // Add the object to the visited set
  visited.add(obj);
  // Traverse the object's properties
  for (let prop in obj) {
    if (obj.hasOwnProperty(prop)) {
      let value = obj[prop];
      if (typeof value === 'object' && value !== null) {
        traverse(value);
      }
    }
  }
  // Process the object
  console.log(obj);
}

// Create an object with a circular reference
const obj = {
  name: 'John',
  age: 30,
  friends: [
    { name: 'Alice', age: 25 },
    { name: 'Bob', age: 28 },
  ],
};

// Create a circular reference
obj.self = obj;

// Traverse the object
traverse(obj);
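
The examples above use WeakSet; WeakMap use cases follow the same pattern. Below is a minimal sketch (the function and object names are illustrative, not from any particular library) of caching derived data keyed by a source object, so that the cache entry disappears together with the object:

const resultCache = new WeakMap();

function computeStats(data) {
  if (resultCache.has(data)) {
    return resultCache.get(data); // reuse the previously computed result
  }
  const stats = { count: data.items.length }; // stand-in for expensive work
  resultCache.set(data, stats);
  return stats;
}

let dataset = { items: [1, 2, 3] };
console.log(computeStats(dataset)); // computed
console.log(computeStats(dataset)); // served from the cache

dataset = null;
// Once the dataset object is unreachable, both it and its cached stats become
// eligible for garbage collection; a regular Map would have kept them alive.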

Further reading

Why might you want to create static class members in JavaScript?

Topics
JavaScriptOOP

TL;DR

Static class members (properties/methods) have a static keyword prepended. Such members cannot be directly accessed on instances of the class. Instead, they're accessed on the class itself.

class Car {
  static noOfWheels = 4;

  static compare() {
    return 'Static method has been called.';
  }
}

console.log(Car.noOfWheels); // 4

Static members are useful under the following scenarios:

  • Namespace organization: Static properties can be used to define constants or configuration values that are specific to a class. This helps organize related data within the class namespace and prevents naming conflicts with other variables. Examples include Math.PI, Math.SQRT2.
  • Helper functions: Static methods can be used as helper functions that operate on the class itself or its instances. This can improve code readability and maintainability by separating utility logic from the core functionality of the class. Examples of frequently used static methods include Object.assign(), Math.max().
  • Singleton pattern: In some rare cases, static properties and methods can be used to implement a singleton pattern, where only one instance of a class ever exists. However, this pattern can be tricky to manage and is generally discouraged in favor of more modern dependency injection techniques.

Static class members

Static class members (properties/methods) are not tied to a specific instance of a class and have the same value regardless of which instance refers to them. Static properties are typically configuration variables, and static methods are usually pure utility functions that do not depend on the state of an instance. Such members are declared with the static keyword.

class Car {
  static noOfWheels = 4;

  static compare() {
    return 'static method has been called.';
  }
}

console.log(Car.noOfWheels); // Output: 4
console.log(Car.compare()); // Output: static method has been called.

Static members are not accessible by a specific instance of a class.

class Car {
  static noOfWheels = 4;

  static compare() {
    return 'static method has been called.';
  }
}

const car = new Car();
console.log(car.noOfWheels); // Output: undefined
console.log(car.compare()); // Error: TypeError: car.compare is not a function

The built-in Math object is a good example of this pattern. It provides a collection of mathematical constants and functions, and it behaves like a static class: all of its properties and methods are accessed on Math itself, and it is never instantiated. Here's an example of how Math uses static members:

console.log(Math.PI); // Output: 3.141592653589793
console.log(Math.abs(-5)); // Output: 5
console.log(Math.max(1, 2, 3)); // Output: 3

In this example, Math.PI, Math.abs(), and Math.max() are all static members of the Math class. They can be accessed directly on the Math object without the need to create an instance of the class.

Reasons to use static class members

Utility functions

Static class members can be useful for defining utility functions that don't require any instance-specific (don't use this) data or behavior. For example, you might have an Arithmetic class with static methods for common mathematical operations.

class Arithmetic {
  static add(a, b) {
    return a + b;
  }

  static subtract(a, b) {
    return a - b;
  }
}

console.log(Arithmetic.add(2, 3)); // Output: 5
console.log(Arithmetic.subtract(5, 2)); // Output: 3

Singletons

Static class members can be used to implement the Singleton pattern, where you want to ensure that only one instance of a class exists throughout your application.

class Singleton {
  static instance;

  static getInstance() {
    if (!this.instance) {
      this.instance = new Singleton();
    }
    return this.instance;
  }
}

const singleton1 = Singleton.getInstance();
const singleton2 = Singleton.getInstance();
console.log(singleton1 === singleton2); // Output: true

Configurations

Static class members can be used to store configuration or settings that are shared across all instances of a class. This can be useful for things like API keys, feature flags, or other global settings.

class Config {
  static API_KEY = 'your-api-key';
  static FEATURE_FLAG = true;
}

console.log(Config.API_KEY); // Output: 'your-api-key'
console.log(Config.FEATURE_FLAG); // Output: true

Performance

In some cases, using static class members can improve performance by reducing the amount of memory used by your application. This is because static class members are shared across all instances of a class, rather than being duplicated for each instance.
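
A rough illustration of the idea (the class name is hypothetical): a static property exists once on the class, while instance properties are created again on every object constructed.

class Widget {
  static defaultTheme = 'light'; // stored once, on the Widget class itself

  constructor(name) {
    this.name = name; // stored on every instance
  }
}

const a = new Widget('a');
const b = new Widget('b');

console.log(a.hasOwnProperty('defaultTheme')); // false, not copied onto instances
console.log(Widget.defaultTheme); // 'light', shared via the class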

Further reading

What are `Symbol`s used for in JavaScript?

Topics
JavaScript

TL;DR

Symbols in JavaScript are a new primitive data type introduced in ES6 (ECMAScript 2015). They are unique and immutable identifiers that are primarily used for object property keys to avoid name collisions. These values can be created using the Symbol(...) function, and each Symbol value is guaranteed to be unique, even if they have the same key/description. Symbol properties are not enumerable in for...in loops or Object.keys(), making them suitable for creating private/internal object state.

let sym1 = Symbol();
let sym2 = Symbol('myKey');
console.log(typeof sym1); // "symbol"
console.log(sym1 === sym2); // false, because each symbol is unique
let obj = {};
let sym = Symbol('uniqueKey');
obj[sym] = 'value';
console.log(obj[sym]); // "value"

Note: The Symbol() function must be called without the new keyword. It is not exactly a constructor because it can only be called as a function instead of with new Symbol().


Symbols in JavaScript

Symbols in JavaScript are a unique and immutable data type used primarily for object property keys to avoid name collisions.

Key characteristics

  • Uniqueness: Each Symbol value is unique, even if they have the same description.
  • Immutability: Symbol values are immutable, meaning their value cannot be changed.
  • Non-enumerable: Symbol properties are not included in for...in loops or Object.keys().

Creating Symbols

Symbols can be created using the Symbol() function:

const sym1 = Symbol();
const sym2 = Symbol('uniqueKey');
console.log(typeof sym1); // "symbol"
console.log(sym1 === sym2); // false, because each symbol is unique

The Symbol() function must be called without the new keyword.

Using Symbols as object property keys

Symbols can be used to add properties to an object without risk of name collision:

const obj = {};
const sym = Symbol('uniqueKey');
obj[sym] = 'value';
console.log(obj[sym]); // "value"

Symbols are not enumerable

  • Symbol properties are not included in for...in loops or Object.keys().
  • This makes them suitable for creating private/internal object state.
  • Use Object.getOwnPropertySymbols(obj) to get all symbol properties on an object.

const mySymbol = Symbol('privateProperty');
const obj = {
  name: 'John',
  [mySymbol]: 42,
};

console.log(Object.keys(obj)); // Output: ['name']
console.log(obj[mySymbol]); // Output: 42
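
Continuing the example above, Object.getOwnPropertySymbols() retrieves the symbol keys that Object.keys() omits:

console.log(Object.getOwnPropertySymbols(obj)); // Output: [ Symbol(privateProperty) ]
const [firstSymbol] = Object.getOwnPropertySymbols(obj);
console.log(obj[firstSymbol]); // Output: 42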

Global Symbol registry

You can create global Symbols using Symbol.for('key'), which creates a new Symbol in the global registry if it doesn't exist, or returns the existing one. This allows you to reuse Symbols across different parts of your code base or even across different code bases.

const globalSym1 = Symbol.for('globalKey');
const globalSym2 = Symbol.for('globalKey');
console.log(globalSym1 === globalSym2); // true
const key = Symbol.keyFor(globalSym1);
console.log(key); // "globalKey"

Well-known Symbols

JavaScript includes several built-in Symbols, referred to as well-known Symbols.

  • Symbol.iterator: Defines the default iterator for an object.
  • Symbol.toStringTag: Used to create a string description for an object.
  • Symbol.hasInstance: Used to determine if an object is an instance of a constructor.

Symbol.iterator

let iterable = {
  [Symbol.iterator]() {
    let step = 0;
    return {
      next() {
        step++;
        if (step <= 5) {
          return { value: step, done: false };
        }
        return { done: true };
      },
    };
  },
};

for (let value of iterable) {
  console.log(value); // 1, 2, 3, 4, 5
}

Symbol.toStringTag

let myObj = {
  [Symbol.toStringTag]: 'MyCustomObject',
};

console.log(Object.prototype.toString.call(myObj)); // "[object MyCustomObject]"
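
Symbol.hasInstance

Symbol.hasInstance lets a class customize how instanceof evaluates against it. A small sketch (the Even class is made up for illustration):

class Even {
  static [Symbol.hasInstance](value) {
    return typeof value === 'number' && value % 2 === 0;
  }
}

console.log(4 instanceof Even); // true
console.log(3 instanceof Even); // false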

Summary

Symbols are a powerful feature in JavaScript, especially useful for creating unique object properties and customizing object behavior. They provide a means to create hidden properties, preventing accidental access or modification, which is particularly beneficial in large-scale applications and libraries.

Further reading

What are server-sent events?

Topics
JavaScriptNetworking

TL;DR

Server-sent events (SSE) is a standard that allows a web page to receive automatic updates from a server over an HTTP connection. Server-sent events are used with EventSource instances that open a connection to a server and let the client receive events from it. Connections created by server-sent events are persistent (similar to WebSockets); however, there are a few differences:

| Property | WebSocket | EventSource |
| --- | --- | --- |
| Direction | Bi-directional – both client and server can exchange messages | Unidirectional – only server sends data |
| Data type | Binary and text data | Only text |
| Protocol | WebSocket protocol (ws://) | Regular HTTP (http://) |

Creating an event source

const eventSource = new EventSource('/sse-stream');

Listening for events

// Fired when the connection is established.
eventSource.addEventListener('open', () => {
  console.log('Connection opened');
});

// Fired when a message is received from the server.
eventSource.addEventListener('message', (event) => {
  console.log('Received message:', event.data);
});

// Fired when an error occurs.
eventSource.addEventListener('error', (error) => {
  console.error('Error occurred:', error);
});

Sending events from server

const express = require('express');
const app = express();

app.get('/sse-stream', (req, res) => {
  // `Content-Type` needs to be set to `text/event-stream`.
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  // Each message must be prefixed with "data: " and terminated by a blank line.
  const sendEvent = (data) => res.write(`data: ${data}\n\n`);

  sendEvent('Hello from server');
  const intervalId = setInterval(() => sendEvent(new Date().toString()), 1000);

  res.on('close', () => {
    console.log('Client closed connection');
    clearInterval(intervalId);
  });
});

app.listen(3000, () => console.log('Server started on port 3000'));

In this example, the server sends a "Hello from server" message initially, and then sends the current date every second. The connection is kept alive until the client closes it.


What are Server-Sent Events?

Server-Sent Events (SSE) is a standard that allows a server to push updates to a web client over a single, long-lived HTTP connection. It enables real-time updates without the client having to constantly poll the server for new data.

How SSE works

  1. The client creates a new EventSource object, passing the URL of the server-side script that will generate the event stream:

    const eventSource = new EventSource('/event-stream');
  2. The server-side script sets the appropriate headers to indicate that it will be sending an event stream (Content-Type: text/event-stream), and then starts sending events to the client.

  3. Each event sent by the server follows a specific format, with fields like event, data, and id. For example:

    event: message
    data: Hello, world!

    event: update
    id: 123
    data: {"temperature": 25, "humidity": 60}
  4. On the client-side, the EventSource object receives these events and dispatches them as browser events, which can be handled using event listeners or the onmessage event handler:

    eventSource.onmessage = function (event) {
      console.log('Received message:', event.data);
    };

    eventSource.addEventListener('update', function (event) {
      console.log('Received update:', JSON.parse(event.data));
    });
  5. The EventSource object automatically handles reconnection if the connection is lost, and it can resume the event stream from the last received event ID using the Last-Event-ID HTTP header, as sketched below.
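
A minimal server-side sketch of those reconnection hooks (using Express, as in the example above; the /event-stream route path is arbitrary): the retry: field tells the browser how long to wait before reconnecting, and the id: field is what the browser echoes back in the Last-Event-ID header.

const express = require('express');
const app = express();

app.get('/event-stream', (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');

  // Ask the browser to wait 5 seconds before reconnecting after a drop
  res.write('retry: 5000\n\n');

  let eventId = 0;
  const intervalId = setInterval(() => {
    eventId += 1;
    // The id: field is sent back as the Last-Event-ID header on reconnect
    res.write(`id: ${eventId}\ndata: tick ${eventId}\n\n`);
  }, 1000);

  req.on('close', () => clearInterval(intervalId));
});

app.listen(3000);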

SSE features

  • Unidirectional: Only the server can send data to the client. For bidirectional communication, web sockets would be more appropriate.
  • Retry mechanism: The client will retry the connection if it fails, with the retry interval specified by the retry: field from the server.
  • Text-only data: SSE can only transmit text data, which means binary data needs to be encoded (e.g., Base64) before transmission. This can lead to increased overhead and inefficiency for applications that need to transmit large binary payloads.
  • Built-in browser support: Supported by most modern browsers without additional libraries.
  • Event types: SSE supports custom event types using the event: field, allowing categorization of messages.
  • Last-Event-Id: The client sends the Last-Event-Id header when reconnecting, allowing the server to resume the stream from the last received event. However, there is no built-in mechanism to replay missed events during the disconnection period. You may need to implement a mechanism to handle missed events, such as using the Last-Event-Id header.
  • Connection limitations: Browsers have a limit on the maximum number of concurrent SSE connections, typically around 6 per domain. This can be a bottleneck if you need to establish multiple SSE connections from the same client. Using HTTP/2 will mitigate this issue.

Implementing SSE in JavaScript

The following code demonstrates a minimal implementation of SSE on the client and the server:

  • The server sets the appropriate headers to establish an SSE connection.
  • Messages are sent to the client every 5 seconds.
  • The server cleans up the interval and ends the response when the client disconnects.

On the client:

// Create a new EventSource object
const eventSource = new EventSource('/sse');

// Event listener for receiving messages
eventSource.onmessage = function (event) {
  console.log('New message:', event.data);
};

// Event listener for errors
eventSource.onerror = function (error) {
  console.error('Error occurred:', error);
};

// Optional: Event listener for open connection
eventSource.onopen = function () {
  console.log('Connection opened');
};

On the server:

const http = require('http');

http
  .createServer((req, res) => {
    if (req.url === '/sse') {
      // Set headers for SSE
      res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        Connection: 'keep-alive',
      });

      // Function to send a message
      const sendMessage = (message) => {
        res.write(`data: ${message}\n\n`); // Messages are delimited with double line breaks.
      };

      // Send a message every 5 seconds
      const intervalId = setInterval(() => {
        sendMessage(`Current time: ${new Date().toLocaleTimeString()}`);
      }, 5000);

      // Handle client disconnect
      req.on('close', () => {
        clearInterval(intervalId);
        res.end();
      });
    } else {
      res.writeHead(404);
      res.end();
    }
  })
  .listen(8080, () => {
    console.log('SSE server running on port 8080');
  });

SSE vs WebSockets vs Long Polling

Most interview discussions of SSE start with the WebSockets comparison. To pick the right tool, you also need to know how SSE compares to the older long-polling pattern it largely replaced.

| Property | Long Polling | Server-Sent Events | WebSockets |
| --- | --- | --- | --- |
| Direction | Server to client (response per request) | Server to client (one connection, many messages) | Bidirectional |
| Transport | Plain HTTP | Plain HTTP (text/event-stream) | Upgraded TCP (ws:// or wss://) |
| Reconnect handling | Manual (reissue request) | Built-in (EventSource retries automatically) | Manual (write your own backoff) |
| Message format | Anything (JSON, etc.) | Text only (UTF-8) | Text or binary (frames) |
| Browser support | Universal | All evergreen browsers | All evergreen browsers |
| Server cost per connection | Low; connection closes between messages | Moderate; one TCP connection held open per client | Moderate; one TCP connection held open per client |
| Works through corporate proxies and firewalls | Yes (regular HTTP) | Usually yes (regular HTTP) | Sometimes blocked (Upgrade: websocket is rejected by some proxies) |
| Resume after disconnect | Manual | Built-in via Last-Event-ID | Manual |

A useful decision rule:

  • For bidirectional, low-latency, frequent client-to-server messages (chat, multiplayer, collaborative editing), use WebSockets.
  • For one-way notifications, dashboards, LLM streaming, or server progress updates, use SSE.
  • To support very old clients or restrictive networks where neither works, fall back to long polling.

Modern uses of SSE: LLM streaming and beyond

SSE has become a common transport for LLM streaming. The fit is straightforward: completions stream tokens one at a time from server to client, the connection is one-way, and the protocol works over plain HTTPS through most corporate proxies.

  • The OpenAI Chat Completions API returns text/event-stream when called with stream: true. Each data: chunk contains a JSON delta with the next token.
  • The Anthropic Messages API uses the same pattern: text/event-stream with data: chunks per delta.
  • Self-hosted servers like vLLM (which exposes an OpenAI-compatible API) also stream via SSE. Some others (such as Ollama) stream over newline-delimited JSON instead, so check the API docs before assuming SSE.

Beyond AI, SSE fits most "server has new data, push it to the user" use cases:

  • Live notifications such as new messages, mentions, and alerts.
  • Long-running job progress (deploys, exports, batch processing), where the server emits progress events as work proceeds.
  • Live status pages and dashboards for build status or server health.
  • Stock tickers and sports scores, where the client does not push data back.

When the use case is "client subscribes, server sends updates", SSE is usually simpler than WebSockets: you get auto-reconnect, message IDs, and HTTP/2 multiplexing for free.

Production gotchas with SSE

A few practical issues to be aware of:

  • HTTP/1.1 connection limits. Browsers cap concurrent connections per origin to roughly 6 over HTTP/1.1. A tab that opens an SSE stream and also makes regular API calls can hit the limit quickly, especially when the user opens multiple tabs (each tab opens its own SSE). Running over HTTP/2 or HTTP/3 multiplexes streams over a single connection (typically up to 100 concurrent streams by default), avoiding the per-origin limit.
  • Buffering proxies break streaming. Nginx, AWS ALB, and many reverse proxies buffer responses by default, so data: chunks accumulate on the proxy and arrive in one large piece. Disable buffering on SSE routes:
    • Nginx: send X-Accel-Buffering: no from the server, or set proxy_buffering off in the location block.
    • AWS ALB: response buffering is generally on; send data more frequently or use API Gateway with HTTP integration.
    • Cloudflare: usually streams correctly, but check that the route is not behind "Auto Minify" or response transformations.
  • Last-Event-ID on reconnect. When EventSource reconnects after a drop, it sends the last id: it received in a Last-Event-ID request header. The server should use this to resume from the right point. Otherwise, reconnects either replay everything (causing duplicates) or skip events the client missed.
  • Authentication. EventSource does not let you set custom request headers, so there is no Authorization header. Common workarounds: pass the token as a query string parameter (be careful, since this is visible in server logs), or use cookie-based auth (which EventSource does send). The newer fetch + ReadableStream approach lets you set headers if you need them (see the sketch after this list).
  • Server timeouts. Many platforms (Heroku, App Engine, AWS Lambda via API Gateway) cap request duration. SSE connections can last for hours, which exceeds those limits. Either run on a platform that supports long-lived connections (Fly.io, Render, a self-hosted VPS) or have the client reconnect periodically.
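
Below is a sketch of the fetch-based approach mentioned in the authentication bullet, assuming a hypothetical /events endpoint and bearer token. Unlike EventSource, this gives up automatic reconnection, so production code would add its own retry logic.

async function subscribe(url, token) {
  const response = await fetch(url, {
    headers: {
      Accept: 'text/event-stream',
      Authorization: `Bearer ${token}`, // not possible with EventSource
    },
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // SSE events are separated by a blank line
    const events = buffer.split('\n\n');
    buffer = events.pop(); // keep any incomplete event for the next chunk
    for (const event of events) {
      for (const line of event.split('\n')) {
        if (line.startsWith('data: ')) {
          console.log('Received:', line.slice('data: '.length));
        }
      }
    }
  }
}

subscribe('/events', 'my-token');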

Summary

Server-sent events provide an efficient and straightforward way to push updates from a server to a client in real-time. They are well-suited for applications that require continuous data streams but do not need full bidirectional communication, including the common pattern of streaming LLM responses. With built-in support in modern browsers, SSE is a reliable choice for many real-time web applications.

Further reading

Explain the concept of "hoisting" in JavaScript