JavaScript Interview Questions

190+ JavaScript interview questions and answers in quiz-style format, answered by ex-FAANG interviewers

Tired of scrolling through low-quality JavaScript interview questions? You’ve found the right place!

Our JavaScript interview questions are crafted by experienced ex-FAANG senior / staff engineers, not random unverified sources or AI.

With 190+ questions covering everything from core JavaScript concepts to advanced features (async/await, promises, etc.), you’ll be fully prepared.

Each quiz question comes with:

  • Concise answers (TL;DR): Clear and to-the-point solutions to help you respond confidently during interviews.
  • Comprehensive explanations: In-depth insights to ensure you fully understand the concepts and can elaborate when required.

Don’t waste time elsewhere. Start practicing with the best!
If you're looking for JavaScript coding questions, we've got you covered as well, with:
  • 280+ JavaScript coding questions
  • In-browser coding workspace similar to real interview environment
  • Reference solutions from Big Tech Ex-interviewers
  • Automated test cases
  • Instantly preview your code for UI questions

Explain the concept of "hoisting" in JavaScript

Topics
JavaScript

TL;DR

Hoisting is a JavaScript mechanism where variable and function declarations are moved ("hoisted") to the top of their containing scope during the compile phase.

  • Variable declarations (var): Declarations are hoisted, but not initializations. The value of the variable is undefined if accessed before initialization.
  • Variable declarations (let and const): Declarations are hoisted, but not initialized. Accessing them results in ReferenceError until the actual declaration is encountered.
  • Function expressions (var): Declarations are hoisted, but not initializations. The value of the variable is undefined if accessed before initialization.
  • Function declarations (function): Both declaration and definition are fully hoisted.
  • Class declarations (class): Declarations are hoisted, but not initialized. Accessing them results in ReferenceError until the actual declaration is encountered.
  • Import declarations (import): Declarations are hoisted, and side effects of importing the module are executed before the rest of the code.

The following behavior summarizes the result of accessing the variables before they are declared.

| Declaration                    | Accessing before declaration |
| ------------------------------ | ---------------------------- |
| `var foo`                      | `undefined`                  |
| `let foo`                      | `ReferenceError`             |
| `const foo`                    | `ReferenceError`             |
| `class Foo`                    | `ReferenceError`             |
| `var foo = function() { ... }` | `undefined`                  |
| `function foo() { ... }`       | Normal                       |
| `import`                       | Normal                       |

Hoisting

Hoisting is a term used to explain the behavior of variable declarations in JavaScript code.

Variables declared or initialized with the var keyword will have their declaration "moved" up to the top of their containing scope during compilation, which we refer to as hoisting.

Only the declaration is hoisted, the initialization/assignment (if there is one), will stay where it is. Note that the declaration is not actually moved – the JavaScript engine parses the declarations during compilation and becomes aware of variables and their scopes, but it is easier to understand this behavior by visualizing the declarations as being "hoisted" to the top of their scope.

Let's explain with a few code samples. Note that the code for these examples should be executed within a module scope instead of being entered line by line into a REPL like the browser console.

Hoisting of variables declared using var

Hoisting is seen in action here as even though foo is declared and initialized after the first console.log(), the first console.log() prints the value of foo as undefined.

console.log(foo); // undefined
var foo = 1;
console.log(foo); // 1

You can visualize the code as:

var foo;
console.log(foo); // undefined
foo = 1;
console.log(foo); // 1

Hoisting of variables declared using let, const, and class

Variables declared via let, const, and class are hoisted as well. However, unlike var and function, they are not initialized and accessing them before the declaration will result in a ReferenceError exception. The variable is in a "temporal dead zone" from the start of the block until the declaration is processed.

y; // ReferenceError: Cannot access 'y' before initialization
let y = 'local';
z; // ReferenceError: Cannot access 'z' before initialization
const z = 'local';
Foo; // ReferenceError: Cannot access 'Foo' before initialization
class Foo {
  constructor() {}
}

Hoisting of function expressions

Function expressions are functions written in the form of variable declarations. Since they are also declared using var, only the variable declaration is hoisted.

console.log(bar); // undefined
bar(); // Uncaught TypeError: bar is not a function
var bar = function () {
  console.log('BARRRR');
};

Hoisting of function declarations

Function declarations use the function keyword. Unlike function expressions, function declarations have both the declaration and definition hoisted, thus they can be called even before they are declared.

console.log(foo); // [Function: foo]
foo(); // 'FOOOOO'
function foo() {
  console.log('FOOOOO');
}

The same applies to generators (function*), async functions (async function), and async function generators (async function*).
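As a quick illustration (a minimal sketch, with a made-up function name), an async function declaration can also be called before the line where it is defined:

greet(); // Logs 'HELLO' even though the declaration appears later.
async function greet() {
  console.log('HELLO');
}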

Hoisting of import statements

Import declarations are hoisted. The identifiers the imports introduce are available in the entire module scope, and their side effects are produced before the rest of the module's code runs.

foo.doSomething(); // Works normally.
import foo from './modules/foo';

Under the hood

In reality, JavaScript creates all variables in the current scope before it even tries to execute the code. Variables created using the var keyword will have the value of undefined, while variables created using the let and const keywords will be marked as <value unavailable>. Thus, accessing a let or const variable before its declaration throws a ReferenceError, preventing use before initialization.

In the ECMAScript specification, let and const declarations are explained as follows:

The variables are created when their containing Environment Record is instantiated but may not be accessed in any way until the variable's LexicalBinding is evaluated.

However, this statement is a little different for the var keyword:

Var variables are created when their containing Environment Record is instantiated and are initialized to undefined when created.
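One way to observe this (a small illustrative sketch): a let declaration inside a block creates a binding for the entire block as soon as the block is entered, so even an outer variable with the same name cannot be read before the inner declaration.

const x = 'outer';
{
  // The inner `x` binding already exists here (created when the block's
  // environment record was instantiated), but it is not yet initialized.
  console.log(x); // ReferenceError: Cannot access 'x' before initialization
  let x = 'inner';
}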

Modern practices

In practice, modern code bases avoid using var and use let and const exclusively. It is recommended to declare and initialize your variables and import statements at the top of the containing scope/module to eliminate the mental overhead of tracking when a variable can be used.

ESLint is a static code analyzer that can find violations of such cases with the following rules:

  • no-use-before-define: This rule will warn when it encounters a reference to an identifier that has not yet been defined, i.e. it is used above its declaration.
  • no-undef: This rule will warn when it encounters a reference to an identifier that has not been declared at all (an undeclared variable).
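For example, both rules can be enabled with a configuration like the following (an illustrative .eslintrc.js snippet; merge it into your project's existing config):

// .eslintrc.js (illustrative; your project may use a different config format)
module.exports = {
  rules: {
    'no-use-before-define': 'error',
    'no-undef': 'error',
  },
};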

Further reading

What are the differences between JavaScript variables created using `let`, `var` or `const`?

Topics
JavaScript

TL;DR

In JavaScript, let, var, and const are all keywords used to declare variables, but they differ significantly in terms of scope, initialization rules, whether they can be redeclared or reassigned, and how they behave when accessed before declaration:

| Behavior                     | var                | let              | const            |
| ---------------------------- | ------------------ | ---------------- | ---------------- |
| Scope                        | Function or global | Block            | Block            |
| Initialization               | Optional           | Optional         | Required         |
| Redeclaration                | Yes                | No               | No               |
| Reassignment                 | Yes                | Yes              | No               |
| Accessing before declaration | `undefined`        | `ReferenceError` | `ReferenceError` |

Differences in behavior

Let's look at the difference in behavior between var, let, and const.

Scope

Variables declared using the var keyword are scoped to the function in which they are created, or if created outside of any function, to the global object. let and const are block scoped, meaning they are only accessible within the nearest set of curly braces (function, if-else block, or for-loop).

function foo() {
  // All variables are accessible within functions.
  var bar = 1;
  let baz = 2;
  const qux = 3;
  console.log(bar); // 1
  console.log(baz); // 2
  console.log(qux); // 3
}
foo(); // Prints each variable successfully
console.log(bar); // ReferenceError: bar is not defined
console.log(baz); // ReferenceError: baz is not defined
console.log(qux); // ReferenceError: qux is not defined

In the following example, bar is accessible outside of the if block but baz and qux are not.

if (true) {
  var bar = 1;
  let baz = 2;
  const qux = 3;
}
// var variables are accessible anywhere in the function scope.
console.log(bar); // 1
// let and const variables are not accessible outside of the block they were defined in.
console.log(baz); // ReferenceError: baz is not defined
console.log(qux); // ReferenceError: qux is not defined

Initialization

var and let variables can be initialized without a value but const declarations must be initialized.

var foo; // Ok
let bar; // Ok
const baz; // SyntaxError: Missing initializer in const declaration

Redeclaration

Redeclaring a variable with var will not throw an error, but let and const will.

var foo = 1;
var foo = 2; // Ok
console.log(foo); // Should print 2, but SyntaxError from baz prevents the code executing.
let baz = 3;
let baz = 4; // Uncaught SyntaxError: Identifier 'baz' has already been declared

Reassignment

let and const differ in that var and let allow reassigning the variable's value while const does not.

var foo = 1;
foo = 2; // This is fine.
let bar = 3;
bar = 4; // This is fine.
const baz = 5;
baz = 6; // Uncaught TypeError: Assignment to constant variable.

Accessing before declaration

var, let, and const declared variables are all hoisted. var-declared variables are auto-initialized with an undefined value. However, let and const variables are not initialized, and accessing them before the declaration will result in a ReferenceError exception because they are in a "temporal dead zone" from the start of the block until the declaration is processed.

console.log(foo); // undefined
var foo = 'foo';
console.log(baz); // ReferenceError: Cannot access 'baz' before initialization
let baz = 'baz';
console.log(bar); // ReferenceError: Cannot access 'bar' before initialization
const bar = 'bar';

Notes

  • In modern JavaScript, it's generally recommended to use const by default for variables that don't need to be reassigned. This promotes immutability and prevents accidental changes.
  • Use let when you need to reassign a variable within its scope.
  • Avoid using var due to its potential for scoping issues and hoisting behavior.
  • If you need to target older browsers, write your code using let/const, and use a transpiler like Babel compile your code to older syntax.

Further reading

What is the difference between `==` and `===` in JavaScript?

Topics
JavaScript

TL;DR

== is the abstract equality operator while === is the strict equality operator. The == operator will compare for equality after doing any necessary type conversions. The === operator will not do type conversion, so if two values are not the same type === will simply return false.

| Operator                | `==`                      | `===`                    |
| ----------------------- | ------------------------- | ------------------------ |
| Name                    | (Loose) Equality operator | Strict equality operator |
| Type coercion           | Yes                       | No                       |
| Compares value and type | No                        | Yes                      |

Equality operator (==)

The == operator checks for equality between two values but performs type coercion if the values are of different types. This means that JavaScript will attempt to convert the values to a common type before making the comparison.

console.log(42 == '42'); // true
console.log(0 == false); // true
console.log(null == undefined); // true
console.log([] == false); // true
console.log('' == false); // true

In these examples, JavaScript converts the operands to the same type before making the comparison. For example, 42 == '42' is true because the string '42' is converted to the number 42 before comparison.

However, when using ==, unintuitive results can happen:

console.log(1 == [1]); // true
console.log(0 == ''); // true
console.log(0 == '0'); // true
console.log('' == '0'); // false

As a general rule of thumb, never use the == operator, except for convenience when comparing against null or undefined, where a == null will return true if a is null or undefined.

var a = null;
console.log(a == null); // true
console.log(a == undefined); // true

Strict equality operator (===)

The === operator, also known as the strict equality operator, checks for equality between two values without performing type coercion. This means that both the value and the type must be the same for the comparison to return true.

console.log(42 === '42'); // false
console.log(0 === false); // false
console.log(null === undefined); // false
console.log([] === false); // false
console.log('' === false); // false

For these comparisons, no type conversion is performed, so the statement returns false if the types are different. For instance, 42 === '42' is false because the types (number and string) are different.

// Comparison with type coercion (==)
console.log(42 == '42'); // true
console.log(0 == false); // true
console.log(null == undefined); // true
// Strict comparison without type coercion (===)
console.log(42 === '42'); // false
console.log(0 === false); // false
console.log(null === undefined); // false

Bonus: Object.is()

There's one final value-comparison operation within JavaScript, which is the Object.is() static method. The only difference between Object.is() and === is in how they treat signed zeros and NaN values. The === operator (and the == operator) treats the number values -0 and +0 as equal, but treats NaN as not equal to NaN; Object.is() does the opposite, distinguishing -0 from +0 and treating NaN as equal to itself.
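A short illustration of those two differences:

console.log(-0 === +0); // true
console.log(Object.is(-0, +0)); // false
console.log(NaN === NaN); // false
console.log(Object.is(NaN, NaN)); // true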

Conclusion

  • Use == when you want to compare values with type coercion (and understand the implications of it). In practice, the only reasonable use case for the equality operator is to check for both null and undefined in a single comparison for convenience.
  • Use === when you want to ensure both the value and the type are the same, which is the safer and more predictable choice in most cases.

Notes

  • Using === (strict equality) is generally recommended to avoid the pitfalls of type coercion, which can lead to unexpected behavior and bugs in your code. It makes the intent of your comparisons clearer and ensures that you are comparing both the value and the type.
  • ESLint's eqeqeq rule enforces the use of strict equality operators === and !== and even provides an option to always enforce strict equality except when comparing with the null literal.

Further reading

What is the event loop in JavaScript runtimes?

What is the difference between call stack and task queue?
Topics
JavaScript

TL;DR

The event loop is a concept within the JavaScript runtime environment regarding how asynchronous operations are executed within JavaScript engines. It works as such:

  1. The JavaScript engine starts executing scripts, placing synchronous operations on the call stack.
  2. When an asynchronous operation is encountered (e.g., setTimeout(), HTTP request), it is offloaded to the respective Web API or Node.js API to handle the operation in the background.
  3. Once the asynchronous operation completes, its callback function is placed in the respective queues – task queues (also known as macrotask queues / callback queues) or microtask queues. We will refer to "task queue" as "macrotask queue" from here on to better differentiate from the microtask queue.
  4. The event loop continuously monitors the call stack and executes items on the call stack. If/when the call stack is empty:
    1. Microtask queue is processed. Microtasks include promise callbacks (then, catch, finally), MutationObserver callbacks, and calls to queueMicrotask(). The event loop takes the first callback from the microtask queue and pushes it to the call stack for execution. This repeats until the microtask queue is empty.
    2. Macrotask queue is processed. Macrotasks include web APIs like setTimeout(), HTTP requests, user interface event handlers like clicks, scrolls, etc. The event loop dequeues the first callback from the macrotask queue and pushes it onto the call stack for execution. However, after a macrotask queue callback is processed, the event loop does not proceed with the next macrotask yet! The event loop first checks the microtask queue. Checking the microtask queue is necessary as microtasks have higher priority than macrotask queue callbacks. The macrotask queue callback that was just executed could have added more microtasks!
      1. If the microtask queue is non-empty, process them as per the previous step.
      2. If the microtask queue is empty, the next macrotask queue callback is processed. This repeats until the macrotask queue is empty.
  5. This process continues indefinitely, allowing the JavaScript engine to handle both synchronous and asynchronous operations efficiently without blocking the call stack.

The unfortunate truth is that it is extremely hard to explain the event loop well using only text. We recommend checking out one of the following excellent videos explaining the event loop:

We recommend watching Lydia's video as it is the most modern and concise explanation standing at only 13 minutes long whereas the other videos are at least 30 minutes long. Her video is sufficient for the purpose of interviews.


Event loop in JavaScript

The event loop is the heart of JavaScript's asynchronous operation. It is a mechanism that handles the execution of code, allowing for asynchronous operations and ensuring that the single-threaded nature of JavaScript engines does not block the execution of the program.

Parts of the event loop

To understand it better we need to understand about all the parts of the system. These components are part of the event loop:

Call stack

The call stack keeps track of the functions being executed in a program. When a function is called, it is added to the top of the call stack; when the function completes, it is removed. This allows the program to keep track of where it is in the execution and return to the correct location when a function completes. As the name suggests, it is a stack data structure that follows last-in, first-out (LIFO) order.
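A minimal sketch of how nested calls pile up on the call stack (the function names are made up; console.trace() prints the stack at that point):

function first() {
  second();
}
function second() {
  third();
}
function third() {
  // At this point the call stack (top to bottom) is roughly:
  // third -> second -> first -> (global/module code)
  console.trace();
}
first();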

Web APIs/Node.js APIs

Asynchronous operations like setTimeout(), HTTP requests, file I/O, etc., are handled by Web APIs (in the browser) or C++ APIs (in Node.js). These APIs are not part of the JavaScript engine and run on separate threads, allowing them to execute concurrently without blocking the call stack.

Task queue / Macrotask queue / Callback queue

The task queue, also known as the macrotask queue / callback queue / event queue, is a queue that holds tasks that need to be executed. These tasks are typically asynchronous operations, such as callbacks passed to web APIs (setTimeout(), setInterval(), HTTP requests, etc.), and user interface event handlers like clicks, scrolls, etc.

Microtasks queue

Microtasks are tasks that have a higher priority than macrotasks and are executed immediately after the currently executing script is completed and before the next macrotask is executed. Microtasks are usually used for more immediate, lightweight operations that should be executed as soon as possible after the current operation completes. There is a dedicated microtask queue for microtasks. Microtasks include promise callbacks (then(), catch(), and finally()), continuations after await expressions, queueMicrotask() callbacks, and MutationObserver callbacks.

Event loop order

  1. The JavaScript engine starts executing scripts, placing synchronous operations on the call stack.
  2. When an asynchronous operation is encountered (e.g., setTimeout(), HTTP request), it is offloaded to the respective Web API or Node.js API to handle the operation in the background.
  3. Once the asynchronous operation completes, its callback function is placed in the respective queues – task queues (also known as macrotask queues / callback queues) or microtask queues. We will refer to "task queue" as "macrotask queue" from here on to better differentiate from the microtask queue.
  4. The event loop continuously monitors the call stack and executes items on the call stack. If/when the call stack is empty:
    1. Microtask queue is processed. The event loop takes the first callback from the microtask queue and pushes it to the call stack for execution. This repeats until the microtask queue is empty.
    2. Macrotask queue is processed. The event loop dequeues the first callback from the macrotask queue and pushes it onto the call stack for execution. However, after a macrotask queue callback is processed, the event loop does not proceed with the next macrotask yet! The event loop first checks the microtask queue. Checking the microtask queue is necessary as microtasks have higher priority than macrotask queue callbacks. The macrotask queue callback that was just executed could have added more microtasks!
      1. If the microtask queue is non-empty, process them as per the previous step.
      2. If the microtask queue is empty, the next macrotask queue callback is processed. This repeats until the macrotask queue is empty.
  5. This process continues indefinitely, allowing the JavaScript engine to handle both synchronous and asynchronous operations efficiently without blocking the call stack.

Example

The following code logs some statements using a combination of normal execution, macrotasks, and microtasks.

console.log('Start');
setTimeout(() => {
  console.log('Timeout 1');
}, 0);
Promise.resolve().then(() => {
  console.log('Promise 1');
});
setTimeout(() => {
  console.log('Timeout 2');
}, 0);
console.log('End');
// Console output:
// Start
// End
// Promise 1
// Timeout 1
// Timeout 2

Explanation of the output:

  1. Start and End are logged first because they are part of the initial script.
  2. Promise 1 is logged next because promise callbacks are microtasks, and microtasks are executed as soon as the synchronous code on the call stack has finished.
  3. Timeout 1 and Timeout 2 are logged last because they are macrotasks and are processed after the microtasks.

Further reading and resources

Explain event delegation in JavaScript

Topics
Web APIs, JavaScript

TL;DR

Event delegation is a technique in JavaScript where a single event listener is attached to a parent element instead of attaching event listeners to multiple child elements. When an event occurs on a child element, the event bubbles up the DOM tree, and the parent element's event listener handles the event based on the target element.

Event delegation provides the following benefits:

  • Improved performance: Attaching a single event listener is more efficient than attaching multiple event listeners to individual elements, especially for large or dynamic lists. This reduces memory usage and improves overall performance.
  • Simplified event handling: With event delegation, you only need to write the event handling logic once in the parent element's event listener. This makes the code more maintainable and easier to update.
  • Dynamic element support: Event delegation automatically handles events for dynamically added or removed elements within the parent element. There's no need to manually attach or remove event listeners when the DOM structure changes.

However, do note that:

  • It is important to identify the target element that triggered the event.
  • Not all events can be delegated because they are not bubbled. Non-bubbling events include: focus, blur, scroll, mouseenter, mouseleave, resize, etc.

Event delegation

Event delegation is a design pattern in JavaScript used to efficiently manage and handle events on multiple child elements by attaching a single event listener to a common ancestor element. This pattern is particularly valuable in scenarios where you have a large number of similar elements, such as list items, and want to optimize event handling.

How event delegation works

  1. Attach a listener to a common ancestor: Instead of attaching individual event listeners to each child element, you attach a single event listener to a common ancestor element higher in the DOM hierarchy.
  2. Event bubbling: When an event occurs on a child element, it bubbles up through the DOM tree to the common ancestor element. During this propagation, the event listener on the common ancestor can intercept and handle the event.
  3. Determine the target: Within the event listener, you can inspect the event object to identify the actual target of the event (the child element that triggered the event). You can use properties like event.target or event.currentTarget to determine which specific child element was interacted with.
  4. Perform action based on target: Based on the target element, you can perform the desired action or execute code specific to that element. This allows you to handle events for multiple child elements with a single event listener.

Benefits of event delegation

  1. Efficiency: Event delegation reduces the number of event listeners, improving memory usage and performance, especially when dealing with a large number of elements.
  2. Dynamic elements: It works seamlessly with dynamically added or removed child elements, as the common ancestor continues to listen for events on them.

Example

Here's a simple example:

// HTML:
// <ul id="item-list">
//   <li>Item 1</li>
//   <li>Item 2</li>
//   <li>Item 3</li>
// </ul>
const itemList = document.getElementById('item-list');
itemList.addEventListener('click', (event) => {
  if (event.target.tagName === 'LI') {
    console.log(`Clicked on ${event.target.textContent}`);
  }
});

In this example, a single click event listener is attached to the <ul> element. When a click event occurs on an <li> element, the event bubbles up to the <ul> element, where the event listener checks the target's tag name to identify whether a list item was clicked. It's crucial to check the identity of the event.target as there can be other kinds of elements in the DOM tree.

Use cases

Event delegation is commonly used in scenarios like:

Handling dynamic content in single-page applications

// HTML:
// <div id="button-container">
//   <button>Button 1</button>
//   <button>Button 2</button>
// </div>
// <button id="add-button">Add Button</button>
const buttonContainer = document.getElementById('button-container');
const addButton = document.getElementById('add-button');
buttonContainer.addEventListener('click', (event) => {
  if (event.target.tagName === 'BUTTON') {
    console.log(`Clicked on ${event.target.textContent}`);
  }
});
addButton.addEventListener('click', () => {
  const newButton = document.createElement('button');
  newButton.textContent = `Button ${buttonContainer.children.length + 1}`;
  buttonContainer.appendChild(newButton);
});

In this example, a click event listener is attached to the <div> container. When a new button is added dynamically and clicked, the event listener on the container handles the click event.

Simplifying code by avoiding the need to attach and remove event listeners for elements that change

// HTML:
// <form id="user-form">
//   <input type="text" name="username" placeholder="Username">
//   <input type="email" name="email" placeholder="Email">
//   <input type="password" name="password" placeholder="Password">
// </form>
const userForm = document.getElementById('user-form');
userForm.addEventListener('input', (event) => {
  const { name, value } = event.target;
  console.log(`Changed ${name}: ${value}`);
});

In this example, a single input event listener is attached to the form element. It can respond to input changes for all child input elements, simplifying the code by eliminating the need for individual listeners on each <input> element.

Pitfalls

Do note that event delegation comes with certain pitfalls:

  • Incorrect target handling: Ensure correct identification of the event target to avoid unintended actions.
  • Not all events can be delegated/bubbled: Not all events can be delegated because they are not bubbled. Non-bubbling events include: focus, blur, scroll, mouseenter, mouseleave, resize, etc.
  • Event overhead: While event delegation is generally more efficient, there needs to be complex logic written within the root event listener to identify the triggering element and respond appropriately. This can introduce overhead and can be potentially more complex if not managed properly.

Event delegation in JavaScript frameworks

In React, event handlers are attached to the React root's DOM container into which the React tree is rendered. Even though onClick is added to child elements, the actual event listeners are attached to the root DOM node, leveraging event delegation to optimize event handling and improve performance.

When an event occurs, React's event listener captures it and determines which React component rendered the target element based on its internal bookkeeping. React then dispatches the event to the appropriate component's event handler by calling the handler function with a synthetic event object. This synthetic event object wraps the native browser event, providing a consistent interface across different browsers and capturing information about the event.

By using event delegation, React avoids attaching individual event handlers to each component instance, which would create significant overhead, especially for large component trees. Instead, React leverages the browser's native event bubbling mechanism to capture events at the root and distribute them to the appropriate components.

Further reading

Explain how `this` works in JavaScript

Topics
JavaScript, OOP

TL;DR

There's no simple explanation for this; it is one of the most confusing concepts in JavaScript because its behavior differs from many other programming languages. The one-liner explanation of the this keyword is that it is a dynamic reference to the context in which a function is executed.

A longer explanation is that this follows these rules:

  1. If the new keyword is used when calling the function, meaning the function was used as a function constructor, the this inside the function is the newly-created object instance.
  2. If this is used in a class constructor, the this inside the constructor is the newly-created object instance.
  3. If apply(), call(), or bind() is used to call/create a function, this inside the function is the object that is passed in as the argument.
  4. If a function is called as a method (e.g. obj.method()) — this is the object that the function is a property of.
  5. If a function is invoked as a free function invocation, meaning it was invoked without any of the conditions present above, this is the global object. In the browser, the global object is the window object. If in strict mode ('use strict';), this will be undefined instead of the global object.
  6. If multiple of the above rules apply, the rule that is higher wins and will set the this value.
  7. If the function is an ES2015 arrow function, it ignores all the rules above and receives the this value of its surrounding scope at the time it is created.

For an in-depth explanation, do check out Arnav Aggrawal's article on Medium.


this keyword

In JavaScript, this is a keyword that refers to the current execution context of a function or script. It's a fundamental concept in JavaScript, and understanding how this works is crucial for building robust and maintainable applications.

Used globally

In the global scope, this refers to the global object, which is the window object in a web browser or the global object in a Node.js environment.

console.log(this); // In a browser, this will log the window object (for non-strict mode).

Within a regular function call

When a function is called in the global context or as a standalone function, this refers to the global object (in non-strict mode) or undefined (in strict mode).

function showThis() {
  console.log(this);
}
showThis(); // In non-strict mode: Window (global object). In strict mode: undefined.

Within a method call

When a function is called as a method of an object, this refers to the object that the method is called on.

const obj = {
  name: 'John',
  showThis: function () {
    console.log(this);
  },
};
obj.showThis(); // { name: 'John', showThis: ƒ }

Note that if you do the following, it is as good as a regular function call and not a method call. this has lost its context and no longer points to obj.

const obj = {
  name: 'John',
  showThis: function () {
    console.log(this);
  },
};
const showThisStandalone = obj.showThis;
showThisStandalone(); // In non-strict mode: Window (global object). In strict mode: undefined.

Within a function constructor

When a function is used as a constructor (called with the new keyword), this refers to the newly-created instance. In the following example, this refers to the Person object being created, and the name property is set on that object.

function Person(name) {
  this.name = name;
}
const person = new Person('John');
console.log(person.name); // "John"

Within class constructor and methods

In ES2015 classes, this behaves as it does in object methods. It refers to the instance of the class.

class Person {
  constructor(name) {
    this.name = name;
  }
  showThis() {
    console.log(this);
  }
}
const person = new Person('John');
person.showThis(); // Person {name: 'John'}
const showThisStandalone = person.showThis;
showThisStandalone(); // `undefined` because in JavaScript class bodies, all methods are strict mode by default, even if you don't add 'use strict'

Explicitly binding this

You can use bind(), call(), or apply() to explicitly set the value of this for a function.

Using the call() and apply() methods allow you to explicitly set the value of this when calling the function.

function showThis() {
  console.log(this);
}
const obj = { name: 'John' };
showThis.call(obj); // { name: 'John' }
showThis.apply(obj); // { name: 'John' }

The bind() method creates a new function with this bound to the specified value.

function showThis() {
  console.log(this);
}
const obj = { name: 'John' };
const boundFunc = showThis.bind(obj);
boundFunc(); // { name: 'John' }

Within arrow functions

Arrow functions do not have their own this context. Instead, this is lexically scoped, which means it inherits the this value from the surrounding scope at the time the arrow function is defined.

In this example, this refers to the global object (window or global), because the arrow function is not bound to the person object.

const person = {
  firstName: 'John',
  sayHello: () => {
    console.log(`Hello, my name is ${this.firstName}!`);
  },
};
person.sayHello(); // "Hello, my name is undefined!"

In the following example, the this in the arrow function will be the this value of its enclosing context, so it depends on how showThis() is called.

const obj = {
  name: 'John',
  showThis: function () {
    const arrowFunc = () => {
      console.log(this);
    };
    arrowFunc();
  },
};
obj.showThis(); // { name: 'John', showThis: ƒ }
const showThisStandalone = obj.showThis;
showThisStandalone(); // In non-strict mode: Window (global object). In strict mode: undefined.

Therefore, the this value in arrow functions cannot be set by bind(), apply() or call() methods, nor does it point to the current object in object methods.

const obj = {
  name: 'Alice',
  regularFunction: function () {
    console.log('Regular function:', this.name);
  },
  arrowFunction: () => {
    console.log('Arrow function:', this.name);
  },
};
const anotherObj = {
  name: 'Bob',
};
// Using call/apply/bind with a regular function
obj.regularFunction.call(anotherObj); // Regular function: Bob
obj.regularFunction.apply(anotherObj); // Regular function: Bob
const boundRegularFunction = obj.regularFunction.bind(anotherObj);
boundRegularFunction(); // Regular function: Bob
// Using call/apply/bind with an arrow function, `this` refers to the global scope and cannot be modified.
obj.arrowFunction.call(anotherObj); // Arrow function: window/undefined (depending if strict mode)
obj.arrowFunction.apply(anotherObj); // Arrow function: window/undefined (depending if strict mode)
const boundArrowFunction = obj.arrowFunction.bind(anotherObj);
boundArrowFunction(); // Arrow function: window/undefined (depending if strict mode)

Within event handlers

When a function is called as a DOM event handler, this refers to the element that triggered the event. In this example, this refers to the <button> element that was clicked.

<button id="my-button" onclick="console.log(this)">Click me</button>
<!-- Logs the button element -->

When setting an event handler using JavaScript, this also refers to the element that received the event.

document.getElementById('my-button').addEventListener('click', function () {
  console.log(this); // Logs the button element
});

As mentioned above, ES2015 introduced arrow functions, which use the enclosing lexical scope for this. This is usually convenient, but it prevents the caller from defining the this context via .call/.apply/.bind. One of the consequences is that this will not be bound to the element in your event handler functions if you define the .addEventListener() callback using an arrow function.

document.getElementById('my-button').addEventListener('click', () => {
  console.log(this); // Window / undefined (depending on whether strict mode) instead of the button element.
});

In summary, this in JavaScript refers to the current execution context of a function or script, and its value can change depending on the context in which it is used. Understanding how this works is essential for building robust and maintainable JavaScript applications.

Further reading

Describe the difference between a cookie, `sessionStorage` and `localStorage` in browsers

Topics
Web APIs, JavaScript

TL;DR

All of the following are mechanisms of storing data on the client, the user's browser in this case. localStorage and sessionStorage both implement the Web Storage API interface.

  • Cookies: Suitable for server-client communication, small storage capacity, can be persistent or session-based, domain-specific. Sent to the server on every request.
  • localStorage: Suitable for long-term storage, data persists even after the browser is closed, accessible across all tabs and windows of the same origin, highest storage capacity among the three.
  • sessionStorage: Suitable for temporary data within a single page session, data is cleared when the tab or window is closed, has a higher storage capacity compared to cookies.

Here's a table summarizing the 3 client storage mechanisms.

| Property                                | Cookie                                              | localStorage        | sessionStorage       |
| --------------------------------------- | --------------------------------------------------- | ------------------- | -------------------- |
| Initiator                               | Client or server. Server can use `Set-Cookie` header | Client              | Client               |
| Lifespan                                | As specified                                         | Until deleted       | Until tab is closed  |
| Persistent across browser sessions      | If a future expiry date is set                       | Yes                 | No                   |
| Sent to server with every HTTP request  | Yes, sent via `Cookie` header                        | No                  | No                   |
| Total capacity (per domain)             | 4KB                                                  | 5MB                 | 5MB                  |
| Access                                  | Across windows/tabs                                  | Across windows/tabs | Same tab             |
| Security                                | JavaScript cannot access `HttpOnly` cookies          | None                | None                 |

Storage on the web

Cookies, localStorage, and sessionStorage, are all storage mechanisms on the client (web browser). It is useful to store data on the client for client-only state like access tokens, themes, personalized layouts, so that users can have a consistent experience on a website across tabs and usage sessions.

These client-side storage mechanisms have the following common properties:

  • Data is stored on the client, which means clients can read and modify the values (except for HttpOnly cookies).
  • Key-value based storage.
  • They are only able to store values as strings. Non-strings will have to be serialized into a string (e.g. JSON.stringify()) in order to be stored.
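For example, a small sketch of storing a non-string value in localStorage (the key name and object shape here are arbitrary):

const preferences = { theme: 'dark', fontSize: 14 };
// Serialize before storing...
localStorage.setItem('preferences', JSON.stringify(preferences));
// ...and parse when reading back.
const restored = JSON.parse(localStorage.getItem('preferences'));
console.log(restored.theme); // 'dark'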

Use cases for each storage mechanism

Since cookies have a relatively low maximum size, it is not advisable to store all your client-side data within cookies. The distinguishing property of cookies is that they are sent to the server on every HTTP request, so the low maximum size is actually a feature that prevents your requests from becoming too large. Automatic expiry of cookies is a useful feature as well.

With that in mind, the best kind of data to store within cookies is small pieces of data that need to be transmitted to the server, such as auth tokens, session IDs, analytics tracking IDs, GDPR cookie consent, and language preferences that are important for authentication, authorization, and rendering on the server. These values are sometimes sensitive and can benefit from the HttpOnly, Secure, and Expires/Max-Age capabilities that cookies provide.

localStorage and sessionStorage both implement the Web Storage API interface. Web Storage has a generous capacity of around 5MB per origin, so storage size is usually not a concern. The key difference is that values stored in Web Storage are not automatically sent along with HTTP requests.

While you can manually include values from Web Storage when making AJAX/fetch() requests, the browser does not include them in the initial request / first load of the page. Hence Web Storage should not be used to store data that is relied on by the server for the initial rendering of the page if server-side rendering is being used (typically authentication/authorization-related information). localStorage is most suitable for user preferences data that do not expire, like themes and layouts (if it is not important for the server to render the final layout). sessionStorage is most suitable for temporary data that only needs to be accessible within the current browsing session, such as form data (useful to preserve data during accidental reloads).
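As a small sketch of the form-data use case (the #draft element and the storage key are hypothetical):

// Restore a draft after an accidental reload within the same tab.
const draftField = document.querySelector('#draft');
draftField.value = sessionStorage.getItem('draft-text') ?? '';
// Save the draft as the user types.
draftField.addEventListener('input', () => {
  sessionStorage.setItem('draft-text', draftField.value);
});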

The following sections dive deeper into each client storage mechanism.

Cookies

Cookies are used to store small pieces of data on the client side that can be sent back to the server with every HTTP request.

  • Storage capacity: Limited to around 4KB for all cookies.
  • Lifespan: Cookies can have a specific expiration date set using the Expires or Max-Age attributes. If no expiration date is set, the cookie is deleted when the browser is closed (session cookie).
  • Access: Cookies are domain-specific and can be shared across different pages and subdomains within the same domain.
  • Security: Cookies can be marked as HttpOnly to prevent access from JavaScript, reducing the risk of XSS attacks. They can also be secured with the Secure flag to ensure they are sent only when HTTPS is used.
// Set a cookie for the name/key `auth_token` with an expiry.
document.cookie =
'auth_token=abc123def; expires=Fri, 31 Dec 2024 23:59:59 GMT; path=/';
// Read all cookies. There's no way to read specific cookies using `document.cookie`.
// You have to parse the string yourself.
console.log(document.cookie); // auth_token=abc123def
// Delete the cookie with the name/key `auth_token` by setting an
// expiry date in the past. The value doesn't matter.
document.cookie = 'auth_token=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/';

It is a pain to read/write to cookies. document.cookie returns a single string containing all the key/value pairs delimited by ; and you have to parse the string yourself. The js-cookie npm library provides a simple and lightweight API for reading/writing cookies in JavaScript.

A modern native way of accessing cookies is via the Cookie Store API which is only available on HTTPS pages.

// Set a cookie. More options are available too.
cookieStore.set('auth_token', 'abc123def');
// Async method to access a single cookie and do something with it.
cookieStore.get('auth_token').then(...);
// Async method to get all cookies.
cookieStore.getAll().then(...);
// Async method to delete a single cookie.
cookieStore.delete('auth_token').then(() => console.log('Cookie deleted'));

The CookieStore API is relatively new and may not be supported in all browsers (supported in latest Chrome and Edge as of June 2024). Refer to caniuse.com for the latest compatibility.

localStorage

localStorage is used for storing data that persists even after the browser is closed and reopened. It is designed for long-term storage of data.

  • Storage capacity: Typically around 5MB per origin (varies by browser).
  • Lifespan: Data in localStorage persists until explicitly deleted by the user or the application.
  • Access: Data is accessible within all tabs and windows of the same origin.
  • Security: All JavaScript on the page have access to values within localStorage.
// Set a value in localStorage.
localStorage.setItem('key', 'value');
// Get a value from localStorage.
console.log(localStorage.getItem('key'));
// Remove a value from localStorage.
localStorage.removeItem('key');
// Clear all data in localStorage.
localStorage.clear();

sessionStorage

sessionStorage is used to store data for the duration of the page session. It is designed for temporary storage of data.

  • Storage Capacity: Typically around 5MB per origin (varies by browser).
  • Lifespan: Data in sessionStorage is cleared when the page session ends (i.e., when the browser or tab is closed). Reloading the page does not destroy data within sessionStorage.
  • Access: Data is accessible only within the current tab (or browsing context). Different tabs have separate sessionStorage objects even if they belong to the same browser window. In this context, window refers to a browser window that can contain multiple tabs.
  • Security: All JavaScript on the same page have access to values within sessionStorage for that page.
// Set a value in sessionStorage.
sessionStorage.setItem('key', 'value');
// Get a value from sessionStorage.
console.log(sessionStorage.getItem('key'));
// Remove a value from sessionStorage.
sessionStorage.removeItem('key');
// Clear all data in sessionStorage.
sessionStorage.clear();

Notes

There are also other client-side storage mechanisms like IndexedDB which is more powerful than the above-mentioned technologies but more complicated to use.

References

Describe the difference between `<script>`, `<script async>` and `<script defer>`

Topics
HTML, JavaScript

TL;DR

All of these ways (<script>, <script async>, and <script defer>) are used to load and execute JavaScript files in an HTML document, but they differ in how the browser handles loading and execution of the script:

  • <script> is the default way of including JavaScript. The browser blocks HTML parsing while the script is being downloaded and executed. The browser will not continue rendering the page until the script has finished executing.
  • <script async> downloads the script asynchronously, in parallel with parsing the HTML, and executes it as soon as it is available, potentially interrupting HTML parsing. Multiple <script async> scripts do not wait for each other and execute in no particular order.
  • <script defer> downloads the script asynchronously, in parallel with parsing the HTML. However, the execution of the script is deferred until HTML parsing is complete, in the order they appear in the HTML.

Here's a table summarizing the 3 ways of loading <script>s in a HTML document.

| Feature          | `<script>`             | `<script async>`         | `<script defer>`         |
| ---------------- | ---------------------- | ------------------------ | ------------------------ |
| Parsing behavior | Blocks HTML parsing    | Runs parallel to parsing | Runs parallel to parsing |
| Execution order  | In order of appearance | Not guaranteed           | In order of appearance   |
| DOM dependency   | No                     | No                       | Yes (waits for DOM)      |

What <script> tags are for

<script> tags are used to include JavaScript on a web page. The async and defer attributes are used to change how/when the loading and execution of the script happens.

<script>

For normal <script> tags without any async or defer, when they are encountered, HTML parsing is blocked, the script is fetched and executed immediately. HTML parsing resumes after the script is executed. This can block rendering of the page if the script is large.

Use <script> for critical scripts that the page relies on to render properly.

<!doctype html>
<html>
  <head>
    <title>Regular Script</title>
  </head>
  <body>
    <!-- Content before the script -->
    <h1>Regular Script Example</h1>
    <p>This content will be rendered before the script executes.</p>
    <!-- Regular script -->
    <script src="regular.js"></script>
    <!-- Content after the script -->
    <p>This content will be rendered after the script executes.</p>
  </body>
</html>

<script async>

In <script async>, the browser downloads the script file asynchronously (in parallel with HTML parsing) and executes it as soon as it is available (potentially before HTML parsing completes). The execution will not necessarily be executed in the order in which it appears in the HTML document. This can improve perceived performance because the browser doesn't wait for the script to download before continuing to render the page.

Use <script async> when the script is independent of any other scripts on the page, for example, analytics and ads scripts.

<!doctype html>
<html>
  <head>
    <title>Async Script</title>
  </head>
  <body>
    <!-- Content before the script -->
    <h1>Async Script Example</h1>
    <p>This content will be rendered before the async script executes.</p>
    <!-- Async script -->
    <script async src="async.js"></script>
    <!-- Content after the script -->
    <p>
      This content may be rendered before or after the async script executes.
    </p>
  </body>
</html>

<script defer>

Similar to <script async>, <script defer> also downloads the script in parallel to HTML parsing but the script is only executed when the document has been fully parsed and before firing DOMContentLoaded. If there are multiple of them, each deferred script is executed in the order they appeared in the HTML document.

If a script relies on a fully-parsed DOM, the defer attribute will be useful in ensuring that the HTML is fully parsed before executing.

<!doctype html>
<html>
  <head>
    <title>Deferred Script</title>
  </head>
  <body>
    <!-- Content before the script -->
    <h1>Deferred Script Example</h1>
    <p>This content will be rendered before the deferred script executes.</p>
    <!-- Deferred script -->
    <script defer src="deferred.js"></script>
    <!-- Content after the script -->
    <p>This content will be rendered before the deferred script executes.</p>
  </body>
</html>

Notes

  • The async attribute should be used for scripts that are not critical to the initial rendering of the page and do not depend on each other, while the defer attribute should be used for scripts that depend on, or are depended on by, other scripts.
  • The async and defer attributes are ignored for scripts that have no src attribute.
  • <script>s with defer or async that contain document.write() will be ignored with a message like "A call to document.write() from an asynchronously-loaded external script was ignored".
  • Even though async and defer help to make script downloading asynchronous, the scripts are still eventually executed on the main thread. If these scripts are computationally intensive, it can result in laggy/frozen UI. Partytown is a library that helps relocate script executions into a web worker and off the main thread, which is great for third-party scripts where you do not have control over the code.

Further reading

What's the difference between a JavaScript variable that is: `null`, `undefined` or undeclared?

How would you go about checking for any of these states?
Topics
JavaScript

TL;DR

| Trait                        | `null`                                                                  | `undefined`                                          | Undeclared                            |
| ---------------------------- | ----------------------------------------------------------------------- | ---------------------------------------------------- | ------------------------------------- |
| Meaning                      | Explicitly set by the developer to indicate that a variable has no value | Variable has been declared but not assigned a value | Variable has not been declared at all |
| Type (via `typeof` operator) | `'object'`                                                              | `'undefined'`                                        | `'undefined'`                         |
| Equality comparison          | `null == undefined` is `true`                                           | `undefined == null` is `true`                        | Throws a `ReferenceError`             |

Undeclared

Undeclared variables are created when you assign a value to an identifier that was not previously declared using var, let, or const. Undeclared variables will be defined globally, outside of the current scope. In strict mode, a ReferenceError will be thrown when you try to assign to an undeclared variable. Undeclared variables are bad in the same way that global variables are bad. Avoid them at all costs! To check for them, wrap their usage in a try/catch block (see the sketch after the examples below).

function foo() {
  x = 1; // Throws a ReferenceError in strict mode
}
foo();
console.log(x); // 1 (if not in strict mode)

Using the typeof operator on undeclared variables will give 'undefined'.

console.log(typeof y === 'undefined'); // true
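The try/catch check mentioned earlier can look like this (a minimal sketch; the identifier is deliberately never declared):

try {
  someUndeclaredVariable; // Reading (not assigning) an undeclared identifier throws.
} catch (err) {
  console.log(err instanceof ReferenceError); // true
}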

undefined

A variable that is undefined is a variable that has been declared, but not assigned a value. It is of type undefined. If a function does not return a value, and its result is assigned to a variable, that variable will also have the value undefined. To check for it, compare using the strict equality (===) operator or typeof which will give the 'undefined' string. Note that you should not be using the loose equality operator (==) to check, as it will also return true if the value is null.

let foo;
console.log(foo); // undefined
console.log(foo === undefined); // true
console.log(typeof foo === 'undefined'); // true
console.log(foo == null); // true. Wrong, don't use this to check if a value is undefined!
function bar() {} // Returns undefined if there is nothing returned.
let baz = bar();
console.log(baz); // undefined

null

A variable that is null will have been explicitly assigned to the null value. It represents no value and is different from undefined in the sense that it has been explicitly assigned. To check for null, simply compare using the strict equality operator. Note that like the above, you should not be using the loose equality operator (==) to check, as it will also return true if the value is undefined.

const foo = null;
console.log(foo === null); // true
console.log(typeof foo === 'object'); // true
console.log(foo == undefined); // true. Wrong, don't use this to check if a value is null!

Notes

  • As a good habit, never leave your variables undeclared or unassigned. Explicitly assign null to them after declaring if you don't intend to use them yet.
  • Always explicitly declare variables before using them to prevent errors.
  • Using some static analysis tooling in your workflow (e.g. ESLint, TypeScript Compiler), will enable checks that you are not referencing undeclared variables.

Practice

Practice implementing type utilities that check for null and undefined on GreatFrontEnd.

Further Reading

What's the difference between `.call` and `.apply` in JavaScript?

Topics
JavaScript

TL;DR

.call and .apply are both used to invoke functions with a specific this context and arguments. The primary difference lies in how they accept arguments:

  • .call(thisArg, arg1, arg2, ...): Takes arguments individually.
  • .apply(thisArg, [argsArray]): Takes arguments as an array.

Assuming we have a function add, the function can be invoked using .call and .apply in the following manner:

function add(a, b) {
  return a + b;
}
console.log(add.call(null, 1, 2)); // 3
console.log(add.apply(null, [1, 2])); // 3

Call vs Apply

Both .call and .apply are used to invoke functions and the first parameter will be used as the value of this within the function. However, .call takes in comma-separated arguments as the next arguments while .apply takes in an array of arguments as the next argument.

An easy way to remember this is C for call and comma-separated and A for apply and an array of arguments.

function add(a, b) {
  return a + b;
}
console.log(add.call(null, 1, 2)); // 3
console.log(add.apply(null, [1, 2])); // 3

With ES6 syntax, we can invoke call using an array along with the spread operator for the arguments.

function add(a, b) {
  return a + b;
}
console.log(add.call(null, ...[1, 2])); // 3

Use cases

Context management

.call and .apply can set the this context explicitly when invoking methods on different objects.

const person = {
  name: 'John',
  greet() {
    console.log(`Hello, my name is ${this.name}`);
  },
};
const anotherPerson = { name: 'Alice' };
person.greet.call(anotherPerson); // Hello, my name is Alice
person.greet.apply(anotherPerson); // Hello, my name is Alice

Function borrowing

Both .call and .apply allow borrowing methods from one object and using them in the context of another. This is useful when passing functions as arguments (callbacks) and the original this context is lost. .call and .apply allow the function to be invoked with the intended this value.

function greet() {
  console.log(`Hello, my name is ${this.name}`);
}
const person1 = { name: 'John' };
const person2 = { name: 'Alice' };
greet.call(person1); // Hello, my name is John
greet.apply(person2); // Hello, my name is Alice

Alternative syntax to call methods on objects

.apply can be used with object methods by passing the object as the first argument followed by the usual parameters.

const arr1 = [1, 2, 3];
const arr2 = [4, 5, 6];
Array.prototype.push.apply(arr1, arr2); // Same as arr1.push(4, 5, 6)
console.log(arr1); // [1, 2, 3, 4, 5, 6]

Deconstructing the above:

  1. The first object, arr1 will be used as the this value.
  2. .push() is called on arr1 with the elements of arr2 as the arguments, because .apply() takes the arguments as an array.
  3. Array.prototype.push.apply(arr1, arr2) is equivalent to arr1.push(...arr2).

It may not be obvious, but Array.prototype.push.apply(arr1, arr2) causes modifications to arr1. It's clearer to call methods using the OOP-centric way instead where possible.
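
With spread syntax, the same mutation can be written more directly without .apply():

const arr1 = [1, 2, 3];
const arr2 = [4, 5, 6];
arr1.push(...arr2); // Mutates arr1 in place, same result as the .apply() version
console.log(arr1); // [1, 2, 3, 4, 5, 6]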

Follow-Up Questions

  • How do .call and .apply differ from Function.prototype.bind?

Practice

Practice implementing your own Function.prototype.call method and Function.prototype.apply method on GreatFrontEnd.

Further Reading

Explain `Function.prototype.bind` in JavaScript

Topics
JavaScript, OOP

TL;DR

Function.prototype.bind is a method in JavaScript that allows you to create a new function with a specific this value and optional initial arguments. Its primary purposes are:

  • Binding this value to preserve context: The primary purpose of bind is to bind the this value of a function to a specific object. When you call func.bind(thisArg), it creates a new function with the same body as func, but with this permanently bound to thisArg.
  • Partial application of arguments: bind also allows you to pre-specify arguments for the new function. Any arguments passed to bind after thisArg will be prepended to the arguments list when the new function is called.
  • Method borrowing: bind allows you to borrow methods from one object and apply them to another object, even if they were not originally designed to work with that object.

The bind method is particularly useful in scenarios where you need to ensure that a function is called with a specific this context, such as in event handlers, callbacks, or method borrowing.


Function.prototype.bind

Function.prototype.bind allows you to create a new function with a specific this context and, optionally, preset arguments. bind() is most useful for preserving the value of this in methods of classes that you want to pass into other functions.

bind was frequently used on legacy React class component methods which were not defined using arrow functions.

const john = {
age: 42,
getAge: function () {
return this.age;
},
};
console.log(john.getAge()); // 42
const unboundGetAge = john.getAge;
console.log(unboundGetAge()); // undefined
const boundGetAge = john.getAge.bind(john);
console.log(boundGetAge()); // 42
const mary = { age: 21 };
const boundGetAgeMary = john.getAge.bind(mary);
console.log(boundGetAgeMary()); // 21

In the example above, when the getAge method is called without a calling object (as unboundGetAge), the value is undefined because the value of this within getAge() becomes the global object. boundGetAge() has its this bound to john, hence it is able to obtain the age of john.

We can even use getAge on another object which is not john! boundGetAgeMary returns the age of mary.
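
Note that once a function has been bound, the bound this value cannot be overridden; subsequent .call(), .apply(), or even another .bind() will not change it. A small sketch reusing boundGetAge and mary from above:

console.log(boundGetAge.call(mary)); // 42, the bound `this` still refers to `john`
console.log(boundGetAge.bind(mary)()); // 42, re-binding does not change `this` either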

Use cases

Here are some common scenarios where bind is frequently used:

Preserving context and fixing the this value in callbacks

When you pass a function as a callback, the this value inside the function can be unpredictable because it is determined by the execution context. Using bind() helps ensure that the correct this value is maintained.

class Person {
constructor(firstName) {
this.firstName = firstName;
}
greet() {
console.log(`Hello, my name is ${this.firstName}`);
}
}
const john = new Person('John');
// Without bind(), `this` inside the callback will be the global object
setTimeout(john.greet, 1000); // Output: "Hello, my name is undefined"
// Using bind() to fix the `this` value
setTimeout(john.greet.bind(john), 2000); // Output: "Hello, my name is John"

You can also use arrow functions to define class methods for this purpose instead of using bind. Arrow functions have the this value bound to its lexical context.

class Person {
constructor(name) {
this.name = name;
}
greet = () => {
console.log(`Hello, my name is ${this.name}`);
};
}
const john = new Person('John Doe');
setTimeout(john.greet, 1000); // Output: "Hello, my name is John Doe"

Partial application of functions (currying)

bind can be used to create a new function with some arguments pre-set. This is known as partial application or currying.

function multiply(a, b) {
return a * b;
}
// Using bind() to create a new function with some arguments pre-set
const multiplyBy5 = multiply.bind(null, 5);
console.log(multiplyBy5(3)); // Output: 15

Method borrowing

bind allows you to borrow methods from one object and apply them to another object, even if they were not originally designed to work with that object. This can be handy when you need to reuse functionality across different objects

const person = {
name: 'John',
greet: function () {
console.log(`Hello, ${this.name}!`);
},
};
const greetPerson = person.greet.bind({ name: 'Alice' });
greetPerson(); // Output: Hello, Alice!

Practice

Try implementing your own Function.prototype.bind() method on GreatFrontEnd.

Further Reading

What advantage is there for using the JavaScript arrow syntax for a method in a constructor?

Topics
JavaScript

TL;DR

The main advantage of using an arrow function as a method inside a constructor is that the value of this gets set at the time of the function creation and can't change after that. When the constructor is used to create a new object, this will always refer to that object.

For example, let's say we have a Person constructor that takes a first name as an argument and has two methods that console.log() that name, one as a regular function and one as an arrow function:

const Person = function (name) {
this.firstName = name;
this.sayName1 = function () {
console.log(this.firstName);
};
this.sayName2 = () => {
console.log(this.firstName);
};
};
const john = new Person('John');
const dave = new Person('Dave');
john.sayName1(); // John
john.sayName2(); // John
// The regular function can have its `this` value changed, but the arrow function cannot
john.sayName1.call(dave); // Dave (because `this` is now the dave object)
john.sayName2.call(dave); // John
john.sayName1.apply(dave); // Dave (because `this` is now the dave object)
john.sayName2.apply(dave); // John
john.sayName1.bind(dave)(); // Dave (because `this` is now the dave object)
john.sayName2.bind(dave)(); // John
const sayNameFromWindow1 = john.sayName1;
sayNameFromWindow1(); // undefined (because `this` is now the window object)
const sayNameFromWindow2 = john.sayName2;
sayNameFromWindow2(); // John

The main takeaway here is that this can be changed for a normal function, but this always stays the same for an arrow function. So even if you are passing around your arrow function to different parts of your application, you wouldn't have to worry about the value of this changing.


Arrow functions

Arrow functions were introduced in ES2015 and provide a concise way to write functions in JavaScript. One of the key features of arrow functions is that they lexically bind the this value, meaning they take the this value from the enclosing scope.

Syntax

Arrow functions use the => syntax instead of the function keyword. The basic syntax is:

const myFunction = (arg1, arg2, ...argN) => {
// function body
};

If the function body has only one expression, you can omit the curly braces and the return keyword:

const myFunction = (arg1, arg2, ...argN) => expression;

Examples

// Arrow function with parameters
const multiply = (x, y) => x * y;
console.log(multiply(2, 3)); // Output: 6
// Arrow function with no parameters
const sayHello = () => 'Hello, World!';
console.log(sayHello()); // Output: 'Hello, World!'

Advantages

  • Concise: Arrow functions provide a more concise syntax, especially for short functions.
  • Implicit return: They have an implicit return for single-line functions.
  • Value of this is predictable: Arrow functions lexically bind the this value, inheriting it from the enclosing scope.

Limitations

Arrow functions cannot be used as constructors and will throw an error when used with the new keyword.

const Foo = () => {};
const foo = new Foo(); // TypeError: Foo is not a constructor

They also do not have their own arguments object; the arguments have to be obtained using the rest parameter syntax (...) instead.

const arrowFunction = (...args) => {
console.log(arguments); // Throws a ReferenceError
console.log(args); // [1, 2, 3]
};
arrowFunction(1, 2, 3);

Since arrow functions do not have their own this, they are not suitable for defining methods in an object. Traditional function expressions or function declarations should be used instead.

const obj = {
value: 42,
getValue: () => this.value, // `this` does not refer to `obj`
};
console.log(obj.getValue()); // undefined
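
A minimal fix is to define the method with a regular function (shorthand method syntax here) so that this refers to obj when it is called as obj.getValue():

const obj = {
  value: 42,
  getValue() {
    return this.value; // `this` refers to `obj` for this call
  },
};
console.log(obj.getValue()); // 42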

Why arrow functions are useful

One of the most notable features of arrow functions is their behavior with this. Unlike regular functions, arrow functions do not have their own this. Instead, they inherit this from the parent scope at the time they are defined. This makes arrow functions particularly useful for scenarios like event handlers, callbacks, and methods in classes.

Arrow functions inside function constructors

const Person = function (name) {
this.firstName = name;
this.sayName1 = function () {
console.log(this.firstName);
};
this.sayName2 = () => {
console.log(this.firstName);
};
};
const john = new Person('John');
const dave = new Person('Dave');
john.sayName1(); // John
john.sayName2(); // John
// The regular function can have its `this` value changed, but the arrow function cannot
john.sayName1.call(dave); // Dave (because `this` is now the dave object)
john.sayName2.call(dave); // John
john.sayName1.apply(dave); // Dave (because `this` is now the dave object)
john.sayName2.apply(dave); // John
john.sayName1.bind(dave)(); // Dave (because `this` is now the dave object)
john.sayName2.bind(dave)(); // John
const sayNameFromWindow1 = john.sayName1;
sayNameFromWindow1(); // undefined (because `this` is now the window object)
const sayNameFromWindow2 = john.sayName2;
sayNameFromWindow2(); // John

Arrow functions in event handlers

const button = document.getElementById('myButton');
button.addEventListener('click', function () {
console.log(this); // Output: Button
console.log(this === button); // Output: true
});
button.addEventListener('click', () => {
console.log(this); // Output: Window
console.log(this === window); // Output: true
});

This can be particularly helpful in React class components. If you define a class method for something such as a click handler using a normal function, and then you pass that click handler down into a child component as a prop, you will need to also bind this in the constructor of the parent component. If you instead use an arrow function, there is no need to bind this, as the method will automatically get its this value from its enclosing lexical context. See this article for an excellent demonstration and sample code.
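
As an illustrative sketch (ParentComponent and ChildComponent are hypothetical component names), the two approaches look like this in a legacy React class component:

import React from 'react';

class ParentComponent extends React.Component {
  constructor(props) {
    super(props);
    // With a regular method, `this` must be bound manually before passing the handler down
    this.handleClick = this.handleClick.bind(this);
  }
  handleClick() {
    console.log('Clicked!', this.props);
  }
  // With a class field arrow function, `this` comes from the lexical context, so no binding is needed
  handleClickArrow = () => {
    console.log('Clicked!', this.props);
  };
  render() {
    return (
      <ChildComponent
        onClick={this.handleClick}
        onAltClick={this.handleClickArrow}
      />
    );
  }
}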

Further reading

Explain how prototypal inheritance works in JavaScript

Topics
JavaScript, OOP

TL;DR

Prototypal inheritance in JavaScript is a way for objects to inherit properties and methods from other objects. Every JavaScript object has a special hidden property called [[Prototype]] (commonly accessed via __proto__ or using Object.getPrototypeOf()) that is a reference to another object, which is called the object's "prototype".

When a property is accessed on an object and if the property is not found on that object, the JavaScript engine looks at the object's __proto__, and the __proto__'s __proto__ and so on, until it finds the property defined on one of the __proto__s or until it reaches the end of the prototype chain.

This behavior simulates classical inheritance, but it is really more of a delegation mechanism than inheritance.

Here's an example of prototypal inheritance:

// Parent object constructor.
function Animal(name) {
this.name = name;
}
// Add a method to the parent object's prototype.
Animal.prototype.makeSound = function () {
console.log('The ' + this.constructor.name + ' makes a sound.');
};
// Child object constructor.
function Dog(name) {
Animal.call(this, name); // Call the parent constructor.
}
// Set the child object's prototype to be the parent's prototype.
Object.setPrototypeOf(Dog.prototype, Animal.prototype);
// Add a method to the child object's prototype.
Dog.prototype.bark = function () {
console.log('Woof!');
};
// Create a new instance of Dog.
const bolt = new Dog('Bolt');
// Call methods on the child object.
console.log(bolt.name); // "Bolt"
bolt.makeSound(); // "The Dog makes a sound."
bolt.bark(); // "Woof!"

Things to note are:

  • .makeSound is not defined on Dog, so the JavaScript engine goes up the prototype chain and finds .makeSound on the inherited Animal.
  • Using Object.create() to build the inheritance chain is no longer recommended. Use Object.setPrototypeOf() instead.

Prototypal inheritance in JavaScript

Prototypal inheritance is a feature in JavaScript used to create objects that inherit properties and methods from other objects. Instead of a class-based inheritance model, JavaScript uses a prototype-based model, where objects can directly inherit from other objects.

Key Concepts

  1. Prototypes: Every object in JavaScript has a prototype, which is another object. When you create an object using an object literal or a constructor function, the new object is linked to the prototype of its constructor function, or to Object.prototype if no other prototype is specified. This is commonly referenced using __proto__ or [[Prototype]]. You can also get the prototype using the built-in method Object.getPrototypeOf() and set the prototype of an object via Object.setPrototypeOf().
// Define a constructor function
function Person(name, age) {
this.name = name;
this.age = age;
}
// Add a method to the prototype
Person.prototype.sayHello = function () {
console.log(`Hello, my name is ${this.name} and I am ${this.age} years old.`);
};
// Create a new object using the constructor function
let john = new Person('John', 30);
// The new object has access to the methods defined on the prototype
john.sayHello(); // "Hello, my name is John and I am 30 years old."
// The prototype of the new object is the prototype of the constructor function
console.log(john.__proto__ === Person.prototype); // true
// You can also get the prototype using Object.getPrototypeOf()
console.log(Object.getPrototypeOf(john) === Person.prototype); // true
// You can set the prototype of an object using Object.setPrototypeOf()
let newProto = {
sayGoodbye: function () {
console.log(`Goodbye, my name is ${this.name}`);
},
};
Object.setPrototypeOf(john, newProto);
// Now john has access to the methods defined on the new prototype
john.sayGoodbye(); // "Goodbye, my name is John"
// But no longer has access to the methods defined on the old prototype
console.log(john.sayHello); // undefined
  2. Prototype chain: When a property or method is accessed on an object, JavaScript first looks for it on the object itself. If it doesn't find it there, it looks at the object's prototype, and then the prototype's prototype, and so on, until it either finds the property or reaches the end of the chain (i.e., null).

  3. Constructor functions: JavaScript provides constructor functions to create objects. When a function is used as a constructor with the new keyword, the new object's prototype ([[Prototype]]) is set to the constructor's prototype property.

// Define a constructor function
function Animal(name) {
this.name = name;
}
// Add a method to the prototype
Animal.prototype.sayName = function () {
console.log(`My name is ${this.name}`);
};
// Define a new constructor function
function Dog(name, breed) {
Animal.call(this, name);
this.breed = breed;
}
// Set the prototype of Dog to inherit from Animal's prototype
Object.setPrototypeOf(Dog.prototype, Animal.prototype);
// Add a method to the Dog prototype
Dog.prototype.bark = function () {
console.log('Woof!');
};
// Create a new object using the Dog constructor function
let fido = new Dog('Fido', 'Labrador');
// The new object has access to the methods defined on its own prototype and the Animal prototype
fido.bark(); // "Woof!"
fido.sayName(); // "My name is Fido"
// If we try to access a method that doesn't exist on the Dog prototype or the Animal prototype, JavaScript will return undefined
console.log(fido.fly); // undefined
  4. Object.create(): This method creates a new object whose internal [[Prototype]] is directly linked to the specified prototype object. It's the most direct way to create an object that inherits from another specific object, without involving constructor functions. If you create an object via Object.create(null), it will not inherit any properties from Object.prototype. This means the object will not have any built-in properties or methods like toString() or hasOwnProperty().
// Define a prototype object
let proto = {
greet: function () {
console.log(`Hello, my name is ${this.name}`);
},
};
// Use `Object.create()` to create a new object with the specified prototype
let person = Object.create(proto);
person.name = 'John';
// The new object has access to the methods defined on the prototype
person.greet(); // "Hello, my name is John"
// Check if the object has a property
console.log(person.hasOwnProperty('name')); // true
// Create an object that does not inherit from Object.prototype
let animal = Object.create(null);
animal.name = 'Rocky';
// The new object does not have any built-in properties or methods
console.log(animal.toString); // undefined
console.log(animal.hasOwnProperty); // undefined
// But you can still add and access custom properties
animal.describe = function () {
console.log(`Name of the animal is ${this.name}`);
};
animal.describe(); // "Name of the animal is Rocky"

Resources

Difference between: `function Person(){}`, `const person = Person()`, and `const person = new Person()` in JavaScript?

Topics
JavaScript, OOP

TL;DR

  • function Person(){}: A function declaration in JavaScript. It can be used as a regular function or as a constructor.
  • const person = Person(): Calls Person as a regular function, not a constructor. If Person is intended to be a constructor, this will lead to unexpected behavior.
  • const person = new Person(): Creates a new instance of Person, correctly utilizing the constructor function to initialize the new object.

| Aspect | function Person(){} | const person = Person() | const person = new Person() |
| Type | Function declaration | Function call | Constructor call |
| Usage | Defines a function | Invokes Person as a regular function | Creates a new instance of Person |
| Instance creation | No instance created | No instance created | New instance created |
| Common mistake | N/A | Misusing as a constructor, leading to undefined | None (when used correctly) |

Function declaration

function Person(){} is a standard function declaration in JavaScript. When written in PascalCase, it follows the convention for functions intended to be used as constructors.

function Person(name) {
this.name = name;
}

This code defines a function named Person that takes a parameter name and assigns it to the name property of the object created from this constructor function. When the this keyword is used in a constructor, it refers to the newly created object.

Function call

const person = Person() simply invokes the function. When you invoke Person as a regular function (i.e., without the new keyword), it does not behave as a constructor. Instead, it executes its code and returns undefined if no return value is specified, and that undefined value gets assigned to the variable intended to hold the instance. Invoking it this way is a common mistake when the function is intended to be used as a constructor.

function Person(name) {
this.name = name;
}
const person = Person('John'); // Throws error in strict mode
console.log(person); // undefined
console.log(person.name); // Uncaught TypeError: Cannot read property 'name' of undefined

In this case, Person('John') does not create a new object. The person variable is assigned undefined because the Person function does not explicitly return a value. Attempting to access person.name throws an error because person is undefined.

Constructor call

const person = new Person() creates an instance of Person using the new operator; the new instance inherits from Person.prototype. An alternative would be to use Object.create, e.g. Object.create(Person.prototype) to create the object, followed by Person.call(person1, 'John') to initialize it.

function Person(name) {
this.name = name;
}
const person = new Person('John');
console.log(person); // Person { name: 'John' }
console.log(person.name); // 'John'
// Alternative
const person1 = Object.create(Person.prototype);
Person.call(person1, 'John');
console.log(person1); // Person { name: 'John' }
console.log(person1.name); // 'John'

In this case, new Person('John') creates a new object, and this within Person refers to this new object. The name property is correctly set on the new object, and the person variable is assigned the new instance of Person with the name property set to 'John'. For the alternative approach, Object.create(Person.prototype) creates a new object with Person.prototype as its prototype, and Person.call(person1, 'John') initializes the object, setting the name property.

Follow-Up Questions

  • What are the differences between function constructors and ES6 class syntax?
  • What are some common use cases for Object.create?

Further reading

Explain the differences on the usage of `foo` between `function foo() {}` and `var foo = function() {}` in JavaScript

Topics
JavaScript

TL;DR

function foo() {} is a function declaration while var foo = function() {} is a function expression. The key difference is that function declarations have their bodies hoisted, but the bodies of function expressions are not (they have the same hoisting behavior as var-declared variables).

If you try to invoke a function expression before it is declared, you will get an Uncaught TypeError: XXX is not a function error.

Function declarations can be called in the enclosing scope even before they are declared.

foo(); // 'FOOOOO'
function foo() {
console.log('FOOOOO');
}

Function expressions if called before they are declared will result in an error.

foo(); // Uncaught TypeError: foo is not a function
var foo = function () {
console.log('FOOOOO');
};

Another key difference is in the scope of the function name. Function expressions can be named by placing a name after the function keyword and before the parentheses. However, when using named function expressions, the function name is only accessible within the function itself; trying to access it outside results in a ReferenceError.

const myFunc = function namedFunc() {
console.log(namedFunc); // Works
};
myFunc(); // Runs the function and logs the function reference
console.log(namedFunc); // ReferenceError: namedFunc is not defined

Note: The examples use var for legacy reasons. Function expressions can also be defined using let and const; the difference then lies in the hoisting behavior of those keywords.


Function declarations

A function declaration is a statement that defines a function with a name. It is typically used to declare a function that can be called multiple times throughout the enclosing scope.

function foo() {
console.log('FOOOOO');
}
foo(); // 'FOOOOO'

Function expressions

A function expression is an expression that defines a function and assigns it to a variable. It is often used when a function is needed only once or in a specific context.

var foo = function () {
console.log('FOOOOO');
};
foo(); // 'FOOOOO'

Note: The examples use var for legacy reasons. Function expressions can also be defined using let and const; the difference then lies in the hoisting behavior of those keywords.

Key differences

Hoisting

The key difference is that function declarations have their bodies hoisted, but the bodies of function expressions are not (they have the same hoisting behavior as var-declared variables). For more explanation on hoisting, refer to the quiz question on hoisting. If you try to invoke a function expression before it is defined, you will get an Uncaught TypeError: XXX is not a function error.

Function declarations:

foo(); // 'FOOOOO'
function foo() {
console.log('FOOOOO');
}

Function expressions:

foo(); // Uncaught TypeError: foo is not a function
var foo = function () {
console.log('FOOOOO');
};

Name scope

Function expressions can be named by placing a name after the function keyword and before the parentheses. However, when using named function expressions, the function name is only accessible within the function itself; referencing it outside results in a ReferenceError.

const myFunc = function namedFunc() {
console.log(namedFunc); // Works
};
myFunc(); // Runs the function and logs the function reference
console.log(namedFunc); // ReferenceError: namedFunc is not defined
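
One practical benefit of the internal name is that the function can refer to itself, for example for recursion. A minimal sketch:

const factorial = function fact(n) {
  return n <= 1 ? 1 : n * fact(n - 1); // `fact` is only visible inside the expression
};
console.log(factorial(5)); // 120
console.log(typeof fact); // 'undefined', the name does not leak into the outer scope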

When to use each

  • Function declarations:
    • When you want to create a function on the global scope and make it available throughout the enclosing scope.
    • If a function is reusable and needs to be called multiple times.
  • Function expressions:
    • If a function is only needed once or in a specific context.
    • Use to limit the function availability to subsequent code and keep the enclosing scope clean.

In general, it's preferable to use function declarations to avoid the mental overhead of determining whether a function can already be called. In practice, cases that truly require function expressions are quite rare.

Further reading

What's a typical use case for anonymous functions in JavaScript?

Topics
JavaScript

TL;DR

An anonymous function in JavaScript is a function that does not have a name associated with it. They are typically used as arguments to other functions or assigned to variables.

const arr = [-1, 0, 5, 6];
// The filter method is passed an anonymous function.
arr.filter((x) => x > 1); // [5, 6]

They are often used as arguments to other functions, known as higher-order functions, which can take functions as input and return a function as output. Anonymous functions can access variables from the outer scope, a concept known as closures, allowing them to "close over" and remember the environment in which they were created.

// Encapsulating Code
(function () {
// Some code here.
})();
// Callbacks
setTimeout(function () {
console.log('Hello world!');
}, 1000);
// Functional programming constructs
const arr = [1, 2, 3];
const double = arr.map(function (el) {
return el * 2;
});
console.log(double); // [2, 4, 6]

Anonymous functions

Anonymous functions provide a more concise way to define functions, especially for simple operations or callbacks. Besides that, they can also be used in the following scenarios:

Immediate execution

Anonymous functions are commonly used in Immediately Invoked Function Expressions (IIFEs) to encapsulate code within a local scope. This prevents variables declared within the function from leaking to the global scope and polluting the global namespace.

// This is an IIFE
(function () {
var x = 10;
console.log(x); // 10
})();
// x is not accessible here
console.log(typeof x); // undefined

In the above example, the IIFE creates a local scope for the variable x. As a result, x is not accessible outside the IIFE, thus preventing it from leaking into the global scope.

Callbacks

Anonymous functions can be used as callbacks that are used once and do not need to be used anywhere else. The code will seem more self-contained and readable when handlers are defined right inside the code calling them, rather than having to search elsewhere to find the function body.

setTimeout(() => {
console.log('Hello world!');
}, 1000);

Higher-order functions

Anonymous functions are commonly passed as arguments to functional programming constructs such as higher-order functions or Lodash utilities (similar to callbacks). Higher-order functions take other functions as arguments or return them as results. Anonymous functions are often used with higher-order functions like map(), filter(), and reduce().

const arr = [1, 2, 3];
const double = arr.map((el) => {
return el * 2;
});
console.log(double); // [2, 4, 6]

Event Handling

In React, anonymous functions are widely used for defining callback functions inline for handling events and passing callbacks as props.

function App() {
return <button onClick={() => console.log('Clicked!')}>Click Me</button>;
}

Follow-Up Questions

  • How do anonymous functions differ from named functions?
  • Can you explain the difference between arrow functions and anonymous functions?

What are the various ways to create objects in JavaScript?

Topics
JavaScript

TL;DR

Creating objects in JavaScript offers several methods:

  • Object literals ({}): Simplest and most popular approach. Define key-value pairs within curly braces.
  • Object() constructor: Use new Object() with dot notation to add properties.
  • Object.create(): Create new objects using existing objects as prototypes, inheriting properties and methods.
  • Constructor functions: Define blueprints for objects using functions, creating instances with new.
  • ES2015 classes: Structured syntax similar to other languages, using class and constructor keywords.

Objects in JavaScript

Creating objects in JavaScript involves several methods. Here are the various ways to create objects in JavaScript:

Object literals ({})

This is the simplest and most popular way to create objects in JavaScript. It involves defining a collection of key-value pairs within curly braces ({}). It can be used when you need to create a single object with a fixed set of properties.

const person = {
firstName: 'John',
lastName: 'Doe',
age: 50,
eyeColor: 'blue',
};
console.log(person); // {firstName: "John", lastName: "Doe", age: 50, eyeColor: "blue"}

Object() constructor

This method involves using the new keyword with the built-in Object constructor to create an object. You can then add properties to the object using dot notation. It can be used when you need to create an object from a primitive value or to create an empty object.

const person = new Object();
person.firstName = 'John';
person.lastName = 'Doe';
console.log(person); // {firstName: "John", lastName: "Doe"}

Object.create() Method

This method allows you to create a new object using an existing object as a prototype. The new object inherits properties and methods from the prototype object. It can be used when you need to create a new object with a specific prototype.

// Object.create() Method
const personPrototype = {
greet() {
console.log(
`Hello, my name is ${this.name} and I'm ${this.age} years old.`,
);
},
};
const person = Object.create(personPrototype);
person.name = 'John';
person.age = 30;
person.greet(); // Output: Hello, my name is John and I'm 30 years old.

An object without a prototype can be created by doing Object.create(null).
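
For instance:

const dict = Object.create(null);
dict.name = 'Rocky';
console.log(dict.name); // 'Rocky'
console.log(dict.toString); // undefined, nothing is inherited from Object.prototype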

ES2015 classes

Classes provide a more structured and familiar syntax (similar to other programming languages) for creating objects. They define a blueprint and use methods to interact with the object's properties. It can be used when you need to create complex objects with inheritance and encapsulation.

class Person {
constructor(name, age) {
this.name = name;
this.age = age;
}
greet() {
console.log(
`Hello, my name is ${this.name} and I'm ${this.age} years old.`,
);
}
}
const person1 = new Person('John', 30);
const person2 = new Person('Alice', 25);
person1.greet(); // Output: Hello, my name is John and I'm 30 years old.
person2.greet(); // Output: Hello, my name is Alice and I'm 25 years old.

Constructor functions

Constructor functions are used to create reusable blueprints for objects. They define the properties and behaviors shared by all objects of that type. You use the new keyword to create instances of the object. It can be used when you need to create multiple objects with similar properties and methods.

However, now that ES2015 classes are readily supported in modern browsers, there's little reason to use constructor functions to create objects.

// Constructor function
function Person(name, age) {
this.name = name;
this.age = age;
this.greet = function () {
console.log(
`Hello, my name is ${this.name} and I'm ${this.age} years old.`,
);
};
}
const person1 = new Person('John', 30);
const person2 = new Person('Alice', 25);
person1.greet(); // Output: Hello, my name is John and I'm 30 years old.
person2.greet(); // Output: Hello, my name is Alice and I'm 25 years old.

Further reading

What is a closure in JavaScript, and how/why would you use one?

Topics
Closure, JavaScript

TL;DR

In the book "You Don't Know JS" (YDKJS) by Kyle Simpson, a closure is defined as follows:

Closure is when a function is able to remember and access its lexical scope even when that function is executing outside its lexical scope

In simple terms, functions have access to variables that were in their scope at the time of their creation. This is what we call the function's lexical scope. A closure is a function that retains access to these variables even after the outer function has finished executing, as if the function keeps a memory of its original environment.

function outerFunction() {
const outerVar = 'I am outside of innerFunction';
function innerFunction() {
console.log(outerVar); // `innerFunction` can still access `outerVar`.
}
return innerFunction;
}
const inner = outerFunction(); // `inner` now holds a reference to `innerFunction`.
inner(); // "I am outside of innerFunction"
// Even though `outerFunction` has completed execution, `inner` still has access to variables defined inside `outerFunction`.

Key points to remember:

  • Closure occurs when an inner function has access to variables in its outer (lexical) scope, even when the outer function has finished executing.
  • Closure allows a function to remember the environment in which it was created, even if that environment is no longer present.
  • Closures are used extensively in JavaScript, such as in callbacks, event handlers, and asynchronous functions.

Understanding JavaScript closures

In JavaScript, a closure is a function that captures the lexical scope in which it was declared, allowing it to access and manipulate variables from an outer scope even after that scope has been closed.

Here's how closures work:

  1. Lexical scoping: JavaScript uses lexical scoping, meaning a function's access to variables is determined by its actual location within the source code.
  2. Function creation: When a function is created, it keeps a reference to its lexical scope. This scope contains all the local variables that were in-scope at the time the closure was created.
  3. Maintaining state: Closures are often used to maintain state in a secure way because the variables captured by the closure are not accessible outside the function.

ES6 syntax and closures

With ES6, closures can be created using arrow functions, which provide a more concise syntax and lexically bind the this value. Here's an example:

const createCounter = () => {
let count = 0;
return () => {
count += 1;
return count;
};
};
const counter = createCounter();
console.log(counter()); // Outputs: 1
console.log(counter()); // Outputs: 2

Closures in React

Closures are everywhere. The code below shows a simple example of incrementing a counter on a button click. In this code, handleClick forms a closure: it has access to its outer scope variables count and setCount.

import React, { useState } from 'react';
function Counter() {
// Define a state variable using the useState hook
const [count, setCount] = useState(0);
// This handleClick function is a closure
function handleClick() {
// It can access the 'count' state variable
setCount(count + 1);
}
return (
<div>
<p>Count: {count}</p>
<button onClick={handleClick}>Increment</button>
</div>
);
}
function App() {
return (
<div>
<h1>Counter App</h1>
<Counter />
</div>
);
}
export default App;

Why use closures?

Closures provide the following benefits:

  1. Data encapsulation: Closures provide a way to create private variables and functions that can't be accessed from outside the closure. This is useful for hiding implementation details and maintaining state in an encapsulated way.
  2. Functional programming: Closures are fundamental in functional programming paradigms, where they are used to create functions that can be passed around and invoked later, retaining access to the scope in which they were created, e.g. partial applications or currying.
  3. Event handlers and callbacks: In JavaScript, closures are often used in event handlers and callbacks to maintain state or access variables that were in scope when the handler or callback was defined.
  4. Module patterns: Closures enable the module pattern in JavaScript, allowing the creation of modules with private and public parts.
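
For instance, a minimal sketch of the module pattern built on a closure:

const counterModule = (function () {
  let count = 0; // Private state, only reachable through the returned methods
  return {
    increment() {
      count += 1;
      return count;
    },
    reset() {
      count = 0;
    },
  };
})();
console.log(counterModule.increment()); // 1
console.log(counterModule.increment()); // 2
console.log(counterModule.count); // undefined, `count` is not exposed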

Further reading

What is the definition of a higher-order function in JavaScript?

Topics
JavaScript

TL;DR

A higher-order function is any function that takes one or more functions as arguments, which it uses to operate on some data, and/or returns a function as a result.

Higher-order functions are meant to abstract some operation that is performed repeatedly. The classic example of this is Array.prototype.map(), which is called on an array and takes a callback function as an argument. Array.prototype.map() then uses this callback to transform each item in the array, returning a new array with the transformed data. Other popular examples in JavaScript are Array.prototype.forEach(), Array.prototype.filter(), and Array.prototype.reduce(). A higher-order function doesn't just have to manipulate arrays; there are also many use cases for returning a function from another function. Function.prototype.bind() is an example that returns another function.

Imagine a scenario where we have an array of names that we need to transform to uppercase. The imperative way will be as such:

const names = ['irish', 'daisy', 'anna'];
function transformNamesToUppercase(names) {
const results = [];
for (let i = 0; i < names.length; i++) {
results.push(names[i].toUpperCase());
}
return results;
}
console.log(transformNamesToUppercase(names)); // ['IRISH', 'DAISY', 'ANNA']

Using Array.prototype.map(transformerFn) makes the code shorter and more declarative.

const names = ['irish', 'daisy', 'anna'];
function transformNamesToUppercase(names) {
return names.map((name) => name.toUpperCase());
}
console.log(transformNamesToUppercase(names)); // ['IRISH', 'DAISY', 'ANNA']

Higher order functions

A higher-order function is a function that takes another function as an argument or returns a function as its result.

Functions as arguments

A higher-order function can take another function as an argument and execute it.

function greet(name) {
return `Hello, ${name}!`;
}
function greetName(greeter, name) {
console.log(greeter(name));
}
greetName(greet, 'Alice'); // Output: Hello, Alice!

In this example, the greetName function is a higher-order function because it takes another function (greet) as an argument and uses it to generate a greeting for the given name.

Functions as return values

A higher-order function can return another function.

function multiplier(factor) {
return function (num) {
return num * factor;
};
}
const double = multiplier(2);
const triple = multiplier(3);
console.log(double(5)); // Output: 10
console.log(triple(5)); // Output: 15

In this example, the multiplier function returns a new function that multiplies any number by the specified factor. The returned function is a closure that remembers the factor value from the outer function. The multiplier function is a higher-order function because it returns another function.

Practical examples

  1. Logging decorator: A higher-order function that adds logging functionality to another function:
function withLogging(fn) {
return function (...args) {
console.log(`Calling ${fn.name} with arguments`, args);
return fn.apply(this, args);
};
}
function add(a, b) {
return a + b;
}
const loggedAdd = withLogging(add);
console.log(loggedAdd(2, 3));
// Output:
// Calling add with arguments [2, 3]
// 5

The withLogging function is a higher-order function that takes a function fn as an argument and returns a new function that logs the function call before executing the original function.

  2. Memoization: A higher-order function that caches the results of a function to avoid redundant computations:
function memoize(fn) {
const cache = new Map();
return function (...args) {
const key = JSON.stringify(args);
if (cache.has(key)) {
return cache.get(key);
}
const result = fn.apply(this, args);
cache.set(key, result);
return result;
};
}
function fibonacci(n) {
if (n <= 1) return n;
return fibonacci(n - 1) + fibonacci(n - 2);
}
const memoizedFibonacci = memoize(fibonacci);
console.log(memoizedFibonacci(10)); // Output: 55

The memoize function is a higher-order function that takes a function fn as an argument and returns a new function that caches the results of the original function based on its arguments.

  3. Lodash: Lodash is a utility library that provides a wide range of functions for working with arrays, objects, strings, and more, most of which are higher-order functions.
import _ from 'lodash';
const numbers = [1, 2, 3, 4, 5];
// Filter array
const evenNumbers = _.filter(numbers, (n) => n % 2 === 0); // [2, 4]
// Map array
const doubledNumbers = _.map(numbers, (n) => n * 2); // [2, 4, 6, 8, 10]
// Find the maximum value
const maxValue = _.max(numbers); // 5
// Sum all values
const sum = _.sum(numbers); // 15

Further reading

What are the differences between JavaScript ES2015 classes and ES5 function constructors?

Topics
JavaScript, OOP

TL;DR

ES2015 introduced classes, which provide a more intuitive and concise way to define and work with objects and inheritance compared to the ES5 function constructor syntax. Here's an example of each:

// ES5 function constructor
function Person(name) {
this.name = name;
}
// ES2015 Class
class Person {
constructor(name) {
this.name = name;
}
}

For simple constructors, they look pretty similar. The main difference in the constructor comes when using inheritance. If we want to create a Student class that subclasses Person and add a studentId field, this is what we have to do.

// ES5 inheritance
// Superclass
function Person1(name) {
this.name = name;
}
// Subclass
function Student1(name, studentId) {
// Call constructor of superclass to initialize superclass-derived members.
Person1.call(this, name);
// Initialize subclass's own members.
this.studentId = studentId;
}
Student1.prototype = Object.create(Person1.prototype);
Student1.prototype.constructor = Student1;
const student1 = new Student1('John', 1234);
console.log(student1.name, student1.studentId); // "John" 1234
// ES2015 inheritance
// Superclass
class Person2 {
constructor(name) {
this.name = name;
}
}
// Subclass
class Student2 extends Person2 {
constructor(name, studentId) {
super(name);
this.studentId = studentId;
}
}
const student2 = new Student2('Alice', 5678);
console.log(student2.name, student2.studentId); // "Alice" 5678

It's much more verbose to use inheritance in ES5 and the ES2015 version is easier to understand and remember.

Comparison of ES5 function constructors vs ES2015 classes

| Feature | ES5 function constructor | ES2015 class |
| Syntax | Uses function constructors and prototypes | Uses the class keyword |
| Constructor | Function with properties assigned using this | constructor method inside the class |
| Method definition | Defined on the prototype | Defined inside the class body |
| Static methods | Added directly to the constructor function | Defined using the static keyword |
| Inheritance | Uses Object.create() and manually sets the prototype chain | Uses the extends keyword and super |
| Readability | Less intuitive and more verbose | More concise and intuitive |

ES5 function constructor vs ES2015 classes

ES5 function constructors and ES2015 classes are two different ways of defining classes in JavaScript. They both serve the same purpose, but they have different syntax and behavior.

ES5 function constructors

In ES5, you define a class-like structure using a function constructor and prototypes. Here's an example:

// ES5 function constructor
function Person(name, age) {
this.name = name;
this.age = age;
}
Person.prototype.greet = function () {
console.log(
'Hello, my name is ' + this.name + ' and I am ' + this.age + ' years old.',
);
};
// Creating an instance
var person1 = new Person('John', 30);
person1.greet(); // Hello, my name is John and I am 30 years old.

ES2015 classes

ES2015 introduced the class syntax, which simplifies the definition of classes and supports more features such as static methods and subclassing. Here's the same example using ES2015:

// ES2015 Class
class Person {
constructor(name, age) {
this.name = name;
this.age = age;
}
greet() {
console.log(
`Hello, my name is ${this.name} and I am ${this.age} years old.`,
);
}
}
// Creating an instance
const person1 = new Person('John', 30);
person1.greet(); // Hello, my name is John and I am 30 years old.

Key Differences

  1. Syntax and Readability:

    • ES5: Uses function constructors and prototypes, which can be less intuitive and harder to read.
    • ES2015: Uses the class keyword, making the code more concise and easier to understand.
  2. Static Methods:

    • ES5: Static methods are added directly to the constructor function.
    • ES2015: Static methods are defined within the class using the static keyword.
    // ES5
    function Person1(name, age) {
    this.name = name;
    this.age = age;
    }
    Person1.sayHi = function () {
    console.log('Hi from ES5!');
    };
    Person1.sayHi(); // Hi from ES5!
    // ES2015
    class Person2 {
    static sayHi() {
    console.log('Hi from ES2015!');
    }
    }
    Person2.sayHi(); // Hi from ES2015!
  3. Inheritance

    • ES5: Inheritance is achieved using Object.create() and manually setting the prototype chain.
    • ES2015: Inheritance is much simpler and more intuitive with the extends keyword.
    // ES5 Inheritance
    // ES5 function constructor
    function Person1(name, age) {
    this.name = name;
    this.age = age;
    }
    Person1.prototype.greet = function () {
    console.log(
    `Hello, my name is ${this.name} and I am ${this.age} years old.`,
    );
    };
    function Student1(name, age, grade) {
    Person1.call(this, name, age);
    this.grade = grade;
    }
    Student1.prototype = Object.create(Person1.prototype);
    Student1.prototype.constructor = Student1;
    Student1.prototype.study = function () {
    console.log(this.name + ' is studying.');
    };
    var student1 = new Student1('John', 22, 'B+');
    student1.greet(); // Hello, my name is John and I am 22 years old.
    student1.study(); // John is studying.
    // ES2015 Inheritance
    // ES2015 Class
    class Person2 {
    constructor(name, age) {
    this.name = name;
    this.age = age;
    }
    greet() {
    console.log(
    `Hello, my name is ${this.name} and I am ${this.age} years old.`,
    );
    }
    }
    class Student2 extends Person2 {
    constructor(name, age, grade) {
    super(name, age);
    this.grade = grade;
    }
    study() {
    console.log(`${this.name} is studying.`);
    }
    }
    const student2 = new Student2('Alice', 20, 'A');
    student2.greet(); // Hello, my name is Alice and I am 20 years old.
    student2.study(); // Alice is studying.
  4. super calls:

    • ES5: Manually call the parent constructor function.
    • ES2015: Use the super keyword to call the parent class's constructor and methods.
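
Building on the Person2 class from the inheritance example above, a small sketch (the GraduateStudent name is illustrative) of calling the parent constructor and a parent method with super:

class GraduateStudent extends Person2 {
  constructor(name, age) {
    super(name, age); // Call the parent class's constructor
  }
  greet() {
    super.greet(); // Call the parent class's greet() method
    console.log('I am also a graduate student.');
  }
}
const grad = new GraduateStudent('Bob', 25);
grad.greet();
// Hello, my name is Bob and I am 25 years old.
// I am also a graduate student.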

Conclusion

While both ES5 and ES2015 approaches can achieve the same functionality, ES2015 classes provide a clearer and more concise way to define and work with object-oriented constructs in JavaScript, which makes the code easier to write, read, and maintain. If you are working with modern JavaScript, it is generally recommended to use ES2015 classes over ES5 function constructors.

Resources

Describe event bubbling in JavaScript and browsers

Topics
Web APIs, JavaScript

TL;DR

Event bubbling is a DOM event propagation mechanism where an event (e.g. a click) starts at the target element and bubbles up to the root of the document. This allows ancestor elements to also respond to the event.

Event bubbling is essential for event delegation, where a single event handler manages events for multiple child elements, enhancing performance and code simplicity. While convenient, failing to manage event propagation properly can lead to unintended behavior, such as multiple handlers firing for a single event.


What is event bubbling?

Event bubbling is a propagation mechanism in the DOM (Document Object Model) where an event, such as a click or a keyboard event, is first triggered on the target element that initiated the event and then propagates upward (bubbles) through the DOM tree to the root of the document.

Note: before the bubbling phase happens, there is the event capturing phase, which is the opposite of bubbling: the event travels down from the document root to the target element.

Bubbling phase

During the bubbling phase, the event starts at the target element and bubbles up through its ancestors in the DOM hierarchy. This means that the event handlers attached to the target element and its ancestors can all potentially receive and respond to the event.

Here's an example using modern ES6 syntax to demonstrate event bubbling:

// HTML:
// <div id="parent">
// <button id="child">Click me!</button>
// </div>
const parentDiv = document.createElement('div');
parentDiv.id = 'parent';
const button = document.createElement('button');
button.id = 'child';
parentDiv.appendChild(button);
document.body.appendChild(parentDiv);
const parent = document.getElementById('parent');
const child = document.getElementById('child');
parent.addEventListener('click', () => {
console.log('Parent element clicked');
});
child.addEventListener('click', () => {
console.log('Child element clicked');
});
// Simulate clicking the button:
child.click();

When you click the "Click me!" button, both the child and parent event handlers will be triggered due to the event bubbling.

Stopping the bubbling

Event bubbling can be stopped during the bubbling phase using the stopPropagation() method. If an event handler calls stopPropagation(), it prevents the event from further bubbling up the DOM tree, ensuring that only the handlers of the elements up to that point in the hierarchy are executed.

// HTML:
// <div id="parent">
// <button id="child">Click me!</button>
// </div>
const parentDiv = document.createElement('div');
parentDiv.id = 'parent';
const button = document.createElement('button');
button.id = 'child';
parentDiv.appendChild(button);
document.body.appendChild(parentDiv);
const parent = document.getElementById('parent');
const child = document.getElementById('child');
parent.addEventListener('click', () => {
console.log('Parent element clicked');
});
child.addEventListener('click', (event) => {
console.log('Child element clicked');
event.stopPropagation(); // Stops propagation to parent
});
// Simulate clicking the button:
child.click();

Event delegation

Event bubbling is the basis for a technique called event delegation, where you attach a single event handler to a common ancestor of multiple elements and use event delegation to handle events for those elements efficiently. This is particularly useful when you have a large number of similar elements, like a list of items, and you want to avoid attaching individual event handlers to each item.

parent.addEventListener('click', (event) => {
if (event.target && event.target.id === 'child') {
console.log('Child element clicked');
}
});

Benefits

  • Cleaner code: Reduced number of event listeners improves code readability and maintainability.
  • Efficient event handling: Minimizes performance overhead by attaching fewer listeners.
  • Flexibility: Allows handling events happening on child elements without directly attaching listeners to them.

Pitfalls

  • Accidental event handling: Be mindful that parent elements might unintentionally capture events meant for children. Use event.target to identify the specific element that triggered the event.
  • Event order: Events bubble up in a specific order. If multiple parents have event listeners, their order of execution depends on the DOM hierarchy.
  • Over-delegation: While delegating events to a common ancestor is efficient, attaching a listener too high in the DOM tree might capture unintended events.

Use cases

Here are some practical ways to use event bubbling to write better code.

Reducing code with event delegation

Imagine you have a product list with numerous items, each with a "Buy Now" button. Traditionally, you might attach a separate click event listener to each button:

// HTML:
// <ul id="product-list">
// <li><button id="item1-buy">Buy Now</button></li>
// <li><button id="item2-buy">Buy Now</button></li>
// </ul>
const item1Buy = document.getElementById('item1-buy');
const item2Buy = document.getElementById('item2-buy');
item1Buy.addEventListener('click', handleBuyClick);
item2Buy.addEventListener('click', handleBuyClick);
// ... repeat for each item ...
function handleBuyClick(event) {
console.log('Buy button clicked for item:', event.target.id);
}

This approach becomes cumbersome as the number of items grows. Here's how event bubbling can simplify things:

// HTML:
// <ul id="product-list">
// <li><button id="item1-buy">Buy Now</button></li>
// <li><button id="item2-buy">Buy Now</button></li>
// </ul>
const productList = document.getElementById('product-list');
productList.addEventListener('click', handleBuyClick);
function handleBuyClick(event) {
// Check if the clicked element is a button within the list
if (event.target.tagName.toLowerCase() === 'button') {
console.log('Buy button clicked for item:', event.target.id);
}
}

By attaching the listener to the parent (productList) and checking the clicked element (event.target) within the handler, you achieve the same functionality with less code. This approach scales well when the items are dynamic as no new event handlers have to be added or removed when the list of items change.

Dropdown menus

Consider a dropdown menu where clicking anywhere on the menu element (parent) should close it. With event bubbling, you can achieve this with a single listener:

// HTML:
// <div id="dropdown">
// <button>Open Menu</button>
// <ul>
// <li>Item 1</li>
// <li>Item 2</li>
// </ul>
// </div>
const dropdown = document.getElementById('dropdown');
dropdown.addEventListener('click', handleDropdownClick);
function handleDropdownClick(event) {
// Close the dropdown if clicked outside the button
if (event.target !== dropdown.querySelector('button')) {
console.log('Dropdown closed');
// Your logic to hide the dropdown content
}
}

Here, the click event bubbles up from the clicked element (button or list item) to the dropdown element. The handler checks if the clicked element is not the <button> and closes the menu accordingly.

Accordion menus

Imagine an accordion menu where clicking a section header (parent) expands or collapses the content section (child) below it. Event bubbling makes this straightforward:

// HTML:
// <div class="accordion">
// <div class="header">Section 1</div>
// <div class="content">Content for Section 1</div>
// <div class="header">Section 2</div>
// <div class="content">Content for Section 2</div>
// </div>
const accordion = document.querySelector('.accordion');
accordion.addEventListener('click', handleAccordionClick);
function handleAccordionClick(event) {
// Check if clicked element is a header
if (event.target.classList.contains('header')) {
const content = event.target.nextElementSibling;
content.classList.toggle('active'); // Toggle display of content
}
}

By attaching the listener to the accordion element, clicking on any header triggers the event. The handler checks if the clicked element is a header and toggles the visibility of the corresponding content section.

Further reading

Describe event capturing in JavaScript and browsers

Topics
Web APIs, JavaScript

TL;DR

Event capturing is a lesser-used counterpart to event bubbling in the DOM event propagation mechanism. It follows the opposite order, where an event triggers first on the ancestor element and then travels down to the target element.

Event capturing is rarely used compared to event bubbling, but it can be used in specific scenarios where you need to intercept events at a higher level before they reach the target element. It is disabled by default but can be enabled through an option on addEventListener().


What is event capturing?

Event capturing is a propagation mechanism in the DOM (Document Object Model) where an event, such as a click or a keyboard event, is first triggered at the root of the document and then flows down through the DOM tree to the target element.

Capturing has a higher priority than bubbling, meaning that capturing event handlers are executed before bubbling event handlers, as shown by the phases of event propagation:

  • Capturing phase: The event moves down towards the target element
  • Target phase: The event reaches the target element
  • Bubbling phase: The event bubbles up from the target element

Note that event capturing is disabled by default. To enable it you have to pass the capture option into addEventListener().

Capturing phase

During the capturing phase, the event starts at the document root and propagates down to the target element. Any event listeners on ancestor elements in this path will be triggered before the target element's handler. Note, however, that a listener only participates in the capturing phase if the third argument of addEventListener() (or the capture option) is set to true, as shown below; the default value is false.

Here's an example using modern ES2015 syntax to demonstrate event capturing:

// HTML:
// <div id="parent">
// <button id="child">Click me!</button>
// </div>
const parent = document.getElementById('parent');
const child = document.getElementById('child');

parent.addEventListener(
  'click',
  () => {
    console.log('Parent element clicked (capturing)');
  },
  true, // Set third argument to true for capturing
);

child.addEventListener('click', () => {
  console.log('Child element clicked');
});

When you click the "Click me!" button, it will trigger the parent element's capturing handler first, followed by the child element's handler.

Stopping propagation

Event propagation can be stopped during the capturing phase using the stopPropagation() method. This prevents the event from traveling further down the DOM tree.

// HTML:
// <div id="parent">
// <button id="child">Click me!</button>
// </div>
const parent = document.getElementById('parent');
const child = document.getElementById('child');

parent.addEventListener(
  'click',
  (event) => {
    console.log('Parent element clicked (capturing)');
    event.stopPropagation(); // Stop the event from propagating further
  },
  true,
);

child.addEventListener('click', () => {
  console.log('Child element clicked');
});

As a result of stopping event propagation, only the parent's event listener is called when you click the "Click me!" button; the child's event listener is never invoked because propagation stopped at the parent element.

Uses of event capturing

Event capturing is rarely used as compared to event bubbling, but it can be used in specific scenarios where you need to intercept events at a higher level before they reach the target element.

  • Stopping event bubbling: Imagine you have a nested element (like a button) inside a container element. Clicking the button might also trigger a click event on the container. By enabling event capturing on the container's event listener, you can intercept the click event there and prevent it from traveling down to the button, which could otherwise cause unintended behavior.
  • Custom dropdown menus: When building custom dropdown menus, you might want to capture clicks outside the menu element to close the menu. Using capture: true on the document object allows you to listen for clicks anywhere on the page and close the menu if the click happens outside its boundaries (see the sketch after this list).
  • Efficiency in certain scenarios: In some situations, event capturing can be slightly more efficient than relying on bubbling, because the event doesn't need to propagate through all child elements before reaching the handler. However, the performance difference is usually negligible for most web applications.
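As a rough sketch of the dropdown use case above, a capturing listener on document sees clicks before they reach any element on the page. The 'menu' id and the 'open' class here are placeholder names for illustration:

// Sketch: close a dropdown when a click happens outside it.
const menu = document.getElementById('menu');

document.addEventListener(
  'click',
  (event) => {
    // Runs during the capturing phase, before handlers on the clicked element
    if (!menu.contains(event.target)) {
      menu.classList.remove('open'); // Hide the dropdown
    }
  },
  { capture: true },
);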

Further reading

What is the difference between `mouseenter` and `mouseover` event in JavaScript and browsers?

Topics
Web APIs, HTML, JavaScript

TL;DR

The main difference lies in the bubbling behavior of mouseenter and mouseover events. mouseenter does not bubble while mouseover bubbles.

mouseenter events do not bubble. The mouseenter event is triggered only when the mouse pointer enters the element itself, not its descendants. If the mouse pointer moves from a parent element into one of its child elements, the mouseenter event does not fire on the parent again; it fires only once, when the pointer first enters the parent, regardless of its contents. If both the parent and the child have mouseenter listeners attached and the pointer moves from the parent into the child, mouseenter fires only for the child.

mouseover events bubble up the DOM tree. The mouseover event is triggered when the mouse pointer enters the element or one of its descendants. If a parent element has child elements and the mouse pointer enters a child element, the mouseover event is triggered on the parent element again as well. If the parent has multiple child elements, this can result in many event callbacks being fired. If the mouse pointer moves from the parent element to a child element, mouseover fires for both the parent and the child.

Property | mouseenter | mouseover
Bubbling | No | Yes
Trigger | Only when entering the element itself | When entering the element itself and when entering descendants

mouseenter event:

  • Does not bubble: The mouseenter event does not bubble. It is only triggered when the mouse pointer enters the element to which the event listener is attached, not when it enters any child elements.
  • Triggered once: The mouseenter event is triggered only once when the mouse pointer enters the element, making it more predictable and easier to manage in certain scenarios.

A use case for mouseenter is when you want to detect the mouse entering an element without worrying about child elements triggering the event multiple times.

mouseover Event:

  • Bubbles up the DOM: The mouseover event bubbles up through the DOM. This means that if you have an event listener on a parent element, it will also trigger when the mouse pointer moves over any child elements.
  • Triggered multiple times: The mouseover event is triggered every time the mouse pointer moves over an element or any of its child elements. This can lead to multiple triggers if you have nested elements.

A use case for mouseover is when you want to detect when the mouse enters an element or any of its children and are okay with the events triggering multiple times.

Example

Here's an example demonstrating the difference between mouseover and mouseenter events:

<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Mouse Events Example</title>
    <style>
      .parent {
        width: 200px;
        height: 200px;
        background-color: lightblue;
        padding: 20px;
      }
      .child {
        width: 100px;
        height: 100px;
        background-color: lightcoral;
      }
    </style>
  </head>
  <body>
    <div class="parent">
      Parent Element
      <div class="child">Child Element</div>
    </div>
    <script>
      const parent = document.querySelector('.parent');
      const child = document.querySelector('.child');

      // Mouseover event on parent.
      parent.addEventListener('mouseover', () => {
        console.log('Mouseover on parent');
      });

      // Mouseenter event on parent.
      parent.addEventListener('mouseenter', () => {
        console.log('Mouseenter on parent');
      });

      // Mouseover event on child.
      child.addEventListener('mouseover', () => {
        console.log('Mouseover on child');
      });

      // Mouseenter event on child.
      child.addEventListener('mouseenter', () => {
        console.log('Mouseenter on child');
      });
    </script>
  </body>
</html>

Expected behavior

  • When the mouse enters the parent element:
    • The mouseover event on the parent will trigger.
    • The mouseenter event on the parent will trigger.
  • When the mouse enters the child element:
    • The mouseover event on the parent will trigger again because mouseover bubbles up from the child.
    • The mouseover event on the child will trigger.
    • The mouseenter event on the child will trigger.
    • The mouseenter event on the parent will not trigger again because mouseenter does not bubble.

Further reading

What is `'use strict';` in JavaScript for?

What are the advantages and disadvantages to using it?
Topics
JavaScript

TL;DR

'use strict' is a statement used to enable strict mode for entire scripts or individual functions. Strict mode is a way to opt in to a restricted variant of JavaScript.

Advantages

  • Makes it impossible to accidentally create global variables.
  • Makes assignments that would otherwise silently fail throw an exception.
  • Makes attempts to delete undeletable properties throw an exception (where before the attempt would simply have no effect).
  • Requires that function parameter names be unique.
  • this is undefined in functions called without an explicit receiver (instead of defaulting to the global object).
  • It catches some common coding bloopers, throwing exceptions.
  • It disables features that are confusing or poorly thought out.

Disadvantages

  • Many missing features that some developers might be used to.
  • No more access to function.caller and function.arguments.
  • Concatenation of scripts written in different strict modes might cause issues.

Overall, the benefits outweigh the disadvantages and there is not really a need to rely on the features that strict mode prohibits. We should all be using strict mode by default.


What is "use strict" in JavaScript?

In essence, "use strict" is a directive introduced in ECMAScript 5 (ES5) that signals to the JavaScript engine that the code it surrounds should be executed in "strict mode". Strict mode imposes stricter parsing and error handling rules, essentially making your code more secure and less error-prone.

When you use "use strict", it helps you to write cleaner code, like preventing you from using undeclared variables. It can also make your code more secure because it disallows some potentially insecure actions.

How to use strict mode

  1. Global Scope: To enable strict mode globally, add the directive at the beginning of the JavaScript file:

    'use strict';
    // any code in this file will be run in strict mode
    function add(a, b) {
    return a + b;
    }
  2. Local Scope: To enable strict mode within a function, add the directive at the beginning of the function:

    function myFunction() {
    'use strict';
    // this will tell JavaScript engine to use strict mode only for the `myFunction`
    // Anything that is outside of the scope of this function will be treated as non-strict mode unless specified to use strict mode
    }

Key features of strict mode

  1. Error prevention: Strict mode prevents common errors such as:
    • Using undeclared variables.
    • Assigning values to non-writable properties.
    • Adding new properties to non-extensible objects.
    • Deleting undeletable properties.
    • Using reserved keywords as identifiers.
    • Duplicating parameter names in functions.
  2. Improved security: Strict mode helps in writing more secure code by:
    • Preventing the use of legacy features like function.caller and arguments.callee (accessing them throws in strict mode).
    • Restricting eval() so that variables declared inside it do not leak into the calling scope (see the sketch after this list).
  3. Compatibility: Strict mode helps ensure compatibility with future versions of JavaScript by preventing the use of reserved keywords (such as implements, interface, and private) as identifiers.
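As a quick illustration of the eval() restriction mentioned above, here is a small sketch; in strict mode the declaration stays inside the eval() call instead of leaking into the surrounding scope:

'use strict';

eval('var leaked = 42;');
// In strict mode, the variable declared inside eval() is scoped to the eval() call:
console.log(typeof leaked); // 'undefined'
// In non-strict code, `leaked` would have been created in the surrounding scope.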

Examples

  1. Preventing accidental creation of global variables:

    // Without strict mode
    function defineNumber() {
      count = 123; // Implicitly creates a global variable
    }
    defineNumber();
    console.log(count); // logs: 123
    // With strict mode
    function strictFunc() {
      'use strict';
      strictVar = 123; // ReferenceError: strictVar is not defined
    }
    strictFunc(); // Throws, so the line below is never reached
    console.log(strictVar); // Would also be a ReferenceError if it ran
  2. Making assignments which would otherwise silently fail to throw an exception:

    // Without strict mode
    NaN = 'foo'; // This fails silently
    console.log(NaN); // logs: NaN
    'use strict'; // With strict mode
    NaN = 'foo'; // Uncaught TypeError: Cannot assign to read only property 'NaN' of object '#<Window>'
  3. Making attempts to delete undeletable properties throw an error in strict mode:

    // Without strict mode
    delete Object.prototype; // This fails silently
    'use strict'; // With strict mode
    delete Object.prototype; // TypeError: Cannot delete property 'prototype' of function Object() { [native code] }

Is it "strictly" necessary?

Adding 'use strict' in JavaScript is still beneficial and recommended, but it is no longer strictly necessary in all cases:

  1. Modules: The entire contents of JavaScript modules are automatically in strict mode, without needing the 'use strict' statement. This applies to ES6 modules as well as Node.js CommonJS modules.
  2. Classes: Code within class definitions is also automatically in strict mode, even without 'use strict'.

While 'use strict' is no longer mandatory in all contexts due to the automatic strict mode enforcement in modules and classes, it is still widely recommended as a best practice, especially for core JavaScript files, libraries, and when working with older browser environments or legacy code.
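For example, here is a minimal sketch showing that class bodies are automatically strict; the assignment to an undeclared variable throws even though 'use strict' never appears:

class Counter {
  increment() {
    count = 1; // ReferenceError: count is not defined (class bodies are strict)
  }
}

new Counter().increment();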

Notes

  1. Placement: The 'use strict' directive must be placed at the beginning of the file or function. Placing it anywhere else will not have any effect.
  2. Compatibility: Strict mode is supported by all modern browsers except Internet Explorer 9 and lower.
  3. Irreversible: There is no way to cancel 'use strict' after it has been set.

Further reading

Explain the difference between synchronous and asynchronous functions in JavaScript

Topics
Async, JavaScript

TL;DR

Synchronous functions are blocking while asynchronous functions are not. In synchronous functions, statements complete before the next statement is run. As a result, programs containing only synchronous code are evaluated exactly in the order of the statements. The execution of the program is paused if one of the statements takes a very long time.

function sum(a, b) {
console.log('Inside sum function');
return a + b;
}
const result = sum(2, 3); // The program waits for sum() to complete before assigning the result
console.log('Result: ', result); // Output: 5

Asynchronous functions usually accept a callback as a parameter, and execution continues on to the next line immediately after the asynchronous function is invoked. The callback is only invoked when the asynchronous operation is complete and the call stack is empty. Heavy-duty operations such as loading data from a web server or querying a database should be done asynchronously so that the main thread can continue executing other operations instead of blocking until the long operation completes (in browsers, the UI would freeze).

function fetchData(callback) {
setTimeout(() => {
const data = { name: 'John', age: 30 };
callback(data); // Calling the callback function with data
}, 2000); // Simulating a 2-second delay
}
console.log('Fetching data...');
fetchData((data) => {
console.log(data); // Output: { name: 'John', age: 30 } (after 2 seconds)
});
console.log('Call made to fetch data'); // This will print before the data is fetched

Synchronous vs asynchronous functions

In JavaScript, the concepts of synchronous and asynchronous functions are fundamental to understanding how code execution is managed, particularly in the context of handling operations like I/O tasks, API calls, and other time-consuming processes.

Synchronous functions

Synchronous functions execute in a sequential order, one after the other. Each operation must wait for the previous one to complete before moving on to the next.

  • Synchronous code is blocking, meaning the program execution halts until the current operation finishes.
  • It follows a strict sequence, executing instructions line by line.
  • Synchronous functions are easier to understand and debug since the flow is predictable.

Synchronous function examples

  1. Reading files synchronously: When reading a file from the file system using the synchronous readFileSync method from the fs module in Node.js, the program execution is blocked until the entire file is read. This can cause performance issues, especially for large files or when reading multiple files sequentially

    const fs = require('fs');
    const data = fs.readFileSync('large-file.txt', 'utf8');
    console.log(data); // Execution is blocked until the file is read.
    console.log('End of the program');
  2. Looping over large datasets: Iterating over a large array or dataset synchronously can freeze the user interface or browser tab until the operation completes, leading to an unresponsive application.

    const largeArray = new Array(1_000_000).fill(0);
    // Blocks the main thread until the million operations are completed.
    const result = largeArray.map((num) => num * 2);
    console.log(result);

Asynchronous functions

Asynchronous functions do not block the execution of the program. They allow other operations to continue while waiting for a response or completion of a time-consuming task.

  • Asynchronous code is non-blocking, allowing the program to keep running without waiting for a specific operation to finish.
  • It enables concurrent execution, improving performance and responsiveness.
  • Asynchronous functions are commonly used for tasks like network requests, file I/O, and timers.

Asynchronous function examples

  1. Network requests: Making network requests, such as fetching data from an API or sending data to a server, is typically done asynchronously. This allows the application to remain responsive while waiting for the response, preventing the user interface from freezing

    console.log('Start of the program'); // This will be printed first as program starts here
    fetch('https://jsonplaceholder.typicode.com/todos/1')
    .then((response) => response.json())
    .then((data) => {
    console.log(data);
    /** Process the data without blocking the main thread
    * and printed at the end if fetch call succeeds
    */
    })
    .catch((error) => console.error(error));
    console.log('End of program'); // This will be printed before the fetch callback
  2. User input and events: Handling user input events, such as clicks, key presses, or mouse movements, is inherently asynchronous. The application needs to respond to these events without blocking the main thread, ensuring a smooth user experience.

    const button = document.getElementById('myButton');
    button.addEventListener('click', () => {
    // Handle the click event asynchronously
    console.log('Button clicked');
    });
  3. Timers and Animations: Timers (setTimeout(), setInterval()) and animations (e.g., requestAnimationFrame()) are asynchronous operations that allow the application to schedule tasks or update animations without blocking the main thread.

    setTimeout(() => {
    console.log('This message is delayed by 2 seconds');
    }, 2000);
    setInterval(() => {
    console.log('Current time:', new Date().toLocaleString());
    }, 2000); // Interval runs every 2 seconds

By using asynchronous functions and operations, JavaScript can handle time-consuming tasks without freezing the user interface or blocking the main thread.

It is important to note that asynchronous functions do not run on a different thread; they still run on the main thread. However, it is possible to achieve parallelism in JavaScript by using web workers.

Achieving parallelism in JavaScript via web workers

Web workers allow you to spawn separate background threads that can perform CPU-intensive tasks in parallel with the main thread. These worker threads can communicate with the main thread via message passing, but they do not have direct access to the DOM or other browser APIs.

// main.js
const worker = new Worker('worker.js');

worker.onmessage = function (event) {
  console.log('Result from worker:', event.data);
};

worker.postMessage('Start computation');

// worker.js
self.onmessage = function (event) {
  const result = performHeavyComputation();
  self.postMessage(result);
};

function performHeavyComputation() {
  // CPU-intensive computation
  return 'Computation result';
}

In this example, the main thread creates a new web worker and sends it a message to start a computation. The worker performs the heavy computation in parallel with the main thread and sends the result back via postMessage().

Event loop

The async nature of JavaScript is powered by a JavaScript engine's event loop allowing concurrent operations even though JavaScript is single-threaded. It's an important concept to understand so we highly recommend going through that topic as well.

Further reading

What are the pros and cons of using Promises instead of callbacks in JavaScript?

Topics
Async, JavaScript

TL;DR

Promises offer a cleaner alternative to callbacks, helping to avoid callback hell and making asynchronous code more readable. They facilitate writing sequential and parallel asynchronous operations with ease. However, using promises may introduce slightly more complex code.


Pros

Avoid callback hell which can be unreadable.

Callback hell, also known as the "pyramid of doom," is a phenomenon that occurs when you have multiple nested callbacks in your code. This can lead to code that is difficult to read, maintain, and debug. Here's an example of callback hell:

function getFirstData(callback) {
setTimeout(() => {
callback({ id: 1, title: 'First Data' });
}, 1000);
}
function getSecondData(data, callback) {
setTimeout(() => {
callback({ id: data.id, title: data.title + ' Second Data' });
}, 1000);
}
function getThirdData(data, callback) {
setTimeout(() => {
callback({ id: data.id, title: data.title + ' Third Data' });
}, 1000);
}
// Callback hell
getFirstData((data) => {
getSecondData(data, (data) => {
getThirdData(data, (result) => {
console.log(result); // Output: {id: 1, title: "First Data Second Data Third Data"}
});
});
});

Promises address the problem of callback hell by providing a more linear and readable structure for your code.

// Example of sequential asynchronous code using setTimeout and Promises
function getFirstData() {
return new Promise((resolve, reject) => {
setTimeout(() => {
resolve({ id: 1, title: 'First Data' });
}, 1000);
});
}
function getSecondData(data) {
return new Promise((resolve, reject) => {
setTimeout(() => {
resolve({ id: data.id, title: data.title + ' Second Data' });
}, 1000);
});
}
function getThirdData(data) {
return new Promise((resolve, reject) => {
setTimeout(() => {
resolve({ id: data.id, title: data.title + ' Third Data' });
}, 1000);
});
}
getFirstData()
.then(getSecondData)
.then(getThirdData)
.then((data) => {
console.log(data); // Output: {id: 1, title: "First Data Second Data Third Data"}
})
.catch((error) => console.error('Error:', error));

Makes it easy to write sequential asynchronous code that is readable with .then().

In the above code example, we use .then() method to chain these Promises together, allowing the code to execute sequentially. It provides a cleaner and more manageable way to handle asynchronous operations in JavaScript.
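The same chain can also be written with async/await, which is built on top of promises and reads even more like synchronous code. This sketch reuses the getFirstData, getSecondData, and getThirdData helpers defined above:

async function getAllData() {
  try {
    const first = await getFirstData();
    const second = await getSecondData(first);
    const third = await getThirdData(second);
    console.log(third); // Output: {id: 1, title: "First Data Second Data Third Data"}
  } catch (error) {
    console.error('Error:', error);
  }
}

getAllData();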

Makes it easy to write parallel asynchronous code with Promise.all().

Both Promise.all() and callbacks can be used to write parallel asynchronous code. However, Promise.all() provides a more concise and readable way to handle multiple Promises, especially when dealing with complex asynchronous workflows.

function getData1() {
return new Promise((resolve, reject) => {
setTimeout(() => {
resolve({ id: 1, title: 'Data 1' });
}, 1000);
});
}
function getData2() {
return new Promise((resolve, reject) => {
setTimeout(() => {
resolve({ id: 2, title: 'Data 2' });
}, 1000);
});
}
function getData3() {
return new Promise((resolve, reject) => {
setTimeout(() => {
resolve({ id: 3, title: 'Data 3' });
}, 1000);
});
}
Promise.all([getData1(), getData2(), getData3()])
.then((results) => {
console.log(results); // Output: [{ id: 1, title: 'Data 1' }, { id: 2, title: 'Data 2' }, { id: 3, title: 'Data 3' }]
})
.catch((error) => {
console.error('Error:', error);
});

Easier error handling with .catch() and guaranteed cleanup with .finally()

Promises make error handling more straightforward by allowing you to catch errors at the end of a chain using .catch(), instead of manually checking for errors in every callback. This leads to cleaner and more maintainable code.

Additionally, .finally() lets you run code after the Promise settles, whether it was successful or failed, which is great for cleanup tasks like hiding spinners or resetting UI states.

function getFirstData() {
return new Promise((resolve) => {
setTimeout(() => {
resolve({ id: 1, title: 'First Data' });
}, 1000);
});
}
function getSecondData(data) {
return new Promise((resolve) => {
setTimeout(() => {
resolve({ id: data.id, title: data.title + ' -> Second Data' });
}, 1000);
});
}
getFirstData()
.then(getSecondData)
.then((data) => {
console.log('Success:', data);
})
.catch((error) => {
console.error('Error:', error);
})
.finally(() => {
console.log('This runs no matter what');
});

With promises, these problematic scenarios, which can occur with callback-only code, will not happen:

  • Call the callback too early
  • Call the callback too late (or never)
  • Call the callback too few or too many times
  • Fail to pass along any necessary environment/parameters
  • Swallow any errors/exceptions that may happen

Cons

  • Slightly more complex code (debatable).

Practice

Further reading

Explain AJAX in as much detail as possible

Topics
JavaScript, Networking

TL;DR

AJAX (Asynchronous JavaScript and XML) facilitates asynchronous communication between the client and server, enabling dynamic updates to web pages without reloading. It uses techniques like XMLHttpRequest or the fetch() API to send and receive data in the background. In modern web applications, the fetch() API is more commonly used to implement AJAX.

Using XMLHttpRequest

let xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
if (xhr.readyState === XMLHttpRequest.DONE) {
if (xhr.status === 200) {
console.log(xhr.responseText);
} else {
console.error('Request failed: ' + xhr.status);
}
}
};
xhr.open('GET', 'https://jsonplaceholder.typicode.com/todos/1', true);
xhr.send();

Using fetch()

fetch('https://jsonplaceholder.typicode.com/todos/1')
.then((response) => {
if (!response.ok) {
throw new Error('Network response was not ok');
}
return response.json();
})
.then((data) => console.log(data))
.catch((error) => console.error('Fetch error:', error));

AJAX (Asynchronous JavaScript and XML)

AJAX (asynchronous JavaScript and XML) is a set of web development techniques using many web technologies on the client side to create asynchronous web applications. Unlike traditional web applications where every user interaction triggers a full page reload, with AJAX, web applications can send data to and retrieve from a server asynchronously (in the background) without interfering with the display and behavior of the existing page. By decoupling the data interchange layer from the presentation layer, AJAX allows for web pages, and by extension web applications, to change content dynamically without the need to reload the entire page. In practice, modern implementations commonly use JSON instead of XML, due to the advantages of JSON being native to JavaScript.

Traditionally, AJAX was implemented using the XMLHttpRequest API, but the fetch() API is more suitable and easier to use for modern web applications.

XMLHttpRequest API

Here's a basic example of how it can be used:

let xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
if (xhr.readyState === XMLHttpRequest.DONE) {
if (xhr.status === 200) {
console.log(xhr.responseText);
} else {
console.error('Request failed: ' + xhr.status);
}
}
};
xhr.open('GET', 'https://jsonplaceholder.typicode.com/todos/1', true);
xhr.send();

fetch() API

Alternatively, the fetch() API provides a modern, promise-based approach to making AJAX requests. It is more commonly used in modern web applications.

Here's how you can use it:

fetch('https://jsonplaceholder.typicode.com/todos/1')
.then((response) => {
if (!response.ok) {
throw new Error('Network response was not ok');
}
return response.json();
})
.then((data) => console.log(data))
.catch((error) => console.error('Fetch error:', error));

How does AJAX work?

In modern browsers, AJAX is done using the fetch() API instead of XMLHttpRequest, so we will explain how the fetch() API works instead:

  1. Making a request: The fetch() function initiates an asynchronous request to fetch a resource from a URL. It takes one mandatory argument – the URL of the resource to fetch, and optionally accepts a second argument - an options object that allows configuring the HTTP request with options like the HTTP method, headers, body, etc.

    fetch('https://api.example.com/data', {
    method: 'GET', // or 'POST', 'PUT', 'DELETE', etc.
    headers: {
    'Content-Type': 'application/json',
    },
    });
  2. Return a promise: The fetch() function returns a Promise that resolves to a Response object representing the response from the server. This Promise needs to be handled using .then() or async/await.

  3. Handling the response: The Response object provides methods to define how the body content should be handled, such as .json() for parsing JSON data, .text() for plain text, .blob() for binary data, etc.

    fetch('https://jsonplaceholder.typicode.com/todos/1')
    .then((response) => response.json())
    .then((data) => console.log(data))
    .catch((error) => console.error('Error:', error));
  4. Asynchronous nature: The fetch() API is asynchronous, allowing the browser to continue executing other tasks while waiting for the server response. This prevents blocking the main thread and provides a better user experience. The then() and catch() callbacks are queued as microtasks and run by the event loop once the promise settles.

  5. Request options: The optional second argument to fetch() allows configuring various aspects of the request, such as the HTTP method, headers, body, credentials, caching behavior, and more.

  6. Error handling: Errors during the request, such as network failures or invalid responses, are caught and propagated through the promise chain using the .catch() method or try/catch blocks with async/await (see the sketch after this list).
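For reference, here is the same request written with async/await instead of .then() chaining; it is equivalent to the examples above, just with try/catch for error handling:

async function loadTodo() {
  try {
    const response = await fetch('https://jsonplaceholder.typicode.com/todos/1');
    if (!response.ok) {
      throw new Error('Network response was not ok');
    }
    const data = await response.json();
    console.log(data);
  } catch (error) {
    console.error('Fetch error:', error);
  }
}

loadTodo();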

The fetch() API provides a modern, Promise-based approach to making HTTP requests in JavaScript, replacing the older XMLHttpRequest API. It offers a simpler and more flexible way to interact with APIs and fetch resources from servers, while integrating advanced HTTP concepts like CORS and other extensions.

Advantages and disadvantages of AJAX

While useful, using AJAX also comes with some considerations. Read more about the advantages and disadvantages of AJAX.

Further reading

What are the advantages and disadvantages of using AJAX?

Topics
JavaScript, Networking

TL;DR

AJAX (Asynchronous JavaScript and XML) is a technique in JavaScript that allows web pages to send and retrieve data asynchronously from servers without refreshing or reloading the entire page.

Advantages

  • Smoother user experience: Updates happen without full page reloads, like in mail and chat applications.
  • Lighter server load: Only necessary data is fetched via AJAX, reducing server load and improving the perceived performance of web pages.
  • Maintains client state: User interactions and any client states are persisted within the page.

Disadvantages

  • Reliance on JavaScript: If JavaScript is disabled, AJAX functionality breaks.
  • Bookmarking issues: Dynamic content makes bookmarking specific page states difficult.
  • SEO challenges: Search engines may struggle to index dynamic content.
  • Performance concerns: Processing AJAX data on low-end devices can be slow.

AJAX (Asynchronous JavaScript and XML)

AJAX (Asynchronous JavaScript and XML) is a technique in JavaScript that allows web pages to send and retrieve data asynchronously from servers without refreshing or reloading the entire page. When it was first created, it revolutionized web development and resulted in a smoother and more responsive user experience. AJAX is explained in detail in this question.

Here's a breakdown of AJAX's pros and cons:

Advantages

  • Enhanced user experience: AJAX allows for partial page updates without full reloads. This creates a smoother and more responsive feel for users, as they don't have to wait for the entire page to refresh for every interaction.
  • Reduced server load and bandwidth usage: By exchanging only specific data with the server, AJAX minimizes the amount of data transferred. This leads to faster loading times and reduced server strain, especially for frequently updated content.
  • Improved performance: Faster data exchange and partial page updates contribute to a quicker web application overall. Users perceive the application as more responsive and efficient.
  • Dynamic content updates, preserving client-only state: AJAX enables real-time data updates without full page reloads, preserving client-only state like form inputs and scroll positions. This is ideal for features like live chat, stock tickers, or collaborative editing.
  • Form validation: AJAX can be used for client-side form validation that requires back end interactions (e.g. checking for duplicate usernames), providing immediate feedback to users without requiring a form submission request. This improves the user experience and avoids unnecessary full page reloads for invalid submissions.

Disadvantages

  • Increased complexity: Developing AJAX-powered applications can be more complex compared to traditional web development. It requires handling asynchronous communication and potential race conditions between requests and responses. Since pages are not reloaded, parts of the page can become outdated over time, which can be confusing.
  • Dependency on JavaScript: AJAX relies on JavaScript to function. Users with JavaScript disabled or unsupported browsers won't experience the full functionality of the application. A fallback mechanism (graceful degradation) is necessary to ensure basic functionality for these users.
  • Security concerns: AJAX introduces new security considerations, such as Cross-Site Scripting (XSS) vulnerabilities (if servers return HTML markup directly) when not implemented carefully. Proper data validation and sanitization are crucial to prevent security risks.
  • Browser support: Older browsers might not fully support AJAX features. Developers need to consider compatibility when building with AJAX to ensure a good user experience across different browsers.
  • SEO challenges: Search engines might have difficulty indexing content dynamically loaded through AJAX. Developers need to employ techniques like server-side rendering or proper content embedding to ensure search engine visibility.
  • Navigation problems: AJAX can interfere with the browser's back and forward navigation buttons, as well as bookmarking, since the URL may not change with asynchronous updates.
  • State management: Maintaining the application state and ensuring proper navigation can be challenging with AJAX, requiring additional techniques such as the History API or URL hash fragments.

While AJAX offers significant advantages in terms of user experience, performance, and functionality, it also introduces complexities and potential drawbacks related to development, SEO, browser compatibility, security, and navigation.

Further reading

What are the differences between `XMLHttpRequest` and `fetch()` in JavaScript and browsers?

Topics
JavaScript, Networking

TL;DR

XMLHttpRequest (XHR) and fetch() API are both used for asynchronous HTTP requests in JavaScript (AJAX). fetch() offers a cleaner syntax, promise-based approach, and more modern feature set compared to XHR. However, there are some differences:

  • XMLHttpRequest uses event callbacks, while fetch() utilizes promise chaining.
  • fetch() provides more flexibility in setting headers and request bodies.
  • fetch() supports cleaner error handling with catch().
  • Handling caching with XMLHttpRequest is difficult, while fetch() supports caching out of the box via the cache value in the options object (the second parameter) passed to fetch() or Request().
  • fetch() requires an AbortController for cancelation, while XMLHttpRequest provides an abort() method.
  • XMLHttpRequest has good support for progress tracking, which fetch() lacks.
  • XMLHttpRequest is only available in the browser and is not natively supported in Node.js environments. On the other hand, fetch() is available in all modern browsers as well as server-side runtimes such as Node.js (v18+).

These days fetch() is preferred for its cleaner syntax and modern features.


XMLHttpRequest vs fetch()

Both XMLHttpRequest (XHR) and fetch() are ways to make asynchronous HTTP requests in JavaScript. However, they differ significantly in syntax, promise handling, and feature set.

Syntax and usage

XMLHttpRequest is event-driven and requires attaching event listeners to handle response/error states. The basic syntax for creating an XMLHttpRequest object and sending a request is as follows:

const xhr = new XMLHttpRequest();
xhr.open('GET', 'https://jsonplaceholder.typicode.com/todos/1', true);
xhr.responseType = 'json';
xhr.onload = function () {
if (xhr.status === 200) {
console.log(xhr.response);
}
};
xhr.send();

xhr is an instance of the XMLHttpRequest class. The open method is used to specify the request method, URL, and whether the request should be asynchronous. The onload event is used to handle the response, and the send method is used to send the request.

fetch() provides a more straightforward and intuitive way of making HTTP requests. It is Promise-based and returns a promise that resolves with the response or rejects with an error. The basic syntax for making a GET request using fetch() is as follows:

fetch('https://jsonplaceholder.typicode.com/todos/1')
.then((response) => response.text())
.then((data) => console.log(data));

Request headers

Both XMLHttpRequest and fetch() support setting request headers. However, fetch() provides more flexibility in terms of setting headers, as it supports custom headers and allows for more complex header configurations.

XMLHttpRequest supports setting request headers using the setRequestHeader method:

xhr.setRequestHeader('Content-Type', 'application/json');
xhr.setRequestHeader('Authorization', 'Bearer YOUR_TOKEN');

For fetch(), headers are passed as an object in the second argument to fetch():

fetch('https://jsonplaceholder.typicode.com/todos/1', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: 'Bearer YOUR_TOKEN',
},
body: JSON.stringify({
name: 'John Doe',
age: 30,
}),
});

Request body

Both XMLHttpRequest and fetch() support sending request bodies. However, fetch() provides more flexibility in terms of sending request bodies, as it supports sending JSON data, form data, and more.

XMLHttpRequest supports sending request bodies using the send method:

const xhr = new XMLHttpRequest();
xhr.open('POST', 'https://jsonplaceholder.typicode.com/todos/1', true);
xhr.send(
JSON.stringify({
name: 'John Doe',
age: 30,
}),
);

fetch() supports sending request bodies using the body property in the second argument to fetch():

fetch('https://jsonplaceholder.typicode.com/todos/1', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
name: 'John Doe',
age: 30,
}),
});

Response handling

XMLHttpRequest provides a responseType property to specify the expected response format. responseType is 'text' by default, but it also supports values like 'arraybuffer', 'blob', 'document', and 'json'.

const xhr = new XMLHttpRequest();
xhr.open('GET', 'https://jsonplaceholder.typicode.com/todos/1', true);
xhr.responseType = 'json'; // or 'text', 'blob', 'arraybuffer'
xhr.onload = function () {
if (xhr.status === 200) {
console.log(xhr.response);
}
};
xhr.send();

On the other hand, fetch() resolves to a unified Response object that exposes methods such as json(), text(), and blob() for reading the body in the desired format.

// JSON data
fetch('https://jsonplaceholder.typicode.com/todos/1')
.then((response) => response.json())
.then((data) => console.log(data));
// Text data
fetch('https://jsonplaceholder.typicode.com/todos/1')
.then((response) => response.text())
.then((data) => console.log(data));

Error handling

Both support error handling, but fetch() makes it simpler: network errors reject the returned promise, so they can be handled with a single .catch() at the end of the chain.

XMLHttpRequest supports error handling using the onerror event:

const xhr = new XMLHttpRequest();
xhr.open('GET', 'https://jsonplaceholder.typicod.com/todos/1', true); // Typo in URL
xhr.responseType = 'json';
xhr.onload = function () {
if (xhr.status === 200) {
console.log(xhr.response);
}
};
xhr.onerror = function () {
console.error('Error occurred');
};
xhr.send();

fetch() supports error handling using the catch() method on the returned Promise:

fetch('https://jsonplaceholder.typicod.com/todos/1') // Typo in URL
.then((response) => response.json())
.then((data) => console.log(data))
.catch((error) => console.error('Error occurred: ' + error));

Caching control

Handling caching with XMLHttpRequest is difficult, and you might need to add a random value to the query string in order to get around the browser cache. Caching is supported by fetch() out of the box via the cache option in the second parameter (the options object):

const res = await fetch('https://jsonplaceholder.typicode.com/todos/1', {
method: 'GET',
cache: 'default',
});

Other values for the cache option include default, no-store, reload, no-cache, force-cache, and only-if-cached.

Cancelation

In-flight XMLHttpRequests can be canceled by running the XMLHttpRequest's abort() method. An abort handler can be attached by assigning to the .onabort property if necessary:

const xhr = new XMLHttpRequest();
xhr.open('GET', 'https://jsonplaceholder.typicode.com/todos/1');
xhr.send();
// ...
xhr.onabort = () => console.log('aborted');
xhr.abort();

Aborting a fetch() requires creating an AbortController object and passing its signal as the signal property of the options object when calling fetch().

const controller = new AbortController();
const signal = controller.signal;
fetch('https://jsonplaceholder.typicode.com/todos/1', { signal })
.then((response) => response.json())
.then((data) => console.log(data))
.catch((error) => console.error('Error occurred: ' + error));
// Abort request.
controller.abort();

Progress support

XMLHttpRequest supports tracking the progress of requests by attaching a handler to the XMLHttpRequest object's progress event. This is especially useful when uploading large files such as videos to track the progress of the upload.

const xhr = new XMLHttpRequest();
// The callback is passed a `ProgressEvent`.
xhr.upload.onprogress = (event) => {
console.log(Math.round((event.loaded / event.total) * 100) + '%');
};

The callback assigned to onprogress is passed a ProgressEvent:

  • The loaded field on the ProgressEvent is a 64-bit integer indicating the amount of work already performed (bytes uploaded/downloaded) by the underlying process.
  • The total field on the ProgressEvent is a 64-bit integer representing the total amount of work that the underlying process is in the process of performing. When downloading resources, this is the Content-Length value of the HTTP response.

On the other hand, the fetch() API does not offer a built-in way to track progress. Download progress can be approximated by reading the Response body as a stream and comparing the bytes received against the Content-Length header, but it is more involved, and there is no equivalent for upload progress.
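Here is a rough sketch of what download progress tracking with fetch() can look like, assuming the server sends a Content-Length header:

async function downloadWithProgress(url) {
  const response = await fetch(url);
  const total = Number(response.headers.get('Content-Length')) || 0;
  const reader = response.body.getReader();
  let received = 0;

  // Read the response body chunk by chunk and report progress.
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    received += value.length;
    if (total) {
      console.log(Math.round((received / total) * 100) + '%');
    }
  }
}

downloadWithProgress('https://jsonplaceholder.typicode.com/todos/1');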

Choosing between XMLHttpRequest and fetch()

In modern development scenarios, fetch() is the preferred choice due to its cleaner syntax, promise-based approach, and improved handling of features like error handling, headers, and CORS.

Further reading

How do you abort a web request using `AbortController` in JavaScript?

Topics
JavaScript, Networking

TL;DR

AbortController is used to cancel ongoing asynchronous operations like fetch requests.

const controller = new AbortController();
const signal = controller.signal;
fetch('https://jsonplaceholder.typicode.com/todos/1', { signal })
.then((response) => {
// Handle response
})
.catch((error) => {
if (error.name === 'AbortError') {
console.log('Request aborted');
} else {
console.error('Error:', error);
}
});
// Call abort() to abort the request
controller.abort();

Aborting web requests is useful for:

  • Canceling requests based on user actions.
  • Prioritizing the latest requests in scenarios with multiple simultaneous requests.
  • Canceling requests that are no longer needed, e.g. after the user has navigated away from the page.

AbortControllers

AbortController allows graceful cancelation of ongoing asynchronous operations like fetch requests. It offers a mechanism to signal to the underlying network layer that the request is no longer required, preventing unnecessary resource consumption and improving user experience.

Using AbortControllers

Using AbortController involves the following steps:

  1. Create an AbortController instance: Initialize an AbortController instance, which creates a signal that can be used to abort requests.
  2. Pass the signal to the request: Pass the signal to the request, typically through the signal property in the request options.
  3. Abort the request: Call the abort() method on the AbortController instance to cancel the ongoing request.

Here is an example of how to use AbortControllers with the fetch() API:

const controller = new AbortController();
const signal = controller.signal;
fetch('https://jsonplaceholder.typicode.com/todos/1', { signal })
.then((response) => {
// Handle response
})
.catch((error) => {
if (error.name === 'AbortError') {
console.log('Request aborted');
} else {
console.error('Error:', error);
}
});
// Call abort() to abort the request
controller.abort();

Use cases

Canceling a fetch() request on a user action

Cancel requests that take too long or are no longer relevant due to user interactions (e.g., user cancels uploading of a huge file).

// HTML: <button id='cancel-button'>Cancel upload</button>
const btn = document.createElement('button');
btn.id = 'cancel-button';
btn.innerHTML = 'Cancel upload';
document.body.appendChild(btn);
const controller = new AbortController();
const signal = controller.signal;
fetch('https://jsonplaceholder.typicode.com/todos/1', { signal })
.then((response) => {
// Handle successful response
})
.catch((error) => {
if (error.name === 'AbortError') {
console.log('Request canceled');
} else {
console.error('Network or other error:', error);
}
});
document.getElementById('cancel-button').addEventListener('click', () => {
controller.abort();
});
document.getElementById('cancel-button').click(); // Simulate clicking the cancel button

When you click the "Cancel upload" button, in-flight request will be aborted.

Prioritizing latest requests in a race condition

In scenarios where multiple requests are initiated for the same data, use AbortController to prioritize the latest request and abort earlier ones.

let latestController = null; // Keeps track of the latest controller

function fetchData(url) {
  if (latestController) {
    latestController.abort(); // Abort any previous request
  }

  const controller = new AbortController();
  latestController = controller;
  const signal = controller.signal;

  fetch(url, { signal })
    .then((response) => response.json())
    .then((data) => console.log('Fetched data:', data))
    .catch((error) => {
      if (error.name === 'AbortError') {
        console.log('Request canceled');
      } else {
        console.error('Network or other error:', error);
      }
    });
}

fetchData('https://jsonplaceholder.typicode.com/posts/1');

// Simulate race conditions with new requests that quickly cancel the previous one
setTimeout(() => {
  fetchData('https://jsonplaceholder.typicode.com/posts/2');
}, 5);
setTimeout(() => {
  fetchData('https://jsonplaceholder.typicode.com/posts/3');
}, 5);

// Only the last request (posts/3) will be allowed to complete

In this example, when the fetchData() function is called multiple times triggering multiple fetch requests, AbortControllers will cancel all the previous requests except the latest request. This is common in scenarios like type-ahead search or infinite scrolling, where new requests are triggered frequently.

Canceling requests that are no longer needed

In situations where the user has navigated away from the page, aborting the request can prevent unnecessary operations (e.g. success callback handling), and freeing up resources by lowering the likelihood of memory leaks.
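A minimal sketch of this idea, using the pagehide event as one possible signal that the page is going away (a single-page app would typically abort in its own cleanup hook instead):

const controller = new AbortController();

fetch('https://jsonplaceholder.typicode.com/todos/1', { signal: controller.signal })
  .then((response) => response.json())
  .then((data) => console.log(data))
  .catch((error) => {
    if (error.name !== 'AbortError') {
      console.error('Error:', error);
    }
  });

// Abort the in-flight request when the page is being unloaded or hidden.
window.addEventListener('pagehide', () => {
  controller.abort();
});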

Notes

  • AbortController is not specific to fetch(); it can be used to abort other asynchronous tasks as well (see the sketch after these notes).
  • A single AbortController instance can be shared across multiple async tasks, and calling abort() once cancels all of them.
  • Calling abort() on an AbortController does not send any notification or signal to the server. The server is unaware of the cancelation and will continue processing the request until it completes or times out.
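As a small sketch of a non-fetch() use, modern browsers also accept an AbortSignal as an option to addEventListener(), so one abort() call can detach several listeners at once:

const controller = new AbortController();

window.addEventListener('scroll', () => console.log('scrolling'), {
  signal: controller.signal,
});
window.addEventListener('resize', () => console.log('resizing'), {
  signal: controller.signal,
});

// Later: removes both listeners in one call.
controller.abort();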

Further reading

What are JavaScript polyfills for?

Topics
JavaScript

TL;DR

Polyfills in JavaScript are pieces of code that provide modern functionality to older browsers that lack native support for those features. They bridge the gap between the JavaScript language features and APIs available in modern browsers and the limited capabilities of older browser versions.

They can be implemented manually or included through libraries and are often used in conjunction with feature detection.

Common use cases include:

  • New JavaScript methods: For example, Array.prototype.includes(), Object.assign(), etc.
  • New APIs: Such as fetch(), Promise, IntersectionObserver, etc. Modern browsers support these natively now, but for a long time they had to be polyfilled.

Libraries and services for polyfills:

  • core-js: A modular standard library for JavaScript which includes polyfills for a wide range of ECMAScript features.

    import 'core-js/actual/array/flat-map'; // With this, Array.prototype.flatMap is available to be used.
    [1, 2].flatMap((it) => [it, it]); // => [1, 1, 2, 2]
  • Polyfill.io: A service that provides polyfills based on the features and user agents specified in the request.

    <script src="https://polyfill.io/v3/polyfill.min.js"></script>

Polyfills in JavaScript

Polyfills in JavaScript are pieces of code (usually JavaScript) that provide modern functionality on older browsers that do not natively support it. They enable developers to use newer features of the language and APIs while maintaining compatibility with older environments.

How polyfills work

Polyfills detect if a feature or API is missing in a browser and provide a custom implementation of that feature using existing JavaScript capabilities. This allows developers to write code using the latest JavaScript features and APIs without worrying about browser compatibility issues.

For example, let's consider the Array.prototype.includes() method, which determines if an array includes a specific element. This method is not supported in older browsers like Internet Explorer 11. To address this, we can use a polyfill:

// Polyfill for Array.prototype.includes()
if (!Array.prototype.includes) {
  Array.prototype.includes = function (searchElement) {
    for (var i = 0; i < this.length; i++) {
      if (this[i] === searchElement) {
        return true;
      }
    }
    return false;
  };
}

console.log([1, 2, 3].includes(2)); // true
console.log([1, 2, 3].includes(4)); // false
console.log([1, 2, 3].includes(4)); // false

By including this polyfill, we can safely use Array.prototype.includes() even in browsers that don't support it natively.

Implementing polyfills

  1. Identify the missing feature: Determine whether the target browsers support the feature, or detect its presence at runtime using feature detection techniques such as typeof checks, the in operator, or checks against the window object.
  2. Write the fallback implementation: Develop the fallback implementation that provides similar functionality, either using a pre-existing polyfill library or pure JavaScript code.
  3. Test the polyfill: Thoroughly test the polyfill to ensure it functions as intended across different contexts and browsers.
  4. Implement the polyfill: Enclose the code that uses the missing feature in an if statement that checks for feature support. If not supported, run the polyfill code instead.

Considerations

  • Selective loading: To optimize performance, polyfills should only be loaded in browsers that need them (see the sketch after this list).
  • Feature detection: Perform feature detection before applying a polyfill to avoid overwriting native implementations or applying unnecessary polyfills.
  • Size and performance: Polyfills can increase the JavaScript bundle size, so minification and compression techniques should be used to mitigate this impact.
  • Existing libraries: Consider using existing libraries and tools that offer comprehensive polyfill solutions for multiple features, handling feature detection, conditional loading, and fallbacks efficiently
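A hedged sketch of selective loading: detect the feature first and only download the polyfill when it is missing. The './intersection-observer-polyfill.js' path is a hypothetical module used for illustration:

async function ensureIntersectionObserver() {
  if (!('IntersectionObserver' in window)) {
    // Dynamic import() only fetches the polyfill when it is actually needed.
    await import('./intersection-observer-polyfill.js');
  }
}

ensureIntersectionObserver().then(() => {
  // Safe to use IntersectionObserver here, either natively or via the polyfill.
});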

Libraries and services for polyfills

  • core-js: A modular standard library for JavaScript which includes polyfills for a wide range of ECMAScript features.

    import 'core-js/actual/array/flat-map'; // With this, Array.prototype.flatMap is available to be used.
    [1, 2].flatMap((it) => [it, it]); // => [1, 1, 2, 2]
  • Polyfill.io: A service that provides polyfills based on the features and user agents specified in the request.

    <script src="https://polyfill.io/v3/polyfill.min.js"></script>

Further reading

Why is extending built-in JavaScript objects not a good idea?

Topics
JavaScript, OOP

TL;DR

Extending a built-in/native JavaScript object means adding properties/functions to its prototype. While this may seem like a good idea at first, it is dangerous in practice. Imagine your code uses two libraries that both extend Array.prototype by adding the same contains method; the implementations will overwrite each other, and your code will behave unpredictably if the two methods do not work the same way.

The only time you may want to extend a native object is when you want to create a polyfill, essentially providing your own implementation for a method that is part of the JavaScript specification but might not exist in the user's browser due to it being an older browser.


Extending JavaScript

In JavaScript it's very easy to extend a built-in/native object. You can simply extend a built-in object by adding properties and functions to its prototype.

String.prototype.reverseString = function () {
return this.split('').reverse().join('');
};
console.log('hello world'.reverseString()); // Outputs 'dlrow olleh'
// Instead of extending the built-in object, write a pure utility function to do it.
function reverseString(str) {
return str.split('').reverse().join('');
}
console.log(reverseString('hello world')); // Outputs 'dlrow olleh'

Disadvantages

Extending built-in JavaScript objects is essentially modifying the global scope and it's not a good idea because:

  1. Future-proofing: If a browser decides to implement its own version of a method, your custom extension might get overridden silently, leading to unexpected behavior or conflicts.
  2. Collisions: Adding custom methods to built-in objects can lead to collisions with future browser implementations or other libraries, causing unexpected behavior or errors (see the sketch after this list).
  3. Maintenance and debugging: When extending built-in objects, it can be difficult for other developers to understand the changes made, making maintenance and debugging more challenging.
  4. Performance: Extending built-in objects can potentially impact performance, especially if the extensions are not optimized for the specific use case.
  5. Security: In some cases, extending built-in objects can introduce security vulnerabilities if not done correctly, such as adding enumerable properties that can be exploited by malicious code.
  6. Compatibility: Custom extensions to built-in objects may not be compatible with all browsers or environments, leading to issues with cross-browser compatibility.
  7. Namespace clashes: Extending built-in objects can lead to namespace clashes if multiple libraries or scripts extend the same object in different ways, causing conflicts and unexpected behavior.
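A contrived sketch of the collision problem described in point 2: two libraries both define Array.prototype.contains, the one loaded later silently wins, and code written against the first implementation changes behavior:

// "Library A": strict equality check
Array.prototype.contains = function (item) {
  return this.indexOf(item) !== -1;
};

// "Library B" loads later and silently overwrites it with looser semantics
Array.prototype.contains = function (item) {
  return this.some((el) => String(el) === String(item));
};

console.log([1, 2, 3].contains('2'));
// false with Library A, true with Library B – behavior depends on load order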

We dive deeper into why it is a bad idea to modify the global scope in a separate question.

Due to these potential issues, it is not recommended to extend built-in objects; instead, use composition or create custom classes and utility functions to achieve the desired functionality.

Alternatives to extending built-in objects

Instead of extending built-in objects, do the following instead:

  1. Create custom utility functions: For simple tasks, creating small utility functions specific to your needs can be a cleaner and more maintainable solution.
  2. Use libraries and frameworks: Many libraries and frameworks provide their own helper methods and extensions, eliminating the need to modify built-in objects directly.

Polyfilling as a valid reason

One valid reason to extend built-in objects is to implement polyfills for the latest ECMAScript standard and proposals. core-js is a popular library that is present on most popular websites. It not only polyfills missing features but also fixes incorrect or non-compliant implementations of JavaScript features in various browsers and runtimes.

import 'core-js/actual/array/flat-map'; // With this, Array.prototype.flatMap is available to be used.
[1, 2].flatMap((it) => [it, it]); // => [1, 1, 2, 2]

Further reading

Why is it, in general, a good idea to leave the global JavaScript scope of a website as-is and never touch it?

Topics
JavaScript

TL;DR

JavaScript that is executed in the browser has access to the global scope (the window object). In general, it's good software engineering practice not to pollute the global namespace unless you are working on a feature that truly needs to be global, that is, needed by the entire page. There are several reasons to avoid touching the global scope:

  • Naming conflicts: Sharing the global scope across scripts can cause conflicts and bugs when new global variables or changes are introduced.
  • Cluttered global namespace: Keeping the global namespace minimal avoids making the codebase hard to manage and maintain.
  • Scope leaks: Unintentional references to global variables in closures or event handlers can cause memory leaks and performance issues.
  • Modularity and encapsulation: Good design promotes keeping variables and functions within their specific scopes, enhancing organization, reusability, and maintainability.
  • Security concerns: Global variables are accessible by all scripts, including potentially malicious ones, posing security risks, especially if sensitive data is stored there.
  • Compatibility and portability: Heavy reliance on global variables reduces code portability and integration ease with other libraries or frameworks.

Follow these best practices to avoid global scope pollution:

  • Use local variables: Declare variables within functions or blocks using var, let, or const to limit their scope.
  • Pass variables as function parameters: Maintain encapsulation by passing variables as parameters instead of accessing them globally.
  • Use immediately invoked function expressions (IIFE): Create new scopes with IIFEs to prevent adding variables to the global scope.
  • Use modules: Encapsulate code with module systems to maintain separate scopes and manageability.

What is the global scope?

In the browser, the global scope is the top-level context where variables, functions, and objects are accessible from anywhere in the code. The global scope is represented by the window object. Any variables or functions declared outside of any function or block (that is not within any module) are added to the window object and can be accessed globally.

For example:

// Assuming this is run in the global scope and not within a module.
var globalVariable = 'I am global';
function globalFunction() {
console.log('I am a global function');
}
console.log(window.globalVariable); // 'I am global'
window.globalFunction(); // 'I am a global function'

In this example, globalVariable and globalFunction are added to the window object and can be accessed from anywhere in the global context.

Pitfalls of global scope

In general, it's a good software engineering practice to not pollute the global namespace unless you are working on a feature that truly needs to be global – it is needed by the entire page. There are many reasons to avoid touching the global scope:

  • Naming conflicts: The global scope is shared across all scripts on a web page. If you introduce new global variables or modify existing ones, you risk causing naming conflicts with other scripts or libraries used on the same page. This can lead to unexpected behavior and difficult-to-debug issues.
  • Cluttered global namespace: The global namespace should be kept as clean and minimal as possible. Adding unnecessary global variables or functions can clutter the namespace and make it harder to manage and maintain the codebase over time.
  • Scope leaks: When working with closures or event handlers, it's easy to accidentally create unintended references to global variables, leading to memory leaks and performance issues. By avoiding global variables altogether, you can prevent these types of scope leaks.
  • Modularity and encapsulation: One of the principles of good software design is modularity and encapsulation. By keeping variables and functions within their respective scopes (e.g., module, function, or block scope), you promote better code organization, reusability, and maintainability.
  • Security concerns: Global variables can be accessed and modified by any script running on the page, including potentially malicious scripts. It is quite common for websites to load third-party scripts, and in the event that one of them or the network is compromised, global variables can pose security risks, especially if sensitive data is stored in them. In the first place, however, you should not expose any sensitive data on the client.
  • Compatibility and portability: By relying heavily on global variables, your code becomes less portable and more dependent on the specific environment it was written for. This can make it harder to integrate with other libraries or frameworks, or to run the code in different environments (e.g., server-side vs browser).

Here's an example of global scope being used.

// Assuming this is run in the global scope, not within a module.
let count = 0;
function incrementCount() {
count++;
console.log(count);
}
function decrementCount() {
count--;
console.log(count);
}
incrementCount(); // Output: 1
decrementCount(); // Output: 0

In this example, count, incrementCount, and decrementCount are defined on the global scope. Any script on the page can access and modify the count, as well as all variables on window.

Avoiding global scope pollution

By now we hope that you're convinced that it's not a good idea to define variables on the global scope. To avoid polluting the global scope, it is recommended to follow best practices such as:

  • Use local variables: Declare variables within functions or blocks to limit their scope and prevent them from being accessed globally. Use var, let, or const to declare variables within a specific scope, ensuring they are not accidentally made global.
  • Pass variables as function parameters: Instead of accessing variables directly from the outer scope, pass them as parameters to functions to maintain encapsulation and avoid global scope pollution.
  • Use modules: Utilize module systems to encapsulate your code and prevent global scope pollution. Each module has its own scope, making it easier to manage and maintain your code.
  • Use immediately invoked function expressions (IIFE): If modules are not available, wrap your code in an IIFE to create a new scope, preventing variables from being added to the global scope unless you explicitly expose them.
// Assuming this is run in the global scope, not within a module.
(function () {
  let count = 0;
  window.incrementCount = function () {
    count++;
    console.log(count);
  };
  window.decrementCount = function () {
    count--;
    console.log(count);
  };
})();
incrementCount(); // Output: 1
decrementCount(); // Output: 0

In this example, count is not accessible in the global scope. It can only be accessed and modified by the incrementCount and decrementCount functions. These functions are exposed to the global scope by attaching them to the window object, but they still have access to the count variable in their parent scope. This provides a way to encapsulate the underlying data and only expose the necessary operations – no direct manipulation of the value is allowed.


Further reading

Explain the differences between CommonJS modules and ES modules in JavaScript

Topics
JavaScript

TL;DR

In JavaScript, modules are reusable pieces of code that encapsulate functionality, making it easier to manage, maintain, and structure your applications. Modules allow you to break down your code into smaller, manageable parts, each with its own scope.

CommonJS is an older module system that was initially designed for server-side JavaScript development with Node.js. It uses the require() function to load modules and the module.exports or exports object to define the exports of a module.

// my-module.js
const value = 42;
module.exports = { value };
// main.js
const myModule = require('./my-module.js');
console.log(myModule.value); // 42

ES Modules (ECMAScript Modules) are the standardized module system introduced in ES6 (ECMAScript 2015). They use the import and export statements to handle module dependencies.

// my-module.js
export const value = 42;
// main.js
import { value } from './my-module.js';
console.log(value); // 42

CommonJS vs ES modules

| Feature | CommonJS | ES modules |
| --- | --- | --- |
| Module syntax | require() for importing, module.exports for exporting | import for importing, export for exporting |
| Environment | Primarily used in Node.js for server-side development | Designed for both browser and server-side JavaScript (Node.js) |
| Loading | Synchronous loading of modules | Asynchronous loading of modules |
| Structure | Dynamic imports, can be conditionally called | Static imports/exports at the top level |
| File extensions | .js (default) | .mjs or .js (with type: "module" in package.json) |
| Browser support | Not natively supported in browsers | Natively supported in modern browsers |
| Optimization | Limited optimization due to dynamic nature | Allows optimizations like tree-shaking due to static structure |
| Compatibility | Widely used in existing Node.js codebases and libraries | Newer standard, but gaining adoption in modern projects |

Modules in JavaScript

Modules in JavaScript are a way to organize and encapsulate code into reusable and maintainable units. They allow developers to break down their codebase into smaller, self-contained pieces, promoting code reuse, separation of concerns, and better organization. There are two main module systems in JavaScript: CommonJS and ES modules.

CommonJS

CommonJS is an older module system that was initially designed for server-side JavaScript development with Node.js. It uses the require function to load modules and the module.exports or exports object to define the exports of a module.

  • Syntax: Modules are included using require() and exported using module.exports.
  • Environment: Primarily used in Node.js.
  • Execution: Modules are loaded synchronously.
  • Modules are loaded dynamically at runtime.
// my-module.js
const value = 42;
module.exports = { value };
// main.js
const myModule = require('./my-module.js');
console.log(myModule.value); // 42

ES Modules

ES Modules (ECMAScript Modules) are the standardized module system introduced in ES6 (ECMAScript 2015). They use the import and export statements to handle module dependencies.

  • Syntax: Modules are imported using import and exported using export.
  • Environment: Can be used in both browser environments and Node.js (with certain configurations).
  • Execution: Modules are loaded asynchronously.
  • Support: Introduced in ES2015, now widely supported in modern browsers and Node.js.
  • Modules are loaded statically at compile-time.
  • Enables better performance due to static analysis and tree-shaking.
// my-module.js
export const value = 42;
// main.js
import { value } from './my-module.js';
console.log(value); // 42
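
Because require() is an ordinary function call while static import declarations must appear at the top level, conditional loading looks different in the two systems. A minimal sketch reusing the my-module.js example above (the surrounding conditions are made up):

// In a CommonJS file: require() can be called conditionally at runtime
if (process.env.NODE_ENV !== 'production') {
  const myModule = require('./my-module.js');
  console.log(myModule.value); // 42
}

// In an ES module file: static imports must be top-level, but the dynamic
// import() function returns a promise and can be used anywhere
const shouldLoad = true;
if (shouldLoad) {
  const { value } = await import('./my-module.js'); // top-level await works in ES modules
  console.log(value); // 42
}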

Summary

While CommonJS was the default module system in Node.js initially, ES modules are now the recommended approach for new projects, as they provide better tooling, performance, and ecosystem compatibility. However, CommonJS modules are still widely used in existing codebases and libraries, especially for legacy dependencies.

Further reading

What are the various data types in JavaScript?

Topics
JavaScript

TL;DR

In JavaScript, data types can be categorized into primitive and non-primitive types:

Primitive data types

  • Number: Represents both integers and floating-point numbers.
  • String: Represents sequences of characters.
  • Boolean: Represents true or false values.
  • Undefined: A variable that has been declared but not assigned a value.
  • Null: Represents the intentional absence of any object value.
  • Symbol: A unique and immutable value used as object property keys. Read more in our deep dive on Symbols
  • BigInt: Represents integers with arbitrary precision.

Non-primitive (Reference) data types

  • Object: Used to store collections of data.
  • Array: An ordered collection of data.
  • Function: A callable object.
  • Date: Represents dates and times.
  • RegExp: Represents regular expressions.
  • Map: A collection of keyed data items.
  • Set: A collection of unique values.

The primitive types store a single value, while non-primitive types can store collections of data or complex entities.


Data types in JavaScript

JavaScript, like many programming languages, has a variety of data types to represent different kinds of data. The main data types in JavaScript can be divided into two categories: primitive and non-primitive (reference) types.

Primitive data types

  1. Number: Represents both integer and floating-point numbers. JavaScript only has one type of number.
let age = 25;
let price = 99.99;
console.log(price); // 99.99
  2. String: Represents sequences of characters. Strings can be enclosed in single quotes, double quotes, or backticks (for template literals).
let myName = 'John Doe';
let greeting = 'Hello, world!';
let message = `Welcome, ${myName}!`;
console.log(message); // "Welcome, John Doe!"
  3. Boolean: Represents logical entities and can have two values: true or false.
let isActive = true;
let isOver18 = false;
console.log(isOver18); // false
  4. Undefined: A variable that has been declared but not assigned a value is of type undefined.
let user;
console.log(user); // undefined
  5. Null: Represents the intentional absence of any object value. It is a primitive value and is treated as a falsy value.
let user = null;
console.log(user); // null
if (!user) {
console.log('user is a falsy value');
}
  6. Symbol: A unique and immutable primitive value, typically used as the key of an object property.
let sym1 = Symbol();
let sym2 = Symbol('description');
console.log(sym1); // Symbol()
console.log(sym2); // Symbol(description)
  7. BigInt: Used for representing integers with arbitrary precision, useful for working with very large numbers.
let bigNumber = BigInt(9007199254740991);
let anotherBigNumber = 1234567890123456789012345678901234567890n;
console.log(bigNumber); // 9007199254740991n
console.log(anotherBigNumber); // 1234567890123456789012345678901234567890n

Non-primitive (reference) data types

  1. Object: It is used to store collections of data and more complex entities. Objects are created using curly braces {}.
let person = {
name: 'Alice',
age: 30,
};
console.log(person); // {name: "Alice", age: 30}
  2. Array: A special type of object used for storing ordered collections of data. Arrays are created using square brackets [].
let numbers = [1, 2, 3, 4, 5];
console.log(numbers);
  3. Function: Functions in JavaScript are objects. They can be defined using function declarations or expressions.
function greet() {
console.log('Hello!');
}
let add = function (a, b) {
return a + b;
};
greet(); // "Hello!"
console.log(add(2, 3)); // 5
  4. Date: Represents dates and times. The Date object is used to work with dates.
let today = new Date().toLocaleTimeString();
console.log(today);
  5. RegExp: Represents regular expressions, which are patterns used to match character combinations in strings.
let pattern = /abc/;
let str = '123abc456';
console.log(pattern.test(str)); // true
  6. Map: A collection of keyed data items, similar to an object but allows keys of any type.
let map = new Map();
map.set('key1', 'value1');
console.log(map);
  7. Set: A collection of unique values.
let set = new Set();
set.add(1);
set.add(2);
console.log(set); // { 1, 2 }

Determining data types

JavaScript is a dynamically-typed language, which means variables can hold values of different data types over time. The typeof operator can be used to determine the data type of a value or variable.

console.log(typeof 42); // "number"
console.log(typeof 'hello'); // "string"
console.log(typeof true); // "boolean"
console.log(typeof undefined); // "undefined"
console.log(typeof null); // "object" (this is a historical bug in JavaScript)
console.log(typeof Symbol()); // "symbol"
console.log(typeof BigInt(123)); // "bigint"
console.log(typeof {}); // "object"
console.log(typeof []); // "object"
console.log(typeof function () {}); // "function"
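
Because typeof reports "object" for both null and arrays, checks for these values usually rely on a strict comparison against null and on Array.isArray() instead:

const value = null;
console.log(value === null); // true – reliable null check
console.log(typeof value === 'object'); // also true, but misleading on its own

const list = [1, 2, 3];
console.log(Array.isArray(list)); // true – distinguishes arrays from plain objects
console.log(typeof list); // "object"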

Pitfalls

Type coercion

JavaScript often performs type coercion, converting values from one type to another, which can lead to unexpected results.

let result = '5' + 2;
console.log(result, typeof result); // "52 string" (string concatenation)
let difference = '5' - 2;
console.log(difference, typeof difference); // 3 "number" (numeric subtraction)

In the first example, since strings can be concatenated with the + operator, the number is converted into a string and the two strings are concatenated together. In the second example, strings do not support the subtraction operator (-), but two numbers can be subtracted, so the string is first converted into a number and the result is the difference.

Further reading

What language constructs do you use for iterating over object properties and array items in JavaScript?

Topics
JavaScript

TL;DR

There are multiple ways to iterate over object properties as well as arrays in JavaScript:

for...in loop

The for...in loop iterates over all enumerable properties of an object, including inherited enumerable properties, so it is important to add a check if you only want to iterate over the object's own properties.

const obj = {
a: 1,
b: 2,
c: 3,
};
for (const key in obj) {
// To avoid iterating over inherited properties
if (Object.hasOwn(obj, key)) {
console.log(`${key}: ${obj[key]}`);
}
}

Object.keys()

Object.keys() returns an array of the object's own enumerable property names. You can then use a for...of loop or forEach to iterate over this array.

const obj = {
a: 1,
b: 2,
c: 3,
};
Object.keys(obj).forEach((key) => {
console.log(`${key}: ${obj[key]}`);
});

The most common ways to iterate over an array are using a for loop and the Array.prototype.forEach method.

Using for loop

let array = [1, 2, 3, 4, 5, 6];
for (let index = 0; index < array.length; index++) {
console.log(array[index]);
}

Using Array.prototype.forEach method

let array = [1, 2, 3, 4, 5, 6];
array.forEach((number, index) => {
console.log(`${number} at index ${index}`);
});

Using for...of

This method is the newest and most convenient way to iterate over arrays. It automatically iterates over each element without requiring you to manage the index.

const numbers = [1, 2, 3, 4, 5];
for (const number of numbers) {
console.log(number);
}

There are also other built-in methods that are suitable for specific scenarios (a short example follows this list):

  • Array.prototype.filter: You can use the filter method to create a new array containing only the elements that satisfy a certain condition.
  • Array.prototype.map: You can use the map method to create a new array based on the existing one, transforming each element with a provided function.
  • Array.prototype.reduce: You can use the reduce method to combine all elements into a single value by repeatedly calling a function that takes two arguments: the accumulated value and the current element.
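
Here is a short example of these three methods applied to the same array:

const numbers = [1, 2, 3, 4, 5];
const evens = numbers.filter((n) => n % 2 === 0);
console.log(evens); // [2, 4]
const doubled = numbers.map((n) => n * 2);
console.log(doubled); // [2, 4, 6, 8, 10]
const sum = numbers.reduce((total, n) => total + n, 0);
console.log(sum); // 15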

Iterating over objects

Iterating over object properties and arrays is very common in JavaScript, and we have various ways of achieving this. Here are some of the ways to do it:

for...in statement

This loop iterates over all enumerable properties of an object, including those inherited from its prototype chain.

const obj = {
status: 'working',
hoursWorked: 3,
};
for (const property in obj) {
console.log(property);
}

Since the for...in statement iterates over all of the object's enumerable properties (including inherited ones), most of the time you should check whether the property exists directly on the object via Object.hasOwn(object, property) before using it.

const obj = {
status: 'working',
hoursWorked: 3,
};
for (const property in obj) {
if (Object.hasOwn(obj, property)) {
console.log(property);
}
}

Note that obj.hasOwnProperty() is not recommended because it doesn't work for objects created using Object.create(null). It is recommended to use Object.hasOwn() in newer browsers, or use the good old Object.prototype.hasOwnProperty.call(object, key).

Object.keys()

Object.keys() is a static method that will return an array of all the enumerable property names of the object that you pass it. Since Object.keys() returns an array, you can also use the array iteration approaches listed below to iterate through it.

const obj = {
status: 'working',
hoursWorked: 3,
};
Object.keys(obj).forEach((property) => {
console.log(property);
});

Object.entries()

This method returns an array of an object's enumerable properties in [key, value] pairs.

const obj = { a: 1, b: 2, c: 3 };
Object.entries(obj).forEach(([key, value]) => {
console.log(`${key}: ${value}`);
});

Object.getOwnPropertyNames()

const obj = { a: 1, b: 2, c: 3 };
Object.getOwnPropertyNames(obj).forEach((property) => {
console.log(property);
});

Object.getOwnPropertyNames() is a static method that lists all own enumerable and non-enumerable properties of the object that you pass it. Since Object.getOwnPropertyNames() returns an array, you can also use the array iteration approaches listed below to iterate through it.
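
To illustrate the difference from Object.keys(), here is a small example with a non-enumerable property:

const obj = { a: 1, b: 2 };
// Define a non-enumerable property
Object.defineProperty(obj, 'hidden', { value: 3, enumerable: false });
console.log(Object.keys(obj)); // ['a', 'b']
console.log(Object.getOwnPropertyNames(obj)); // ['a', 'b', 'hidden']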

Arrays

for loop

const arr = [1, 2, 3, 4, 5];
for (var i = 0; i < arr.length; i++) {
console.log(arr[i]);
}

A common pitfall here is that var is function-scoped and not block-scoped, and most of the time you would want a block-scoped iterator variable. ES2015 introduced let, which is block-scoped, and it is recommended to use let over var.

const arr = [1, 2, 3, 4, 5];
for (let i = 0; i < arr.length; i++) {
console.log(arr[i]);
}

Array.prototype.forEach()

const arr = [1, 2, 3, 4, 5];
arr.forEach((element, index) => {
console.log(`${element} at index ${index}`);
});

The Array.prototype.forEach() method can be more convenient at times if you do not need the index and all you need is the individual array elements. However, the downside is that you cannot stop the iteration halfway; the provided callback is executed on every element exactly once. A for loop or for...of statement is more appropriate if you need finer control over the iteration.

for...of statement

const arr = [1, 2, 3, 4, 5];
for (let element of arr) {
console.log(element);
}

ES2015 introduced a new way to iterate, the for...of loop, that allows you to loop over objects that conform to the iterable protocol such as String, Array, Map, Set, etc. It combines the advantages of the for loop and the forEach() method. The advantage of the for loop is that you can break from it, and the advantage of forEach() is that it is more concise than the for loop because you don't need a counter variable. With the for...of statement, you get both the ability to break from a loop and a more concise syntax.

Most of the time, prefer the .forEach method, but it really depends on what you are trying to do. Before ES2015, we used for loops when we needed to prematurely terminate the loop using break. But now with ES2015, we can do that with for...of statement. Use for loops when you need more flexibility, such as incrementing the iterator more than once per loop.

Also, when using the for...of statement, if you need to access both the index and value of each array element, you can do so with ES2015 Array.prototype.entries() method:

const arr = ['a', 'b', 'c'];
for (let [index, elem] of arr.entries()) {
console.log(index, elem);
}

Further reading

What are the benefits of using spread syntax in JavaScript and how is it different from rest syntax?

Topics
JavaScript

TL;DR

Spread syntax (...) allows an iterable (like an array or string) to be expanded into individual elements. This is often used as a convenient and modern way to create new arrays or objects by combining existing ones.

| Operation | Traditional | Spread |
| --- | --- | --- |
| Array cloning | arr.slice() | [...arr] |
| Array merging | arr1.concat(arr2) | [...arr1, ...arr2] |
| Object cloning | Object.assign({}, obj) | { ...obj } |
| Object merging | Object.assign({}, obj1, obj2) | { ...obj1, ...obj2 } |

Rest syntax is the opposite of what spread syntax does. It collects a variable number of arguments into an array. This is often used in function parameters to handle a dynamic number of arguments.

// Using rest syntax in a function
function sum(...numbers) {
return numbers.reduce((total, num) => total + num, 0);
}
console.log(sum(1, 2, 3)); // Output: 6

Spread syntax

ES2015's spread syntax is very useful when coding in a functional paradigm, as we can easily create copies of or merge arrays and objects without resorting to Object.create, Object.assign, Array.prototype.slice, or a library function. This language feature is used often in Redux and RxJS projects.

Copying arrays/objects

The spread syntax provides a concise way to create copies of arrays or objects without modifying the originals. This is useful for creating immutable data structures. However, do note that copies created via the spread syntax are shallow (see the example after the code sample below).

// Copying arrays
const array = [1, 2, 3];
const newArray = [...array];
console.log(newArray); // Output: [1, 2, 3]
// Copying objects
const person = { name: 'John', age: 30 };
const newObj = { ...person, city: 'New York' };
console.log(newObj); // Output: { name: 'John', age: 30, city: 'New York' }
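
Here is a small example of the shallow-copy caveat mentioned above: nested objects are shared between the original and the copy.

const original = { name: 'John', address: { city: 'London' } };
const copy = { ...original };
copy.address.city = 'Paris'; // mutates the shared nested object
console.log(original.address.city); // 'Paris' – the original is affected too
copy.name = 'Jane'; // top-level properties are independent
console.log(original.name); // 'John'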

Merging arrays/objects

The spread syntax allows you to merge arrays or objects by spreading their elements/properties into a new array or object.

// Merging arrays
const arr1 = [1, 2, 3];
const arr2 = [4, 5, 6];
const mergedArray = [...arr1, ...arr2];
console.log(mergedArray); // Output: [1, 2, 3, 4, 5, 6]
// Merging objects
const obj1 = {
foo: 'bar',
};
const obj2 = {
qux: 'baz',
};
const mergedObj = { ...obj1, ...obj2 };
console.log(mergedObj); // Output: { foo: "bar", qux: "baz" }

Passing arguments to functions

Use the spread syntax to pass an array of values as individual arguments to a function, avoiding the need for apply().

const numbers = [1, 2, 3];
const max = Math.max(...numbers); // Same as Math.max(1, 2, 3)
console.log(max); // Output: 3
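
For comparison, the pre-ES2015 way of achieving the same result with Function.prototype.apply():

const numbers = [1, 2, 3];
const max = Math.max.apply(null, numbers); // apply() spreads the array as individual arguments
console.log(max); // Output: 3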

Array vs object spreads

Only iterable values like Arrays and Strings can be spread into an array. Trying to spread non-iterables will result in a TypeError.

Spreading object into array:

const person = {
name: 'Todd',
age: 29,
};
const array = [...person]; // Error: Uncaught TypeError: person is not iterable

On the other hand, arrays can be spread into objects.

const array = [1, 2, 3];
const obj = { ...array };
console.log(obj); // { 0: 1, 1: 2, 2: 3 }

Rest syntax

The rest syntax (...) in JavaScript allows you to represent an indefinite number of elements as an array or object. It is like an inverse of the spread syntax, taking data and stuffing it into an array rather than unpacking an array of data, and it works in function arguments, as well as in array and object destructuring assignments.

Rest parameters in functions

The rest syntax can be used in function parameters to collect all remaining arguments into an array. This is particularly useful when you don't know how many arguments will be passed to the function.

function addFiveToABunchOfNumbers(...numbers) {
return numbers.map((x) => x + 5);
}
const result = addFiveToABunchOfNumbers(4, 5, 6, 7, 8, 9, 10);
console.log(result); // Output: [9, 10, 11, 12, 13, 14, 15]

Rest parameters provide a cleaner syntax than the arguments object, which is not available in arrow functions and always represents all of the arguments. In contrast, the rest syntax in the destructuring example below allows remaining to represent only the 3rd element and beyond.

const [first, second, ...remaining] = [1, 2, 3, 4, 5];
console.log(first); // Output: 1
console.log(second); // Output: 2
console.log(remaining); // Output: [3, 4, 5]

Note that the rest parameter must be the last parameter. Since it gathers all remaining arguments, placing it anywhere else does not make sense and causes an error:

function addFiveToABunchOfNumbers(arg1, ...numbers, arg2) {
// Error: Rest parameter must be last formal parameter.
}

Array destructuring

The rest syntax can be used in array destructuring to collect the remaining elements into a new array.

const [a, b, ...rest] = [1, 2, 3, 4];
console.log(a); // Output: 1
console.log(b); // Output: 2
console.log(rest); // Output: [3, 4]

Object destructuring

The rest syntax can be used in object destructuring to collect the remaining properties into a new object.

const { e, f, ...others } = {
e: 1,
f: 2,
g: 3,
h: 4,
};
console.log(e); // Output: 1
console.log(f); // Output: 2
console.log(others); // Output: { g: 3, h: 4 }

Further Reading

What are iterators and generators in JavaScript and what are they used for?

Topics
JavaScript

TL;DR

In JavaScript, iterators and generators are powerful tools for managing sequences of data and controlling the flow of execution in a more flexible way.

Iterators are objects that define a sequence and potentially a return value upon its termination. They adhere to a specific interface:

  • An iterator object must implement a next() method.
  • The next() method returns an object with two properties:
    • value: The next value in the sequence.
    • done: A boolean that is true if the iterator has finished its sequence, otherwise false.

Here's an example of an object implementing the iterator interface.

const iterator = {
  current: 0,
  last: 5,
  next() {
    if (this.current <= this.last) {
      return { value: this.current++, done: false };
    } else {
      return { value: undefined, done: true };
    }
  },
};

let result = iterator.next();
while (!result.done) {
  console.log(result.value); // Logs 0, 1, 2, 3, 4, 5
  result = iterator.next();
}

Generators are special functions that can pause execution and resume at a later point. They use the function* syntax and the yield keyword to control the flow of execution. When you call a generator function, it doesn't execute completely like a regular function. Instead, it returns an iterator object. Calling the next() method on the returned iterator advances the generator to the next yield statement, and the value after yield becomes the return value of next().

function* numberGenerator() {
let num = 0;
while (num <= 5) {
yield num++;
}
}
const gen = numberGenerator();
console.log(gen.next()); // { value: 0, done: false }
console.log(gen.next()); // { value: 1, done: false }
console.log(gen.next()); // { value: 2, done: false }
console.log(gen.next()); // { value: 3, done: false }
console.log(gen.next()); // { value: 4, done: false }
console.log(gen.next()); // { value: 5, done: false }
console.log(gen.next()); // { value: undefined, done: true }

Generators are powerful for creating iterators on-demand, especially for infinite sequences or complex iteration logic. They can be used for:

  • Lazy evaluation – processing elements only when needed, improving memory efficiency for large datasets (see the example after this list).
  • Implementing iterators for custom data structures.
  • Creating asynchronous iterators for handling data streams.
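
To illustrate lazy evaluation, here is a minimal example of an infinite sequence that only produces values when they are requested (the naturals and take helpers are made up for this example):

function* naturals() {
  let n = 0;
  while (true) {
    yield n++; // values are computed only when next() is called
  }
}
function take(iterable, count) {
  const result = [];
  for (const value of iterable) {
    if (result.length >= count) break;
    result.push(value);
  }
  return result;
}
console.log(take(naturals(), 5)); // [0, 1, 2, 3, 4]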

Iterators

Iterators are objects that define a sequence and provide a next() method to access the next value in the sequence. They are used to iterate over data structures like arrays, strings, and custom objects. The key use cases of iterators include:

  • Implementing the iterator protocol to make custom objects iterable, allowing them to be used with for...of loops and other language constructs that expect iterables.
  • Providing a standard way to iterate over different data structures, making code more reusable and maintainable.

Creating a custom iterator for a range of numbers

In JavaScript, we can provide a default iterator for any custom object by implementing the [Symbol.iterator]() method on it.

// Define a class named Range
class Range {
  // The constructor takes two parameters: start and end
  constructor(start, end) {
    // Assign the start and end values to the instance
    this.start = start;
    this.end = end;
  }
  // Define the default iterator for the object
  [Symbol.iterator]() {
    // Initialize the current value to the start value
    let current = this.start;
    const end = this.end;
    // Return an object with a next method
    return {
      // The next method returns the next value in the iteration
      next() {
        // If the current value is less than or equal to the end value...
        if (current <= end) {
          // ...return an object with the current value and done set to false
          return { value: current++, done: false };
        }
        // ...otherwise, return an object with value set to undefined and done set to true
        return { value: undefined, done: true };
      },
    };
  }
}

// Create a new Range object with start = 1 and end = 3
const range = new Range(1, 3);
// Iterate over the range object
for (const number of range) {
  // Log each number to the console
  console.log(number); // 1, 2, 3
}

Built-in objects using the iterator protocol

In JavaScript, several built-in objects implement the iterator protocol, meaning they have a default @@iterator method. This allows them to be used in constructs like for...of loops and with the spread operator. Here are some of the key built-in objects that implement iterators:

  1. Arrays: Arrays have a built-in iterator that allows you to iterate over their elements.

    const array = [1, 2, 3];
    const iterator = array[Symbol.iterator]();
    console.log(iterator.next()); // { value: 1, done: false }
    console.log(iterator.next()); // { value: 2, done: false }
    console.log(iterator.next()); // { value: 3, done: false }
    console.log(iterator.next()); // { value: undefined, done: true }
    for (const value of array) {
    console.log(value); // Logs 1, 2, 3
    }
  2. Strings: Strings have a built-in iterator that allows you to iterate over their characters.

    const string = 'hello';
    const iterator = string[Symbol.iterator]();
    console.log(iterator.next()); // { value: "h", done: false }
    console.log(iterator.next()); // { value: "e", done: false }
    console.log(iterator.next()); // { value: "l", done: false }
    console.log(iterator.next()); // { value: "l", done: false }
    console.log(iterator.next()); // { value: "o", done: false }
    console.log(iterator.next()); // { value: undefined, done: true }
    for (const char of string) {
    console.log(char); // Logs h, e, l, l, o
    }
  3. DOM NodeLists

    // Create a new div and append it to the DOM
    const newDiv = document.createElement('div');
    newDiv.id = 'div1';
    document.body.appendChild(newDiv);
    const nodeList = document.querySelectorAll('div');
    const iterator = nodeList[Symbol.iterator]();
    console.log(iterator.next()); // { value: HTMLDivElement, done: false }
    console.log(iterator.next()); // { value: undefined, done: true }
    for (const node of nodeList) {
    console.log(node); // Logs each <div> element, in this case only div1
    }

Maps and Sets also have built-in iterators.

Generators

Generators are a special kind of function that can pause and resume their execution, allowing them to generate a sequence of values on-the-fly. They are commonly used to create iterators but have other applications as well. The key use cases of generators include:

  • Creating iterators in a more concise and readable way compared to manually implementing the iterator protocol.
  • Implementing lazy evaluation, where values are generated only when needed, saving memory and computation time.
  • Simplifying asynchronous programming by allowing code to be written in a synchronous-looking style using yield and await.

Generators provide several benefits:

  • Lazy evaluation: They generate values on the fly and only when required, which is memory efficient.
  • Pause and resume: Generators can pause execution (via yield) and can also receive new data upon resuming (see the example after this list).
  • Asynchronous iteration: With the advent of async/await, generators can be used to manage asynchronous data flows.
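
Here is a small example of a generator receiving data when it is resumed: the argument passed to next() becomes the value of the paused yield expression.

function* echo() {
  while (true) {
    const received = yield; // pauses here; resumes with the value passed to next()
    console.log('Received:', received);
  }
}
const gen = echo();
gen.next(); // start the generator and run until the first yield
gen.next('hello'); // Logs "Received: hello"
gen.next('world'); // Logs "Received: world"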

Creating an iterator using a generator function

We can rewrite our Range example to use a generator function:

// Define a class named Range
class Range {
  // The constructor takes two parameters: start and end
  constructor(start, end) {
    // Assign the start and end values to the instance
    this.start = start;
    this.end = end;
  }
  // Define the default iterator for the object using a generator
  *[Symbol.iterator]() {
    // Initialize the current value to the start value
    let current = this.start;
    // While the current value is less than or equal to the end value...
    while (current <= this.end) {
      // ...yield the current value
      yield current++;
    }
  }
}

// Create a new Range object with start = 1 and end = 3
const range = new Range(1, 3);
// Iterate over the range object
for (const number of range) {
  // Log each number to the console
  console.log(number); // 1, 2, 3
}

Iterating over data streams

Generators are well-suited for iterating over data streams, such as fetching data from an API or reading files. This example demonstrates using a generator to fetch data from an API in batches:

async function* fetchDataInBatches(url, numBatches = 5, batchSize = 10) {
  let startIndex = 0;
  let currBatch = 0;
  while (currBatch < numBatches) {
    const response = await fetch(
      `${url}?_start=${startIndex}&_limit=${batchSize}`,
    );
    const data = await response.json();
    if (data.length === 0) break;
    yield data;
    startIndex += batchSize;
    currBatch += 1;
  }
}

async function fetchAndLogData() {
  const dataGenerator = fetchDataInBatches(
    'https://jsonplaceholder.typicode.com/todos',
  );
  for await (const batch of dataGenerator) {
    console.log(batch);
  }
}

fetchAndLogData();

This generator function fetchDataInBatches fetches data from an API in batches of a specified size. It yields each batch of data, allowing you to process it before fetching the next batch. This approach can be more memory-efficient than fetching all data at once.

Implementing asynchronous iterators

Generators can be used to implement asynchronous iterators, which are useful for working with asynchronous data sources. This example demonstrates an asynchronous iterator for fetching data from an API:

async function* fetchDataAsyncIterator(url, pagesToFetch = 3) {
  let currPage = 1;
  while (currPage <= pagesToFetch) {
    const response = await fetch(`${url}?_page=${currPage}`);
    const data = await response.json();
    if (data.length === 0) break;
    yield data;
    currPage++;
  }
}

async function fetchAndLogData() {
  const asyncIterator = fetchDataAsyncIterator(
    'https://jsonplaceholder.typicode.com/todos',
  );
  for await (const chunk of asyncIterator) {
    console.log(chunk);
  }
}

fetchAndLogData();

The generator function fetchDataAsyncIterator is an asynchronous iterator that fetches data from an API in pages. It yields each page of data, allowing you to process it before fetching the next page. This approach can be useful for handling large datasets or long-running operations.

Generators are also used extensively in JavaScript libraries and frameworks, such as Redux-Saga and RxJS, for handling asynchronous operations and reactive programming.

Summary

Iterators and generators provide a powerful and flexible way to work with collections of data in JavaScript. Iterators define a standardized way to traverse data sequences, while generators offer a more expressive and efficient way to create iterators, handle asynchronous operations, and compose complex data pipelines.

Further reading

Explain the difference between mutable and immutable objects in JavaScript

Topics
JavaScript

TL;DR

Mutable objects allow for modification of properties and values after creation, which is the default behavior for most objects.

const mutableObject = {
name: 'John',
age: 30,
};
// Modify the object
mutableObject.name = 'Jane';
// The object has been modified
console.log(mutableObject); // Output: { name: 'Jane', age: 30 }

Immutable objects cannot be directly modified after creation; their content cannot be changed without creating an entirely new value.

const immutableObject = Object.freeze({
name: 'John',
age: 30,
});
// Attempt to modify the object
immutableObject.name = 'Jane';
// The object remains unchanged
console.log(immutableObject); // Output: { name: 'John', age: 30 }

The key difference between mutable and immutable objects is modifiability. Immutable objects cannot be modified after they are created, while mutable objects can be.


Immutability

Immutability is a core principle in functional programming but it has lots to offer to object-oriented programs as well.

Mutable objects

Mutability refers to the ability of an object to have its properties or elements changed after it's created. A mutable object is an object whose state can be modified after it is created. In JavaScript, objects and arrays are mutable by default. They store references to their data in memory. Changing a property or element modifies the original object. Here is an example of a mutable object:

const mutableObject = {
name: 'John',
age: 30,
};
// Modify the object
mutableObject.name = 'Jane';
// The object has been modified
console.log(mutableObject); // Output: { name: 'Jane', age: 30 }

Immutable objects

An immutable object is an object whose state cannot be modified after it is created. Here is an example of an immutable object:

const immutableObject = Object.freeze({
name: 'John',
age: 30,
});
// Attempt to modify the object
immutableObject.name = 'Jane';
// The object remains unchanged
console.log(immutableObject); // Output: { name: 'John', age: 30 }

Primitive data types like numbers, strings, booleans, null, and undefined are inherently immutable. Once assigned a value, you cannot directly modify them.

let name = 'Alice';
name.toUpperCase(); // This won't modify the original name variable
console.log(name); // Still prints "Alice"
// To change the value, you need to reassign a new string
name = name.toUpperCase();
console.log(name); // Now prints "ALICE"

Built-in objects such as Math have read-only properties like Math.PI, but most objects, including Date instances and custom objects, are mutable by default.

const vs immutable objects

A common confusion / misunderstanding is that declaring a variable using const makes the value immutable, which is not true at all.

const prevents reassignment of the variable itself, but does not make the value it holds immutable. This means:

  • For primitive values (numbers, strings, booleans), const makes the value immutable since primitives are immutable by nature.
  • For non-primitive values like objects and arrays, const only prevents reassigning a new object/array to the variable, but the properties/elements of the existing object/array can still be modified.

On the other hand, an immutable object is an object whose state (properties and values) cannot be modified after it is created. This is achieved by using methods like Object.freeze() which makes the object immutable by preventing any changes to its properties.

// Using const
const person = { name: 'John' };
person = { name: 'Jane' }; // Error: Assignment to constant variable
person.name = 'Jane'; // Allowed, person.name is now 'Jane'
// Using Object.freeze() to create an immutable object
const frozenPerson = Object.freeze({ name: 'John' });
frozenPerson.name = 'Jane'; // Fails silently (no error, but no change)
frozenPerson = { name: 'Jane' }; // Error: Assignment to constant variable

In the first example with const, reassigning a new object to person is not allowed, but modifying the name property is permitted. In the second example, Object.freeze() makes the frozenPerson object immutable, preventing any changes to its properties.

It's important to note that Object.freeze() creates a shallow immutable object. If the object contains nested objects or arrays, those nested data structures are still mutable unless frozen separately.

Therefore, while const provides immutability for primitive values, creating truly immutable objects requires using Object.freeze() or other immutability techniques like deep freezing or using immutable data structures from libraries like Immer or Immutable.js.

Various ways to implement immutability in plain JavaScript objects

Here are a few ways to add/simulate different forms of immutability in plain JavaScript objects.

Immutable object properties

By combining writable: false and configurable: false, you can essentially create a constant (cannot be changed, redefined or deleted) as an object property, like:

const myObject = {};
Object.defineProperty(myObject, 'number', {
value: 42,
writable: false,
configurable: false,
});
console.log(myObject.number); // 42
myObject.number = 43;
console.log(myObject.number); // 42

Preventing extensions on objects

If you want to prevent an object from having new properties added to it, but otherwise leave the rest of the object's properties alone, call Object.preventExtensions(...):

let myObject = {
a: 2,
};
Object.preventExtensions(myObject);
myObject.b = 3;
console.log(myObject.b); // undefined

In non-strict mode, the creation of b fails silently. In strict mode, it throws a TypeError.

Sealing an object

Object.seal() creates a "sealed" object, which means it takes an existing object and essentially calls Object.preventExtensions() on it, but also marks all its existing properties as configurable: false. Therefore, not only can you not add any more properties, but you also cannot reconfigure or delete any existing properties, though you can still modify their values.

// Create an object
const person = {
name: 'John Doe',
age: 30,
};
// Seal the object
Object.seal(person);
// Try to add a new property (this will fail silently)
person.city = 'New York'; // This has no effect
// Try to delete an existing property (this will fail silently)
delete person.age; // This has no effect
// Modify an existing property (this will work)
person.age = 35;
console.log(person); // Output: { name: 'John Doe', age: 35 }
// Try to re-configure an existing property descriptor (this will fail silently)
Object.defineProperty(person, 'name', { writable: false }); // Fails silently in non-strict mode
// Check if the object is sealed
console.log(Object.isSealed(person)); // Output: true

Freezing an object

Object.freeze() creates a frozen object, which means it takes an existing object and essentially calls Object.seal() on it, but it also marks all data properties as writable: false, so that their values cannot be changed.

This approach is the highest level of immutability that you can attain for an object itself, as it prevents any changes to the object or to any of its direct properties (though, as mentioned above, the contents of any other objects it references are unaffected).

let immutableObject = Object.freeze({});

Freezing an object does not allow new properties to be added to an object and prevents users from removing or altering the existing properties. Object.freeze() preserves the enumerability, configurability, writability and the prototype of the object. It returns the passed object and does not create a frozen copy.

Object.freeze() makes the object immutable. However, it is not necessarily constant. Object.freeze() prevents modifications to the object itself and its direct properties, but nested objects within the frozen object can still be modified.

let obj = {
user: {},
};
Object.freeze(obj);
obj.user.name = 'John';
console.log(obj.user.name); //Output: 'John'
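
If nested objects need to be immutable as well, the object has to be frozen recursively. A minimal sketch (the deepFreeze helper is our own, not a built-in):

function deepFreeze(obj) {
  // Freeze nested objects first, then the object itself
  for (const key of Object.keys(obj)) {
    const value = obj[key];
    if (typeof value === 'object' && value !== null) {
      deepFreeze(value);
    }
  }
  return Object.freeze(obj);
}
const settings = deepFreeze({ theme: { color: 'dark' } });
settings.theme.color = 'light'; // Fails silently (throws a TypeError in strict mode)
console.log(settings.theme.color); // 'dark'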

What are the pros and cons of immutability?

Pros

  • Easier change detection: Object equality can be determined in a performant and easy manner through referential equality. This is useful for comparing object differences in React and Redux.
  • Less complicated: Programs with immutable objects are less complicated to think about, since you don't need to worry about how an object may evolve over time.
  • Easy sharing via references: One copy of an object is just as good as another, so you can cache objects or reuse the same object multiple times.
  • Thread-safe: Immutable objects can be safely used between threads in a multi-threaded environment since there is no risk of them being modified in other concurrently running threads. (In most cases, however, JavaScript runs in a single-threaded environment.)
  • Less memory needed: Using libraries like Immer and Immutable.js, objects are modified using structural sharing and less memory is needed for having multiple objects with similar structures.
  • No need for defensive copying: Defensive copies are no longer necessary when immutable objects are returning from or passed to functions, since there is no possibility an immutable object will be modified by it.

Cons

  • Complex to create yourself: Naive implementations of immutable data structures and their operations can result in extremely poor performance because new objects are created each time. It is recommended to use libraries that provide efficient immutable data structures and operations that leverage structural sharing.
  • Potential negative performance: Allocation (and deallocation) of many small objects rather than modifying existing ones can cause a performance impact. The complexity of either the allocator or the garbage collector usually depends on the number of objects on the heap.
  • Complexity for cyclic data structures: Cyclic data structures such as graphs are difficult to implement.

Further reading

What is the difference between a `Map` object and a plain object in JavaScript?

Topics
JavaScript

TL;DR

Both Map objects and plain objects in JavaScript can store key-value pairs, but they have several key differences:

| Feature | Map | Plain object |
| --- | --- | --- |
| Key type | Any data type | String (or Symbol) |
| Key order | Maintained | Not guaranteed |
| Size property | Yes (size) | None |
| Iteration | forEach, keys(), values(), entries() | for...in, Object.keys(), etc. |
| Inheritance | No | Yes |
| Performance | Generally better for larger datasets and frequent additions/deletions | Faster for small datasets and simple operations |
| Serializable | No | Yes |

Map vs plain JavaScript objects

In JavaScript, Map objects and a plain object (also known as a "POJO" or "plain old JavaScript object") are both used to store key-value pairs, but they have different characteristics, use cases, and behaviors.

Plain JavaScript objects (POJO)

A plain object is a basic JavaScript object created using the {} syntax. It is a collection of key-value pairs, where each key is a string (or a symbol, in modern JavaScript) and each value can be any type of value, including strings, numbers, booleans, arrays, objects, and more.

const person = { name: 'John', age: 30, occupation: 'Developer' };
console.log(person);

Map objects

A Map object, introduced in ECMAScript 2015 (ES6), is a more advanced data structure that allows you to store key-value pairs with additional features. A Map is an iterable, which means you can use it with for...of loops, and it provides methods for common operations like get, set, has, and delete.

const person = new Map([
['name', 'John'],
['age', 30],
['occupation', 'Developer'],
]);
console.log(person);

Key differences

Here are the main differences between a Map object and a plain object:

  1. Key types: In a plain object, keys are always strings (or symbols). In a Map, keys can be any type of value, including objects, arrays, and even other Maps.
  2. Key ordering: In a plain object, the order of keys is not guaranteed. In a Map, the order of keys is preserved, and you can iterate over them in the order they were inserted.
  3. Iteration: A Map is iterable, which means you can use for...of loops to iterate over its key-value pairs. A plain object is not iterable by default, but you can use Object.keys() or Object.entries() to iterate over its properties.
  4. Performance: Map objects are generally faster and more efficient than plain objects, especially when dealing with large datasets.
  5. Methods: A Map object provides additional methods, such as get, set, has, and delete, which make it easier to work with key-value pairs.
  6. Serialization: When serializing a Map object to JSON with JSON.stringify(), its entries are lost and it serializes to an empty object, while a plain object is serialized to a JSON object with the same structure (see the example after this list).
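
A small example of the serialization difference; Object.fromEntries() is one way to convert a Map into a plain object before serializing:

const mapData = new Map([['a', 1]]);
console.log(JSON.stringify(mapData)); // '{}' – the entries are lost
// Convert to a plain object first if the entries need to be serialized
console.log(JSON.stringify(Object.fromEntries(mapData))); // '{"a":1}'
const objData = { a: 1 };
console.log(JSON.stringify(objData)); // '{"a":1}'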

When to use which

Use a plain object (POJO) when:

  • You need a simple, lightweight object with string keys.
  • You're working with a small dataset.
  • You need to serialize the object to JSON (e.g. to send over the network).

Use a Map object when:

  • You need to store key-value pairs with non-string keys (e.g., objects, arrays).
  • You need to preserve the order of key-value pairs.
  • You need to iterate over the key-value pairs in a specific order.
  • You're working with a large dataset and need better performance.

In summary, while both plain objects and Map objects can be used to store key-value pairs, Map objects offer more advanced features, better performance, and additional methods, making them a better choice for more complex use cases.

Notes

Map objects cannot be serialized to be sent in HTTP requests, but libraries like superjson allow them to be serialized and deserialized.

Further reading

What are the differences between `Map`/`Set` and `WeakMap`/`WeakSet` in JavaScript?

Topics
JavaScript

TL;DR

The primary difference between Map/Set and WeakMap/WeakSet in JavaScript lies in how they handle keys. Here's a breakdown:

Map vs. WeakMap

Maps allow any data type (strings, numbers, objects) as keys. The key-value pairs remain in memory as long as the Map object itself is referenced, so they are suitable for general-purpose key-value storage where you want to maintain references to both keys and values. Common use cases include storing user data, configuration settings, or relationships between objects.

WeakMaps only allow objects as keys, and these object keys are held weakly. This means the garbage collector can remove them from memory even if the WeakMap itself still exists, as long as there are no other references to those objects. WeakMaps are ideal for scenarios where you want to associate data with objects without preventing those objects from being garbage collected. This can be useful for things like:

  • Caching data based on objects without preventing garbage collection of the objects themselves (see the example after this list).
  • Storing private data associated with DOM nodes without affecting their lifecycle.
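
A minimal sketch of object-keyed caching with a WeakMap (the computeSize function is made up for this example):

const cache = new WeakMap();
function computeSize(obj) {
  if (cache.has(obj)) {
    return cache.get(obj); // served from the cache on repeat calls
  }
  const size = JSON.stringify(obj).length; // stand-in for an expensive computation
  cache.set(obj, size);
  return size;
}
let settings = { theme: 'dark', locale: 'en' };
console.log(computeSize(settings)); // computed and cached
console.log(computeSize(settings)); // returned from the cache
settings = null; // once no other references exist, the cache entry can be garbage collected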

Set vs. WeakSet

Similar to Map, Sets allow values of any data type as elements, and the elements within a Set must be unique. Sets are useful for storing unique values and checking for membership efficiently. Common use cases include removing duplicates from arrays or keeping track of completed tasks.

On the other hand, WeakSet only allows objects as elements, and these object elements are held weakly, similar to WeakMap keys. WeakSets are less commonly used, but applicable when you want a collection of unique objects without affecting their garbage collection. This might be necessary for:

  • Tracking DOM nodes that have been interacted with without affecting their memory management.
  • Implementing custom object weak references for specific use cases.

Here's a table summarizing the key differences:

| Feature | Map | WeakMap | Set | WeakSet |
| --- | --- | --- | --- | --- |
| Key types | Any data type | Objects (weak references) | Any data type (unique) | Objects (weak references, unique) |
| Garbage collection | Keys and values are not garbage collected | Keys can be garbage collected if not referenced elsewhere | Elements are not garbage collected | Elements can be garbage collected if not referenced elsewhere |
| Use cases | General-purpose key-value storage | Caching, private DOM node data | Removing duplicates, membership checks | Object weak references, custom use cases |

Choosing between them

  • Use Map and Set for most scenarios where you need to store key-value pairs or unique elements and want to maintain references to both the keys/elements and the values.
  • Use WeakMap and WeakSet cautiously in specific situations where you want to associate data with objects without affecting their garbage collection. Be aware of the implications of weak references and potential memory leaks if not used correctly.

Map/Set vs WeakMap/WeakSet

The key differences between Map/Set and WeakMap/WeakSet in JavaScript are:

  1. Key types: Map and Set can have keys of any type (objects, primitive values, etc.), while WeakMap and WeakSet can only have objects as keys. Primitive values like strings or numbers are not allowed as keys in WeakMap and WeakSet.
  2. Memory management: The main difference lies in how they handle memory. Map and Set have strong references to their keys and values, which means they will prevent garbage collection of those values. On the other hand, WeakMap and WeakSet have weak references to their keys (objects), allowing those objects to be garbage collected if there are no other strong references to them.
  3. Key enumeration: Keys in Map and Set are enumerable (can be iterated over), while keys in WeakMap and WeakSet are not enumerable. This means you cannot get a list of keys or values from a WeakMap or WeakSet.
  4. size property: Map and Set have a size property that returns the number of elements, while WeakMap and WeakSet do not have a size property because their size can change due to garbage collection.
  5. Use cases: Map and Set are useful for general-purpose data structures and caching, while WeakMap and WeakSet are primarily used for storing metadata or additional data related to objects, without preventing those objects from being garbage collected.

Map and Set are regular data structures that maintain strong references to their keys and values, while WeakMap and WeakSet are designed for scenarios where you want to associate data with objects without preventing those objects from being garbage collected when they are no longer needed.
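
A small sketch illustrating the enumeration and size differences listed above:

const objKey = { id: 1 };

const map = new Map([[objKey, 'a']]);
console.log(map.size); // 1
console.log([...map.keys()]); // [ { id: 1 } ] – keys can be iterated

const weakMap = new WeakMap([[objKey, 'a']]);
console.log(weakMap.size); // undefined – WeakMap has no size property
// WeakMap has no keys(), values(), or entries() methods, so its contents cannot be enumerated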

Use cases of WeakMap and WeakSet

Tracking active users

In a chat application, you might want to track which user objects are currently active without preventing garbage collection when the user logs out or the session expires. We use a WeakSet to track active user objects. When a user logs out or their session expires, the user object can be garbage-collected if there are no other references to it.

const activeUsers = new WeakSet();

// Function to mark a user as active
function markUserActive(user) {
  activeUsers.add(user);
}

// Function to check if a user is active
function isUserActive(user) {
  return activeUsers.has(user);
}

// Example usage
let user1 = { id: 1, name: 'Alice' };
let user2 = { id: 2, name: 'Bob' };

markUserActive(user1);
markUserActive(user2);

console.log(isUserActive(user1)); // true
console.log(isUserActive(user2)); // true

// Simulate user logging out
user1 = null;
// user1 is now eligible for garbage collection
console.log(isUserActive(user1)); // false (has() returns false for non-object values)

Detecting circular references

A WeakSet provides a way of guarding against circular data structures by tracking which objects have already been processed.

// Create a WeakSet to track visited objects
const visited = new WeakSet();

// Function to traverse an object recursively
function traverse(obj) {
  // Check if the object has already been visited
  if (visited.has(obj)) {
    return;
  }
  // Add the object to the visited set
  visited.add(obj);
  // Traverse the object's properties
  for (let prop in obj) {
    if (obj.hasOwnProperty(prop)) {
      let value = obj[prop];
      if (typeof value === 'object' && value !== null) {
        traverse(value);
      }
    }
  }
  // Process the object
  console.log(obj);
}

// Create an object with a circular reference
const obj = {
  name: 'John',
  age: 30,
  friends: [
    { name: 'Alice', age: 25 },
    { name: 'Bob', age: 28 },
  ],
};

// Create a circular reference
obj.self = obj;

// Traverse the object
traverse(obj);


Why might you want to create static class members in JavaScript?

Topics
JavaScript, OOP

TL;DR

Static class members (properties/methods) are defined with the static keyword prepended. Such members cannot be directly accessed on instances of the class; instead, they're accessed on the class itself.

class Car {
  static noOfWheels = 4;
  static compare() {
    return 'Static method has been called.';
  }
}

console.log(Car.noOfWheels); // 4

Static members are useful under the following scenarios:

  • Namespace organization: Static properties can be used to define constants or configuration values that are specific to a class. This helps organize related data within the class namespace and prevents naming conflicts with other variables. Examples include Math.PI, Math.SQRT2.
  • Helper functions: Static methods can be used as helper functions that operate on the class itself or its instances. This can improve code readability and maintainability by separating utility logic from the core functionality of the class. Examples of frequently used static methods include Object.assign(), Math.max().
  • Singleton pattern: In some rare cases, static properties and methods can be used to implement a singleton pattern, where only one instance of a class ever exists. However, this pattern can be tricky to manage and is generally discouraged in favor of more modern dependency injection techniques.

Static class members

Static class members (properties/methods) are not tied to a specific instance of a class and have the same value regardless of which instance refers to them. Static properties are typically configuration variables, and static methods are usually pure utility functions that do not depend on instance state. Such members are defined with the static keyword prepended.

class Car {
  static noOfWheels = 4;
  static compare() {
    return 'static method has been called.';
  }
}

console.log(Car.noOfWheels); // Output: 4
console.log(Car.compare()); // Output: static method has been called.

Static members are not accessible on instances of the class.

class Car {
  static noOfWheels = 4;
  static compare() {
    return 'static method has been called.';
  }
}

const car = new Car();
console.log(car.noOfWheels); // Output: undefined
console.log(car.compare()); // Error: TypeError: car.compare is not a function

The built-in Math object is a good example of this pattern. Math is not technically a class (it cannot be instantiated), but all of its properties and methods behave like static members: a collection of mathematical constants and functions accessed directly on Math itself. Here's an example:

console.log(Math.PI); // Output: 3.141592653589793
console.log(Math.abs(-5)); // Output: 5
console.log(Math.max(1, 2, 3)); // Output: 3

In this example, Math.PI, Math.abs(), and Math.max() are all static-style members of the Math object. They can be accessed directly on Math without creating an instance.

Reasons to use static class members

Utility functions

Static class members are useful for defining utility functions that don't require any instance-specific data or behavior (i.e. they don't use this). For example, you might have an Arithmetic class with static methods for common mathematical operations.

class Arithmetic {
  static add(a, b) {
    return a + b;
  }
  static subtract(a, b) {
    return a - b;
  }
}

console.log(Arithmetic.add(2, 3)); // Output: 5
console.log(Arithmetic.subtract(5, 2)); // Output: 3

Singletons

Static class members can be used to implement the Singleton pattern, where you want to ensure that only one instance of a class exists throughout your application.

class Singleton {
  static instance;

  static getInstance() {
    if (!this.instance) {
      this.instance = new Singleton();
    }
    return this.instance;
  }
}

const singleton1 = Singleton.getInstance();
const singleton2 = Singleton.getInstance();
console.log(singleton1 === singleton2); // Output: true

Configurations

Static class members can be used to store configuration or settings that are shared across all instances of a class. This can be useful for things like API keys, feature flags, or other global settings.

class Config {
  static API_KEY = 'your-api-key';
  static FEATURE_FLAG = true;
}

console.log(Config.API_KEY); // Output: 'your-api-key'
console.log(Config.FEATURE_FLAG); // Output: true

Performance

In some cases, using static class members can improve performance by reducing the amount of memory used by your application. This is because static class members are shared across all instances of a class, rather than being duplicated for each instance.


What are `Symbol`s used for in JavaScript?

Topics
JavaScript

TL;DR

Symbols in JavaScript are a primitive data type introduced in ES6 (ECMAScript 2015). They are unique and immutable identifiers used primarily as object property keys to avoid name collisions. Symbol values are created with the Symbol(...) function, and each one is guaranteed to be unique, even if two symbols share the same description. Symbol-keyed properties are not enumerated by for...in loops or Object.keys(), making them suitable for creating private/internal object state.

let sym1 = Symbol();
let sym2 = Symbol('myKey');
console.log(typeof sym1); // "symbol"
console.log(sym1 === sym2); // false, because each symbol is unique
let obj = {};
let sym = Symbol('uniqueKey');
obj[sym] = 'value';
console.log(obj[sym]); // "value"

Note: The Symbol() function must be called without the new keyword; calling new Symbol() throws a TypeError because Symbol is not meant to be used as a constructor.


Symbols in JavaScript

Symbols in JavaScript are a unique and immutable data type used primarily for object property keys to avoid name collisions.

Key characteristics

  • Uniqueness: Each Symbol value is unique, even if they have the same description.
  • Immutability: Symbol values are immutable, meaning their value cannot be changed.
  • Non-enumerable: Symbol properties are not included in for...in loops or Object.keys().

Creating Symbols

Symbols can be created using the Symbol() function:

const sym1 = Symbol();
const sym2 = Symbol('uniqueKey');
console.log(typeof sym1); // "symbol"
console.log(sym1 === sym2); // false, because each symbol is unique

The Symbol() function must be called without the new keyword.

Using Symbols as object property keys

Symbols can be used to add properties to an object without risk of name collision:

const obj = {};
const sym = Symbol('uniqueKey');
obj[sym] = 'value';
console.log(obj[sym]); // "value"

Symbols are not enumerable

  • Symbol properties are not included in for...in loops or Object.keys().
  • This makes them suitable for creating private/internal object state.
  • Use Object.getOwnPropertySymbols(obj) to get all symbol properties on an object.

const mySymbol = Symbol('privateProperty');
const obj = {
  name: 'John',
  [mySymbol]: 42,
};

console.log(Object.keys(obj)); // Output: ['name']
console.log(obj[mySymbol]); // Output: 42
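
To retrieve the symbol-keyed properties mentioned above:

console.log(Object.getOwnPropertySymbols(obj)); // Output: [ Symbol(privateProperty) ]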

Global Symbol registry

You can create global Symbols using Symbol.for('key'), which creates a new Symbol in the global registry if it doesn't exist, or returns the existing one. This allows you to reuse Symbols across different parts of your code base or even across different code bases.

const globalSym1 = Symbol.for('globalKey');
const globalSym2 = Symbol.for('globalKey');
console.log(globalSym1 === globalSym2); // true
const key = Symbol.keyFor(globalSym1);
console.log(key); // "globalKey"

Well-known Symbols

JavaScript includes several built-in Symbols, referred to as well-known Symbols.

  • Symbol.iterator: Defines the default iterator for an object.
  • Symbol.toStringTag: Used by Object.prototype.toString() to produce the default string description for an object.
  • Symbol.hasInstance: Used to determine if an object is an instance of a constructor.

Symbol.iterator

let iterable = {
  [Symbol.iterator]() {
    let step = 0;
    return {
      next() {
        step++;
        if (step <= 5) {
          return { value: step, done: false };
        }
        return { done: true };
      },
    };
  },
};

for (let value of iterable) {
  console.log(value); // 1, 2, 3, 4, 5
}

Symbol.toStringTag

let myObj = {
  [Symbol.toStringTag]: 'MyCustomObject',
};

console.log(Object.prototype.toString.call(myObj)); // "[object MyCustomObject]"
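
Symbol.hasInstance

The Symbol.hasInstance well-known Symbol mentioned above can be illustrated with a short sketch (the Even class name here is made up for illustration); it lets a class customize how instanceof behaves:

class Even {
  static [Symbol.hasInstance](value) {
    return typeof value === 'number' && value % 2 === 0;
  }
}

console.log(4 instanceof Even); // true
console.log(3 instanceof Even); // false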

Summary

Symbols are a powerful feature in JavaScript, especially useful for creating unique object properties and customizing object behavior. They provide a means to create hidden properties, preventing accidental access or modification, which is particularly beneficial in large-scale applications and libraries.


What are server-sent events?

Topics
JavaScript, Networking

TL;DR

Server-sent events (SSE) is a standard that allows a web page to receive automatic updates from a server over an HTTP connection. Server-sent events are consumed via EventSource instances, which open a connection to the server and allow the client to receive events from it. Connections created by server-sent events are persistent (similar to WebSockets); however, there are a few differences:

Property  | WebSocket                                                      | EventSource
Direction | Bi-directional – both client and server can exchange messages | Unidirectional – only the server sends data
Data type | Binary and text data                                           | Text only
Protocol  | WebSocket protocol (ws://)                                     | Regular HTTP (http://)

Creating an event source

const eventSource = new EventSource('/sse-stream');

Listening for events

// Fired when the connection is established.
eventSource.addEventListener('open', () => {
  console.log('Connection opened');
});

// Fired when a message is received from the server.
eventSource.addEventListener('message', (event) => {
  console.log('Received message:', event.data);
});

// Fired when an error occurs.
eventSource.addEventListener('error', (error) => {
  console.error('Error occurred:', error);
});

Sending events from server

const express = require('express');
const app = express();

app.get('/sse-stream', (req, res) => {
  // `Content-Type` needs to be set to `text/event-stream`.
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  // Each message should be prefixed with `data: ` and terminated with a blank line.
  const sendEvent = (data) => res.write(`data: ${data}\n\n`);

  sendEvent('Hello from server');

  const intervalId = setInterval(() => sendEvent(new Date().toString()), 1000);

  res.on('close', () => {
    console.log('Client closed connection');
    clearInterval(intervalId);
  });
});

app.listen(3000, () => console.log('Server started on port 3000'));

In this example, the server sends a "Hello from server" message initially, and then sends the current date every second. The connection is kept alive until the client closes it.


Server-sent events (SSE)

Server-sent events (SSE) is a standard that allows a server to push updates to a web client over a single, long-lived HTTP connection. It enables real-time updates without the client having to constantly poll the server for new data.

How SSE works

  1. The client creates a new EventSource object, passing the URL of the server-side script that will generate the event stream:

    const eventSource = new EventSource('/event-stream');
  2. The server-side script sets the appropriate headers to indicate that it will be sending an event stream (Content-Type: text/event-stream), and then starts sending events to the client.

  3. Each event sent by the server follows a specific format, with fields like event, data, and id. For example:

    event: message
    data: Hello, world!

    event: update
    id: 123
    data: {"temperature": 25, "humidity": 60}
  4. On the client-side, the EventSource object receives these events and dispatches them as browser events, which can be handled using event listeners or the onmessage event handler:

    eventSource.onmessage = function (event) {
      console.log('Received message:', event.data);
    };

    eventSource.addEventListener('update', function (event) {
      console.log('Received update:', JSON.parse(event.data));
    });
  5. The EventSource object automatically handles reconnection if the connection is lost, and it can resume the event stream from the last received event ID using the Last-Event-ID HTTP header.

SSE features

  • Unidirectional: Only the server can send data to the client. For bidirectional communication, web sockets would be more appropriate.
  • Retry mechanism: The client will retry the connection if it fails, with the retry interval specified by the retry: field sent by the server (see the snippet after this list).
  • Text-only data: SSE can only transmit text data, which means binary data needs to be encoded (e.g., Base64) before transmission. This can lead to increased overhead and inefficiency for applications that need to transmit large binary payloads.
  • Built-in browser support: Supported by most modern browsers without additional libraries.
  • Event types: SSE supports custom event types using the event: field, allowing categorization of messages.
  • Last-Event-Id: The client sends the Last-Event-Id header when reconnecting, allowing the server to resume the stream from the last received event. However, there is no built-in mechanism to replay missed events during the disconnection period. You may need to implement a mechanism to handle missed events, such as using the Last-Event-Id header.
  • Connection limitations: Browsers have a limit on the maximum number of concurrent SSE connections, typically around 6 per domain. This can be a bottleneck if you need to establish multiple SSE connections from the same client. Using HTTP/2 will mitigate this issue.
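
As a minimal sketch of how the retry interval, custom event types, and event IDs look on the wire (assuming a Node.js response object named res, as in the other examples in this answer):

res.write('retry: 10000\n\n'); // Ask the client to wait 10 seconds before reconnecting
res.write('event: update\n'); // Custom event type, handled via eventSource.addEventListener('update', ...)
res.write('id: 42\n'); // Event ID, echoed back by a reconnecting client in the Last-Event-ID header
res.write('data: {"temperature": 25}\n\n'); // Payload; the blank line terminates the event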

Implementing SSE in JavaScript

The following code demonstrates a minimal implementation of SSE on the client and the server:

  • The server sets the appropriate headers to establish an SSE connection.
  • Messages are sent to the client every 5 seconds.
  • The server cleans up the interval and ends the response when the client disconnects.

On the client:

// Create a new EventSource object
const eventSource = new EventSource('/sse');
// Event listener for receiving messages
eventSource.onmessage = function (event) {
console.log('New message:', event.data);
};
// Event listener for errors
eventSource.onerror = function (error) {
console.error('Error occurred:', error);
};
// Optional: Event listener for open connection
eventSource.onopen = function () {
console.log('Connection opened');
};

On the server:

const http = require('http');

http
  .createServer((req, res) => {
    if (req.url === '/sse') {
      // Set headers for SSE
      res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        Connection: 'keep-alive',
      });

      // Function to send a message
      const sendMessage = (message) => {
        res.write(`data: ${message}\n\n`); // Messages are delimited with double line breaks.
      };

      // Send a message every 5 seconds
      const intervalId = setInterval(() => {
        sendMessage(`Current time: ${new Date().toLocaleTimeString()}`);
      }, 5000);

      // Handle client disconnect
      req.on('close', () => {
        clearInterval(intervalId);
        res.end();
      });
    } else {
      res.writeHead(404);
      res.end();
    }
  })
  .listen(8080, () => {
    console.log('SSE server running on port 8080');
  });

Summary

Server-sent events provide an efficient and straightforward way to push updates from a server to a client in real-time. They are particularly well-suited for applications that require continuous data streams but do not need full bidirectional communication. With built-in support in modern browsers, SSE is a reliable choice for many real-time web applications.


Explain the concept of "hoisting" in JavaScript