Florian Lackner

Why do we need async/await in JavaScript?


Async/await is a major source of confusion in JavaScript. People tend just to add and remove the async and await keywords until the code works (or seems to work). In this article, I want to explain what async/await actually means by developing the concept from vanilla, synchronous JavaScript to asynchronous JavaScript with the async/await syntax sugar. It is my personal take on the subject. I hope some will find it interesting.

Disclaimer: I use Node.js as an example of a JavaScript runtime in this article. However, what's discussed also applies to other runtimes like web browser JS engines. All these environments follow similar architectures.

What problem does it solve?

Suppose you have a function that does some I/O.

```javascript
function handleRequest() {
  try {
    const data = doDatabaseQuery();
    const fileName = doElasticsearchQuery(data);
    const result = readMyFile(fileName);
    return result;
  } catch (err) {
    return handleError(err);
  }
}
```

Here we assume that the query functions and the readMyFile function will block the JavaScript code while they perform their I/O. Eventually, the I/O completes, and we can continue where we left off. These functions signal errors by throwing, which we handle with a try/catch block.

Doing database queries or other I/O like network requests or filesystem operations takes a very long time™ compared to running your JavaScript code. This is fine in this example if we look at a single request/response. But web servers usually handle thousands of requests, and then this model breaks. To understand why, we have to look at how Node.js runs your JS code.

Node.js has a single-threaded event loop

Node.js runs all your JS code in a single thread. The exact reasoning behind this design choice is fascinating but out of scope for this article. In my opinion, the two most compelling arguments in favor of Node's architecture are:

  1. The other option would be to use concurrent threads. This comes with many complexity hazards like synchronization, deadlocks, etc. In this sense, Node's model is simpler.
  2. It turns out that I/O-heavy applications like web servers tend to wait a lot. And by that, I mean a lot. They do basically nothing else. The heavy lifting usually happens outside the web server: in a database, in Elasticsearch, you name it. The web server just requests the data, waits for it, and glues it together to send it back to the client. So having several threads will not make your code more performant, because its performance is not CPU-bound.

However, while JS code runs sequentially, we can still work on many requests simultaneously. For example, you can start your washing machine, dishwasher, and microwave one after another and then wait for them to finish their respective tasks. You can do the same with requests to your database, Elasticsearch, etc.

The big question is: How can we wait without blocking?


So code running in Node is not supposed to stop executing while it waits for something external to finish. But we often need to do something after, say, a network request has completed. The way this is handled is straightforward: we give the runtime a function to call, with the result, once it has performed our I/O operation. This function is called a callback function. Here is how the example looks using callbacks.

```javascript
function handleRequest() {
  doDatabaseQuery(function (err, data) {
    // first callback
    if (err) {
      handleError(err);
      return;
    }
    doElasticsearchQuery(data, function (err, fileName) {
      // second callback
      if (err) {
        handleError(err);
        return;
      }
      readMyFile(fileName, function (err, result) {
        // third callback
        if (err) {
          handleError(err);
          return;
        }
        sendToClient(result); // strawman function to send a response to the client
      });
    });
  });
}
```

The I/O functions take a second argument this time: the callback. Note that we couldn't just return our result in this case. When handleRequest runs, it will call doDatabaseQuery and then return immediately. The Node.js runtime will later call the callback, which will call doElasticsearchQuery. That callback also returns quickly. It only calls readMyFile with yet another callback, which finally calls sendToClient to finish our request once the file has been read. We returned result in the original example. This is impossible with callbacks, so we're using the strawman function sendToClient to send the response back to the client.
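The "returns immediately, callback runs later" behavior can be observed directly. In this sketch, setTimeout stands in for a real I/O operation, and doDatabaseQuery is a made-up stand-in, not an actual driver call:

```javascript
// Simulate an async I/O function: setTimeout stands in for real I/O.
function doDatabaseQuery(callback) {
  setTimeout(() => callback(null, { rows: [1, 2, 3] }), 10);
}

const order = [];

function handleRequest() {
  doDatabaseQuery((err, data) => {
    order.push("callback ran");
  });
  order.push("handleRequest returned");
}

handleRequest();
// "handleRequest returned" is recorded first; the callback only runs
// later, once the (simulated) I/O completes and the event loop is free.
```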

Error handling is explicit in this model. By convention, the first argument of the callback function is an error object. If it is set (not null or undefined), something went wrong, and we can handle it in the callback. The second argument is, by convention, the result of the I/O operation. There are other styles, but they all boil down to the same idea: the error is communicated to the callback via some error argument.
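Writing a function that follows this error-first convention yourself is straightforward. Here is a sketch; parseJsonAsync is a made-up helper for illustration:

```javascript
// Sketch of the error-first ("Node-style") callback convention.
function parseJsonAsync(text, callback) {
  // defer the work so the function returns immediately, like real I/O
  setImmediate(() => {
    try {
      callback(null, JSON.parse(text)); // success: err is null
    } catch (err) {
      callback(err); // failure: err comes first, no result
    }
  });
}

parseJsonAsync('{"ok": true}', (err, value) => {
  if (err) {
    console.error("parse failed:", err.message);
    return;
  }
  console.log(value.ok); // true
});
```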

This is the way Node.js handles I/O. Many will rightfully argue that the code looks messier than initially. The code tends to get more and more unreadable if the nesting of the callbacks becomes too deep. And error handling makes things even worse. This problem is referred to as callback hell. Several abstractions over callbacks have been developed to deal with this. The most widely used one is called Promise.


The idea of an asynchronous function not returning its result contributes greatly to the unreadability of callback-spaghetti-code. Promises fix this by letting an asynchronous function return something again. But because asynchronous functions still can't block, it can't be the actual value. Instead, they return a Promise, which can be interpreted as: "I promise that the value will eventually be here." The jargon is: "The promise resolves to the value". So this time, the I/O functions return promises that resolve to their respective values.

Furthermore, promises simplify error handling by splitting the error handling logic from the business logic. Instead of getting the error in the callback, you can register a dedicated error callback, which gets called instead of the normal callback if there is an error.

```javascript
function handleRequest() {
  const dataPromise = doDatabaseQuery();
  dataPromise.catch(err => handleError(err));
  const fileNamePromise = dataPromise.then(function (data) {
    // first callback
    const fileNamePromise = doElasticsearchQuery(data);
    return fileNamePromise;
  });
  fileNamePromise.catch(err => handleError(err));
  const resultPromise = fileNamePromise.then(function (fileName) {
    // second callback
    const resultPromise = readMyFile(fileName);
    return resultPromise;
  });
  resultPromise.catch(err => handleError(err));
  return resultPromise;
}
```

The cool thing about promises is that they are designed to be chained. You can add callbacks to a promise chain with then(callback). The result is again a promise, which resolves to the return value of the callback given to then(). If the callback returns another Promise, the promise returned by then will not resolve to this other Promise, but to the value it resolves to. This enables chaining, and chaining avoids the deep nesting of pure callback approaches. Also, we can return something from handleRequest again. This time it's a Promise that resolves to result.
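This "flattening" behavior is easy to see in isolation with a tiny, self-contained chain:

```javascript
// then() returns a new promise that resolves to the callback's
// return value. If the callback returns a promise, the chain
// "flattens" it and resolves to its inner value instead.
Promise.resolve(1)
  .then((x) => x + 1)                   // resolves to 2
  .then((x) => Promise.resolve(x * 10)) // flattened: resolves to 20, not a Promise
  .then((x) => {
    console.log(x); // 20
  });
```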

Chaining also works for errors! You can add an error callback with catch(callback). If an operation fails, the closest error callback will be called. This helps a lot in our example because all our error handling code is the same.
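Concretely, a rejection skips over the remaining then() callbacks until it hits a catch(), after which the chain continues normally. A minimal sketch:

```javascript
// A rejected promise skips subsequent then() callbacks until it
// reaches the closest catch(); after that, the chain recovers.
Promise.reject(new Error("boom"))
  .then(() => "never runs")     // skipped: the promise is rejected
  .catch((err) => err.message)  // handles the error, returns a value
  .then((msg) => {
    console.log(msg); // "boom"
  });
```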

We can write our example more elegantly. First, we can eliminate the repetitive error handling and add the error handler only once. Second, we can use our I/O functions and the error handler directly as callbacks. This works because, e.g., the first callback in the code snippet takes data and returns a Promise resolving to fileName. This is exactly what doElasticsearchQuery does, which is why we can use it directly as an argument to then. We can do the same with readMyFile and handleError.

```javascript
function handleRequest() {
  return doDatabaseQuery()
    .then(doElasticsearchQuery)
    .then(readMyFile)
    .catch(handleError);
}
```

Promises have already helped tremendously in cutting down the complexity of callbacks. However, the resulting code is still a bit weird and not as clear as the synchronous equivalent. This is why async/await (and try/catch in async contexts) has been added to JavaScript!


async/await is so-called syntax sugar for promises. This means the JavaScript engine transforms it into promise-based code before executing it. And remember, promises are callbacks under the hood. Nothing magical about these things. So here is the example written with async/await.

```javascript
async function handleRequest() {
  try {
    const data = await doDatabaseQuery();
    const fileName = await doElasticsearchQuery(data);
    const result = await readMyFile(fileName);
    return result;
  } catch (err) {
    return handleError(err);
  }
}
```

As you can see, it looks very similar to the initial synchronous example. But actually, it is equivalent to the Promise version. This means you could replace the Promise version with this code and get the same behavior. Let's look at the differences from the initial synchronous example:

  • handleRequest became an async function
  • Use of the await keyword in front of the I/O functions. Even though they return promises, await turns those into the values they resolve to.
  • We return result directly. However, handleRequest will still return a Promise.

async will turn a function into an async function. This has two effects:

  1. You can use await inside of async functions. Outside of them, you can't.
  2. The return value of an async function is a Promise resolving to the value returned by the async function. Even if you don't explicitly return, your async function will return a Promise that resolves to undefined.
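Both effects can be seen in a tiny, self-contained sketch:

```javascript
// An async function always returns a Promise, even with no return.
async function greet() {
  return "hi";
}

async function noop() {
  // no explicit return
}

console.log(greet() instanceof Promise); // true
greet().then((v) => console.log(v));     // "hi"
noop().then((v) => console.log(v));      // undefined
```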

await can only be used inside async functions (and at the top level of JavaScript modules). It takes a Promise and turns it into the value it resolves to. It "waits" for the value to be ready.

Furthermore, if there was an error and the Promise had no error callback attached, await will throw the error object that would have been passed as an argument to the error callback. This makes error handling with try/catch easy and convenient.
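Here is a small sketch of that behavior; mightFail and run are made-up names for illustration:

```javascript
// A rejected promise makes await throw, so try/catch works again.
async function mightFail(shouldFail) {
  if (shouldFail) {
    return Promise.reject(new Error("I/O failed"));
  }
  return "ok";
}

async function run() {
  try {
    const value = await mightFail(true); // throws here
    return value;
  } catch (err) {
    return "recovered: " + err.message;
  }
}

run().then((result) => console.log(result)); // "recovered: I/O failed"
```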


I hope this showed how async/await fits into the I/O story for Node.js and JavaScript in general. It's just syntactic sugar for promises, which are themselves an abstraction over callbacks. This article just touches on the basics, though. I deliberately omitted important pieces of the puzzle, like error handling with promises, to keep it simple.

As a bonus, I will give some async functions and their identical twins implemented with Promises to show how the code is transformed.

```javascript
async function f(x) {
  return x;
}

function f(x) {
  // Promise.resolve returns a promise that calls its callback immediately
  // with the value x, i.e. a promise that is immediately "ready"
  return Promise.resolve(x);
}

async function nothing() {
  // empty body
}

function nothing() {
  return Promise.resolve(undefined);
}

async function myFetch(x) {
  try {
    const data = await fetch(x);
    return data.toString();
  } catch (err) {
    return "fetch error";
  }
}

function myFetch(x) {
  return fetch(x)
    .then(data => data.toString())
    .catch(err => "fetch error");
}

async function parallelFetch(x, y) {
  const promises = [
    fetch(x),
    fetch(y),
  ];
  // both fetches are being performed simultaneously in the background
  // (outside of the event loop, parallel to this code)

  // new promise that resolves when all given promises have resolved
  const bundlePromise = Promise.all(promises);

  // this will wait until both fetches are done
  const [resultX, resultY] = await bundlePromise;
  return { resultX, resultY };
}

function parallelFetch(x, y) {
  const promises = [
    fetch(x),
    fetch(y),
  ];
  const bundlePromise = Promise.all(promises);
  return bundlePromise.then(([resultX, resultY]) => {
    return { resultX, resultY };
  });
}
```