
Node.js is not “not suitable for big computing projects.”

In this article, we look at three options that help make use of all the processor cores without blocking the main thread and event loop of a Node.js application.

Node.js has improved a lot since its early days and has matured enough to handle I/O-, data-, and CPU-intensive workloads.

As I was researching the kinds of applications that can be written in Node.js, almost every article I came across claimed that “Node.js is not suitable for big computing projects” or for any CPU-intensive workload. They argued that because of the single-threaded, asynchronous nature of Node.js, it excels at scaling I/O operations. But its strength is also its biggest weakness: since Node.js runs an event loop on a single thread, any request that takes a long time to compute blocks the entire process and bottlenecks the application.

I found no article that tried to prove otherwise. As I write this in 2022, most of the Google search results are 8+ years old, and newer articles only copy the same statement from the older ones. I am not saying that Node.js excels at CPU-intensive workloads, but it can now spread the workload. In the past eight years, Node.js has improved a lot.

Node.js has introduced new APIs that help manage CPU-intensive workloads and scale your application from a single thread to as many threads as your processor allows.

Let me introduce you to a few APIs that will help your Node.js application become anything but “not suitable for big computing projects.”

  1. The Cluster API
  2. The Worker thread API
  3. The Child Process API 

A word of caution: if you are evaluating technologies for a new project, note that there is probably a better choice for CPU-intensive workloads. But you might consider using these APIs to scale your application, even with CPU-intensive workloads, in the following situations:

  • Your team only speaks JavaScript, and you want to start a new project.
  • You want to reuse your existing code, infrastructure, and human resources.
  • You have your application already written in Node.js, and you have bottlenecking functions.

The Cluster API

“Clusters of Node.js processes can be used to run multiple instances of Node.js that can distribute workloads among their application threads.”

From <https://nodejs.org/dist/latest-v16.x/docs/api/cluster.html>

The closest analogy I can give you is with Micro-Service architectures.

Note: If you are not familiar with Micro-Service concepts, continue reading; it should still make sense. If not, contact me with your question, and I will update the article.

A Micro-Service architecture with users, a load balancer, and multiple instances of a Micro Service is very close to what a cluster does, at the process level, inside a single Node.js application.

The Cluster API has the concept of a main process and child processes. The main process (running in a single thread) is equivalent to a load balancer, and each child process is equivalent to a Micro Service itself.

Note: The official documentation also refers to child processes as worker processes, but these should not be confused with the Worker thread API’s worker threads. To avoid confusion, in this article I will use child process for the Cluster API and worker for the Worker thread API.

The main process as a load balancer

“In computing, load balancing refers to the process of distributing a set of tasks over a set of resources (computing units), with the aim of making their overall processing more efficient. Load balancing can optimize the response time and avoid unevenly overloading some compute nodes while other compute nodes are left idle.”

From Wikipedia <https://en.wikipedia.org/wiki/Load_balancing_(computing)>

If you think about your main process as a load balancer, you are close to what the main thread is doing. It accepts the incoming request and delegates the work to a child process.

Delegating work to child processes makes the main process free to accept more incoming requests. After your application starts using the Cluster API, it can listen to thousands of incoming requests since it does not have to deal with heavy computing tasks.

It is good to know that the main process can use two different scheduling policies to pass work to child processes (round-robin, or leaving the distribution to the operating system), which lets you fine-tune your load-balancing needs.

The main process is also responsible for spawning and managing new child processes. You can start multiple child processes based on how many cores (or hardware threads) your CPU has.
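As a minimal sketch of that setup (assuming an HTTP workload; the port number and response text here are made up for illustration), the main process could fork one child per core like this:

```javascript
const cluster = require('node:cluster');
const http = require('node:http');
const os = require('node:os');

if (cluster.isPrimary) {
  // Optional: choose a scheduling policy before forking.
  // Round-robin is the default on every platform except Windows.
  cluster.schedulingPolicy = cluster.SCHED_RR;

  // Fork one child process per available CPU core.
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
} else {
  // Each child process runs its own copy of the server on the same port;
  // the main process distributes incoming connections between them.
  http
    .createServer((req, res) => {
      res.end(`Handled by child process ${process.pid}\n`);
    })
    .listen(3000);
}
```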

Child processes as Micro Services

“Microservices architecture (often shortened to microservices) refers to an architectural style for developing applications. Microservices allow a large application to be separated into smaller independent parts, with each part having its own realm of responsibility.”

From Google Cloud <https://cloud.google.com/learn/what-is-microservices-architecture#:~:text=Microservices%20architecture%20(often%20shortened%20to,its%20own%20realm%20of%20responsibility.>

A child process is like a single Micro Service. It has logic to do certain work, and it is only responsible for doing that work. The main process spawns (the documentation calls it forking) new copies of these children, each running as a separate OS process that the operating system can schedule on another CPU core, and each with its own isolated memory. The benefit of running a child in a separate process is that if an exception occurs, it only kills that child process, and the main process is free to create a new child in its place. The drawback of each process having its own memory is that you cannot easily share data between child instances. That is good for safety, but it makes algorithms that depend on a session or in-memory storage complicated.
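To illustrate that resilience, here is a minimal sketch (continuing the cluster setup above) of the main process replacing a child that has died:

```javascript
const cluster = require('node:cluster');

if (cluster.isPrimary) {
  // An uncaught exception only kills the child process that threw it.
  // The main process stays alive, is notified via the 'exit' event,
  // and can fork a replacement child in its place.
  cluster.on('exit', (worker, code, signal) => {
    console.log(`Child ${worker.process.pid} died (code ${code}, signal ${signal}); forking a replacement`);
    cluster.fork();
  });
}
```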

The Worker thread API

“The worker_threads module enables the use of threads that execute JavaScript in parallel.”

From <https://nodejs.org/dist/latest-v16.x/docs/api/worker_threads.html>

“Workers (threads) are useful for performing CPU-intensive JavaScript operations.”

From <https://nodejs.org/dist/latest-v16.x/docs/api/worker_threads.html>

“Unlike child_process or cluster, worker_threads can share memory. They do so by transferring ArrayBuffer instances or sharing SharedArrayBuffer instances.”

From <https://nodejs.org/dist/latest-v16.x/docs/api/worker_threads.html>

Based on the documentation, workers seem to have a similar purpose to clusters, but they are quite different. When you have multiple workers, each can be given a different task in its own thread, unlike in a cluster, where every child process is basically a copy of the others, doing exactly the same work. While clusters are good for scaling your application to utilize multiple CPU threads within a single Node.js instance, workers are good for delegating different blocking, CPU-intensive jobs, and Node.js handles the details of spreading the load across the CPU cores.

Workers communicate with the main thread in an event-based style, where the main thread subscribes to the worker’s message events. A worker’s unexpected failure will not terminate the main thread on its own; instead it raises an error event that the main thread can subscribe to. A worker can also create other workers if needed.

The documentation recommends creating a pool of workers, since creating a worker is an expensive operation. With a pool of workers available, your application can instantly pass jobs to them, freeing up the main thread to do other work while it listens for messages from the workers. That also means your application’s UI can remain responsive.
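Here is a minimal single-file sketch of delegating a CPU-heavy job to a worker while the main thread stays free to do other work; the prime-counting job and its limit are made up for illustration:

```javascript
const { Worker, isMainThread, parentPort, workerData } = require('node:worker_threads');

if (isMainThread) {
  // Main thread: spawn a worker and delegate the blocking job to it.
  const worker = new Worker(__filename, { workerData: { limit: 5_000_000 } });

  worker.on('message', (count) => console.log(`Primes found: ${count}`));
  worker.on('error', (err) => console.error('Worker failed:', err));
  worker.on('exit', (code) => console.log(`Worker exited with code ${code}`));

  console.log('Main thread is free to keep handling requests...');
} else {
  // Worker thread: a deliberately CPU-heavy loop that would otherwise block the event loop.
  let count = 0;
  for (let n = 2; n <= workerData.limit; n++) {
    let isPrime = true;
    for (let d = 2; d * d <= n; d++) {
      if (n % d === 0) { isPrime = false; break; }
    }
    if (isPrime) count++;
  }
  parentPort.postMessage(count);
}
```

In a real application, you would typically keep a pool of such workers alive and reuse them, since spawning a worker is expensive.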

If you want to imagine what a Node.js worker could be like in a real-life situation, picture the following scenario. You are the manager of a large shop, and you have the following problems to solve:

  • Getting goods to sell.
  • Helping incoming customers.
  • Keeping count of available goods.
  • Restocking goods when needed.
  • Creating receipts for customers.
  • Creating invoices for companies.
  • Accounting and reporting.
  • Cleaning the shop.
  • Running marketing campaigns.
  • And many other things.

Players:

  • The shop as the Node.js application.
  • The manager as the code running on the main thread.
  • Employees as workers (worker threads) that can help free up the manager’s time, thus unblocking the main thread.

The manager of the shop is ultimately responsible for getting all of those jobs done, and does so by delegating. In Node.js terms, the manager is the code running on the main thread.

In the comparison below, the real-life situation/players are on the left and the equivalent application parts are on the right.

Real-life → Node.js

  • Shop → Application
  • Manager → Main thread (limited by the single-threaded performance of the CPU core, unless clustered)
  • Employee → Worker (count limited by CPU cores)
  • Customer → User (via the user interface)
  • Product → Feature to be used
  • Customers’ line → User request queue
  • Time → Process time
  • Handling a user’s problem → Jobs/workers running
  • Welcome another customer → Take another request
  • Space → Memory

Roleplay situation:

Real-life: If the manager did everything alone, it would be a bad customer experience. Let’s say the customers have infinite patience, and the product the manager is selling is good. The customers’ line will get really long, since it will take forever to serve one customer at a time. By delegating all the work to employees, the manager frees up time to deal only with incoming customers and delegates all customer problems to employees. While a customer is waiting to get help from an employee, the manager can welcome another customer. If the shop has enough space and enough employees, the only limitation left is the performance of the manager.

Node.js: If the main thread did everything alone, it would be a bad user experience. Let’s say the users have infinite patience, and the features the application is serving are good. The user request queue will get really long, since it will take forever to serve one request at a time. By delegating all the work to workers, the main thread frees up its processing time to deal only with incoming user requests and delegates all jobs to workers. While a user is waiting to get served by a worker (technically, the main thread forwards the worker’s response to the user), the main thread can receive another request. If the application has enough memory and enough CPU cores, the only limitation left is the single-threaded performance of the CPU core.

When clusters and workers are not enough anymore, it means the computer running the application has reached its memory or processor limits. When that happens, there is still hope.

Real-life: When there are more customers than a single shop’s resources can handle, that is when the big shop can scale into franchises and open chain stores.

Node.js: When more user requests come in than the application can serve, that is when the application can be scaled with a Micro-Service architecture (utilizing multiple computers), with requests delegated by load balancers.

The Child Process API

“The child_process module provides the ability to spawn subprocesses in a manner that is similar, but not identical, to popen(3).”

From <https://nodejs.org/dist/latest-v16.x/docs/api/child_process.html>

“The child_process.spawn() method spawns the child process asynchronously, without blocking the Node.js event loop.”

From <https://nodejs.org/dist/latest-v16.x/docs/api/child_process.html>

I left this section as bonus material because, while child processes can practically help with heavy computing problems, they come with some caveats. On the benefits side, you can call any script that can be run as a command in a terminal. This means the script you are calling is not limited to Node.js; it could be written in almost any language: a shell command, Python, C++, C#, etc.
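As a minimal sketch, assuming a hypothetical CPU-heavy Python script named heavy_calc.py (the script name and its arguments are made up for illustration):

```javascript
const { spawn } = require('node:child_process');

// spawn() starts the external program asynchronously, so the Node.js
// event loop is not blocked while the script crunches numbers.
const child = spawn('python3', ['heavy_calc.py', '--size', '1000000']);

// Consume the output as a stream instead of letting it pile up in the pipe buffer.
child.stdout.on('data', (chunk) => process.stdout.write(chunk));
child.stderr.on('data', (chunk) => process.stderr.write(chunk));

child.on('close', (code) => {
  console.log(`heavy_calc.py finished with exit code ${code}`);
});
```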

On the drawback side, you should consider the following:

  • The child process has limited pipe buffer capacity:

“By default, pipes for stdin, stdout, and stderr are established between the parent Node.js process and the spawned subprocess. These pipes have limited (and platform-specific) capacity. If the subprocess writes to stdout in excess of that limit without the output being captured, the subprocess blocks waiting for the pipe buffer to accept more data. This is identical to the behavior of pipes in the shell.”

From <https://nodejs.org/dist/latest-v16.x/docs/api/child_process.html>
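Two common ways to deal with this are to consume stdout/stderr as streams (as in the spawn() sketch above) or, when using exec(), to raise the maxBuffer option if you expect a lot of output. A minimal sketch, assuming a hypothetical generate-report command:

```javascript
const { exec } = require('node:child_process');

// exec() buffers the whole output in memory and fails if it exceeds maxBuffer,
// so raise the limit when you know the command is chatty (at the cost of memory).
exec('generate-report --format json', { maxBuffer: 16 * 1024 * 1024 }, (error, stdout, stderr) => {
  if (error) {
    console.error('Command failed:', error);
    return;
  }
  console.log(`Received ${stdout.length} characters of output`);
});
```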

  • On Windows, each child process can open a separate terminal window, cluttering the user’s screen if run in a desktop environment:

“On Unix-type operating systems (Unix, Linux, macOS) child_process.execFile() can be more efficient because it does not spawn a shell by default. On Windows, however, .bat and .cmd files are not executable on their own without a terminal and therefore cannot be launched using child_process.execFile().”

From <https://nodejs.org/dist/latest-v16.x/docs/api/child_process.html>

Summary

In this article, we looked at three options that help make use of all the processor cores without blocking the main thread of a Node.js application.

Depending on your needs, you can choose:

  • The Cluster API, to run multiple identical copies of your application and spread incoming requests across your CPU cores.
  • The Worker thread API, to delegate blocking, CPU-intensive jobs to separate threads while the main thread stays responsive.
  • The Child Process API, to run external scripts or programs written in any language.

I hope you enjoyed this article. If you have any feedback or questions, please feel free to comment. Have fun coding 🙂


By Botond Bertalan

I am the founder of www.botond.dev
I started my programming studies in 2010 and began working professionally in 2012, before graduation.
I love programming and architecting code that solves real business problems and gives value for the end-user.
