Node.js Microservice Optimisations

A few performance, scalability and availability tips for running Node.js microservices.

Unlike monolithic architectures, microservices typically have a relatively small footprint and achieve their goals by collaborating with other microservices over a network. Node.js has strengths that make it an obvious implementation choice, but some of its default behaviour could catch you out.


Cache your DNS results

Node does not cache the results of DNS queries. That means that every time your application uses a DNS name, it might be looking up an IP address for that name first.

It might seem odd that Node handles DNS queries like this. The quick version: the system calls available to applications don't expose important DNS details, such as TTLs, so applications can't easily manage their own caching. If you're interested, Catchpoint has a nice walkthrough of why DNS works the way it does and why applications typically handle DNS naively.

Never caching DNS lookups will really hurt your application's performance and scalability. I think the simplest solution from a developer's perspective is to add your own naive DNS cache. There are even libraries to help, like dnscache. I'd tend to err on the side of short cache expiry, particularly if you don't own the DNS names you're looking up. Even a 60-second cache will have a big impact on a system that's doing a lot of DNS lookups.
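Here's a minimal sketch using the dnscache package; the TTL and cache size below are illustrative values rather than recommendations, and the lookup just demonstrates that repeat resolutions within the TTL are served from the in-process cache.

    // npm install dnscache
    // Wrapping the dns module once, early in startup, patches dns.lookup
    // and friends for the whole process.
    require('dnscache')({
      enable: true,
      ttl: 60,         // seconds to cache each result - err on the short side
      cachesize: 1000  // maximum number of entries to hold
    });

    const dns = require('dns');

    // The first call hits the resolver; repeat calls within the TTL come
    // from the cache.
    dns.lookup('example.com', (err, address) => {
      if (err) throw err;
      console.log(address);
    });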

An alternative, if you are running in an environment where you have sufficient control, is to add a caching DNS resolver to your system. This might be a little more complex but a better solution for some scenarios as it should be able to take advantage of the full DNS records, avoiding the hardcoded expiry. Bind, dnsmasq and unbound are solutions in this space and a little Google-fu should find you tutorials and walkthroughs.

Reuse HTTP Connections

Based on the network traffic I've seen from applications and test code, Node's global HTTP agent disables HTTP Keep-Alive by default, always sending a Connection: close request header. That means that whether the server you're talking to supports it or not, your Node application will create and destroy an HTTP connection for every request you make. That's a lot of potentially unnecessary overhead on your service and the network. I'd expect a typical microservice to be talking frequently to a relatively small set of other services, in which case keep-alive might improve performance and scalability.

Enabling keep-alive is straightforward if it makes sense to do so: either pass the option to a new agent, or set the global agent's http.globalAgent.keepAlive and http.globalAgent.keepAliveMsecs parameters as appropriate for your situation.
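A rough sketch of both approaches follows, assuming a Node version where the global agent behaves as described above; the hostname, timings and socket limit are placeholders, not recommendations.

    const http = require('http');

    // Option 1: a dedicated agent with keep-alive enabled, passed per request.
    const keepAliveAgent = new http.Agent({
      keepAlive: true,       // reuse sockets rather than sending Connection: close
      keepAliveMsecs: 1000,  // initial delay for TCP keep-alive probes on idle sockets
      maxSockets: 50         // cap on concurrent sockets per host
    });

    http.get({
      hostname: 'other-service.internal',  // placeholder service name
      path: '/status',
      agent: keepAliveAgent
    }, (res) => res.resume());

    // Option 2: flip the settings on the shared global agent instead.
    http.globalAgent.keepAlive = true;
    http.globalAgent.keepAliveMsecs = 1000;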

Tell Node if it’s running in less than 1.5G of memory

According to RisingStack, Node assumes it has 1.5G of memory to work with. If you're running with less, you can configure the allowed sizes of the different memory areas via v8 command line parameters. Their suggestion is to configure the old generation space by adding the --max_old_space_size flag, with a value in megabytes, to the startup command.

For 512M available, they suggest a 400M old generation space. I couldn't find a great deal of information about the memory settings and their defaults in v8, so I'm using 80% of available memory as a starting-point rule of thumb.
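As an illustration of that rule of thumb, a service in a 512M container might be started like this, where app.js is a placeholder for your entry point:

    node --max_old_space_size=400 app.js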

Summary

These tips might be pretty obvious – but they’re also subtle and easy to miss, particularly if you’re testing in a larger memory space, looping back to localhost or some local container.

