Node.js Microservice Optimisations

A few performance, scalability and availability tips for running Node.js microservices.

Unlike monolithic architectures, microservices typically have a relatively small footprint and achieve their goals by collaborating with other microservices over a network. Node.js has strengths that make it an obvious implementation choice, but some of its default behaviour could catch you out.


Cache your DNS results

Node does not cache the results of DNS queries. That means that every time your application uses a DNS name, it might be looking up an IP address for that name first.

It might seem odd that Node handles DNS queries like this. The quick version – the system calls that applications can use don’t expose important DNS details, preventing applications from using TTL information to manage caching. If you’re interested, Catchpoint has a nice walkthrough of why DNS works the way that it does and why applications typically work naively with DNS.

Never caching DNS lookups will really hurt your application’s performance and scalability. I think the simplest solution from a developer’s perspective is to add your own naive DNS cache. There are even libraries to help, like dnscache. I’d tend to err on the side of short cache expiry, particularly if you don’t own the DNS names you’re looking up. Even a 60-second cache will have a big impact on a system that’s doing a lot of DNS lookups.
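
For illustration, here’s a minimal sketch of that naive approach, wrapping Node’s dns.lookup in a 60-second in-memory cache. It’s deliberately simplistic (it ignores lookup options and never evicts entries), so treat it as a starting point rather than a finished utility:

    const dns = require('dns');

    const TTL_MS = 60 * 1000;  // short, naive expiry as suggested above
    const cache = new Map();   // hostname -> { address, family, expires }

    // A rough stand-in for dns.lookup that remembers answers for a minute.
    function cachedLookup(hostname, options, callback) {
      if (typeof options === 'function') {
        callback = options;  // the options argument is optional, as with dns.lookup
      }
      const hit = cache.get(hostname);
      if (hit && hit.expires > Date.now()) {
        return process.nextTick(callback, null, hit.address, hit.family);
      }
      dns.lookup(hostname, (err, address, family) => {
        if (!err) {
          cache.set(hostname, { address, family, expires: Date.now() + TTL_MS });
        }
        callback(err, address, family);
      });
    }

A function with dns.lookup’s shape can also be passed to net.connect as its lookup option, which keeps the caching out of your application logic; dnscache takes a broadly similar approach.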

An alternative, if you are running in an environment where you have sufficient control, is to add a caching DNS resolver to your system. This might be a little more complex but a better solution for some scenarios as it should be able to take advantage of the full DNS records, avoiding the hardcoded expiry. Bind, dnsmasq and unbound are solutions in this space and a little Google-fu should find you tutorials and walkthroughs.

Reuse HTTP Connections

Based on the network traffic I’ve seen from applications and test code, Node’s global HTTP agent disables HTTP Keep-Alive by default, always sending a Connection: close request header. That means that whether the server you’re talking to supports it or not, your Node application will create and destroy an HTTP connection for every request you make. That’s a lot of potentially unnecessary overhead on your service and the network. I’d expect a typical microservice to be talking frequently to a relatively small set of other services, in which case keep-alive might improve performance and scalability.

Enabling keep-alive is straightforward if it makes sense to do so: pass the option to a new agent, or set http.globalAgent.keepAlive and http.globalAgent.keepAliveMsecs on the global agent as appropriate for your situation.
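
As a rough sketch, assuming you’re calling one other service frequently, a dedicated keep-alive agent might look like this (the hostname, port and path are placeholders):

    const http = require('http');

    // A dedicated agent that keeps idle sockets open so they can be reused.
    const keepAliveAgent = new http.Agent({
      keepAlive: true,      // reuse connections rather than closing them
      keepAliveMsecs: 1000  // delay for TCP keep-alive probes on idle sockets
    });

    const req = http.request({
      hostname: 'other-service.internal',  // placeholder service address
      port: 8080,
      path: '/status',
      agent: keepAliveAgent
    }, res => {
      res.resume();  // drain the body so the socket can return to the pool
      console.log('status:', res.statusCode);
    });
    req.on('error', err => console.error(err));
    req.end();

A dedicated agent also gives you somewhere to tune settings like maxSockets per downstream service, rather than changing behaviour for every request in the process.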

Tell Node if it’s running in less than 1.5G of memory

According to RisingStack, Node assumes it has 1.5G of memory to work with. If you’re running with less, you can configure the allowed sizes of the different memory areas via v8 command line parameters. Their suggestion is to configure the old generation space by adding the --max_old_space_size flag, with a value in megabytes, to the startup command.

For 512M available, they suggest a 400M old generation space. I couldn’t find a great deal of information about the memory settings and their defaults in v8, so I’m using 80% as a rule-of-thumb starting point.
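
For example, with those 512M / 400M numbers, the startup command might look something like this (app.js is just a placeholder for your entry point):

    node --max_old_space_size=400 app.js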

Summary

These tips might be pretty obvious – but they’re also subtle and easy to miss, particularly if you’re testing in a larger memory space, looping back to localhost or some local container.


Continuous Integration for Researchers?

TL;DR

Could tailored continuous integration help scientific researchers avoid errors in their data and code?

Computer Error?

Nature reported on the growing problem of errors in the computer code produced by researchers back in 2010. Last year, news hit the press about an error made in an Excel spreadsheet that undermined public policy in the UK. Mike Croucher discusses several more examples of bad code leading to bad research in his talk ‘Is your Research Software Correct?’.

It seems odd that computers are involved in these kinds of errors – after all, we write instructions down in the form of programs, complete and unambiguous descriptions of our methods. We feed the programs to computers and they do exactly what the programs tell them to do. If there’s an error, the scientific method should catch it when other researchers fail to reproduce the results. So why are errors slipping through?

That’s the question that Mike and I were chewing over between talks at TEDxSHU in December 2015. I think the talks I heard there inspired me to think harder about trying to find an answer. It seems like the first step to solving the problem is reproducing results.

Reproducibility Fail

My MSc. dissertation involved processing a load of data that I was given and running programs that I’d written to draw conclusions. Although my dissertation ran to many thousands of words, it was a fairly shallow description – my interpretation, in fact – of what the data said and what the code did. I can’t give you the data or the code as there were privacy and intellectual property concerns about both.

If I’m going to tear it apart, my dissertation really describes what I intended to tell a computer to do to execute my experiment. Then it claims success based on what happened when it did what I actually told it to do.

If you had my code, you could run it on your own data and see if my conclusions held up. You could inspect it for yourself. You could see the tests I wrote and maybe write some yourself if you had concerns. You could see exactly what versions of what library code I was using – maybe there have been bugs discovered since that invalidate my conclusions. If you had my data you could check that my answers were at least correct at the time and are still correct on more recent versions of the libraries.

Even if you had my code and my data, you still wouldn’t know what kind of computer I did the work on or how it was set up. Even that could change the result – remember the Pentium bug? Finally, if you had all that information, you’d still have to get hold of everything you need, wire it all up and do your verifications. That’s quite a time and cost commitment, assuming you can still get hold of all that stuff months or years later.

Continuous Integration to the Rescue?

I’m sure I’ve just skimmed the surface of the problem here – I’m not a researcher myself, nor am I claiming that my dissertation was in any way equivalent to an academic paper. It’s just an example I can talk about, and it’s enough to give me an idea. It sounds a little like the “works on my machine” problem that used to be rife in software development. One of the tools we use to solve it is “continuous integration”.

Developers push their code to a system that “builds” it independently, in a clean and consistent environment (unlike a developer’s computer!). “Building” might involve steps like fetching the libraries you need, compiling your code and running your tests. If that system can’t independently build and test your code, then the build breaks and you fix it.

A solution along these lines would necessarily have to verify, automatically, that all the information needed to get the code running (the code itself, configuration parameters, libraries and their versions, and so forth) is present and correct. If the solution could also accept data and results, and then verify that the code runs against the data to produce those results, then it seems like we’ve demonstrated reproducibility.
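
To make that concrete, the heart of such a build might boil down to something like the following sketch. It’s written in Node purely because that’s handy here; the file names and layout are entirely hypothetical, and a real service would need to support MATLAB, R, Python and the rest:

    // Re-run the submitted analysis against the submitted data, then check
    // that the fresh output matches the results the researcher uploaded.
    const { execFileSync } = require('child_process');
    const { createHash } = require('crypto');
    const { readFileSync } = require('fs');

    // Assumed upload layout: analysis.js reads data.csv and writes output.csv.
    execFileSync('node', ['analysis.js', 'data.csv', 'output.csv'], { stdio: 'inherit' });

    const sha256 = file =>
      createHash('sha256').update(readFileSync(file)).digest('hex');

    if (sha256('output.csv') !== sha256('results.csv')) {
      console.error('Build failed: the submitted results could not be reproduced');
      process.exit(1);
    }
    console.log('Results reproduced');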

Setting up your own CI server isn’t necessarily straightforward, but Codeship, SnapCI and the like show that hosted versions of such solutions work, offer high levels of privacy and (IMHO) simplify the user experience dramatically. A solution like one of these, but tailored to the needs and skills of researchers, might help us start to solve the problem.

Tailored CI for Researchers

I think that the needs of a researcher might differ a little from those of a software developer. What kinds of tailoring am I talking about? How about:

  • quick, easy uploading of code, data and results, every effort to make it “just work” for a researcher with minimal general computing skills
  • built-in support for common research computing platforms like MATLAB and Mathematica
  • simple version control applied automatically behind the scenes – maybe by default each upload of code, data and results is a new commit on a single branch
  • maybe even entirely web-based development for the commonly-taken paths (taking cloud9 as inspiration)
  • support taking your code and data straight into big cloud and HPC compute services
  • enable more expert users to take more control of the build and test process for more unusual situations
  • private by default with ability to share code, data and results with individuals or groups
  • ability to allow individuals or groups to execute your code on their data, or their code on your data, without actually seeing any of your code or data
  • what-if scenarios, for example, does the code still produce the correct results if I update a library? How about if I run it on a Mac instead of a Windows machine?
  • support for academic scenarios like teams that might be researching under a grant but then move on to other things
  • support for important publication concerns like citations
  • APIs to allow integration with other academic services like figshare and academic journal systems

I think that’s the idea, in a nutshell. I’m not sure whether it’s already being done or has been done, or, if not, what could happen next, so I’m punting it into the public domain. If you have any comments or criticism, or if there’s anything I’ve skimmed over that you’d like me to talk about more, please leave me a comment or ping me on Twitter.