|License Type|SaaS & On-Premise|
|Main Product Category|Node.js Agent|
Does the Node.js agent affect my application's performance?
As you might expect, Contrast's analysis does make your application run a little slower. The good news is that the overhead is typically small enough to go unnoticed, and the security insight you gain is well worth the cost.
Assessing Your Resource Usage
First, check how many resources your application uses without the agent. Environments are often provisioned with just enough CPU and RAM for the application itself - for example, a Docker container has a fixed memory allocation, and an AWS instance comes with set CPU and RAM parameters. If your application already consumes almost all of those resources, it won't be able to handle the additional instrumentation performed by the agent. So the first and simplest step is to increase the allocated resources. As a starting point, double what your application normally uses - this is conservative, but it lets you observe the application's normal behaviour with the agent attached. From there you can draw conclusions and fine-tune the allocation.
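As a quick way to establish that baseline, you can log the process's own resource usage from inside the application. A minimal sketch - the `sampleResourceUsage` helper below is illustrative, not part of the agent:

```javascript
// Illustrative helper: sample the current process's memory and CPU usage
// so you can record a baseline before enabling the agent.
function sampleResourceUsage() {
  const mem = process.memoryUsage();
  const cpu = process.cpuUsage();
  return {
    rssMiB: Math.round(mem.rss / (1024 * 1024)),           // resident set size
    heapUsedMiB: Math.round(mem.heapUsed / (1024 * 1024)), // V8 heap in use
    cpuUserMs: Math.round(cpu.user / 1000),                // user CPU time
  };
}

console.log(sampleResourceUsage());
```

Take the same sample again with the agent loaded and compare the two sets of numbers before deciding how much to raise the allocation.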
When you start your server with Contrast, you'll notice a delay in startup time. This is caused primarily by the agent establishing a connection with the Contrast application. High latency between the server and the Contrast application, or a Contrast application under heavy load, may exacerbate the startup time. Where startup time is critical, this cost can be reduced as follows:
- Run Contrast with `--inventory.analyze_libraries false`. With this option, the agent will not collect information about the application's dependencies.
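The same setting can also be expressed in the agent's YAML configuration file. A sketch, assuming the standard `contrast_security.yaml` layout:

```yaml
# contrast_security.yaml - skip dependency analysis to speed up startup
inventory:
  analyze_libraries: false
```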
If startup time is still unmanageable after using these options, please follow the guide here: Startup Performance Guide coming soon
It's probably more important to think about how Contrast affects the round-trip time. In typical applications, Contrast may noticeably impact the round-trip time of requests that contain a lot of business logic. Round-trip times for static resources typically don't get measurably worse. In requests where the total round-trip time is dominated by database or Web Service calls, Contrast's effect will be less noticeable.
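To quantify that impact in your own environment, time a few representative operations with and without the agent loaded. A minimal sketch - the `timed` helper is illustrative:

```javascript
// Illustrative helper: time an async operation (e.g. a request against your
// app) so you can compare round-trip times with and without the agent.
async function timed(label, fn) {
  const start = process.hrtime.bigint();
  const result = await fn();
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${ms.toFixed(2)} ms`);
  return { result, ms };
}

// Example: time a simulated business-logic call.
timed('simulated work', async () => {
  await new Promise((resolve) => setTimeout(resolve, 50));
  return 'done';
});
```

Focus the comparison on business-logic-heavy endpoints, since static resources and I/O-bound requests show the least relative change.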
If better performance is really important to your environment, consider the following options:
- Run Contrast nightly during integration tests
- Run Contrast in an alternate environment (QA system or DEV environment)
- Run Contrast on a single node in a load-balanced environment
- Limit stack trace reports by setting `agent.stack_trace_limit` in the YAML configuration file or `CONTRAST__AGENT__STACK_TRACE_LIMIT` as an environment variable. This option caps the depth of the stack traces the agent reports; the default is 10. Lowering the limit decreases the agent's memory usage. A low stack-trace limit does not reduce accuracy in finding vulnerabilities - it only reduces the detail reported to the Contrast UI. This can be useful when the application runs with the agent in CI/CD pipelines, where the goal is detecting that a vulnerability exists rather than pinpointing it.
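For example, the stack-trace limit could be lowered like this (a sketch assuming the standard YAML layout; the value 3 is illustrative):

```yaml
# contrast_security.yaml - report at most 3 stack frames per finding
agent:
  stack_trace_limit: 3
```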
If request processing time is still unmanageable after using these options, please follow the guide here: Node.js - Runtime Performance Troubleshooting
While it's normal to experience some overhead from our Assess instrumentation, it should not affect performance to the extent that the application is rendered unusable. If such a performance impact is experienced, it may indicate that the agent is monitoring too many calls that it doesn't need to.
For example, the agent may try to observe too many function calls in intentionally slow operations, such as the `bcrypt` hashing function, and the resulting overhead can make the application very slow to respond.
The agent has built-in measures to avoid these situations as much as possible. To handle them, the agent employs something we call dead-zoning. Dead-zoning disables instrumentation during operations that are too expensive to follow in detail, or which the agent does not need to watch because it provides no benefit.
For example, if we detect that our instrumentation in a library such as `bcrypt-js` is responsible for this kind of performance degradation, we will add it to our list of dead-zones.
If you suspect something like this is causing performance issues in your application, you can add a dead-zone for a library with the `--agent.node.unsafe.deadzones` option, which accepts a comma-separated list of the modules you wish to dead-zone.
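Assuming the command-line option maps onto the YAML configuration in the usual way, the same setting might look like this (the module names are illustrative; use the modules you actually suspect):

```yaml
# contrast_security.yaml - dead-zone modules suspected of heavy overhead
agent:
  node:
    unsafe:
      deadzones: bcrypt-js,argon2
```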
If the use of this setting resolves a performance issue, please reach out to our Support team, especially if the module is a public one available in npm. We can use this information to update our internal dead-zoning policy and improve the way we detect modules that should be dead-zoned automatically.