Metrics Driven Development – What I did to reduce AWS EC2 costs to 27% and improve latency by 25%

Recently, I did some work on auto-scaling and performance tuning. As a result, costs dropped to 27% of their original level and service latency improved by 25%.

Overall Instance Count And Service Latency Change

Takeaways

  • React Server Side Rendering does not perform well under the Node.js Cluster module; consider using a reverse proxy instead, e.g. Nginx
  • React v16 Server Side Rendering performs much faster than v15: 40% faster in our case
  • Use smaller instances to get better scaling granularity if possible, e.g. change c4.2xlarge to c4.large
  • AWS t2.large is 3 times slower than c4.large for React Server Side Rendering
  • AWS Lambda is 3 times slower than c4.large for React Server Side Rendering
  • There’s a race condition in the Nginx HTTP upstream keepalive module that generates 502 Bad Gateway errors (104: Connection reset by peer)

Background

Here’s the background of the service before optimization:

Improve performance of React Server Side Render by warming up service

Background

We’re using ElasticBeanstalk Blue/Green deployment by swapping DNS, and in front of the deployed services there’s a company-level Nginx that forwards requests to the different services.
The TTL of the DNS entry is set to 1 minute, and Nginx caches the resolved name entries until they expire. After each deployment, all requests hit the new environment only once the DNS cache in Nginx has expired.

React Server Render time without warming up

The response time increases a lot in the first couple of minutes after the swap and becomes stable after about 5 minutes. Because the overall response time is also impacted by upstream services, it’s better to analyze and improve the React server render time, which is a synchronous method call that involves no I/O operations.
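Since the render is a synchronous call, its cost can be measured directly around renderToString. A minimal sketch, assuming react and react-dom are installed; the metric reporting is only indicated by a comment and is not from the original setup:

const React = require('react');
const ReactDOMServer = require('react-dom/server');

// Time the synchronous server render of a React element.
function timedRender(element) {
  const start = process.hrtime();
  const html = ReactDOMServer.renderToString(element); // sync call, no I/O
  const [sec, nano] = process.hrtime(start);
  const ms = sec * 1000 + nano / 1e6;
  // report `ms` as the reactServerRender metric here (reporting code omitted)
  return { html, ms };
}

console.log(timedRender(React.createElement('div', null, 'hello')).ms);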

Here’s the initial reactServerRender time for Home Page and Search Result Page:


For the Home page, it took 2–3 minutes for the reactRender time to drop from 450–550 ms to 120 ms.
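The warm-up itself can be as simple as replaying a batch of requests against the new environment’s own URL before the DNS swap takes effect. A rough sketch; the URL, paths, and round count below are placeholders, not values from the original post:

const http = require('http');

const ENV_URL = 'http://my-new-env.example.com'; // placeholder: new environment URL
const PAGES = ['/', '/search?q=warmup'];         // placeholder: pages to warm up
const ROUNDS = 50;                               // placeholder: number of passes

function get(path) {
  return new Promise((resolve, reject) => {
    http.get(ENV_URL + path, (res) => {
      res.resume();            // drain the body; we only want to exercise the render path
      res.on('end', resolve);
    }).on('error', reject);
  });
}

(async () => {
  for (let i = 0; i < ROUNDS; i += 1) {
    await Promise.all(PAGES.map(get));
  }
})();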

Continue reading “Improve performance of React Server Side Render by warming up service”

AWS Lambda retry behaviours on stream-based event sources

According to the documentation, AWS Lambda will retry a failed function on stream-based event sources.

With Node.js, we can fail the function in many different ways, e.g. returning an error via the callback, throwing an exception directly, throwing an exception inside a Promise, or using Promise.reject. The question is: what’s the proper way to let AWS Lambda know it needs to retry?

I did a quick test of the following scenarios by setting up a DynamoDB Stream and an event source mapping. It’s fun to guess which ones will be retried and which ones won’t.

Different ways to end the function

  • On Exception
module.exports.throwException = (event, context) => {
  console.log(JSON.stringify(event));
  throw new Error('something wrong');
};
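For comparison, the callback-based failure mentioned above would look roughly like this (the handler name is illustrative, not from the original post):

module.exports.callbackError = (event, context, callback) => {
  console.log(JSON.stringify(event));
  callback(new Error('something wrong')); // signal failure through the callback
};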

Continue reading “AWS Lambda retry behaviours on stream-based event sources”

Capture console output when using child_process.execSync in node.js

I’ve been working on a Node.js project recently, and needed to execute a command using child_process.execSync().

The ideal solution should:

  • Support colour output from the command, which generally means stdout/stderr should be a TTY
  • Be testable, which means I can capture the command output and verify it

From the Node.js documentation, I can use options.stdio to change the stdio fds of the child process.

For convenience, options.stdio may be one of the following strings:

'pipe' – equivalent to ['pipe', 'pipe', 'pipe'] (the default)
'ignore' – equivalent to ['ignore', 'ignore', 'ignore']
'inherit' – equivalent to [process.stdin, process.stdout, process.stderr] or [0,1,2]
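As a concrete illustration of the difference between these options (the command here is just an example):

const { execSync } = require('child_process');

// Default stdio 'pipe': the child's stdout is captured and returned,
// so it can be asserted in tests, but the child does not see a TTY.
const captured = execSync('ls', { encoding: 'utf8' });
console.log('captured length:', captured.length);

// stdio 'inherit': the child writes straight to the parent's stdin/stdout/stderr,
// which is a TTY when run from a terminal, so colour output works,
// but there is no captured output to verify.
execSync('ls', { stdio: 'inherit' });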

I started from the default behaviour, using 'pipe', which returns the console output after the child process finishes. But it doesn’t support colours by default, because the piped stdout/stderr is not a TTY.

Continue reading “Capture console output when using child_process.execSync in node.js”