
Things Every CXO Must Know About AWS Latency

400 milliseconds is literally the blink of an eye.

10 milliseconds is the round-trip latency to AWS’s Mumbai infrastructure if you are accessing it from within India.

Cheap data, affordable smartphones, the digitisation of services, and an explosion of content have not only increased the load on server infrastructure but have also habituated consumers to expect page loads and streaming of games and multimedia content at the speed of light.

Studies have found that a website user will abandon a page if it doesn’t load within two seconds. And if you are a sports fan streaming a game on an OTT app, you know that even a two-second delay or a bout of buffering can be a literal spoilsport for the viewing experience.

The answer to all of this lies in understanding the fundamentals of latency and how you can reduce it. AWS, a pioneer in cloud computing infrastructure, can help enterprise brands deliver a seamless experience.

“Latency is the wait time introduced by the signal travelling the geographical distance as well as over the various pieces of communications equipment.” – Whatis.com

Choosing a cloud service provider involves many considerations, but when it comes to latency, here are the four immutable parameters every CXO must reckon with.

  1. AWS Regions
  2. CDN’s role
  3. Developers’ confidence
  4. End-to-end Latency parameters

Let’s look at each of these parameters of latency.

AWS Regions: One of the most direct and obvious factors affecting latency is the distance between where your customers access your OTT app or website and the physical location of the closest cloud infrastructure (called an AWS Region).

AWS Regions are currently available at 19 strategic locations across the world.

A simple ping test at cloudping.info can help you decide which AWS Region to choose so that your consumers get a fast-loading user experience.

Here are two different ping tests we ran to determine the latency to the AWS Mumbai region. The image on the left shows the result from Dubai, with a latency of 308 ms to Mumbai. The image on the right shows the result from Mumbai, where the latency is roughly 30 times lower at just 10 ms!

If your application’s users are predominantly in India, then choosing AWS Asia Pacific (Mumbai) is the obvious choice for the lowest latency.
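If you want to go a step beyond a browser-based ping test, a short script can run a similar comparison. The sketch below is a rough illustration, not a production benchmark: it measures average TCP connect time to AWS’s public EC2 endpoints for two regions; the choice of regions, the ec2.<region>.amazonaws.com endpoint pattern, port 443, and the sample count are our own assumptions for illustration.

```python
# Rough latency check: average TCP connect time to two AWS regional endpoints.
# Endpoints follow the public ec2.<region>.amazonaws.com pattern (assumed here
# for illustration); results will vary with your own network and location.
import socket
import time

REGION_ENDPOINTS = {
    "ap-south-1 (Mumbai)": "ec2.ap-south-1.amazonaws.com",
    "eu-west-1 (Ireland)": "ec2.eu-west-1.amazonaws.com",
}

def tcp_connect_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Average TCP handshake time in milliseconds over a few samples."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection closes immediately; we only time the handshake
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings)

for label, host in REGION_ENDPOINTS.items():
    print(f"{label}: {tcp_connect_ms(host):.1f} ms")
```

Run from India, you would expect the Mumbai figure to come out dramatically lower, which is exactly the point of picking the closest Region.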

CDN’s role in latency: A Content Delivery Network can remove a lot of the extra latency by delivering a copy of your content from a server closer to your customers. However, your host server location is still very important, as we saw in the example above.

It is a popular myth that a CDN can eliminate latency completely; beyond a point, no amount of CDN configuration will help. Amazon’s own CDN, CloudFront, does not add artificial buffers along the data chain, and if you are considering a third-party CDN service, you may want to check its artificial buffer policy. In sum, what matters is the host location and how much lag it has to the CDN.
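One practical way to see how much a CDN is actually buying you is to compare time-to-first-byte for the same asset fetched directly from the origin and through the distribution. The sketch below is only an illustration; both URLs are hypothetical placeholders to be replaced with your own origin and CloudFront domain names.

```python
# Compare time-to-first-byte (TTFB) for the same asset served from the origin
# versus through a CDN edge. Both URLs are hypothetical placeholders.
import time
import urllib.request

URLS = {
    "origin": "https://origin.example.com/assets/hero.jpg",           # placeholder
    "cdn edge": "https://d1234abcd.cloudfront.net/assets/hero.jpg",   # placeholder
}

def ttfb_ms(url: str) -> float:
    """Time until the first byte of the response body arrives, in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read(1)  # read a single byte to capture first-byte latency
    return (time.perf_counter() - start) * 1000

for label, url in URLS.items():
    print(f"{label}: {ttfb_ms(url):.1f} ms")
```

If the gap between the two numbers is small, your origin placement (the AWS Region) is doing most of the work; if it is large, the CDN edge is earning its keep.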

Developers’ confidence: As a CXO, it may be your prerogative to select any cloud computing provider, be it for economic reasons or otherwise. But when it comes to running the cloud show successfully on a daily basis, it is the execution team of engineers, developers, DevOps practitioners, and system admins who matter immensely; and AWS, a pioneer in cloud since 2006, has created a habituated and experienced ecosystem of cloud engineers.

 

[Chart: Stack Overflow question trends for AWS vs. GCP. Caption: AWS wins hands down.]

Stack Overflow’s trends tool paints a pretty clear picture of how actively developers discuss AWS versus GCP on the platform. Any technology is only as good as what developers make of it, so as a CXO, choosing AWS is not just a rational decision but also a wise one.

End-to-end Latency parameters: Even after choosing the right AWS Region, it is critical to measure end-to-end latency, that is, to ascertain which component in the data chain is responsible for what share of the total. For a media streaming application, for example, the components of end-to-end latency would include capture latency, encoding latency, packaging and repackaging latency, delivery latency (this is where the CDN plays its role), and client latency.

Of all the components, encoding and client latency (the last mile at the user’s end, such as bandwidth) are the most significant contributors. Optimising them is key to reducing overall latency.
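To make the idea of a latency budget concrete, the sketch below sums an assumed set of per-component figures for a live-streaming chain; every number here is a purely illustrative assumption, not an AWS benchmark, but the breakdown shows why encoding and the client side usually dominate.

```python
# Back-of-the-envelope end-to-end latency budget for a streaming chain.
# All figures are illustrative assumptions, not measured AWS values.
BUDGET_MS = {
    "capture": 100,
    "encoding": 2000,
    "packaging & repackaging": 500,
    "delivery (CDN)": 300,
    "client (last mile, player buffer)": 3000,
}

total = sum(BUDGET_MS.values())
print(f"End-to-end latency: {total} ms")
for component, ms in sorted(BUDGET_MS.items(), key=lambda kv: kv[1], reverse=True):
    print(f"  {component:<35} {ms:>5} ms  ({ms / total:.0%})")
```

A breakdown like this makes it obvious where an optimisation effort will actually move the end-to-end number.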

End notes for CXOs: There are more parameters than the above four to optimise for a seamless user experience, such as segment size, buffer size, and edge time. At BlazeClan, we work with global companies who are leaders in their industries and help them thrive at scale by working closely with both AWS and the brand’s own team.

Reach out for a more detailed conversation; coffee is on us!
