Optimising the cloud means knowing where to store and serve your data most cost-effectively, because the difference in savings can be significant.

In this blog post, find out how edge caching and regional databases can make a significant difference to your bottom line.

At the dawn of cloud computing, the business was essentially about centralising IT resources into a well-defined set of integrated, massive-scale data centres. The architecture therefore reflected the IT techniques and technologies of that moment in time. As cloud adoption and the scale of integration keep increasing, even the largest hyper-scale providers are reaching the physical limits of their data centres.

How do they ensure increasing customer demands are met?

The first years of cloud computing were mostly an exercise in IT outsourcing. That still hasn’t changed, but the nature and scale of cloud computing have only ever increased – and at an increasing velocity.

This development has fuelled the integration of increasingly mission-critical elements of individual organisations’ IT landscapes as confidence in cloud computing and its value proposition has grown. As business owners have come to appreciate the benefits of the cloud, organisations have asked for more and more services to be made available as a cloud service – accelerating the disruptive impact and scale of cloud adoption even further.

At the same time, the well-defined yet strictly hierarchical topology of typical cloud infrastructures aims to provide high levels of performance, resilience and near-instantaneous scalability for IT operations outsourced to the cloud. Take Amazon Web Services as an example: individual data centres are grouped into availability zones, which in turn are aggregated into AWS regions. This intrinsic redundancy built into the infrastructure comes at a price – despite the spectacularly low price tags attached to stock compute and storage resources these days.
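As a quick, illustrative sketch of that hierarchy, the snippet below uses the AWS SDK for Python (boto3) to list the availability zones that make up a single region. The region name eu-west-2 is just an example, and the call assumes you have AWS credentials configured locally.

```python
# Illustrative sketch: list the availability zones that make up one AWS region.
# Assumes boto3 is installed and AWS credentials are configured locally.
import boto3

# eu-west-2 (London) is used purely as an example region.
ec2 = boto3.client("ec2", region_name="eu-west-2")

for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
    # Each availability zone is itself a group of one or more physical data centres.
    print(zone["RegionName"], zone["ZoneName"], zone["State"])
```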

With increased demand comes an increased need for data centres, availability zones and regions. However, there are two issues here that limit the efficacy of a sloppy “Add more resources!” response to more demand:

  • Not all cloud use cases require this level of built-in redundancy and management overhead
  • Data centres have a natural limit in physical size

Moore’s law for data centres?

Do you remember Moore’s Law for CPUs? The same appears to be happening with data centres, and the similarities are hard to overlook. Initially, CPUs grew unfettered in capability, and their miniaturisation has now reached the scale of nanometres. A nanometre is one billionth of a metre, or one millionth of a millimetre. A thick human hair is about 0.1mm across – which equates to 100,000 nanometres.

Upon reaching this physical hard limit, CPU manufacturers did not scale up – instead, they scaled out. The days of the single-core CPU are long gone. These days, even the consumer market features laptops with 8 or more cores per CPU, and even handheld devices (mobile phones, tablets) run CPUs with 4 cores. In other words, contemporary IT hardware is a distributed system!

Sound familiar? It is.

As data centres reach their physical capacity limits in floor space, heat-exchange capacity, electrical power requirements (and many more), a natural and comparable response would be to … build more data centres! But that again comes at the cost of the operational overhead required to satisfy the SLOs cloud providers offer when you use their infrastructure.

Simply put, the cost of increasing the physical size of a data centre per revenue-generating service unit (e.g. a container of a certain size) rises disproportionately until it exceeds the revenue generated per service unit: the cloud provider would make a loss.

Combine that with the first issue – that not all use cases require this level of redundancy – and you have the scaling-out part of cloud providers’ response to the limits on data centre growth in physical size.

Not More Data Centres – But More Locations

This scaling-out answer is as elegant as it is necessary: instead of building and operating highly complex data centres everywhere (leaving aside the fact that not all geolocations are suitable for data centres, or that planning permission is simply not granted), cloud providers now build or rent floor space for much smaller, less complex locations aimed at specific use cases.

The most obvious one is content delivery networks (CDNs):

  • Video on-demand – Netflix, Amazon Prime, Disney, live TV streaming
  • Software delivery – jsdelivr, GitHub, and other global software repository services that serve precompiled downloadable software packages

The typical use case is to bring the delivery of otherwise static content much closer to the customer (i.e. the customer of the cloud service user!). This saves huge amounts of generated traffic, frees up unnecessarily blocked bandwidth, and is generally more performant than serving globally dispersed customers from one centralised location (where the content is produced).

In other words, these locations are a giant macro-scale, worldwide caching service for optimising outbound traffic and responsiveness of your service to your customers.
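Whichever CDN you use, the mechanism underneath is the same: the origin marks responses as cacheable, and the edge locations then serve repeat requests without going back to the origin. Below is a minimal sketch using only the Python standard library; the one-year cache lifetime and the response body are assumptions for illustration, not recommendations.

```python
# Minimal origin-server sketch: mark responses as cacheable so that a CDN edge
# location can serve repeat requests without contacting the origin again.
from http.server import BaseHTTPRequestHandler, HTTPServer


class OriginHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Static content produced at the origin</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # "public" allows shared caches (CDN edges) to store the response;
        # max-age tells them how long they may reuse it before revalidating.
        self.send_header("Cache-Control", "public, max-age=31536000, immutable")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Example only: listen on port 8080 and let the CDN point at this origin.
    HTTPServer(("0.0.0.0", 8080), OriginHandler).serve_forever()
```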

Edge computing – one step further

You may rightly wonder about the other part – inbound caching.

This is where services such as AWS CloudFront go beyond mere content delivery networks. While CDNs are a classic one-way use case, very similar to TV broadcasting, true edge computing reverses, to some extent, the trend of cloud computing centralising IT services.

Edge computing offers (limited) computing capabilities at the edge (i.e. perimeter) of the cloud provider’s IT infrastructure. 

This design pattern reduces demand on the centralised system and network by serving frequent, similar customer requests close to the customer instead of routing them (possibly long-distance) to the central system and all the way back. In the age of search engine optimisation (SEO), a fast response time is a must for any organisation.
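As a rough sketch of what that can look like in practice, the Lambda@Edge-style viewer-request handler below answers one simple path directly at the edge location and forwards everything else to the origin. The /ping path and the response body are hypothetical; treat this as an illustration of the pattern rather than a finished function.

```python
# Sketch of a Lambda@Edge viewer-request handler (Python runtime).
# Requests to /ping are answered at the edge; everything else passes through.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]

    if request["uri"] == "/ping":
        # Returning a response object here short-circuits the (possibly
        # long-distance) round trip to the central origin.
        return {
            "status": "200",
            "statusDescription": "OK",
            "headers": {
                "content-type": [{"key": "Content-Type", "value": "text/plain"}]
            },
            "body": "pong from the edge",
        }

    # Any other request continues on to the origin as normal.
    return request
```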

Typical use cases where edge computing will be beneficial:

  • Location-based services – ride-sharing services, store locators, etc. (see the sketch after this list)
  • Online gaming – Game servers are often partitioned to continental or regional boundaries to overcome latency issues
  • IoT – IoT often requires data sanitisation and pre-processing capabilities that sit close to the devices rather than in the central service
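For a flavour of the first use case in the list above, here is a small, self-contained sketch of the kind of lookup a store locator might run at an edge location: a plain great-circle (haversine) distance calculation over a hypothetical list of stores.

```python
# Hypothetical store-locator lookup of the kind an edge function might run:
# return the nearest store to the caller without a round trip to the origin.
from math import asin, cos, radians, sin, sqrt

# Illustrative store list; in practice this data might be replicated to the edge.
STORES = {
    "London": (51.5074, -0.1278),
    "Manchester": (53.4808, -2.2426),
    "Dublin": (53.3498, -6.2603),
}


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))


def nearest_store(lat, lon):
    return min(STORES, key=lambda name: haversine_km(lat, lon, *STORES[name]))


print(nearest_store(52.2053, 0.1218))  # A caller in Cambridge -> "London"
```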

Will I benefit from it? Almost Definitely Yes

Whether “mere” content delivery or true edge computing is required, even an organisation that targets only the British Isles has 12 Amazon edge locations to choose from (Dublin, 9 in London, 2 in Manchester). At this scale and density, and at such low cost, it is hard to argue that you won’t benefit from using edge computing.

Help

This was a lot of technical and IT architectural information. Digital Craftsmen excels in secure managed cloud service deployments, and we are both ISO 27000 and UK Cyber Essentials certified, which means your data is also secured and protected. Digital Craftsmen is also cloud-agnostic – we’re not tied to any one cloud provider – which means we will always choose the best solution and underpinning provider for you from AWS, Azure, GCP or Oracle.

Contact us on 020 3745 7706 or email the team at [email protected]
