Cloud Trends — Where have we come from and where are we headed

adrian cockcroft
Nov 23, 2016

I've given several talks on this subject over the last few years, mostly at the Structure conference. Now that I've joined @AWScloud my perspective shifts somewhat, but the same trends are playing out.

The cloud ecosystem continues to mature, and customers are looking for staying power, as small vendors and small clouds from some big vendors fade away and close down. The ability to support and migrate enterprise workloads is critical for winning the biggest new cloud deals. Extremely scalable cloud capacity is critical to provide room for the largest web scale customers to keep growing. Adding new regions around the world is critical to provide local jurisdiction support and low network latency.

In 2014 we saw many enterprises sign up for AWS, start proof of concept tests and launch green-field applications. In 2015 larger scale migrations started, and plans were made for entire datacenters to be replaced by public cloud accounts. In 2016 these changes moved from early adopter markets such as media and retail, and started to take root in finance, as banks, insurance companies and their regulators figured out how to run and audit public cloud applications. Next up: early adopters in the energy, transport, government, manufacturing and healthcare markets are leading the way to cloud.

What do modern applications look like? The combination of rapid cloud-based provisioning, a DevOps culture transformation, and the journey from waterfall through agile to continuous delivery has given rise to a new application architecture pattern called microservices. It shares the same principles as the service-oriented architecture movement of 10–15 years ago, but in those days machines and networks were far slower, and XML/SOAP messaging standards were inefficient. The high latency and low messaging rates meant that applications ended up composed of relatively few large, complex services. With much faster hardware and more efficient messaging formats, we now have low latency and high messaging rates. This makes it practical to compose applications from many simple, single-function microservices, independently developed and continuously deployed by cloud native automation.
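To make "single function" concrete, here is a minimal sketch of what one such microservice might look like, using only the Python standard library. The service name, endpoint, and currency-conversion logic are illustrative, not taken from any real system:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# The one piece of business logic this microservice owns (hypothetical example).
def convert_currency(amount, rate):
    """Convert an amount using a supplied exchange rate."""
    return round(amount * rate, 2)

class ConvertHandler(BaseHTTPRequestHandler):
    """Thin HTTP wrapper: accepts JSON, calls the single function, returns JSON."""
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        result = {"converted": convert_currency(body["amount"], body["rate"])}
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To run as an independently deployable process:
#   HTTPServer(("", 8080), ConvertHandler).serve_forever()
```

The point of the pattern is that everything above the HTTP wrapper is one function, so the service can be developed, tested, deployed, and scaled on its own.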

What does modern hardware look like? The capacity of individual physical systems continues to increase, and is far larger than most applications require. As I write this, the largest single instance type on AWS is the x1.32xlarge, with almost two terabytes of RAM, four terabytes of solid-state disk, 128 vCPUs, and a 20 Gbit/s network interface. The most powerful instance type, the p2.16xlarge, has raw performance of 70 teraflops from about 40,000 GPU cores. Systems are sliced up using virtual machines to make appropriately sized instances that can be provisioned and boot an operating system in a few minutes.

A few minutes to get an instance used to feel amazingly fast, but now we use containers to get applications running in a few seconds, and to pack more small applications efficiently into large instances. The early cloud native architectures such as @NetflixOSS used an instance to host each microservice, but many people are now moving to use more lightweight Docker containers to host each microservice.

We’ve shrunk the size of microservices so that they each perform a single function, one of many that make up an application. There are plenty of situations where individual microservices sit idle most of the time, but need to be ready to respond quickly when something happens, and potentially scale up to handle a burst of requests. It’s inefficient to have lots of idle containers, or to provision them when a request arrives. To meet this need, AWS Lambda was launched two years ago, and it has helped to create a serverless or function-as-a-service (FaaS) programming model that is emerging as a new pattern for cloud application development. Because Lambda is optimized for rapid launch of single-shot invocations, there is no charge when a function isn’t running. Because exactly as many functions are started as are needed to process incoming events, there is no need to provision extra headroom or do capacity planning. For appropriate workloads, AWS Lambda is simpler to operate and a small fraction of the cost of a set of permanently running microservices.
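A Lambda function is just a handler that the platform invokes once per event. The `lambda_handler(event, context)` entry-point signature is the standard one for Python on Lambda; the event fields below (`"name"`) are illustrative, not a real AWS schema:

```python
import json

def lambda_handler(event, context):
    """Entry point AWS Lambda invokes once per event; billed only while running.

    `event` carries the trigger payload, `context` carries runtime metadata.
    There is no server to keep warm between calls: each invocation is a
    single shot, and the platform starts as many copies as events require.
    """
    name = event.get("name", "world")  # "name" is a hypothetical field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Everything outside the handler (fleet sizing, scaling, idle capacity) becomes the platform's problem rather than the developer's, which is where the cost and operational savings come from.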

As we look forward into 2017, there is growing interest in serverless architectures, and an ecosystem is developing around tooling to build, monitor and operate serverless applications. However, AWS Lambda is more than just an application architecture; it has its roots in S3, where it provides methods that are triggered by actions on the object store. Lambda functions can be attached to an increasing number of AWS services, and there are exciting possibilities for event driven automation of cloud infrastructure and services.
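A sketch of what that S3 trigger looks like from the function's side. The nested record shape (`Records[].s3.bucket.name`, `Records[].s3.object.key`) follows the documented S3 event notification format; the "automation" body is a placeholder for whatever a real function would do:

```python
import urllib.parse

def lambda_handler(event, context):
    """Handle S3 object notifications delivered to a Lambda function."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Illustrative automation: just report what changed. A real function
        # might resize an image, index a document, or update a database.
        results.append(f"s3://{bucket}/{key}")
    return results
```

The same handler-per-event shape applies as more AWS services gain Lambda triggers, which is what makes event-driven automation of infrastructure possible without running any servers of your own.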

As the rate of change in technology increases there are the usual worries about skills shortages. Part of the answer is the democratization of access to technology. Some of the most advanced technology available in areas like big data, machine learning and cloud exists now as a combination of web services and open source software packages. Just a few years ago, to have access to leading edge technology you would have had to be at a top university or industry research lab and have a large budget and skilled staff to operate the systems. Today, services and infrastructure are available by the hour, software can be downloaded for free, and there is a huge amount of online information to help you learn and keep up to date. One challenge I’m particularly interested in is how to find undeveloped talent, and connect it with the right opportunities to learn. A project manager, left behind by the journey from waterfall based projects to continuous delivery of products, can discover and develop latent talents as a data scientist. Schoolgirls can build mobile applications and could even leverage machine learning with a serverless cloud back-end. Unemployed workers can retrain on the latest technologies, and build up a reputation by entering coding contests and contributing to open source projects, as a pathway to new opportunities.

I will be attending AWS re:Invent, and I’m presenting at 3:30pm on Thursday in the architecture track: ARC213, Open Source at AWS — Contributions, Support and Engagement. More on that topic in future posts.

Paraphrasing William Gibson: “The future is already here — and now it’s globally distributed.”
