The Evolution of Infrastructure: How We Got to Containers
Today’s forward-thinking enterprises prioritize speed and flexibility. They know they must move quickly to stay competitive, which means deploying new software continuously and rapidly scaling their IT infrastructure up and down as demand changes.
The engineers charged with making this possible require an environment that is dynamic, secure, and reliable. They also need the applications they develop to run across several different types of environments. Not surprisingly, these requirements can be challenging to meet.
Over the past two decades, we have seen three major epochs in the world of application deployment. We started with personal computers and applications living on tangible machines. Years later, virtualization abstracted the hardware away, meaning that applications could exist independently of physical machines for the first time. Today, as we experience the rise of containers, we are seeing a dramatic increase in efficiency and agility. Below, we’ll explore each stage, including the benefits and limitations that have led the industry to keep iterating forward.
Evolution One: Personal Computers
As you probably know, the advent of personal computers has drastically changed not only the way we live our lives, but also the way businesses go about building and providing their goods and services. Today, software applications are one of the most common and powerful types of products on the market. But building them can be complicated.
Historically, deploying applications has required four essential components:
- Hardware (PCs and servers)
- An operating system (e.g., Linux, macOS, Windows 10)
- Libraries (application dependencies)
- The application (business logic)
As you can probably tell, individual PCs are at the center of this deployment method. In the early days of app development, this seemed to work pretty well. However, a PC-centric application deployment strategy presents several challenges. It means that your infrastructure is:
- Vulnerable - With this setup, development and administrative teams are then responsible for managing an entire stack, all the way down to the hardware. This can be resource-intensive and requires a wide range of skill sets and experience. It can also be problematic because it creates a large potential attack surface, adding to an organization’s security concerns.
- Inconsistent - PC-centric deployment is often plagued by inconsistencies between a developer’s local environment, the development and quality assurance (QA) setup, and production. This can lead to problems with the underlying operating system or with standard maintenance, such as applying updates.
- Inefficient - A PC-centric application deployment stack must always be configured with enough capacity for peak load, which means most of the time you will be dramatically underutilizing your available resources. This is expensive and wasteful, especially once you factor in human error, and capacity is slow to change: a new machine or data center requires a long lead time for purchase and deployment.
- High-Maintenance - Another wrench in this setup is that servers must be treated like “pets.” They require individual care and attention, and their components can rarely be shared, since applications and application data are typically all part of the same system. This makes changes, deployments, and restorations expensive and time-consuming.
Evolution Two: Virtual Machines
To reduce the number of dependencies required for application deployment and alleviate some of the challenges described above, virtualization rose to prominence around 2000, when virtual machines (VMs) became the go-to technology for organizations looking for agility. With the debut of VMware in 1998, and growing competition from Microsoft, Citrix, and Oracle, organizations have spent the past two decades disassociating applications from physical pieces of hardware.
Virtual machines solve some of the problems described above, including inconsistencies across environments and large attack surfaces.
However, they aren’t perfect. Virtual machines still take up a great deal of system resources. They require the configuration and maintenance of a full operating system with the application and its associated libraries.
To address this, many virtualization platforms allow organizations to scale by adding more "virtual resources" to a machine, so VMs can be moved, resized, and migrated relatively easily. Because VMs emulate physical hardware and share the underlying CPUs, they achieve a higher application density (and thus better efficiency) than full, standalone PCs.
But since each VM still includes a full, separate operating system, VMs are not ideal for configuring and deploying microservice architectures.
Evolution Three: Containers
To combat some of the challenges of both PC-centric and VM-driven application deployment, over the last few years containers have moved front and center — exploding into mainstream popularity with the debut of the open-source Docker technology in 2013.
According to a study published on Cornell University’s arXiv archive, it takes 23 minutes to create a cluster using Docker containers and 46 minutes using VMs, twice as long.
So what exactly are containers? Containers are an application deployment technology in which the kernel, or OS core, allows multiple isolated processes to run on a single machine. Containers share the kernel of the underlying operating system and ride on top of that shared environment. That means each container packages an application along with all of its dependencies, libraries, and configuration files, but unlike with VMs or PCs, potentially problematic differences in the underlying infrastructure are abstracted away.
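Under the hood, that isolation comes from Linux kernel features such as namespaces and control groups. As a rough illustration only (a minimal sketch of the kernel primitive, not how Docker itself is implemented), the following Go program starts a shell in its own hostname, PID, and mount namespaces; the shell shares the host’s kernel but sees an isolated view of the system. It assumes a Linux host and root privileges.

```go
package main

// Minimal sketch: launch a process in new UTS, PID, and mount namespaces.
// Requires Linux and root privileges; not production code.

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Start a shell wired to the current terminal.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr

	// Ask the kernel to place the child in fresh namespaces: its own
	// hostname (UTS), its own PID numbering, and its own mount table.
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}

	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Container runtimes such as Docker layer image management, networking, and resource limits on top of kernel primitives like these.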
Developers have embraced containers wholeheartedly because they are:
- Lightweight: Containers are easier to start, execute, and scale.
- Low-resource: Containers take up fewer resources, allowing for increased application density.
- Backwards-compatible: Containers can be used with existing applications with little to no modification.
- Modernized: Containers are well-matched with modern application architectures, such as microservices.
Today, more and more organizations are taking advantage of containerization: According to a 2018 Container Adoption Benchmark Survey from Diamanti, 44% of respondents plan to deploy containers in place of virtual machines. 55% of these same respondents spend over $100,000 annually on VMware fees, with 34% spending more than $250,000.
For an in-depth comparison of these two most recent technologies, read: Containers vs. VMs: What’s the Difference and When to Use Them.
Moving Forward with Containers
There’s no doubt that containers are the most important application deployment method in the landscape today. They are clearly the best way to create a portable, reliable environment for application development, testing, and deployment.
The growing complexity of business needs creates demand for speed and flexibility at lower cost, and containers allow technology organizations to meet those demands more easily. According to the same Diamanti survey, management overhead (59%), performance (39%), and VMware licensing fees (38%) are the top three factors driving container adoption over virtual machines.
The container industry is evolving quickly. As it continues to address issues related to security, ease of use, and the maturity of the overall container ecosystem, more organizations, including large enterprises, will dive headfirst into these exciting waters.