Understanding & using container orchestration

Learn the basics of container orchestration and how it can help you scale containerized applications more easily.

Container orchestration has become a hot topic over the last few years, with many enterprises publicly announcing their move to the cloud. Google, Facebook, Netflix, Capital One, and IBM are just a few examples of companies benefiting from using a container orchestration platform. According to Forrester Consulting’s 2020 Container Adoption & Usage in the Enterprise study, 65% of tech leaders will turn to third-party platforms for container management rather than relying on internal expertise. So what is container orchestration, and how does it work? This post will cover the following questions to help you understand what tools are available and the benefits of using these technologies.

What is container orchestration?

Container orchestration automates the scheduling, deployment, networking, scaling, health monitoring, and management of containers. Containers are complete applications; each one packages the necessary application code, libraries, dependencies, and system tools to run on a variety of platforms and infrastructure. Containers in some form have been around since the late 1970s, but the tools used to create, manage, and secure them have changed dramatically.

We have come a long way from the days of building single-tier monolithic applications that only worked on a single platform. Today, developers have choices between microservices, containers, and virtual machines, all with options for different cloud providers or hybrid deployments including on-premises.  

Learn more about containers and VMs in our post Containers vs. VMs: What’s the Difference and When to Use Them.

What problems does container orchestration solve?

If you have ever tried to manually scale your deployments to maximize efficiency or secure your applications consistently across platforms, you have already experienced many of the pains a container orchestration platform can help solve. Scaling containers across an enterprise can be very challenging without automated methods for load-balancing, resource allocation, and security enforcement.  

Scaling containers across the enterprise

Suppose there are five applications running on a single server and you are the administrator responsible for the deployment, scaling, and security of these systems. Assuming the applications are all written in the same language and developed on the same operating system, this might not be too difficult. But what if you need to scale to a few hundred or thousand deployments and move them between local servers and your favorite cloud provider? This is a very simple example, but here are some questions to help illustrate the challenges that come with scaling enterprise applications:

  • Do you know which hosts are overutilized?
  • Can you easily implement rollback and updates to all of your applications, regardless of where they are located?
  • Are your applications load-balanced across multiple servers?
  • Can you modify the deployments easily through a UI as well as a CLI?
  • Are your company’s security standards enforced across the infrastructure and applications? 

Automation, the routine execution of a repeatable task without human intervention, can make many of these tasks more efficient. Container orchestration systematically executes the workflows that control many independent automated processes.

Limitations of containers without orchestration capabilities

Docker is one example of a containerization platform that packages your application and all its dependencies together. Containers are portable and flexible, and they empower developers to create better applications. Let's take a look at a few of the most common Docker CLI commands:

  • docker pull - pull an image or a repository from a local registry, a private registry, or Docker Hub
  • docker create - create a container from an image
  • docker start - start one or more stopped containers
  • docker stop - stop one or more running containers

These commands are adequate for managing a small number of containers on a few hosts, but they fall short of automating the full lifecycle of complex deployments on multiple hosts.  

What if you could simply “declare” what you want to accomplish rather than having to code all of the intermediate steps? By using a container orchestration platform, you gain these benefits:

  • Easy scaling of applications and infrastructure
  • Service discovery and container networking
  • Improved governance and security controls
  • Container health monitoring
  • Even load balancing of containers across hosts
  • Optimal resource allocation
  • Container lifecycle management
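As one concrete illustration of the scaling benefit, here is a minimal sketch of a Kubernetes HorizontalPodAutoscaler; the deployment name web and the 70% CPU target are hypothetical placeholders, not values from any specific deployment:

```yaml
# Sketch: scale a deployment between 2 and 20 replicas automatically,
# based on observed CPU utilization, with no human intervention.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                  # placeholder name of the deployment to scale
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```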

How does container orchestration work?

Container orchestration tools are declarative by nature. You simply state the desired outcome, and the platform will ensure that state is fulfilled. So why is this important for containers? Let’s start with a few simple definitions and then an example to make the benefits easier to grasp.

Declarative vs imperative infrastructure and programming

Imperative programming, in its simplest form, describes “how” an object's state should change and the exact order in which those changes should be executed. Declarative programming works a little differently. In this case, we are only concerned with the output, or “what” we want to accomplish: the desired end state. With declarative programming, you define the output of the program without worrying about the steps needed to make it happen. Simply put, the details of “how” to make something happen are completely abstracted away.

Let’s use an example to illustrate: driving to the movies. How would you get to the movies using an imperative model? You would need step-by-step instructions, in logical order, for getting to the car, starting the engine, navigating to the theater, and parking. Now consider the same trip using a declarative model: you might simply order an Uber and specify your final destination.

Container orchestration tools ensure that the deployment always matches the declared state. If you want 10 instances of an application exposed to the internet, the container orchestration platform will manage all of the “how” needed to reach that declared end state.
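On Kubernetes, for example, that declared state might look like the following sketch; the names and image path are hypothetical placeholders. The platform continuously works to keep ten replicas running and reachable behind a single address:

```yaml
# Declare the "what": ten copies of a web app, exposed to the internet.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 10                 # the declared state: ten running instances
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0   # placeholder image reference
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer           # expose all replicas behind one address
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```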

Architecture of orchestration platforms

Each container orchestration platform is implemented uniquely. There are plenty of comparison guides, and we will discuss some of these differences in a bit, but let's start with what they have in common. This research article, Container-based cluster orchestration systems: A taxonomy and future directions, gives a great breakdown of the common components.  

  • Job - an application composed of interdependent and heterogeneous tasks defined by a user.
  • Cluster Manager - the core of the orchestration platform and responsible for resource monitoring, accounting, task scheduling, administration control, and task relocator decisions.  
  • Compute Cluster - all tasks are scheduled on a set of worker nodes, each running a worker node agent that signals container information back to the cluster manager.  
  • Infrastructure - the networking and resources upon which the orchestration platform is deployed. Because containers are so flexible and portable, this can be on-premises, private cloud, or public cloud.

Orchestration tools rely on widely supported formats such as YAML and JSON for declarative definitions, which can be source-controlled for change management purposes. These configuration files describe where to find the container image, what hardware resources should be reserved, and how to establish networking.
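A minimal Kubernetes manifest, for instance, covers all three of those concerns in one file; the image path, port, and resource figures below are hypothetical:

```yaml
# Sketch of the three concerns a declarative config typically describes.
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: registry.example.com/api:2.3   # where to find the container image
      resources:
        requests:                 # hardware resources to reserve on the host
          cpu: "500m"
          memory: 256Mi
        limits:                   # hard ceiling the container may not exceed
          cpu: "1"
          memory: 512Mi
      ports:
        - containerPort: 9000     # how networking is established
```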

When you deploy a new container using a container orchestration tool, the platform schedules the container on the best available host that satisfies any predefined constraints. If resources on that host become limited, the container will be rescheduled automatically on a new host.
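In Kubernetes, for example, such a constraint can be expressed as a node selector; the disktype: ssd label below is a hypothetical example of a label an administrator might apply to nodes:

```yaml
# Sketch: a predefined placement constraint. The scheduler will only
# place this pod on nodes carrying the matching label; if that node's
# resources run short, the orchestrator reschedules the pod elsewhere.
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  nodeSelector:
    disktype: ssd              # hypothetical node label; only SSD nodes qualify
  containers:
    - name: worker
      image: example.com/worker:1.0   # placeholder image reference
      resources:
        requests:
          cpu: "2"             # reserve two CPU cores on the chosen host
```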

What container orchestration tools are available?

Container orchestration is supported by a variety of platforms, including Kubernetes (often styled as K8s or k8s), Docker Swarm, Apache Mesos, and Amazon Elastic Container Service (ECS). Kubernetes is among the most popular. In the last five years, several fully managed services have been built on the Kubernetes container orchestration platform, including Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS), and Google Kubernetes Engine (GKE).

How do you choose the right container orchestration tool?

With so many options available, what is the right tool for you? To understand what platform will help you support your workloads at scale, here are a few questions to consider: 

  • What is your cloud strategy?
  • How might container tools impact your staffing and talent strategy?
  • What is your team’s level of expertise with containers?
  • Are your organization’s cloud resources over-provisioned?
  • What are your application development speed and scaling requirements?

For a deeper dive into these questions and how to answer them, read: 5 Questions to Ask When Evaluating Container Tools.

Enterprise container orchestration

A container orchestration platform should meet the needs of both developers and the enterprise. Developers need a simplified and consistent workflow, with an intuitive interface that puts deployed applications at their fingertips for quick and efficient management. The platform should make it easy to develop and deploy applications while leaving developers free to write code rather than manage infrastructure. Enterprises need compliance and regulatory control, plus the ability to enforce workflow standardization and reduce the technical complexity placed on developers.

At Capital One, we’ve built our own solution to meet these needs. Critical Stack is a simple, secure container orchestration platform built to balance what developers want with the needs of our organization. By combining improved governance and application security with easier orchestration and an intuitive UI, we’ve been able to work with containers safely and effectively.

Wrapping up: container orchestration in a nutshell

To summarize, container orchestration automates container lifecycle management in large, dynamic environments. It has made a significant impact on the velocity, agility, and efficiency with which developers can deploy applications to the cloud. This is especially true for enterprises, which have complex security and governance requirements that need to be implemented and enforced through simple workflow standards. With proper resource management and load balancing, container orchestration can be an extremely valuable approach to running containers at scale, resulting in improved productivity and scalability for many organizations.

I hope this post was helpful in understanding a little more about container orchestration and how it can be used in your journey to the cloud. Thank you for reading.  


John Conrad, Solutions Architect, Capital One Software

John Conrad is a Solutions Architect at Capital One. He has worked with containers and Kubernetes since 2016, when he helped deliver a K8s workshop for 300 partners. Prior to joining Capital One, John was a Worldwide Technical Sales Leader for IBM Collaboration Solutions, where he had the privilege of traveling to 20+ countries. John has 25 years of experience in software and technical sales and holds a computer science degree from the University of Kentucky. You can connect with him on LinkedIn (https://www.linkedin.com/in/johnmartinconrad/).
