Containers vs. serverless compute: 3 key considerations

Choosing the right option on the cloud compute spectrum.

Updated January 12, 2024

Have you ever been overwhelmed by the number of cloud compute options available to support containerized applications? Do you run your containers yourself or on a cloud managed service? Or have you foregone containers entirely in favor of serverless functions? If these questions sound familiar, and you’ve already decided that containerized or serverless architectures are the way to go, then this blog is for you!

The spectrum of cloud compute offerings is broad. As shown in the figure below, it runs mainly along two opposing vectors: your control and flexibility versus the cloud service’s standardization and limits. Examples from AWS, GCP and Microsoft Azure illustrate the variety of options. In short, the more you need to control, the further toward the self-managed side of the spectrum you land, and the more operations and maintenance burden you take on.

[Figure: The cloud compute spectrum, outlining control and flexibility versus the standardization and limits of each compute option]

Before making a final decision on which compute option to use, there are additional criteria to consider.

#1 Technical requirements for cloud compute services

The first additional criterion is technical requirements. These technical requirements stem from the cloud compute service’s level of standardization, your multi-cloud approach, business needs for concurrency, and your application architecture.

Cloud service standardization

As alluded to above, the standardization of a cloud service with respect to supported runtimes, available hardware and operating systems increases towards the right side of the spectrum. These standards become requirements for your application. If your application cannot comply with the supported standards, and/or keep up with the provider’s end-of-life notices, the left-hand side of the spectrum becomes more feasible.

Portability / cloud switching

Another set of technical requirements stems from your cloud provider approach and whether it requires operating across multiple cloud providers. Serverless functions are code-based, while the other compute options on the spectrum are container-based. While both code and containers are portable across multiple cloud providers, relying on the underlying cloud services may incur a cloud switching cost that differs across the spectrum. To reduce cloud switching costs, cloud provider agnostic frameworks for serverless functions are emerging, and multiple cloud providers support Kubernetes orchestration.
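To make the portability point concrete, here is a minimal sketch of a Kubernetes Deployment manifest. The image name and labels are hypothetical; the point is that the same manifest can be applied largely unchanged to any managed Kubernetes service (e.g., AWS EKS, Google GKE, or Azure AKS), which is what keeps container workloads portable across providers.

```yaml
# Minimal sketch with a hypothetical image name; the manifest itself
# is provider-agnostic and runs on any conformant Kubernetes cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: registry.example.com/example-app:1.0  # hypothetical image
          ports:
            - containerPort: 8080
```

The switching cost shows up outside this file: anything the workload consumes beyond the cluster (managed databases, queues, identity services) is where provider coupling creeps back in.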

Optimizing concurrency

Concurrency, the ability to handle multiple requests at a time, is another important technical requirement to consider. While the ephemeral nature of serverless functions allows for a pay-as-you-use model, each function instance handles one request at a time and can suffer from cold start delays. Serverless containers, managed containers, and self-managed containers all support more persistence, which better supports concurrency. Persistence also decreases the number of cold starts and optimizes resource consumption (e.g., while the application waits for a network response, CPU and memory can be used to process another request).
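The resource-consumption point above can be sketched in a few lines of Python. In this illustrative example (not tied to any specific cloud API), a single long-running container process overlaps the network waits of ten requests using asyncio; a serverless function instance, handling one request at a time, would instead fan those ten requests out across up to ten execution environments.

```python
import asyncio
import time

async def handle_request(request_id: int) -> str:
    # Simulate waiting on a downstream network call; while this
    # coroutine waits, the same process can serve other requests.
    await asyncio.sleep(0.1)
    return f"response-{request_id}"

async def main() -> None:
    start = time.perf_counter()
    # One persistent process overlaps the waits of 10 concurrent
    # requests, so total wall time is close to a single 0.1s wait,
    # not 10 * 0.1s.
    results = await asyncio.gather(*(handle_request(i) for i in range(10)))
    elapsed = time.perf_counter() - start
    print(f"{len(results)} requests in {elapsed:.2f}s")

asyncio.run(main())
```

This is the efficiency a persistent container buys you: idle wait time on one request becomes useful CPU time for another.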

Application architecture

Depending on the nature of your application, you may need to re-architect it to take full advantage of the compute offering. For instance, using serverless functions such as AWS Lambda in a multi-tier architecture requires focusing on decoupling and distributing business logic, potentially in conjunction with other services such as AWS Step Functions for workflow orchestration and AWS API Gateway for request handling.
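One common decoupling pattern is to keep business logic in plain functions and make the serverless entry point a thin adapter. The sketch below assumes a hypothetical order-total domain; only the `lambda_handler(event, context)` signature comes from AWS Lambda's Python model, everything else is illustrative.

```python
import json

def calculate_order_total(items: list) -> float:
    # Pure business logic, decoupled from any cloud trigger.
    # (Hypothetical example domain.)
    return round(sum(i["price"] * i["qty"] for i in items), 2)

def lambda_handler(event: dict, context: object) -> dict:
    # Thin adapter: parses the (API Gateway-style) event and delegates
    # to the logic above. The same calculate_order_total could be
    # re-wired behind a Step Functions state or a container endpoint
    # without change.
    items = json.loads(event.get("body", "[]"))
    total = calculate_order_total(items)
    return {"statusCode": 200, "body": json.dumps({"total": total})}
```

Keeping the handler this thin is what makes it practical to move the same logic elsewhere on the compute spectrum later.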

#2 Cloud services and associated cost

The second additional criterion is cost. In general, cloud services bill per resource on the left-hand side of the spectrum and per use on the right-hand side. However, simply comparing these two cost models by the number of requests is insufficient. For a better cost perspective, consider the total cost of ownership, including the cost of API requests, storage, and networking. For instance, serverless functions in an event-based architecture tend to use API calls and network ingress/egress heavily.

Another cost factor is related to container density and the cost of supporting services. For example, AWS Lambda natively uses AWS CloudWatch for logging, which incurs costs. Each Lambda invocation runs as a distinct instantiation, so resource utilization per unit of underlying compute may not be as efficient as managing container density and compute capacity through containerized compute. In containerized compute approaches, the cost of supporting services such as logging may be amortized over many containerized workloads sharing the same compute.

#3 White box monitoring

The third additional criterion is white box monitoring, which requires access to system internals, including logs, and is essential for debugging issues. Consider how you will enable white box monitoring. If your approach relies on monitoring agents or sidecars, it implies a containerized compute option. If, however, your monitoring tooling can support serverless functions through instrumentation, distributed tracing and/or retrieving logs from the cloud-native logging service, white box monitoring can still be achieved.
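On serverless compute, where you cannot attach an agent or sidecar, instrumentation typically means emitting structured, correlated log records and letting the cloud-native logging service (e.g., CloudWatch on AWS) collect them. The sketch below uses only the Python standard library; the handler and field names are illustrative, not a specific vendor API.

```python
import json
import logging
import time
import uuid

logger = logging.getLogger("app")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def instrumented_handler(event: dict) -> dict:
    # Propagate a trace id if the caller supplied one, so log lines
    # can be correlated across distributed services.
    trace_id = event.get("trace_id") or str(uuid.uuid4())
    start = time.perf_counter()
    result = {"status": "ok"}  # ...business logic would go here...
    # Emit one machine-parseable JSON record per invocation; the
    # platform's logging service handles collection and retention.
    logger.info(json.dumps({
        "trace_id": trace_id,
        "duration_ms": round((time.perf_counter() - start) * 1000, 3),
        "status": result["status"],
    }))
    return result
```

Because the records are structured JSON, they can be queried and aggregated downstream, which recovers much of the internal visibility an agent would otherwise provide.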

Balancing control and agility: Navigating the compute spectrum for your application

The following figure summarizes the compute spectrum when all of these additional criteria are considered:

[Figure: The compute spectrum annotated with the additional criteria, including container density and the additional API, storage and network costs of containers]

Each approach has its benefits and trade-offs. Moving toward the left-hand side of the compute spectrum allows for greater control and flexibility, but it comes with an increased operational and maintenance burden because you manage more aspects of the infrastructure. Moving toward the right-hand side allows for greater agility and reduces the need to manage many infrastructure aspects such as scalability, but it imposes the constraints of the provider’s standards and limits.

Considering your technical requirements, costs, and monitoring needs will further inform the decision making process as you pinpoint the right choice for your application.


Tanu McCabe, Architect, VP & Executive Distinguished Engineer

Tanu McCabe’s job as a solution architect allows her to provide leadership and guidance that leverages the latest technological developments. As part of her job, Tanu positions the company on the best solution designs, projects, and company-wide initiatives.

Capital One uses serverless at scale

See how we’re building and running serverless applications at a massive scale.