
Earning a Seat at the Table: Why Containers Matter


Container technologies like Docker matter to the enterprise for three key reasons:

1. Density
2. Portability
3. DevOps

Density is about extracting as much value from your infrastructure as possible. Private clouds deployed using traditional VMs are memory-bound, which is why most private clouds still run at single- or low-double-digit CPU utilization. I was able to run my private cloud at roughly 2 VMs per core, where each VM hosted an application server instance. Using containers, I was able to get roughly 10 containers per core, where each container hosted an application server instance with an identical configuration.
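To make the density point concrete, here's a minimal sketch using the Docker SDK for Python that launches several app server containers on a single host, each capped at a modest memory and CPU share. The image name, limits, and container count are illustrative assumptions, not a sizing recommendation.

```python
import docker

# Connect to the local Docker daemon (assumes Docker is installed and running).
client = docker.from_env()

# Hypothetical app server image and sizing; real values depend on your workload.
IMAGE = "my-app-server:latest"
CONTAINERS_PER_HOST = 10

containers = []
for i in range(CONTAINERS_PER_HOST):
    # Each container gets a small memory cap and a fraction of a CPU,
    # which is what lets many instances share a single core.
    c = client.containers.run(
        IMAGE,
        name=f"appserver-{i}",
        detach=True,
        mem_limit="256m",
        nano_cpus=100_000_000,  # roughly 0.1 CPU
    )
    containers.append(c)

print(f"Started {len(containers)} app server containers on one host")
```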


Similar to virtual machines, containers are inherently portable – they abstract the underlying hardware from the app, enabling the app to run in different environments without change. The major difference, however, is the footprint of the “image” being passed around. Since a VM image includes an entire copy of the OS, as well as other overhead, you’re passing around multi-GB images. This is merely annoying during the dev/test cycle, but is absolutely crippling as you move into hybrid cloud scenarios. Container images are a fraction of that footprint, making them far easier to pass around.
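As a rough illustration of that footprint difference, the sketch below lists local image sizes through the Docker SDK for Python; typical application container images weigh in at tens to hundreds of MB, versus the multi-GB disk images a VM drags along. Whatever images happen to be on your host will vary.

```python
import docker

client = docker.from_env()

# Print the size of each local image in MB; a containerized app is
# typically a small fraction of an equivalent VM disk image.
for image in client.images.list():
    tags = image.tags or ["<untagged>"]
    size_mb = image.attrs["Size"] / (1024 * 1024)
    print(f"{tags[0]:40s} {size_mb:8.1f} MB")
```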

Containers serve as an interesting foundation for DevOps. Like virtual machines, once you’ve successfully containerized your application, you’ve essentially created a snapshot that maintains the integrity of the contents. The simple rule-of-thumb is that developers own what’s running inside the container, while operations owns everything outside the container. This clean separation creates a sustainable contract between dev and ops, and helps break down organizational barriers in a practical way. While a similar philosophy can be applied to virtual machines, the portability of containers makes this feasible in a cloud world.
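One way to see that contract in practice: the developer owns the image contents, while operations owns the runtime parameters wrapped around it. The sketch below, again using the Docker SDK for Python with hypothetical image and service names, shows operations launching a developer-built image with its own port mapping, restart policy, and deployment context.

```python
import docker

client = docker.from_env()

# Developers own what is inside this image: the app, its dependencies,
# and its internal configuration. (Hypothetical registry path.)
DEV_OWNED_IMAGE = "registry.example.com/payments-service:1.4.2"

# Operations owns everything outside the container: where it runs,
# how it is exposed, how it restarts, and what environment it runs in.
container = client.containers.run(
    DEV_OWNED_IMAGE,
    detach=True,
    name="payments-service",
    ports={"8080/tcp": 80},  # external exposure
    restart_policy={"Name": "on-failure", "MaximumRetryCount": 3},
    environment={"ENVIRONMENT": "production"},  # deployment context
)
print(container.short_id)
```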


Given that smaller footprint, start times for containers are significantly faster than start times for virtual machines. Containers can therefore realistically be immutable – it’s tolerable to destroy a container and create a new, clean instance, versus the traditional model, where operations will do everything possible to keep a virtual machine running.

Immutable infrastructure changes the way we do maintenance, logging, high availability, and other practices that are deeply ingrained in our run books, so the ecosystem of partners and tools will be critical to the success of containers and the data center “operating systems” used to manage them. Those emerging data center “operating systems” leverage immutable containers as a core construct. Only a few years ago, configuration management technologies like Chef, Puppet, and Ansible were considered bleeding edge, with enterprises committing significant resources to build next-generation automated infrastructure. Data center “operating systems” built to manage containers relegate configuration management technologies to build-time tools; the landscape for enterprise computing is changing faster than ever.
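Here is a minimal sketch of that “destroy and recreate” pattern with the Docker SDK for Python (container and image names are assumptions): instead of patching a running instance, pull the new image, tear down the old container, and start a fresh one.

```python
import docker

client = docker.from_env()

IMAGE = "my-app-server:2.0"   # hypothetical new release
NAME = "appserver-1"

# Immutable pattern: never patch in place. Pull the new image...
client.images.pull(IMAGE)

# ...remove the old container entirely...
try:
    old = client.containers.get(NAME)
    old.stop()
    old.remove()
except docker.errors.NotFound:
    pass  # first deployment, nothing to replace

# ...and start a clean instance from the new image.
client.containers.run(IMAGE, name=NAME, detach=True)
```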

In the Analytics and IoT worlds, containers can play a really interesting role:

First, analytics providers are no longer simple tools; they are platforms with an application ecosystem. The challenge with application ecosystems is isolating the app from the core runtime for security and life-cycle management purposes. Containers provide a nice isolation construct, with clear security lines and manageability services.
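For example, a platform could launch each ecosystem app in its own locked-down container: no network access, dropped Linux capabilities, a read-only filesystem, and a memory cap. A sketch with the Docker SDK for Python (the app image name is hypothetical):

```python
import docker

client = docker.from_env()

# Run a third-party app from the ecosystem in an isolated sandbox,
# separate from the core analytics runtime.
app = client.containers.run(
    "ecosystem/app-example:1.0",   # hypothetical app image
    detach=True,
    network_mode="none",           # no network access unless explicitly granted
    cap_drop=["ALL"],              # drop all Linux capabilities
    read_only=True,                # immutable root filesystem
    mem_limit="128m",              # bounded resource usage
)
print(app.status)
```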

Second, in the big data world, distributing work across a collection of data nodes, as YARN or Torque/Condor does, is critical for scale and performance. In this world, the business logic is moved to the data, rather than the data being moved to the business logic. You can encapsulate more complex business logic within a container, and leverage the container’s portability to distribute work across a data cluster.
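To illustrate moving the business logic to the data, the sketch below pushes the same containerized job to several data nodes by talking to each node’s Docker daemon directly. In practice a scheduler such as YARN or a cluster manager would make the placement decision; the hostnames, image, and volume paths here are assumptions.

```python
import docker

# Hypothetical data nodes, each running a Docker daemon reachable over TCP.
DATA_NODES = [
    "tcp://datanode-1:2375",
    "tcp://datanode-2:2375",
    "tcp://datanode-3:2375",
]

# The business logic travels as a container image; the data stays put.
JOB_IMAGE = "analytics/aggregate-job:0.3"   # hypothetical

for node_url in DATA_NODES:
    node = docker.DockerClient(base_url=node_url)
    node.containers.run(
        JOB_IMAGE,
        detach=True,
        # Mount the node-local data shard read-only into the job container.
        volumes={"/data/shard": {"bind": "/input", "mode": "ro"}},
    )
    print(f"dispatched job to {node_url}")
```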

Third is specific to data collection in IoT: as IoT data collectors become deeply embedded in industrial machinery, embedded containers become a critical construct that ensures sensors operate with security, isolation, and portability. Rather than having to write custom data collectors for each firmware OS, embedded containers provide a common execution layer on which a more standard sensor codebase can run.

The mainframe implemented the idea of “keyed memory” decades ago, and it is a concept that will be critical in the containerized IoT world. Keyed memory is the idea of encrypting the memory segments of a process with encryption keys that are enforced by the underlying hardware, making it (nearly) impossible for process A to attach itself to process B and access its RAM. Within the IoT world, hardware manufacturers are looking at applying keyed memory concepts to containers, providing hardware-enforced isolation of containers.

Thanks,
Snehal Antani
CTO, Splunk Inc.

Related Reads:
Earning a Seat at the Table: Introduction
Earning a Seat at the Table: Responsibly Move at Market Speed
Earning a Seat at the Table: Hybrid Cloud with Continuous Delivery & Insights (Part 1)
Earning a Seat at the Table: Hybrid Cloud with Continuous Delivery & Insights (Part 2)

