TL;DR: Containers are lightweight virtualization environments for developing solutions using a Microservices Architecture.
There was a time (not too long ago) when IT Admins would carry keychains with numerous fobs. Each fob would open some secret door, obscured behind empty cartons in some offshoot of the main hallway. These doors bore signs reading "No Food Allowed" (in polite workplaces) or more direct instructions like "Keep Out" and "Authorized Personnel Only."
These were the dreaded Server Rooms or Data Centers which housed all the hardware and networking wizardry essential for running most of the company’s business-critical information systems (if not all) – from ERP systems to databases and web servers.
Running these data centers had its pros and cons.

Pros:
- IT admins had precise control over networking, operating systems, scaling options, performance tuning, software updates, etc.
- No risk of cross-contamination of data (as in the case of external data centers).
- Well-defined policies for handling a breach, including early detection, etc.

Cons:
- Increased headcount of highly skilled personnel to maintain the data center.
- Operational expenses like cooling, flood prevention, fire safety, disaster recovery (using backups), physical security, etc.
- Scalability was hard to achieve. During peak usage, if all systems were running at capacity, any further increase in traffic crashed the systems, resulting in lost revenue and a damaged reputation.
- Underutilization of resources. Companies with deeper pockets purchased a lot of processing power in anticipation of high demand, but when demand subsided, all that hardware and networking equipment simply sat idle.
Cloud Adoption
With the meteoric rise of cloud computing, more and more companies have moved their workloads to public clouds (Azure, AWS, Google Cloud Platform, etc.) or adopted a hybrid-cloud or even a multi-cloud setup. Most cloud resources are billed based on usage, and there is seldom a need for any upfront spending. Thus, the spending model has shifted from CAPEX to OPEX.
The cloud offers various service models:
- Infrastructure-as-a-Service, or IaaS. Examples: Virtual Machines, private servers, etc.
- Platform-as-a-Service, or PaaS. Examples: Databricks, Azure Virtual Desktop, etc.
- Software-as-a-Service, or SaaS. Examples: Office 365, Salesforce, QuickBooks, etc.
Virtual Machines: The Good, The Bad
The easiest way for a company to start its cloud journey was to follow the lift-and-shift pattern: spin up a few Virtual Machines (VMs) with the OS of their choice and provision them identically (maybe even using the same image) to their on-premises servers. Once the VMs were secured, they behaved very similarly to their on-premises counterparts, with improved scalability and availability.
Even though using VMs in the cloud still counts as a giant step forward, there are areas where VMs are not the best tool for the job. VMs are virtualized hardware: every time a VM is provisioned, there is the overhead of booting a full operating system. Put simply, it can take a few minutes to spin up a new VM. That may not seem so bad, but when you need to scale rapidly (as in a Microservices architecture), the provisioning overhead of VMs is not acceptable.
Another disadvantage is that VMs are considered IaaS. In the Shared Responsibility Model, the user is responsible for the maintenance of the VM, including applying OS updates and installing all the software required to run a particular application. For example, if you have an application which requires .NET Core 3.0, it is your responsibility to install and configure .NET Core 3.0 correctly (including any dependencies it may have), or your application won’t work.
How do you solve these problems?
Containers to the Rescue
Containers are lightweight virtualization environments.
That’s a loaded statement and is best “consumed” using an example.
Imagine you could bundle your .NET Core 3.0 application's code, along with all of its dependencies, into an "image" which can be run on a host machine without installing anything. Everything you could possibly need is packed into this image. It is an abstraction, but that's the gist of it.
- A runnable instance of this image is called a Container.
- A container has its own isolated runtime environment, and its filesystem is provided by the image. The container is isolated from all other processes on the host machine and even from other containers.
- Unlike Virtual Machines, containers share the host's OS kernel, so there is no overhead of booting a full guest OS (Windows, Linux, etc.). Being lightweight, many containers can run on the same host machine.
- A container engine runs these containers. One of the most popular container engines is Docker.
- Public cloud services (e.g. Azure Container Instances) allow you to upload your container images, and the service will run them for you.
- Containers are very popular for building solutions with a Microservices Architecture.
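To make the idea of an "image" concrete, here is a minimal sketch of a Dockerfile for the .NET Core 3.0 example above. The project layout and the entry-point name `MyApp.dll` are hypothetical; the base images are Microsoft's official .NET Core 3.0 images.

```dockerfile
# Build stage: compile the app using the .NET Core 3.0 SDK image
FROM mcr.microsoft.com/dotnet/core/sdk:3.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: only the slim runtime image ships in the final container
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Running `docker build -t myapp .` produces an image that carries the .NET Core 3.0 runtime and all dependencies with it; nothing needs to be installed on the host beyond Docker itself.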
Containers can be orchestrated using technologies like Kubernetes (K8s).
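As a rough illustration of what orchestration looks like, the following is a minimal Kubernetes Deployment manifest. The names `myapp` and `myregistry.example.com/myapp:1.0` are placeholders, not real artifacts; the sketch simply asks Kubernetes to keep three identical containers running and to replace any that fail.

```yaml
# Minimal Kubernetes Deployment: run three replicas of a (hypothetical) image
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                # Kubernetes keeps exactly three containers running
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myregistry.example.com/myapp:1.0   # placeholder image
          ports:
            - containerPort: 80
```

Applying it with `kubectl apply -f deployment.yaml` hands the scheduling, restarting, and scaling of those containers over to the cluster.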
Sample Use Case of Containerization
Let’s say you have a web application using a 3-tier architecture. Each of the tiers could be run using a container (or multiple instances of them to manage traffic).
- One container for the front-end application (Angular, React, Vue, etc.)
- One container for the backend (Node.js, Python Flask, .NET Core etc.)
- One container for storage (e.g. a database)
By doing so, each tier can be maintained and scaled independently.
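The three-tier setup above can be sketched as a docker-compose file. The image names are illustrative placeholders (only `postgres:13` is a real public image), and the port and volume choices are assumptions for the sake of the example:

```yaml
# docker-compose sketch: one service (container) per tier
version: "3.8"
services:
  frontend:
    image: myapp-frontend:1.0   # e.g. nginx serving a built React/Angular/Vue app
    ports:
      - "80:80"
    depends_on:
      - backend
  backend:
    image: myapp-backend:1.0    # e.g. a Node.js, Flask, or .NET Core API
    environment:
      - DB_HOST=db
    depends_on:
      - db
  db:
    image: postgres:13          # the storage tier
    volumes:
      - dbdata:/var/lib/postgresql/data   # persist data across container restarts
volumes:
  dbdata:
```

`docker-compose up` starts all three tiers, and a single tier can be scaled independently, e.g. `docker-compose up --scale backend=3`.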
At Mantrax, we have a ton of experience working with virtualization and containerization technologies. If you have any questions, please feel free to contact us at email@example.com.
About the Author
Kalyan Chatterjee is a Principal Solution Architect at Mantrax Software Solutions. Kalyan is an experienced and passionate software engineer with a proven track record of over 15 years developing a wide range of applications for retail, financial services, supply chain, automotive, and start-up companies.