In an earlier blog post, I discussed the transformation occurring in the virtualization industry. That article described the increasing adoption of open-source hypervisor platforms and the remarkable rate at which the virtualization market continues to grow. Along with that growth and increased adoption come evolution and new ideas, and one idea that has sprouted out of virtualization is the development of containers.
Unlike VMs, which stand alone and do not interact with each other, containers share resources with other containers on the host system, including the operating system kernel and libraries. Because they sit on top of a physical server or a VM and its host OS, containers are much easier to manage than VMs. Each VM has its own OS that must be maintained (bug fixes, upgrades, patches); with containers, you only have to manage the OS of the single host server instead of each and every container. Containers are also more lightweight and boot far faster than VMs. This makes it possible to pack a much greater number of applications onto one server: hundreds or even thousands of containers can run on a single machine, far more than the same hardware could host as VMs. Another benefit of containers is portability. Containers can move between cloud environments or even back into an on-premises datacenter.
Now maybe you’re thinking that container technology is the “new shiny toy” for developers: “they really aren’t being used in production environments,” or “not enough people are using them for us to trust them.” These assumptions grow more incorrect by the day. In fact, the market for application container technologies is projected to exceed $2.1 billion by the end of 2019 and to more than double, to $4.3 billion, by 2022. In a survey conducted by Portworx and Aqua Security, a whopping 87 percent of respondents said that they now run container technologies, up from 55 percent in 2017. But some of the most obvious signs that containers have entered the mainstream are the following:
Nine out of ten companies that use container technology are doing so in production.
In 2018, only 17 percent of IT operations teams were driving container adoption; by 2019 that number had risen to 35 percent.
Long story short, containers are not just a fad being used by developers in test environments. IT teams as a whole are adopting container usage and ownership, dedicating large budgets to container-driven projects, and are using container technology in production environments.
As more and more IT teams use container technology in production, especially in Kubernetes and OpenShift environments, it is becoming increasingly important to ensure that this production data is protected by some sort of backup solution. That is where vProtect comes in.
Developed by Storware Inc., vProtect provides agentless, crash-consistent backup of deployments running in Kubernetes and OpenShift environments whose data is stored on persistent volumes. It can store these backups in a wide range of backup destinations, including mounted file systems, enterprise-level backup solutions (IBM Spectrum Protect, DellEMC NetWorker, Veritas NetBackup), cloud storage (Amazon S3, Google Cloud, Microsoft Azure), and many others.
During a backup job, vProtect automatically pauses the running deployments to ensure the backups are crash-consistent. It collects the information stored on persistent volumes, along with the configuration metadata, and exports both to the backup destination. During a restore, vProtect recreates the application from the backed-up metadata stored in the backup destination.
As previously mentioned, vProtect collects the data stored on persistent volumes during a backup job. A persistent volume is one of two storage abstractions available in Kubernetes; the other is referred to simply as a volume. A Kubernetes volume exists only as long as the pod that contains it: once the pod is deleted, the associated volume is deleted with it. A persistent volume, in contrast, lives outside the pod lifecycle. It remains available even after the pod is deleted, can be claimed by another pod if required, and the data living on it is retained. In short, if the data stored in a pod is temporary and does not need to outlive the pod, a Kubernetes volume may be fine, but if the data must be retained even after the pod has been deleted, a persistent volume is needed.
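The difference is easy to see in the manifests themselves. Below is a minimal illustrative sketch (the names `scratch-pod`, `demo-pvc`, and `durable-pod` are hypothetical placeholders, and `nginx` is just a stand-in image) contrasting an ephemeral `emptyDir` volume with a `persistentVolumeClaim`:

```yaml
# Ephemeral: an emptyDir volume is created with the pod
# and destroyed when the pod is deleted.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod          # hypothetical name
spec:
  containers:
    - name: app
      image: nginx           # stand-in image
      volumeMounts:
        - name: scratch
          mountPath: /tmp/cache
  volumes:
    - name: scratch
      emptyDir: {}           # dies with the pod
---
# Persistent: a PersistentVolumeClaim outlives the pod;
# the data it holds survives pod deletion and can be re-claimed.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc             # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: durable-pod          # hypothetical name
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-pvc  # data here is retained after the pod is gone
```

Deleting `scratch-pod` discards everything under `/tmp/cache`, while deleting `durable-pod` leaves `demo-pvc` and its data intact for the next pod that claims it.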
That is why vProtect performs data protection of Kubernetes containers by backing up data stored on those persistent volumes. This gives the end user the ability to restore that data, and recreate applications, even if the original pod no longer exists.
Given the exponential growth of container technology and the adoption of containers in production environments, do not be surprised if you hear your developers or your IT operations team considering this option. Many organizations, including large enterprises, are transitioning to container-based deployments and reaping the benefits of improved efficiency, even in production. The important next step, especially for production deployments, is making sure that the data stored in those containers is protected. Let vProtect do this for you.
We also have a pre-recorded demo available on our website if you would like to see the product in action on your own time.