Even though OpenStack environments were meant to be stateless by design (so that VMs could be easily recovered from templates), in many cases they are used as a regular virtualization platform. Stateful VMs still need to be protected, and in this short article I'll give you several ideas on how to accomplish that.
The obvious way to protect data in your environments is classical agent-based backup. There are multiple solutions on the market, including Bacula, IBM Spectrum Protect, Dell EMC NetWorker (or Avamar) and many, many more. One of them is even an OpenStack project – Freezer. In general, agents periodically grab the files on each VM and store them in a central repository.
While agent-based data protection has many advantages, including very mature software, the biggest concern is administration overhead: you need to install and manage agents (manually, or automatically to some extent). This basically adds another service to worry about, which also consumes additional resources in every environment that you administer.
In general, when you protect a VM you're mostly concerned about the data inside it. In OpenStack environments, the Cinder service is responsible for block storage management, and it provides a native backup mechanism. The syntax basically looks like this:
openstack volume backup create [--incremental] [--force] VOLUME
and it allows you to create full and incremental backups of a particular volume. Cinder also keeps track of your backups, so that they can be easily restored.
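A minimal backup cycle could look like the sketch below – the volume name "db-data" and the backup names are purely illustrative, and the volume is assumed to already exist:

```shell
# Full backup of the volume (--force is needed when the volume is
# attached to a running VM, i.e. in the "in-use" state).
openstack volume backup create --force --name db-data-full db-data

# Incremental backup - stores only blocks changed since the most
# recent backup of this volume.
openstack volume backup create --force --incremental --name db-data-inc1 db-data

# Cinder tracks the backups it has created.
openstack volume backup list

# Restore a backup into a volume.
openstack volume backup restore db-data-full db-data-restored
```

Since these commands talk to a live cloud, treat this as an outline of the workflow rather than a copy-paste script.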
By default, the Swift object store is used as the backup repository, but you can also use an NFS export. Keep in mind that this is a rather stand-alone tool – it maintains its own repository and metadata, so it may be harder to directly influence how the backups are actually performed.
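As an illustration, switching the repository from Swift to an NFS export comes down to pointing the backup driver at the share in cinder.conf – the share path below is made up, and the exact driver path can vary slightly between OpenStack releases:

```ini
[DEFAULT]
# Use the NFS backup driver instead of the default Swift one.
backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver
# Illustrative NFS share holding the backup repository.
backup_share = backup-host:/srv/cinder-backups
# Optional mount flags passed to the NFS client.
backup_mount_options = vers=4
```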
KVM hosts and file-based volumes scenario
OpenStack environments are in most cases KVM-based, and VMs are often configured with QCOW2/RAW files as the storage backend. In this case we can treat the OpenStack environment as a pool of stand-alone KVM servers and perform backups exactly as for regular KVM with the libvirt toolkit. The advantage of this approach is flexibility, as we can deploy our solution outside of OpenStack and transfer data directly from the hosts.
We also need to collect metadata from OpenStack so that we can recreate a VM later using the appropriate APIs. Keep in mind that during restore you need to recreate volumes via Cinder, as OpenStack needs to be aware that they exist before running a VM.
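As a hedged sketch of the host-side flow – the server name "myvm", the libvirt domain "instance-00000042" and the disk paths are all illustrative, and a QCOW2 file-backed disk is assumed:

```shell
# 1. Collect the metadata needed for a later rebuild from the OpenStack API.
openstack server show -f json myvm > myvm-metadata.json

# 2. On the KVM host: take a disk-only external snapshot, so the base
#    image stops changing while the VM keeps running on the overlay.
virsh snapshot-create-as instance-00000042 backup-snap \
    --disk-only --atomic --no-metadata \
    --diskspec vda,snapshot=external,file=/var/lib/nova/instances/demo/overlay.qcow2

# 3. Copy the now-stable base image off the host.
cp /var/lib/nova/instances/demo/disk /backup/myvm-disk.qcow2

# 4. Merge the overlay back into the base image and switch the VM to it.
virsh blockcommit instance-00000042 vda --active --pivot
```

The snapshot/blockcommit pair is the standard libvirt pattern for backing up a running QCOW2-backed guest; everything environment-specific here (paths, names) is an assumption.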
Ceph RBD-based volumes scenario
Ceph RBD is a commonly used storage backend in OpenStack deployments, mostly because of its flexibility and scalability. Block storage is exposed via the RBD interface, and the good news is that you're able to access this data directly (for example over NBD). So you can install your solution outside of OpenStack and grab data directly from Ceph over the network (and metadata, as usual, from the OpenStack API). This scenario also allows incremental backups, as Ceph RBD exposes a CBT-like API, so you are able to read just the incremental changes.
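The incremental flow can be sketched with the rbd CLI. The pool and image names below are made up – Cinder's RBD driver typically names images volume-&lt;uuid&gt;, which is the only naming assumption here:

```shell
POOL=volumes        # illustrative pool name
IMG=volume-demo     # illustrative image name

# Full backup: freeze a point in time with a snapshot, export the whole image.
rbd snap create $POOL/$IMG@backup-1
rbd export $POOL/$IMG@backup-1 /backup/full.img

# Incremental backup: new snapshot, export only blocks changed since backup-1.
rbd snap create $POOL/$IMG@backup-2
rbd export-diff --from-snap backup-1 $POOL/$IMG@backup-2 /backup/inc-1.diff

# Restore: import the full image, recreate the base snapshot on the
# destination (import-diff checks it exists), then replay the delta.
rbd import /backup/full.img $POOL/$IMG-restored
rbd snap create $POOL/$IMG-restored@backup-1
rbd import-diff /backup/inc-1.diff $POOL/$IMG-restored
```

export-diff/import-diff is the CBT-like mechanism mentioned above: the diff file carries only the extents that changed between the two snapshots.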
Cinder disk attachment scenario
Any virtual environment, regardless of its API capabilities, can be protected from the storage backend side. In general, if you're able to match your physical volumes with their representation in the API, you only need to take a snapshot of the volume and expose it to your data mover. This is how big databases can be protected without stopping production.
A similar principle applies here, but instead of integrating with each storage vendor you can use Cinder, which was meant to be a storage abstraction layer. The disk attachment approach requires just a snapshot, plus attachment/detachment of the volumes while the VM is running.
The good news is that Cinder should handle all of the hardware-specific aspects, so you don't have to implement them on your own. The downside is that you need a proxy VM running in your OpenStack environment. And unfortunately, there is no incremental API available, so only full backups are possible.
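Sketched with the openstack CLI – the volume "prod-data" and the proxy VM "backup-proxy" are illustrative names, and the device path the proxy sees is environment-dependent:

```shell
# 1. Snapshot the volume while the workload keeps running
#    (--force is needed for an in-use volume).
openstack volume snapshot create --force --volume prod-data prod-data-snap

# 2. Materialize a temporary volume from the snapshot.
openstack volume create --snapshot prod-data-snap prod-data-tmp

# 3. Attach it to the backup proxy; Cinder/Nova handle the
#    storage-specific plumbing. The proxy then reads the new block
#    device (e.g. /dev/vdb) and streams it to the repository.
openstack server add volume backup-proxy prod-data-tmp

# 4. Detach and clean up.
openstack server remove volume backup-proxy prod-data-tmp
openstack volume delete prod-data-tmp
openstack volume snapshot delete prod-data-snap
```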
OpenStack environments tend to differ among implementations – different storage backends, different versions. In this short article I've summarized a few commonly used approaches. Do you use a different one? Let us know!