In this article by Karan Singh, the author of the book Ceph Cookbook, we will see in detail how storage space or capacity is assigned to physical or virtual servers. We'll also cover the various storage formats supported by Ceph.
In this article, we will cover the following recipes:
- Working with the RADOS Block Device
- Configuring the Ceph client
- Creating RADOS Block Device
- Mapping RADOS Block Device
- Ceph RBD Resizing
- Working with RBD snapshots
- Working with RBD clones
- A quick look at OpenStack
- Ceph – the best match for OpenStack
- Configuring OpenStack as Ceph clients
- Configuring Glance for the Ceph backend
- Configuring Cinder for the Ceph backend
- Configuring Nova to attach the Ceph RBD
- Configuring Nova to boot the instance from the Ceph RBD
Once you have installed and configured your Ceph storage cluster, the next task is performing storage provisioning. Storage provisioning is the process of assigning storage space or capacity to physical or virtual servers, in the form of block, file, or object storage. A typical computer system or server comes with a limited local storage capacity that might not be enough for your data storage needs. Storage solutions such as Ceph provide virtually unlimited storage capacity to these servers, making them capable of storing all your data and making sure that you do not run out of space. Using a dedicated storage system instead of local storage gives you the much needed flexibility in terms of scalability, reliability, and performance.
Ceph can provision storage capacity in a unified way, which includes block, filesystem, and object storage. The following diagram shows storage formats supported by Ceph, and depending on your use case, you can select one or more storage options:
We will discuss each of these options in detail in this article, and we will focus mainly on Ceph block storage.
Working with the RADOS Block Device
The RADOS Block Device (RBD), now known as the Ceph Block Device, provides reliable, distributed, and high-performance block storage disks to clients. A RADOS block device makes use of the librbd library and stores blocks of data striped over multiple OSDs in a Ceph cluster. RBD is backed by the RADOS layer of Ceph, so every block device is spread over multiple Ceph nodes, delivering high performance and excellent reliability. RBD has native support in the Linux kernel; the RBD driver has been well integrated with the mainline kernel for the past few years. In addition to reliability and performance, RBD also provides enterprise features such as full and incremental snapshots, thin provisioning, copy-on-write cloning, dynamic resizing, and so on. RBD also supports in-memory caching, which drastically improves its performance.
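The in-memory cache mentioned here is provided by librbd and is tuned on the client side through ceph.conf. The following is a minimal sketch with illustrative values; the option names are the standard librbd cache settings, but verify them against the documentation for your Ceph release before relying on them:
[client]
rbd cache = true
rbd cache size = 67108864                  # 64 MB per-client cache (example value)
rbd cache max dirty = 50331648             # start flushing at roughly 48 MB of dirty data (example value)
rbd cache writethrough until flush = true  # stay in writethrough mode until the guest issues its first flush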
The industry-leading open source hypervisors, such as KVM and Xen, provide full support for RBD and leverage its features for their guest virtual machines. Other proprietary hypervisors, such as VMware and Microsoft Hyper-V, are expected to be supported soon; there has been a lot of work going on in the community to add support for these hypervisors. The Ceph block device provides full support for cloud platforms such as OpenStack and CloudStack, among others, and has proven successful and feature-rich on these platforms. In OpenStack, you can use the Ceph block device with the Cinder (block) and Glance (image) components. Doing so, you can spin up thousands of virtual machines (VMs) in very little time, taking advantage of the copy-on-write feature of Ceph block storage.
All these features make RBD an ideal candidate for cloud platforms such as OpenStack and CloudStack. We will now learn how to create a Ceph block device and make use of it.
Configuring the Ceph client
Any regular Linux host (RHEL- or Debian-based) can act as a Ceph client. The client interacts with the Ceph storage cluster over the network to store or retrieve user data. Ceph RBD support has been included in the Linux mainline kernel since version 2.6.34.
How to do it
As we have done earlier, we will set up a Ceph client machine using vagrant and VirtualBox. We will use the Vagrantfile. Vagrant will then launch an Ubuntu 14.04 virtual machine that we will configure as a Ceph client:
- From the directory where we have cloned ceph-cookbook git repository, launch the client virtual machine using Vagrant:
$ vagrant status client-node1
$ vagrant up client-node1
- Log in to client-node1:
$ vagrant ssh client-node1
Note: The username and password that Vagrant uses to configure virtual machines are both vagrant, and the vagrant user has sudo rights. The default password for the root user is vagrant.
- Check OS and kernel release (this is optional):
$ lsb_release -a
$ uname -r
- Check for RBD support in the kernel:
$ sudo modprobe rbd
- Allow the ceph-node1 monitor machine to access client-node1 over ssh. To do this, copy root ssh keys from the ceph-node1 to client-node1 vagrant user. Execute the following commands from the ceph-node1 machine until otherwise specified:
## Login to ceph-node1 machine
$ vagrant ssh ceph-node1
$ sudo su -
# ssh-copy-id vagrant@client-node1
Provide a one-time vagrant user password, that is, vagrant, for client-node1. Once the ssh keys are copied from ceph-node1 to client-node1, you should be able to log in to client-node1 without a password.
- Use the ceph-deploy utility from ceph-node1 to install Ceph binaries on client-node1:
# cd /etc/ceph
# ceph-deploy --username vagrant install client-node1
- Copy the Ceph configuration file (ceph.conf) to client-node1:
# ceph-deploy --username vagrant config push client-node1
- The client machine will require Ceph keys to access the Ceph cluster. Ceph creates a default user, client.admin, which has full access to the Ceph cluster. It’s not recommended to share client.admin keys with client nodes. The better approach is to create a new Ceph user with separate keys and allow access to specific Ceph pools:
In our case, we will create a Ceph user, client.rbd, with access to the rbd pool. By default, Ceph block devices are created on the rbd pool:
# ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd'
- Add the key to the client-node1 machine for the client.rbd user:
# ceph auth get-or-create client.rbd | ssh vagrant@client-node1 sudo tee /etc/ceph/ceph.client.rbd.keyring
- By this step, client-node1 should be ready to act as a Ceph client. Check the cluster status from the client-node1 machine by providing the username and secret key:
$ vagrant ssh client-node1
$ sudo su -
# cat /etc/ceph/ceph.client.rbd.keyring >> /etc/ceph/keyring
### Since we are not using the default user client.admin, we need to supply the username that will connect to the Ceph cluster.
# ceph -s --name client.rbd
Creating RADOS Block Device
So far, we have configured the Ceph client; now we will demonstrate creating a Ceph block device from the client-node1 machine.
How to do it
- Create a RADOS Block Device named rbd1 of size 10240 MB:
# rbd create rbd1 --size 10240 --name client.rbd
- There are multiple options that you can use to list RBD images:
## The default pool to store block device images is 'rbd', you can also specify the pool name with the rbd command using the -p option:
# rbd ls --name client.rbd
# rbd ls -p rbd --name client.rbd
# rbd list --name client.rbd
- Check the details of the rbd image:
# rbd --image rbd1 info --name client.rbd
Mapping RADOS Block Device
Now that we have created a block device on the Ceph cluster, in order to use it, we need to map it to the client machine. To do this, execute the following commands from the client-node1 machine.
How to do it
- Map the block device to the client-node1:
# rbd map --image rbd1 --name client.rbd
- Check the mapped block device:
# rbd showmapped --name client.rbd
- To make use of this block device, we should create a filesystem on it and mount it:
# fdisk -l /dev/rbd1
# mkfs.xfs /dev/rbd1
# mkdir /mnt/ceph-disk1
# mount /dev/rbd1 /mnt/ceph-disk1
# df -h /mnt/ceph-disk1
- Test the block device by writing data to it:
# dd if=/dev/zero of=/mnt/ceph-disk1/file1 count=100 bs=1M
- To map the block device across reboots, you should add the init-rbdmap script to the system startup, add the Ceph user and keyring details to /etc/ceph/rbdmap, and finally, update the /etc/fstab file:
# wget https://raw.githubusercontent.com/ksingh7/ceph-cookbook/master/rbdmap -O /etc/init.d/rbdmap
# chmod +x /etc/init.d/rbdmap
# update-rc.d rbdmap defaults
## Make sure you use the correct keyring value in the /etc/ceph/rbdmap file, which is generally unique for an environment.
# echo "rbd/rbd1 id=rbd,keyring=AQCLEg5VeAbGARAAE4ULXC7M5Fwd3BGFDiHRTw==" >> /etc/ceph/rbdmap
# echo "/dev/rbd1 /mnt/ceph-disk1 xfs defaults,_netdev 0 0" >> /etc/fstab
# mkdir /mnt/ceph-disk1
# /etc/init.d/rbdmap start
Ceph RBD Resizing
Ceph supports thin-provisioned block devices, which means that the physical storage space is not occupied until you begin storing data on the block device. The Ceph RADOS block device is very flexible; you can increase or decrease the size of an RBD on the fly from the Ceph storage end. However, the underlying filesystem should support resizing. Advanced filesystems such as XFS, Btrfs, EXT, ZFS, and others support filesystem resizing to a certain extent. Please follow the filesystem-specific documentation to learn more about resizing.
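You can verify the thin-provisioned nature of an RBD image by comparing its provisioned size with the space it actually consumes. A quick way to do this (a sketch, assuming the rbd1 image created earlier) is to sum the extents reported by rbd diff:
# rbd diff rbd/rbd1 --name client.rbd | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB used" }'
The reported usage stays far below the provisioned size until you actually write data to the device.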
How to do it
To increase or decrease the Ceph RBD image size, use the --size <New_Size_in_MB> option with the rbd resize command; this will set the new size for the RBD image:
- The original size of the RBD image that we created earlier was 10 GB. We will now increase its size to 20 GB:
# rbd resize --image rbd1 --size 20480 --name client.rbd
# rbd info --image rbd1 --name client.rbd
- Grow the filesystem so that we can make use of the increased storage space. It's worth knowing that filesystem resizing is a feature of the OS as well as the device filesystem. You should read the filesystem documentation before resizing any partition. The XFS filesystem supports online resizing. Check the kernel messages to see the filesystem size change:
# dmesg | grep -i capacity
# xfs_growfs -d /mnt/ceph-disk1
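If the RBD carried an ext4 filesystem instead of XFS, the equivalent online grow (a hedged example, not part of this recipe) would be:
# resize2fs /dev/rbd1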
Working with RBD Snapshots
Ceph extends full support to snapshots, which are point-in-time, read-only copies of an RBD image. You can preserve the state of a Ceph RBD image by creating snapshots and restoring the snapshot to get the original data.
How to do it
Let’s see how a snapshot works with Ceph.
- To test the snapshot functionality of Ceph, let’s create a file on the block device that we created earlier:
# echo "Hello Ceph This is snapshot test" > /mnt/ ceph-disk1/snapshot_test_file
- Create a snapshot for the Ceph block device:
Syntax: rbd snap create <pool-name>/<image-name>@<snap-name>
# rbd snap create rbd/rbd1@snapshot1 --name client.rbd
- To list snapshots of an image, use the following:
Syntax: rbd snap ls <pool-name>/<image-name>
# rbd snap ls rbd/rbd1 --name client.rbd
- To test the snapshot restore functionality of Ceph RBD, let's delete files from the filesystem:
# rm -f /mnt/ceph-disk1/*
- We will now restore the Ceph RBD snapshot to get back the files that we deleted in the last step. Please note that a rollback operation will overwrite the current version of the RBD image and its data with the snapshot version. You should perform this operation carefully:
Syntax: rbd snap rollback <pool-name>/<image-name>@<snap-name>
# rbd snap rollback rbd/rbd1@snapshot1 --name client.rbd
- Once the snapshot rollback operation is completed, remount the Ceph RBD filesystem to refresh the filesystem state. You should be able to get your deleted files back:
# umount /mnt/ceph-disk1
# mount /dev/rbd1 /mnt/ceph-disk1
# ls -l /mnt/ceph-disk1
- When you no longer need snapshots, you can remove a specific snapshot using the following syntax. Deleting the snapshot will not delete your current data on the Ceph RBD image:
Syntax: rbd snap rm <pool-name>/<image-name>@<snap-name>
# rbd snap rm rbd/rbd1@snapshot1 --name client.rbd
If you have multiple snapshots of an RBD image, and you wish to delete all the snapshots with a single command, then use the purge subcommand:
Syntax: rbd snap purge <pool-name>/<image-name>
# rbd snap purge rbd/rbd1 --name client.rbd
Working with RBD Clones
Ceph supports a very nice feature for creating Copy-On-Write (COW) clones from RBD snapshots. This is also known as Snapshot Layering in Ceph. Layering allows clients to create multiple instant clones of Ceph RBD. This feature is extremely useful for cloud and virtualization platforms such as OpenStack, CloudStack, and Qemu/KVM. These platforms usually protect a Ceph RBD image containing an OS / VM image in the form of a snapshot. Later, this snapshot is cloned multiple times to spawn new virtual machines / instances. Snapshots are read-only, but COW clones are fully writable; this feature of Ceph provides a greater level of flexibility and is extremely useful for cloud platforms:
Every cloned image (child image) stores references to its parent snapshot to read image data. Hence, the parent snapshot should be protected before it can be used for cloning. When data is written to the COW cloned image, it stores the new data references to itself. COW cloned images are as good as regular RBD images: they are writable, resizable, and support snapshots and further cloning.
In Ceph RBD, images are of two types: format-1 and format-2. The RBD snapshot feature is available on both types, that is, in format-1 as well as in format-2 RBD images. However, the layering feature (the COW cloning feature) is available only for format-2 RBD images. The default RBD image format is format-1.
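If you create format-2 images frequently, you can avoid passing --image-format 2 every time by setting a client-side default in ceph.conf. This is a small sketch; the rbd default format option is honoured by reasonably recent rbd clients, but check the documentation for your release before depending on it:
[client]
rbd default format = 2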
How to do it
To demonstrate RBD cloning, we will intentionally create a format-2 RBD image, then create and protect its snapshot, and finally, create COW clones out of it:
- Create a format-2 RBD image and check its detail:
# rbd create rbd2 --size 10240 --image-format 2 --name client.rbd
# rbd info --image rbd2 --name client.rbd
- Create a snapshot of this RBD image:
# rbd snap create rbd/rbd2@snapshot_for_cloning --name client.rbd
- To create a COW clone, protect the snapshot. This is an important step; we should protect the snapshot because if the snapshot gets deleted, all the attached COW clones will be destroyed:
# rbd snap protect rbd/rbd2@snapshot_for_cloning --name client.rbd
- Next, we will create a cloned RBD image using this snapshot:
Syntax: rbd clone <pool-name>/<parent-image>@<snap-name> <pool-name>/<child-image-name>
# rbd clone rbd/rbd2@snapshot_for_cloning rbd/clone_rbd2 --name client.rbd
Creating a clone is a quick process. Once it's completed, check the new image's information. You will notice that its parent pool, image, and snapshot information are displayed:
# rbd info rbd/clone_rbd2 --name client.rbd
At this point, we have a cloned RBD image, which is dependent upon its parent image snapshot. To make the cloned RBD image independent of its parent, we need to flatten the image, which involves copying the data from the parent snapshot to the child image. The time it takes to complete the flattening process depends on the size of the data present in the parent snapshot. Once the flattening process is completed, there is no dependency between the cloned RBD image and its parent snapshot.
- To initiate the flattening process, use the following:
# rbd flatten rbd/clone_rbd2 --name client.rbd
# rbd info --image clone_rbd2 --name client.rbd
After the completion of the flattening process, if you check image information, you will notice that the parent image/snapshot name is not present and the clone is independent.
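Optionally, before unprotecting the parent snapshot, you can confirm that it no longer has any dependent clones. A quick check (assuming the image and snapshot names used above):
# rbd children rbd/rbd2@snapshot_for_cloning --name client.rbd
Once all clones have been flattened, this command should return no output.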
- You can also remove the parent image snapshot if you no longer require it. Before removing the snapshot, you first have to unprotect it:
# rbd snap unprotect rbd/rbd2@snapshot_for_cloning --name client.rbd
- Once the snapshot is unprotected, you can remove it:
# rbd snap rm rbd/rbd2@snapshot_for_cloning --name client.rbd
A quick look at OpenStack
OpenStack is an open source software platform for building and managing public and private cloud infrastructure. It is governed by an independent, non-profit foundation known as the OpenStack Foundation. It has the largest and most active community, which is backed by technology giants such as HP, Red Hat, Dell, Cisco, IBM, Rackspace, and many more. OpenStack's idea for the cloud is that it should be simple to implement and massively scalable.
OpenStack is considered the cloud operating system, where users are allowed to instantly deploy hundreds of virtual machines in an automated way. It also provides an efficient way of hassle-free management of these machines. OpenStack is known for its dynamic scale-up, scale-out, and distributed architecture capabilities, making your cloud environment robust and future-ready. OpenStack provides an enterprise-class Infrastructure-as-a-Service (IaaS) platform for all your cloud needs.
As shown in the preceding diagram, OpenStack is made up of several different software components that work together to provide cloud services. Out of all these components, in this article, we will focus on Cinder and Glance, which provide block storage and image services respectively. For more information on OpenStack components, please visit http://www.openstack.org/.
Ceph – the best match for OpenStack
Over the last few years, OpenStack has become amazingly popular, as it takes a software-defined approach across the board, whether it's compute, networking, or even storage. And when you talk about storage for OpenStack, Ceph gets all the attention. An OpenStack user survey, conducted in May 2015, showed Ceph dominating the block storage driver market with a whopping 44% production usage. Ceph provides the robust, reliable storage backend that OpenStack was looking for. Its seamless integration with OpenStack components such as Cinder, Glance, Nova, and Keystone provides an all-in-one cloud storage backend for OpenStack. Here are some key benefits that make Ceph the best match for OpenStack:
- Ceph provides an enterprise-grade, feature-rich storage backend at a very low cost per gigabyte, which helps to keep the OpenStack cloud deployment price down.
- Ceph is a unified storage solution for block, file, or object storage for OpenStack, allowing applications to use storage as they need.
- Ceph provides advanced block storage capabilities for OpenStack clouds, which include the easy and quick spawning of instances, as well as the backup and cloning of VMs.
- It provides default persistent volumes for OpenStack instances that can work like traditional servers, where data is not flushed on rebooting the VMs.
- Ceph supports OpenStack in being host-independent by supporting VM migrations and scaling up storage components without affecting VMs.
- It provides the snapshot feature to OpenStack volumes, which can also be used as a means of backup.
- Ceph's copy-on-write cloning feature allows OpenStack to spin up several instances at once, which helps the provisioning mechanism function faster.
- Ceph supports rich APIs for both Swift and S3 object storage interfaces.
The Ceph and OpenStack communities have been working closely for the last few years to make the integration more seamless, and to make use of new features as they land. In the future, we can expect OpenStack and Ceph to be even more closely associated due to Red Hat's acquisition of Inktank, the company behind Ceph; Red Hat is one of the major contributors to the OpenStack project.
OpenStack is a modular system, that is, a system with a unique component for each specific set of tasks. There are several components that require a reliable storage backend, such as Ceph, and extend full integration to it, as shown in the following diagram. Each of these components uses Ceph in its own way to store block devices and objects. The majority of cloud deployments based on OpenStack and Ceph use the Cinder, Glance, and Swift integrations with Ceph. Keystone integration is used when you need an S3-compatible object storage on the Ceph backend. Nova integration allows boot-from-Ceph-volume capabilities for your OpenStack instances.
Setting up OpenStack
The OpenStack setup and configuration is beyond the scope of this article; however, for ease of demonstration, we will use a virtual machine preinstalled with the OpenStack RDO Juno release. If you like, you can also use your own OpenStack environment and can perform Ceph integration.
How to do it
In this section, we will demonstrate setting up a preconfigured OpenStack environment using vagrant, and accessing it via CLI and GUI:
- Launch openstack-node1 using Vagrantfile. Make sure that you are on the host machine and are under the ceph-cookbook repository before bringing up openstack-node1 using vagrant:
# cd ceph-cookbook
# vagrant up openstack-node1
- Once openstack-node1 is up, check the vagrant status and log in to the node:
$ vagrant status openstack-node1
$ vagrant ssh openstack-node1
- We assume that you have some knowledge of OpenStack and are aware of its operations. We will source the keystonerc_admin file, which has been placed under /root, and to do this, we need to switch to root:
$ sudo su -
# source keystonerc_admin
We will now run some native OpenStack commands to make sure that OpenStack is set up correctly. Please note that some of these commands do not show any information, since this is a fresh OpenStack environment and does not have instances or volumes created:
# nova list
# cinder list
# glance image-list
- You can also log in to the OpenStack horizon web interface (https://192.168.1.111/dashboard) with the username as admin and password as vagrant.
- After logging in, the Overview page opens:
Configuring OpenStack as Ceph clients
OpenStack nodes should be configured as Ceph clients in order to access the Ceph cluster. To do this, install Ceph packages on the OpenStack nodes and make sure they can access the Ceph cluster.
How to do it
In this section, we are going to configure OpenStack as a Ceph client, which will later be used to configure Cinder, Glance, and Nova:
- We will use ceph-node1 to install Ceph binaries on os-node1 using ceph-deploy. To do this, we should set up an ssh password-less login to os-node1. The root password is again the same (vagrant):
$ vagrant ssh ceph-node1
$ sudo su -
# ping os-node1 -c 1
# ssh-copy-id root@os-node1
- Next, we will install Ceph packages to os-node1 using ceph-deploy:
# cd /etc/ceph
# ceph-deploy install os-node1
- Push the Ceph configuration file, ceph.conf, from ceph-node1 to os-node1. This configuration file helps clients reach the Ceph monitor and OSD machines. Please note that you can also manually copy the ceph.conf file to os-node1 if you like:
# ceph-deploy config push os-node1
Make sure that the ceph.conf file that we have pushed to os-node1 has 644 permissions.
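If the permissions differ, you can fix them from ceph-node1 with a quick one-liner such as this (a simple sketch):
# ssh os-node1 chmod 644 /etc/ceph/ceph.conf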
- Create Ceph pools for cinder, glance, and nova. You may use any available pool, but it’s recommended that you create separate pools for OpenStack components:
# ceph osd pool create images 128
# ceph osd pool create volumes 128
# ceph osd pool create vms 128
- Set up client authentication by creating a new user for cinder and glance:
# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
- Add the keyrings to os-node1 and change their ownership:
# ceph auth get-or-create client.glance | ssh os-node1 sudo tee /etc/ceph/ceph.client.glance.keyring
# ssh os-node1 sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
# ceph auth get-or-create client.cinder | ssh os-node1 sudo tee /etc/ceph/ceph.client.cinder.keyring
# ssh os-node1 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
- The libvirt process requires access to the Ceph cluster while attaching or detaching a block device from Cinder. We should create a temporary copy of the client.cinder key that will be needed for the Cinder and Nova configuration later in this article:
# ceph auth get-key client.cinder | ssh os-node1 tee /etc/ceph/temp.client.cinder.key
- At this point, you can test the previous configuration by accessing the Ceph cluster from os-node1 using the client.glance and client.cinder Ceph users. Log in to os-node1 and run the following commands:
$ vagrant ssh openstack-node1
$ sudo su -
# cd /etc/ceph
# ceph -s --name client.glance --keyring ceph.client.glance.keyring
# ceph -s --name client.cinder --keyring ceph.client.cinder.keyring
- Finally, generate a UUID, then create, define, and set the secret key for libvirt, and remove the temporary keys:
- Generate a uuid by using the following:
# cd /etc/ceph
# uuidgen
- Create a secret file and set this uuid number to it:
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>bb90381e-a4c5-4db7-b410-3154c4af486e</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
Make sure that you use your own UUID generated for your environment.
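If you prefer not to edit the file by hand, you can substitute the UUID you generated in the previous step into secret.xml with sed (a sketch; replace <your-uuid> with the value printed by uuidgen, and note it must match the rbd_secret_uuid used later in the Cinder and Nova configuration):
# sed -i "s/bb90381e-a4c5-4db7-b410-3154c4af486e/<your-uuid>/" secret.xml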
- Define the secret and keep the generated secret value safe. We would require this secret value in the next steps:
# virsh secret-define --file secret.xml
- Set the secret value that was generated in the last step to virsh and delete temporary files. Deleting the temporary files is optional; it’s done just to keep the system clean:
# virsh secret-set-value --secret bb90381e-a4c5-4db7-b410-3154c4af486e --base64 $(cat temp.client.cinder.key) && rm temp.client.cinder.key secret.xml
# virsh secret-list
Configuring Glance for the Ceph backend
We have completed the configuration required from the Ceph side. In this section, we will configure the OpenStack glance to use Ceph as a storage backend.
How to do it
This section talks about configuring the glance component of OpenStack to store virtual machine images on Ceph RBD:
- Log in to os-node1, which is our glance node, and edit /etc/glance/glance-api.conf for the following changes:
- Under the [DEFAULT] section, make sure that the following lines are present:
default_store=rbd
show_image_direct_url=True
- Execute the following command to verify entries:
# cat /etc/glance/glance-api.conf | egrep -i "default_store|image_direct"
- Under the [glance_store] section, make sure that the following lines are present under RBD Store Options:
stores = rbd
rbd_store_ceph_conf=/etc/ceph/ceph.conf
rbd_store_user=glance
rbd_store_pool=images
rbd_store_chunk_size=8
- Execute the following command to verify the previous entries:
# cat /etc/glance/glance-api.conf | egrep -v "#|default" | grep -i rbd
- Restart the OpenStack glance services:
# service openstack-glance-api restart
- Source the keystonerc_admin file for OpenStack and list the glance images:
# source /root/keystonerc_admin
# glance image-list
- Download the cirros image from the Internet, which will later be stored in Ceph:
# wget http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
- Add a new glance image using the following command:
# glance image-create --name cirros_image --is-public=true --disk-format=qcow2 --container-format=bare < cirros-0.3.1-x86_64-disk.img
- List the glance images using the following command; you will notice there are now two glance images:
# glance image-list
- You can verify that the new image is stored in Ceph by querying the image ID in the Ceph images pool:
# rados -p images ls --name client.glance --keyring /etc/ceph/ceph.client.glance.keyring | grep -i id
- Since we have configured glance to use Ceph for its default storage, all the glance images will now be stored in Ceph. You can also try creating images from the OpenStack horizon dashboard:
- Finally, we will try to launch an instance using the image that we have created earlier:
# nova boot --flavor 1 --image b2d15e34-7712-4f1d-b48d-48b924e79b0c vm1
While you are adding new glance images or creating an instance from the glance image stored on Ceph, you can check the IO on the Ceph cluster by monitoring it using the # watch ceph -s command.
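For example, on the Ceph admin node (which has the client.admin credentials), you can watch the cluster status and per-pool usage while images are being uploaded:
# watch ceph -s
# rados df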
Configuring Cinder for the Ceph backend
The Cinder program of OpenStack provides block storage to virtual machines. In this section, we will configure OpenStack Cinder to use Ceph as a storage backend. OpenStack Cinder requires a driver to interact with the Ceph block device. On the OpenStack node, edit the /etc/cinder/cinder.conf configuration file by adding the code snippet given in the following section.
How to do it
In the last section, we learned to configure glance to use Ceph. In this section, we will learn to use the Ceph RBD with the Cinder service of OpenStack:
- Since in this demonstration we are not using multiple backend Cinder configurations, comment out the enabled_backends option in the /etc/cinder/cinder.conf file:
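One way to do this (a sketch; back up the file before editing):
# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
# sed -i 's/^enabled_backends/#enabled_backends/' /etc/cinder/cinder.conf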
- Navigate to the Options defined in cinder.volume.drivers.rbd section of the /etc/cinder/cinder.conf file and add the following (replace the secret UUID with your environment's value):
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = bb90381e-a4c5-4db7-b410-3154c4af486e
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
- Execute the following command to verify the previous entries:
# cat /etc/cinder/cinder.conf | egrep "rbd|rados|version" | grep -v "#"
- Restart the OpenStack cinder services:
# service openstack-cinder-volume restart
- Source the keystonerc_admin file for OpenStack:
# source /root/keystonerc_admin
# cinder list
- To test this configuration, create your first cinder volume of 2 GB, which should now be created on your Ceph cluster:
# cinder create --display-name ceph-volume01 --display-description "Cinder volume on CEPH storage" 2
- Check the volume by listing the cinder and Ceph volumes pool:
# cinder list
# rados -p volumes --name client.cinder --keyring ceph.client.cinder.keyring ls | grep -i id
- Similarly, try creating another volume using the OpenStack Horizon dashboard.
Configuring Nova to attach the Ceph RBD
In order to attach the Ceph RBD to OpenStack instances, we should configure the nova component of OpenStack by adding the rbd user and uuid information that it needs to connect to the Ceph cluster. To do this, we need to edit /etc/nova/nova.conf on the OpenStack node and perform the steps that are given in the following section.
How to do it
The Cinder service that we configured in the last section creates volumes on Ceph; however, to attach these volumes to OpenStack instances, we need to configure Nova:
- Navigate to the Options defined in nova.virt.libvirt.volume section and add the following lines of code (replace the secret UUID with your environment's value):
rbd_user=cinder
rbd_secret_uuid=bb90381e-a4c5-4db7-b410-3154c4af486e
- Restart the OpenStack nova services:
# service openstack-nova-compute restart
- To test this configuration, we will attach the cinder volume to an OpenStack instance. List the instance and volumes to get the ID:
# nova list
# cinder list
- Attach the volume to the instance:
# nova volume-attach 1cadffc0-58b0-43fd-acc4-33764a02a0a6 1337c866-6ff7-4a56-bfe5-b0b80abcb281
# cinder list
- You can now use this volume as a regular block disk from your OpenStack instance:
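Inside the instance, the attached volume typically shows up as a new virtio disk such as /dev/vdb (the exact device name is an assumption; confirm it with lsblk before formatting). A minimal sketch of putting it to use:
# lsblk
# mkfs.ext4 /dev/vdb
# mkdir /mnt/ceph-vol1
# mount /dev/vdb /mnt/ceph-vol1
# df -h /mnt/ceph-vol1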
Configuring Nova to boot the instance from the Ceph RBD
In order to boot all OpenStack instances into Ceph, that is, for the boot-from-volume feature, we should configure an ephemeral backend for nova. To do this, edit /etc/nova/nova.conf on the OpenStack node and perform the changes shown next.
How to do it
This section deals with configuring Nova to store the entire virtual machine on the Ceph RBD:
- Navigate to the [libvirt] section and add the following:
inject_partition=-2
images_type=rbd
images_rbd_pool=vms
images_rbd_ceph_conf=/etc/ceph/ceph.conf
- Verify your changes:
# cat /etc/nova/nova.conf|egrep "rbd|partition" | grep -v "#"
- Restart the OpenStack nova services:
# service openstack-nova-compute restart
- To boot a virtual machine in Ceph, the glance image format must be RAW. We will use the same cirros image that we downloaded earlier in this article and convert this image from the QCOW to RAW format (this is important). You can also use any other image, as long as it’s in the RAW format:
# qemu-img convert -f qcow2 -O raw cirros-0.3.1-x86_64-disk.img cirros-0.3.1-x86_64-disk.raw
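You can confirm the conversion before uploading the image (a quick check; qemu-img info should report "file format: raw"):
# qemu-img info cirros-0.3.1-x86_64-disk.raw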
- Create a glance image using a RAW image:
# glance image-create --name cirros_raw_image --is-public=true --disk-format=raw --container-format=bare < cirros-0.3.1-x86_64-disk.raw
- To test the boot from the Ceph volume feature, create a bootable volume:
# nova image-list
# cinder create --image-id ff8d9729-5505-4d2a-94ad-7154c6085c97 --display-name cirros-ceph-boot-volume 1
- List cinder volumes to check if the bootable field is true:
# cinder list
- Now, we have a bootable volume, which is stored on Ceph, so let’s launch an instance with this volume:
# nova boot --flavor 1 --block_device_mapping vda=fd56314b-e19b-4129-af77-e6adf229c536::0 --image 964bd077-7b43-46eb-8fe1-cd979a3370df vm2_on_ceph
--block_device_mapping vda = <cinder bootable volume id>
--image = <Glance image associated with the bootable volume>
- Finally, check the instance status:
# nova list
- At this point, we have an instance running from a Ceph volume. Try to log in to the instance from the horizon dashboard:
Summary
In this article, we covered in detail the various storage formats supported by Ceph and how they are assigned to physical or virtual servers.