Description of problem:
The OSP Cinder configuration in Staypuft was set to Ceph & LVM back ends. The LVM back end (configured as the iSCSI driver) is missing the volume_driver parameter in its section of the configuration file. The full Cinder configuration is available here: http://pastebin.test.redhat.com/234828

Version-Release number of selected component (if applicable):
ruby193-rubygem-staypuft-0.3.5-1.el6ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Enable Ceph & LVM as Cinder's back ends
2. Install OSP
3. Check the Cinder configuration file

Actual results:
[iscsi]
iscsi_ip_address=192.168.0.6
volume_backend_name=iscsi_backend
volume_group=cinder-volumes
iscsi_helper=lioadm

[rbd]
volume_backend_name=rbd_backend
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_max_clone_depth=5
rbd_flatten_volume_from_snapshot=False
rbd_user=volumes
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_secret_uuid=047ba0cd-1df6-4112-8792-778d5a8fc2f5

Expected results:
[iscsi]
iscsi_ip_address=192.168.0.6
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=iscsi_backend
volume_group=cinder-volumes
iscsi_helper=lioadm

[rbd]
volume_backend_name=rbd_backend
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_max_clone_depth=5
rbd_flatten_volume_from_snapshot=False
rbd_user=volumes
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_secret_uuid=047ba0cd-1df6-4112-8792-778d5a8fc2f5

Additional info:
I explored the OpenStack deployment in question and found this in the cinder-volume logs:

zář 18 11:30:11 maca25400702875.example.com cinder-volume[15236]: 2014-09-18 11:30:11.151 15347 INFO cinder.volume.manager [req-41a05dce-8e15-4aa2-9b8f-401d3affa9ab - - - - -] Starting volume driver LVMISCSIDriver (2.0.0)

This suggests that the LVM back end started fine. The LVM driver is the default for a single-back-end setup, and it appears to work the same way with multiple back ends. I think the volume_driver option should be set explicitly for the LVM back end as good practice, but its absence does not seem to prevent the back end from working.
Jiri is right. I tested the StayPuft/OSP5 bits and verified that multi-backend works with LVM and Ceph. If volume_driver is not specified, it defaults to LVMISCSIDriver, and it seemed to work fine.
It is not just a matter of whether it works; we should aim for a configuration that follows good practices. We can't always assume that the default driver will be the LVM driver. Adding one extra line to a back end section is a reasonable price to ensure we are configuring exactly what we intend to use, and it may prevent future bugs.
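The point above can be checked mechanically. The sketch below (not part of the bug report; the sample config and function name are illustrative) uses Python's configparser to list any back end named in enabled_backends that omits an explicit volume_driver, which is exactly the situation this bug describes for the [iscsi] section:

```python
# Sanity check: every back end listed in enabled_backends should set
# volume_driver explicitly, rather than relying on the implicit LVM default.
import configparser

# Hypothetical sample mirroring the "Actual results" from this bug:
# the [iscsi] section has no volume_driver, while [rbd] does.
SAMPLE_CONF = """
[DEFAULT]
enabled_backends = iscsi,rbd

[iscsi]
volume_backend_name = iscsi_backend
volume_group = cinder-volumes

[rbd]
volume_backend_name = rbd_backend
volume_driver = cinder.volume.drivers.rbd.RBDDriver
"""

def backends_missing_driver(conf_text):
    """Return the back end sections that do not set volume_driver."""
    cfg = configparser.ConfigParser()
    cfg.read_string(conf_text)
    backends = cfg.get("DEFAULT", "enabled_backends").split(",")
    return [b.strip() for b in backends
            if not cfg.has_option(b.strip(), "volume_driver")]

print(backends_missing_driver(SAMPLE_CONF))  # -> ['iscsi']
```

Run against the expected configuration instead, the function would return an empty list, since both sections name their driver explicitly.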
Patch upstream: https://review.openstack.org/#/c/126271/
Backport to upstream stable branch in review: https://review.openstack.org/#/c/128283
*** Bug 1127711 has been marked as a duplicate of this bug. ***
Merged to the upstream puppet-cinder stable/icehouse branch. Pull request to the openstack-puppet-modules icehouse branch to update its cinder module to the latest stable/icehouse branch: https://github.com/redhat-openstack/openstack-puppet-modules/pull/124
Verified:

Environment:
openstack-puppet-modules-2014.1-24.2.el6ost.noarch
rhel-osp-installer-0.4.7-1.el6ost.noarch
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el6ost.noarch
openstack-foreman-installer-2.0.32-1.el6ost.noarch
ruby193-rubygem-staypuft-0.4.14-1.el6ost.noarch

The line from the expected result is there. Checking the controller:

[root@maca25400702875 ~]# tail -n 16 /etc/cinder/cinder.conf
[iscsi]
iscsi_ip_address=192.168.0.8
volume_backend_name=iscsi
volume_group=cinder-volumes
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
iscsi_helper=lioadm

[rbd]
volume_backend_name=rbd
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_max_clone_depth=5
rbd_flatten_volume_from_snapshot=False
rbd_user=volumes
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_secret_uuid=aec86a4a-58a1-4971-9e3b-b0221e50dde7
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2014-1931.html