Red Hat Bugzilla – Bug 1271266
[Director] Director is deploying the LVM backend on a loop device, which is not supported by GSS
Last modified: 2017-11-01 21:47:24 EDT
Description of problem:
RHOS Director is deploying the LVM backend using a loop device for the disk. This is not aligned with the best practices we recommend our customers follow, nor is it supported by GSS.
Version-Release number of selected component (if applicable):
RHOS Director 7
Steps to Reproduce:
1. Deploy an environment using LVM backend
Actual results:
The cinder-volumes volume group is deployed over a loop device.

Expected results:
The cinder-volumes volume group is deployed in a way supported by GSS, e.g. on its own dedicated physical disk.
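A quick way to confirm the layout on a deployed node is to check which physical volume backs the volume group (illustrative commands; exact output varies by deployment):

# pvs -o pv_name,vg_name
# losetup -a

A cinder-volumes entry backed by /dev/loop0 indicates the unsupported loopback layout.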
This is not a feature request, it's a bug: director is not deploying the LVM backend in a supported way. You can use the sosreports attached to the referenced customer portal case as a guide to how customers are using openstack director deployed environments.
- LVM should be deployed on a secondary disk by default; if no secondary disk is available, the installation should fail and complain, since this is mandatory
- We also need a plan to migrate existing deployments from the current unsupported layout to a supported one
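For reference, the loopback layout that the workaround below removes is typically created along these lines; the file path and size here are illustrative, not the exact values director uses:

# dd if=/dev/zero of=/var/lib/cinder/cinder-volumes bs=1M count=0 seek=20480
# losetup /dev/loop0 /var/lib/cinder/cinder-volumes
# pvcreate /dev/loop0
# vgcreate cinder-volumes /dev/loop0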
This is easy to work around after deployment (at least for packstack configurations).
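Partition the new disk and create a PV on it: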
# parted /dev/sdb mklabel msdos mkpart primary 0% 100%
# pvcreate /dev/sdb1
Extend the volume group over the new PV:
# vgextend cinder-volumes /dev/sdb1
Move data off of the loop device:
# pvmove /dev/loop0
Remove loop from volume group:
# vgreduce cinder-volumes /dev/loop0
Delete loop device:
# losetup -d /dev/loop0
Delete backing file:
# rm /path/to/cinder-volumes/file
Remove the losetup line from /etc/rc.local:
# vi /etc/rc.local
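After the migration, a quick sanity check (not part of the original workaround) confirms the volume group now lives only on the physical disk:

# pvs -o pv_name,vg_name | grep cinder-volumes
# vgs cinder-volumes

The loop device should no longer appear, and the VG should report a single PV on /dev/sdb1.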
The suggested fix here is to make Ceph the default. We probably want to clone this to docs once we have that fix, so we can update the defaults with notes explaining the change.
I am moving this BZ to tripleoclient. Do we want to switch the default in the templates? Also, there is a deployment parameter to use the controllers as Ceph OSDs; with it we could do 1 controller + 1 compute, with Ceph, without needing to deploy an additional node as a Ceph OSD. Is this doable?
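A minimal sketch of what that could look like in a Heat environment file; ControllerEnableCephStorage is an assumption based on the tripleo-heat-templates of that era, so verify the parameter name against the installed templates:

parameter_defaults:
  ControllerEnableCephStorage: true

# openstack overcloud deploy --templates -e ceph-on-controller.yaml

(ceph-on-controller.yaml is a hypothetical file name for the snippet above.)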
This bug did not make the OSP 8.0 release. It is being deferred to OSP 10.
Since this is only a data integrity concern for production deployments, and LVM is not supported for production deployments, we can defer this to OSP 11.
We have two threads going on here:
1 - The LVM loopback issue, which I think has a workaround; it's also not a supported back end.
2 - Requests to make Ceph the default instead, as a means of avoiding customer case bugs against the unsupported LVM back end. Again, LVM is not a supported back end for RHOS. This is a larger discussion; we had started to discuss options but had not closed on a plan.
Dropping the priority and pushing to OSP 12, as we are not planning to address this right now.
We need to consider whether we will ever fix the original issue (1); my preference would be to open a new bug or RFE around "ensuring customers know LVM is not supported on RHOS".
Giulio posed a question.