Description of problem:

Setting the HA-All-in-One hostgroup parameter "multiple_backends" to false creates the wrong value in cinder.conf for volume_backend_name. When "multiple_backends" is set to false, volume_backend_name is set to DEFAULT, while the Foreman GUI shows volume_backend_name set to rbd_backend.

When we set "multiple_backends" to true, the rbd stanza in cinder.conf is created and volume_backend_name is set to rbd_backend as expected. However, it also adds netapp to enabled_backends (enabled_backends=netapp,rbd), which is yet another bug: it creates "(config name netapp) driver is uninitialized" warnings in /var/log/messages.

Version-Release number of selected component (if applicable):
Found in RH7-RHOS-7.0-OFI-2015-07-15.1.repo

How reproducible:
Very reproducible.
(In reply to John Williams from comment #0)
> Description of problem:
> Setting HA-All-in-One hostgroup parameter "multiple_backends" to false
> creates the wrong value in cinder.conf for volume_backend_name. When the
> multiple_backends" is set to false the volume_backend_name is set to
> DEFAULT. The Foreman GUI setting shows the volume_backend_name is set to
> rbd_backend. When we set "multiple_backends" to true, the rbd stanza in
> cinder.conf is created and the volume_backend_name is set to rbd_backend as
> expected.

I do not have an rbd setup to replicate this against; however, from looking at the puppet-cinder code, it appears the same behavior would be seen for ANY cinder backend not using the multi-backend workflow. To verify this, I deployed with multiple_backends=false and backend_nfs=true, and I see the exact same setting you refer to above in cinder.conf, namely:

volume_backend_name=DEFAULT

Given that, I am wondering how exactly this failed for you. Is there a specific error message from either the CLI or one of the logs (if so, which)?

The difference appears to be that quickstack uses the cinder::volume::<name> _classes_ for the single-backend case. These _all_ set the 'DEFAULT' value, so the behavior is the same across backends. What I don't know is how this then manifests as an error, or why it has not been reported as such against puppet-cinder, either here in Bugzilla or upstream in Launchpad.

The multi-backend case instead uses the _define_ cinder::backend::<name>[1], which does indeed take and use the volume_backend_name parameter. However, the volume::rbd class[2] names this instance of the define 'DEFAULT' and does not take a parameter for volume_backend_name. The define then sees no override and uses the $name value for volume_backend_name (thus resulting in 'DEFAULT').
The backend::rbd define also does some extra things[3] that the volume class does not (the define only sets values in the conf file), and I am unsure if they are valid for both cases. If so, we could perhaps switch to using cinder::backend::<name> for both the multi- and single-backend cases, but I am not sure what ramifications such a change would have, or whether it is truly necessary.

> However, it also adds netapp to the enabled_backends
> (enabled_backends=netapp,rbd) which is yet another bug, that creates (config
> name netapp) driver is uninitialized warnings in /var/log/messages file.

This one we already have a patch for from rajini[4], which I will be testing and merging (assuming all is well) today.

[1] https://github.com/openstack/puppet-cinder/blob/master/manifests/backend/rbd.pp
[2] https://github.com/openstack/puppet-cinder/blob/master/manifests/volume/rbd.pp#L53
[3] https://github.com/openstack/puppet-cinder/blob/master/manifests/backend/rbd.pp#L71
[4] https://github.com/redhat-openstack/astapor/pull/555
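The name-fallback behavior described above can be paraphrased outside Puppet. This is a minimal sketch of the mechanism, not the actual puppet-cinder manifest code; the function name and dict shape are illustrative only:

```python
def backend_rbd(name, volume_backend_name=None):
    """Mimics the cinder::backend::rbd define: when no explicit
    volume_backend_name is passed, the value falls back to the
    resource title ($name in Puppet terms)."""
    return {"volume_backend_name": volume_backend_name if volume_backend_name else name}

# Single-backend path: cinder::volume::rbd titles the define 'DEFAULT'
# and passes no volume_backend_name, so the fallback yields 'DEFAULT'.
single = backend_rbd("DEFAULT")

# Multi-backend path: the define is titled per backend, and (per the
# report above) quickstack ends up with rbd_backend as the value.
multi = backend_rbd("rbd", "rbd_backend")
```

This is why the GUI can show rbd_backend while the single-backend cinder.conf ends up with volume_backend_name=DEFAULT: the class path never forwards the parameter to the define.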
So, I talked with some of the puppet team, and they tell me 'DEFAULT' is, in fact, not a configuration error; it is merely the legacy backend configuration for the single-backend case. Cinder simply uses whatever default driver is set up. On mine, for instance, I see:

volume_driver=cinder.volume.drivers.nfs.NfsDriver

You can make this behave more like you expect by choosing multiple_backends=true but only setting one backend. This should get you the configuration you want, with a named volume_backend_name.

We may remove the old behavior in a future version of OFI if there is demand, but unless you see an actual error, I think this should either become a future feature request or be closed.
I guess I wasn't clear. Our desire is to have a single backend that uses Ceph block storage. If I understand your note correctly, our workaround would be to set:

multiple_backends=true
volume_driver=cinder.volume.drivers.rbd.RBDDriver

Is there an HA-All-in-One hostgroup parameter in the OFI installer (AKA the Foreman installer) that we can set to specify the cinder.volume.drivers.rbd.RBDDriver option? Will there be an option in the future to specify a single backend which is not DEFAULT?
(In reply to John Williams from comment #6)
> I guess I wasn't clear. Our desire is to have a single backend that uses
> Ceph block storage.

That is what I was describing above. The fact that volume_backend_name has a different value than you expect is what is causing the confusion here, I think. You have two options; neither is a workaround, and it should work fine either way.

== Option 1 ==

In the UI:

multiple_backends=false
backend_rbd=true

Results in cinder.conf getting:

volume_backend_name=DEFAULT
enabled_backends=
volume_driver=cinder.volume.drivers.rbd.RBDDriver

In this case, volume_backend_name doesn't matter; Cinder just uses the single driver set in the volume_driver setting.

== Option 2 ==

(This gets you closer to what you are expecting for your config.)

multiple_backends=true
backend_rbd=true

Results in cinder.conf getting:

[rbd]
volume_backend_name=rbd_backend
enabled_backends=rbd
volume_driver=cinder.volume.drivers.rbd.RBDDriver
...other rbd-specific settings...

This configuration can be used for one or more valid backends. So, in the future, we can do away with the whole 'multiple_backends' switch entirely and do it all the same way (which is option 2).

Again, neither setup should cause an error; just choose the path you prefer. If there is an actual error (not related to the errant 'netapp' inclusion in enabled_backends; that patch is merged and will go into the next build), please let me know; otherwise I think we can close this. Does this explanation make more sense than my last attempt?
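The difference between the two layouts can be checked mechanically, since cinder.conf is an ini file. The following is a minimal sketch using sample file contents matching the two options above (in the multi-backend sample, enabled_backends is placed in [DEFAULT], where Cinder actually reads it); the helper function is illustrative, not part of any installer:

```python
import configparser

# Option 1: multiple_backends=false, backend_rbd=true
# (legacy single-backend layout: everything lives in [DEFAULT])
option1 = """
[DEFAULT]
volume_backend_name=DEFAULT
enabled_backends=
volume_driver=cinder.volume.drivers.rbd.RBDDriver
"""

# Option 2: multiple_backends=true, backend_rbd=true
# (multi-backend layout: per-backend stanza, enabled_backends names it)
option2 = """
[DEFAULT]
enabled_backends=rbd

[rbd]
volume_backend_name=rbd_backend
volume_driver=cinder.volume.drivers.rbd.RBDDriver
"""

def backends(conf_text):
    """Return (list of enabled backends, backend-name -> driver map)."""
    cfg = configparser.ConfigParser()
    cfg.read_string(conf_text)
    enabled = [b for b in cfg["DEFAULT"].get("enabled_backends", "").split(",") if b]
    drivers = {s: cfg[s]["volume_driver"]
               for s in cfg.sections() if "volume_driver" in cfg[s]}
    # Legacy case: no enabled_backends, single driver set in [DEFAULT].
    if not enabled and "volume_driver" in cfg["DEFAULT"]:
        drivers["DEFAULT"] = cfg["DEFAULT"]["volume_driver"]
    return enabled, drivers
```

With option 1 this yields no enabled backends and a single driver under 'DEFAULT'; with option 2 it yields the enabled backend 'rbd' with a named stanza. The named volume_backend_name in option 2 is also what a Cinder volume type's volume_backend_name extra spec matches against, which is the usual reason to prefer that layout when you want to address a backend by name.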
I have confirmed that with:

multiple_backends=false
backend_rbd=true

this results in cinder.conf getting:

volume_backend_name=DEFAULT
enabled_backends=
volume_driver=cinder.volume.drivers.rbd.RBDDriver

This works; please close the bug.