I figured it out; I don't see a bug or much of a documentation problem (though I will write a KCS article). The specific answer is in section 3 of the Custom Block Storage Back End Deployment Guide[1]. I used this to model my implementation[2], and it worked[3].

[1] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/custom_block_storage_back_end_deployment_guide/index#envfile

[2]
~~~
parameter_defaults:
  ControllerExtraConfig:
    cinder::config::cinder_config:
      rbd/rbd_flatten_volume_from_snapshot:
        value: true
~~~

[3]
~~~
[root@overcloud-controller-0 heat-admin]# grep -vE "^#" /etc/cinder/cinder.conf | grep rbd
[rbd]
rbd_flatten_volume_from_snapshot=True
~~~
I think this needs to be handled as an RFE (you could update the bz title, and there's no need for it to be private). puppet-cinder supports configuring the rbd_flatten_volume_from_snapshot parameter, but it needs to be associated with a corresponding new TripleO parameter. Unfortunately, due to the way puppet handles defined resources, you cannot override the default value using hiera data, as was attempted in the bz description.

Using the cinder::config::cinder_config method in comment #2 is a good workaround, but you need to be careful. Unless you're using a non-standard tripleo deployment, the name associated with the rbd backend is likely to be "tripleo_ceph" (not "rbd"), and therefore the RBD driver will not use settings in the "[rbd]" section of cinder.conf. But be careful: this will not work:

~~~
parameter_defaults:
  ControllerExtraConfig:
    cinder::config::cinder_config:
      tripleo_ceph/rbd_flatten_volume_from_snapshot:
        value: true
~~~

Puppet will throw an error because it won't allow the value to be set in multiple puppet resources. Recent releases of cinder support a "[backend_defaults]" section, but this is not available in OSP-10 (Newton).

I just tested this, and it works with OSP-10:

~~~
parameter_defaults:
  ControllerExtraConfig:
    cinder::config::cinder_config:
      DEFAULT/rbd_flatten_volume_from_snapshot:
        value: true
~~~
Hi Alan,

I've relayed this caveat and warning to the customer, but I was hoping you could clarify things for me. The example[1] from our Custom Block Storage Back End Deployment Guide[2] shows using cinder::config::cinder_config to configure settings for external backends. My thinking based on this documentation and your comment follows. Please correct me where I'm wrong:

a. If you have an external Ceph cluster (i.e., not managed by TripleO), you would be able to set these parameters for that cluster.

b. If you are managing your Ceph cluster (or clusters?) with TripleO, you can set certain Ceph/RBD parameters globally, but not per cluster.

c. Could you also explain: "Puppet will throw an error because it won't allow the value to be set in multiple puppet resources." Do you mean you can't set "$tripleo_ceph/rbd_flatten_volume_from_snapshot" twice? I'm just trying to understand it structurally, and I'm just getting started with Puppet.

Apologies if any of this is elementary; I just want to make sure I relay the correct information to the customer.

Thanks,
Nathan Curry

[1]
~~~
parameter_defaults:
  ControllerExtraConfig: # 3
    cinder::config::cinder_config:
      netapp1/volume_driver: # 4
        value: cinder.volume.drivers.netapp.common.NetAppDriver
      netapp1/netapp_storage_family:
        value: ontap_7mode
      netapp1/netapp_storage_protocol:
        value: iscsi
      netapp1/netapp_server_hostname:
        value: 10.35.64.11
      netapp1/netapp_server_port:
        value: 80
      netapp1/netapp_login:
        value: root
      netapp1/netapp_password:
        value: p@$$w0rd
      netapp1/volume_backend_name:
        value: netapp1
    (...)
    cinder_user_enabled_backends: ['netapp1','netapp2'] #
~~~

[2] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/custom_block_storage_back_end_deployment_guide/index#envfile
Hi Nathan,

Great questions! Hope this helps.

a. (question) If you have an external Ceph cluster (i.e., not managed by TripleO), you would be able to set these parameters for that cluster

The custom storage guide describes a generic way of configuring cinder parameters that otherwise cannot be set via tripleo. This method can be used to configure cinder backends that either have no direct support in tripleo, or when you want to deploy multiple instances of the same backend (the document shows how you can deploy multiple netapp backends).

However, ceph is special, and isn't suited to this technique. The RBD driver requires access to a ceph.conf (or similarly named) file, in which it expects to find the IP address(es) of the ceph cluster's monitor node(s), as well as the client keyring required to access the cluster. So, you can use the "cinder_config" method to configure cinder.conf, but that's not sufficient for adding it as a cinder backend. The good news is tripleo supports deploying an overcloud with an RBD backend that uses an external ceph cluster. There's no need to use cinder_config.

b. (question) If you are managing your Ceph cluster (or clusters?) with TripleO, you can set certain Ceph/RBD parameters globally, but not per cluster

Sort of, but the devil is in the details. As I noted above, cinder_config can be used to set arbitrary values in cinder.conf. But a big restriction is that the setting cannot already be managed elsewhere by puppet, or else you'll get a duplicate resource error (see my answer to question c). Therefore, you *can* set some RBD parameters on a specific backend (not globally), but only if that parameter isn't already managed by puppet. You need to review the puppet-cinder code (which can change from release to release) to see which settings it manages. These will be a subset of the full list of RBD driver settings (which can also change between releases).
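For reference, the supported path for an external ceph cluster goes through tripleo's puppet-ceph-external environment rather than cinder_config. A minimal sketch of the environment file might look something like this (parameter names are from tripleo-heat-templates; all values below are placeholders, and exact parameter availability should be checked against the release in use):

~~~
# Sketch only -- used together with
# environments/puppet-ceph-external.yaml from tripleo-heat-templates.
parameter_defaults:
  CephClusterFSID: 'aa1e5a3e-0000-0000-0000-000000000000'  # fsid of the external cluster (placeholder)
  CephClientKey: 'AQC...=='                                # client keyring (placeholder)
  CephExternalMonHost: '192.0.2.10,192.0.2.11,192.0.2.12'  # monitor IPs (placeholder)
  CinderRbdPoolName: volumes
  CinderEnableRbdBackend: true
~~~

With this approach tripleo generates the ceph.conf and keyring files the RBD driver needs, which is exactly what the cinder_config method cannot do on its own.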
If there's a driver setting that is not managed by puppet-cinder, then you can safely set it in the driver's portion of cinder.conf, i.e. in the [tripleo_ceph] section. The technique I described in comment #3 sets rbd_flatten_volume_from_snapshot in the [DEFAULT] section. This is a clever (sneaky?) way of avoiding conflicts with puppet-cinder's desire to manage the setting in the [tripleo_ceph] section. But you need to bear in mind how cinder processes settings that can appear in different sections of cinder.conf. The general rule is that it's a hierarchy, whereby values in lower (backend-specific) sections override values in higher ([DEFAULT]) sections. In a later OpenStack release (I don't recall which), another [backend_defaults] section was introduced, which sits between [DEFAULT] and the backend driver sections.

Lastly, if you find a need to control a setting that is not managed by puppet (or does not have an associated TripleO parameter), then please file an RFE! It's always better to have full TripleO support for controlling things than having to resort to low level hooks like cinder_config.

c. (question) Could you also explain: "Puppet will throw an error because it won't allow the value to be set in multiple puppet resources."

You are absolutely correct: puppet does not allow you to set "$tripleo_ceph/rbd_flatten_volume_from_snapshot" more than once, even if it's set to the same value. Each setting (the string) gets associated with a puppet resource, and puppet does not allow duplicates.
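To illustrate the hierarchy, here's a small Python sketch using the standard library's configparser, whose default-section fallback behaves the same way (this is an analogy only: cinder itself resolves options via oslo.config, not configparser):

~~~
# Illustration of cinder.conf section precedence: a backend section
# inherits values from [DEFAULT], but its own explicit value wins.
# NOT cinder code; configparser is used here purely as an analogy.
import configparser

SAMPLE_CONF = """
[DEFAULT]
rbd_flatten_volume_from_snapshot = True

[tripleo_ceph]
volume_backend_name = tripleo_ceph
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE_CONF)

# The backend section falls back to the [DEFAULT] value...
print(cfg.get('tripleo_ceph', 'rbd_flatten_volume_from_snapshot'))  # True

# ...but an explicit backend-level setting overrides it.
cfg.set('tripleo_ceph', 'rbd_flatten_volume_from_snapshot', 'False')
print(cfg.get('tripleo_ceph', 'rbd_flatten_volume_from_snapshot'))  # False
~~~

This is why setting the option in [DEFAULT] works as a workaround: nothing in the [tripleo_ceph] section shadows it.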
Please update the bz to indicate whether this helps address the customer's issue. I would like to close the bz or convert it to an RFE.
This does address the customer's issue. I'm not very familiar with bugzilla workflows, but I updated the title to indicate RFE, and lowered Priority and Severity to medium. Thanks for your help.
Patches have merged in upstream Train.
Greg & Tzach,

The code is present in OSP-16 builds, but QE hasn't ack'ed this BZ. You folks can set the target release based on QE's ability to test. That should be pretty straightforward, as it basically entails checking whether tripleo adds the setting to cinder.conf.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2020:0283
*** Bug 1829960 has been marked as a duplicate of this bug. ***