Description of problem:

The RHEL-OSP 7 documentation includes a guide for integrating a Dell EqualLogic SAN, but it was written for older versions of OSP and involves manually editing configuration on the controller nodes. Here is the page:

https://access.redhat.com/documentation/en/red-hat-enterprise-linux-openstack-platform/version-7/red-hat-enterprise-linux-openstack-platform-7-dell-equallogic-back-end-guide/dell-equallogic-back-end-guide

Manually updating overcloud nodes is not recommended; overcloud configuration should be updated from the undercloud node. Manual changes will be overwritten by OSP director during the puppet-based software deployment stages, so this is not good practice.

1. The document should recommend updating the overcloud configuration from the undercloud.
2. We should provide the correct configuration/YAML to use from the undercloud system to configure the Dell EqualLogic back end for Cinder/Glance/Nova.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:
1. The document should recommend updating the overcloud configuration from the undercloud.
2. We should provide the correct configuration/YAML to use from the undercloud system to configure the Dell EqualLogic back end for Cinder/Glance/Nova.

Additional info:
Assigning to Don for review.
Created attachment 1093556 [details] Dell StorageCenter draft environment file
Created attachment 1093557 [details] Dell EqualLogic draft environment file
Created attachment 1094696 [details]
draft ENV file for declaring multiple back ends

As I understand it, declaring multiple Cinder back ends via Director requires:
* one environment file per back end definition
* a separate ENV file declaring all enabled back ends. In the attachment, I believe VALUE would be the comma-delimited list of all back ends' volume_backend_name
* all environment files to be passed through 'openstack overcloud deploy -e'
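For illustration only, the "all enabled back ends" ENV file described above might be sketched like this. The parameter key shown here is an assumption on my part, not taken from the attachment, and VALUE remains the placeholder from the attached draft:

```yaml
# Hypothetical sketch of the aggregating environment file.
# The hiera key name is illustrative; VALUE is the placeholder from the
# attached draft (a comma-delimited list of volume_backend_name values).
parameter_defaults:
  ControllerExtraConfig:
    cinder_user_enabled_backends: VALUE
```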
Created attachment 1094700 [details] draft ENV file for declaring multiple back ends
Created attachment 1106279 [details] Puppet Manifest for Multiple Backends
Created attachment 1106280 [details] Heat Template for Multiple Backends
Created attachment 1106281 [details] Environment File for Multiple Backends
Have attached the files for multiple backends. Just a bit of explanation:

== Puppet Manifest for Multiple Backends (cinder-eqlx.pp)

This is the Puppet manifest for configuring the multiple back end parameters on Cinder. All it does is pass values per back end to the cinder::backend::eqlx Puppet class, which adds a new section (in our case, "eqlx_1" and "eqlx_2") to the cinder.conf file with the required parameters. You can essentially run this manually on a node as follows:

[heat-admin@overcloud-controller-0 ~]$ sudo puppet apply cinder_eqlx.pp
Notice: Compiled catalog for overcloud-controller-0.localdomain in environment production in 0.42 seconds
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/eqlx_pool]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/eqlx_cli_timeout]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/san_login]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/san_ip]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/san_password]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/san_thin_provision]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/eqlx_use_chap]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/eqlx_group_name]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/volume_backend_name]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/volume_driver]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/eqlx_cli_max_retries]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/eqlx_cli_max_retries]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/eqlx_group_name]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/eqlx_pool]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/volume_driver]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/san_login]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/san_password]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/volume_backend_name]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/san_thin_provision]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/eqlx_use_chap]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/eqlx_cli_timeout]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/san_ip]/ensure: created
Notice: Finished catalog run in 2.37 seconds

This results in the following added to the cinder.conf file:

[eqlx_1]
eqlx_pool=default
eqlx_cli_timeout=30
san_login=admin
san_ip=192.168.1.20
san_password=p@55w0rd!
san_thin_provision=True
eqlx_use_chap=False
eqlx_group_name=group-0
volume_backend_name=main
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
eqlx_cli_max_retries=5

[eqlx_2]
eqlx_cli_max_retries=5
eqlx_group_name=group-0
eqlx_pool=default
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
san_login=admin
san_password=p@55w0rd!
volume_backend_name=main
san_thin_provision=True
eqlx_use_chap=False
eqlx_cli_timeout=30
san_ip=192.168.1.21

== Heat Template for Multiple Backends (eqlx-config.yaml)

This is a file that defines extra configuration to add to the Overcloud.
The OS::Heat::SoftwareConfig resource defines the configuration to use (in our case, the cinder-eqlx.pp manifest), and the OS::Heat::SoftwareDeployments resource applies it to our servers.

== Environment File for Multiple Backends (eqlx-environment.yaml)

This is the file we use to call our Heat template. Include this file with the director's Overcloud deployment command like so:

$ openstack overcloud deploy --templates -e /home/stack/templates/eqlx-environment.yaml

== Notes

* This example has hardcoded data in the manifest, which I don't think is the best approach. A better way would be to pass data from the Heat templates to the Puppet manifest.
* This applies the config to all nodes, which might cause a failure on the Compute nodes (because there's no cinder.conf file to edit). We might need to add logic so that this only runs on the Controller nodes.
* We might also need to add the following to the eqlx-environment.yaml file:

parameters:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: false
  CinderEnableNfsBackend: false
  NovaEnableRbdBackend: false
  GlanceBackend: file

This disables all back ends directly supported by the TripleO Heat Templates. Also, set GlanceBackend to one of:
* 'file', which makes Glance use a mount at /var/lib/glance/images on each Controller node for image storage
* 'swift', which makes Glance use Swift for image storage
* 'cinder', which makes Glance use Cinder for image storage
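As a rough sketch (resource and parameter names here are illustrative, not copied from the attached eqlx-config.yaml), the SoftwareConfig/SoftwareDeployments structure described above might look like:

```yaml
heat_template_version: 2014-10-16

parameters:
  servers:
    type: json

resources:
  # Wraps the Puppet manifest so Heat can deliver it to the nodes
  EqlxConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: puppet
      config:
        get_file: cinder-eqlx.pp

  # Applies the configuration above to the Overcloud servers
  EqlxDeployments:
    type: OS::Heat::SoftwareDeployments
    properties:
      config: {get_resource: EqlxConfig}
      servers: {get_param: servers}
```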
Created attachment 1118677 [details] updated Heat template for multiple EqualLogic back ends This [proposed] Heat template contains the following custom resources: * EqlxConfig - calls the custom puppet manifest cinder-eqlx.pp, where we define the back ends for our deployment. * CinderRestartConfig - restarts the Cinder service after orchestrating the back end configuration.
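For reference, a restart resource like CinderRestartConfig could be sketched as follows. This is a minimal illustration, not the attached template itself, and the service names assume the RHEL OSP packaging of Cinder:

```yaml
  CinderRestartConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        # Restart the Cinder services so the new back end config takes effect
        systemctl restart openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume
```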
Created attachment 1118678 [details] updated puppet manifest for multiple EqualLogic back ends Aside from the back end definitions, this updated puppet manifest includes the following: * a regex check to prevent the manifest from running on non-Controller nodes * a function that passes each volume's name to the 'enabled_backends' parameter in /etc/cinder/cinder.conf
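Sketching those two additions for clarity (the regex, hostname fact, and back end names here are assumptions on my part, not the contents of the attached manifest):

```puppet
# Hypothetical sketch: skip non-Controller nodes (regex is illustrative)
if $::hostname =~ /controller/ {
  # ... cinder::backend::eqlx { 'eqlx_1': ... } definitions go here ...

  # Pass each back end's name to enabled_backends in /etc/cinder/cinder.conf
  class { '::cinder::backends':
    enabled_backends => ['eqlx_1', 'eqlx_2'],
  }
}
```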
Closing this BZ, as it is now up on the portal: https://access.redhat.com/documentation/en/red-hat-openstack-platform/version-8/custom-block-storage-back-end-deployment-guide/#envfile