Bug 1278721 - [Docs] [Dell EqualLogic] Manually updating overcloud nodes is not recommended. We should update overcloud configuration from undercloud node.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: documentation
Version: 7.0 (Kilo)
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: z4
Target Release: 7.0 (Kilo)
Assignee: Don Domingo
QA Contact:
URL:
Whiteboard:
Depends On: 1273462
Blocks: 1290662 1302085 1339413
 
Reported: 2015-11-06 09:35 UTC by Pratik Pravin Bandarkar
Modified: 2019-09-12 09:14 UTC
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-02 11:27:04 UTC
Target Upstream Version:
Embargoed:
vaggarwa: needinfo+


Attachments
Dell StorageCenter draft environment file (744 bytes, text/plain) - 2015-11-13 06:58 UTC, Don Domingo
Dell EqualLogic draft environment file (576 bytes, text/plain) - 2015-11-13 06:58 UTC, Don Domingo
draft ENV file for declaring multiple back ends (83 bytes, text/plain) - 2015-11-16 01:23 UTC, Don Domingo
draft ENV file for declaring multiple back ends (82 bytes, text/plain) - 2015-11-16 01:33 UTC, Don Domingo
Puppet Manifest for Multiple Backends (942 bytes, text/plain) - 2015-12-16 04:01 UTC, Dan Macpherson
Heat Template for Multiple Backends (532 bytes, text/plain) - 2015-12-16 04:02 UTC, Dan Macpherson
Environment File for Multiple Backends (93 bytes, text/plain) - 2015-12-16 04:02 UTC, Dan Macpherson
updated Heat template for multiple EqualLogic back ends (1007 bytes, text/plain) - 2016-01-27 04:39 UTC, Don Domingo
updated puppet manifest for multiple EqualLogic back ends (1.93 KB, text/plain) - 2016-01-27 04:43 UTC, Don Domingo

Description Pratik Pravin Bandarkar 2015-11-06 09:35:41 UTC
Description of problem:
It looks like the documentation for RHEL-OSP 7 includes a guide for integrating a Dell EqualLogic SAN, but it was written for older versions of OSP and involves manually editing the configuration on the Controller nodes. Here is the page:

https://access.redhat.com/documentation/en/red-hat-enterprise-linux-openstack-platform/version-7/red-hat-enterprise-linux-openstack-platform-7-dell-equallogic-back-end-guide/dell-equallogic-back-end-guide

____

Manually updating overcloud nodes is not recommended. The overcloud configuration should be updated from the undercloud node.

Manual changes will be overwritten by OSP director during the Puppet-based software deployment stages, so this is not good practice.
___


1. The documentation should recommend updating the overcloud configuration from the undercloud node.

2. We should provide the correct configuration/YAML to use from the undercloud system to configure the Dell EqualLogic back end for Cinder/Glance/Nova.



Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:

1. The documentation should recommend updating the overcloud configuration from the undercloud node.

2. We should provide the correct configuration/YAML to use from the undercloud system to configure the Dell EqualLogic back end for Cinder/Glance/Nova.


Additional info:

Comment 2 Andrew Dahms 2015-11-12 02:22:31 UTC
Assigning to Don for review.

Comment 3 Don Domingo 2015-11-13 06:58:10 UTC
Created attachment 1093556 [details]
Dell StorageCenter draft environment file

Comment 4 Don Domingo 2015-11-13 06:58:41 UTC
Created attachment 1093557 [details]
Dell EqualLogic draft environment file

Comment 6 Don Domingo 2015-11-16 01:23:16 UTC
Created attachment 1094696 [details]
draft ENV file for declaring multiple back ends

As I understand it, declaring multiple Cinder back ends via the director requires:

* one environment file per back-end definition
* a separate ENV file declaring all enabled back ends. In the attachment, I believe VALUE would be the comma-delimited list of each back end's volume_backend_name.
* all environment files to be passed to 'openstack overcloud deploy' via the '-e' option (as shown below)
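
For example, if there is one environment file per back end plus a shared file enabling them, the deployment command would pass them all via repeated '-e' options (the file names here are hypothetical):

$ openstack overcloud deploy --templates \
  -e /home/stack/templates/eqlx-backend1.yaml \
  -e /home/stack/templates/eqlx-backend2.yaml \
  -e /home/stack/templates/enabled-backends.yaml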

Comment 7 Don Domingo 2015-11-16 01:33:22 UTC
Created attachment 1094700 [details]
draft ENV file for declaring multiple back ends

Comment 11 Dan Macpherson 2015-12-16 04:01:41 UTC
Created attachment 1106279 [details]
Puppet Manifest for Multiple Backends

Comment 12 Dan Macpherson 2015-12-16 04:02:12 UTC
Created attachment 1106280 [details]
Heat Template for Multiple Backends

Comment 13 Dan Macpherson 2015-12-16 04:02:42 UTC
Created attachment 1106281 [details]
Environment File for Multiple Backends

Comment 14 Dan Macpherson 2015-12-16 04:23:47 UTC
Have attached the files for multiple backends. Just a bit of explanation:

== Puppet Manifest for Multiple Backends (cinder-eqlx.pp)

This is the Puppet manifest for configuring the multiple back-end parameters in Cinder. All it does is pass per-back-end values to the cinder::backend::eqlx Puppet class, which adds a new section (in our case, "eqlx_1" and "eqlx_2") to the cinder.conf file with the required parameters.
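
For illustration only (this is not the attached cinder-eqlx.pp), a minimal sketch of such a manifest, assuming the cinder::backend::eqlx parameters mirror the cinder.conf keys in the output below:

# Sketch only; values taken from the cinder.conf output further down.
cinder::backend::eqlx { 'eqlx_1':
  volume_backend_name  => 'main',
  san_ip               => '192.168.1.20',
  san_login            => 'admin',
  san_password         => 'p@55w0rd!',
  san_thin_provision   => true,
  eqlx_group_name      => 'group-0',
  eqlx_pool            => 'default',
  eqlx_use_chap        => false,
  eqlx_cli_timeout     => 30,
  eqlx_cli_max_retries => 5,
}

cinder::backend::eqlx { 'eqlx_2':
  volume_backend_name  => 'main',
  san_ip               => '192.168.1.21',
  san_login            => 'admin',
  san_password         => 'p@55w0rd!',
  san_thin_provision   => true,
  eqlx_group_name      => 'group-0',
  eqlx_pool            => 'default',
  eqlx_use_chap        => false,
  eqlx_cli_timeout     => 30,
  eqlx_cli_max_retries => 5,
}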

You can essentially run this manually on a node by running the following:

[heat-admin@overcloud-controller-0 ~]$ sudo puppet apply cinder_eqlx.pp 
Notice: Compiled catalog for overcloud-controller-0.localdomain in environment production in 0.42 seconds
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/eqlx_pool]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/eqlx_cli_timeout]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/san_login]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/san_ip]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/san_password]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/san_thin_provision]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/eqlx_use_chap]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/eqlx_group_name]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/volume_backend_name]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/volume_driver]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_1]/Cinder_config[eqlx_1/eqlx_cli_max_retries]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/eqlx_cli_max_retries]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/eqlx_group_name]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/eqlx_pool]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/volume_driver]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/san_login]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/san_password]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/volume_backend_name]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/san_thin_provision]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/eqlx_use_chap]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/eqlx_cli_timeout]/ensure: created
Notice: /Stage[main]/Main/Cinder::Backend::Eqlx[eqlx_2]/Cinder_config[eqlx_2/san_ip]/ensure: created
Notice: Finished catalog run in 2.37 seconds

This results in the following added to the cinder.conf file:

[eqlx_1]
eqlx_pool=default
eqlx_cli_timeout=30
san_login=admin
san_ip=192.168.1.20
san_password=p@55w0rd!
san_thin_provision=True
eqlx_use_chap=False
eqlx_group_name=group-0
volume_backend_name=main
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
eqlx_cli_max_retries=5

[eqlx_2]
eqlx_cli_max_retries=5
eqlx_group_name=group-0
eqlx_pool=default
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
san_login=admin
san_password=p@55w0rd!
volume_backend_name=main
san_thin_provision=True
eqlx_use_chap=False
eqlx_cli_timeout=30
san_ip=192.168.1.21

== Heat Template for Multiple Backends (eqlx-config.yaml)

This file defines extra configuration to add to the overcloud. The OS::Heat::SoftwareConfig resource defines the configuration to use (in our case, the cinder-eqlx.pp manifest), and the OS::Heat::SoftwareDeployments resource applies it to our servers.
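
For illustration only (this is not the attached eqlx-config.yaml), the structure described above might look roughly like this; the 'servers' parameter and the template version are assumptions:

heat_template_version: 2014-10-16

parameters:
  # Map of overcloud servers supplied by the parent templates (assumption)
  servers:
    type: json

resources:
  EqlxConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: puppet
      config: {get_file: cinder-eqlx.pp}

  EqlxDeployments:
    type: OS::Heat::SoftwareDeployments
    properties:
      servers: {get_param: servers}
      config: {get_resource: EqlxConfig}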

== Environment File for Multiple Backends (eqlx-environment.yaml) 

This is the file we use to call our Heat template. Include it in the director's overcloud deployment command like so:

$ openstack overcloud deploy --templates -e /home/stack/templates/eqlx-environment.yaml
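
For illustration only (this is not the attached eqlx-environment.yaml), such an environment file typically just maps a TripleO extra-config hook to the Heat template above; the registry key shown here is an assumption and may differ between tripleo-heat-templates versions:

resource_registry:
  # Assumed hook name; use whichever post-deployment extra-config hook
  # your version of the tripleo-heat-templates provides.
  OS::TripleO::NodeExtraConfigPost: /home/stack/templates/eqlx-config.yaml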

== Notes

* Note that this example hardcodes data in the manifest, which isn't ideal. A better approach would be to pass the data from the Heat template to the Puppet manifest.

* This applies the configuration to all nodes, which might cause a failure on the Compute nodes (because there is no cinder.conf file to edit there). We might need to add logic so that it only runs on the Controller nodes.

* We might also need to add the following to the eqlx-environment.yaml file:

parameters:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: false
  CinderEnableNfsBackend: false
  NovaEnableRbdBackend: false
  GlanceBackend: file

This disables all back ends directly supported by the TripleO Heat templates. Also set GlanceBackend to one of:

* 'file', which uses a mount for Glance storage at /var/lib/glance/images on each Controller node
* 'swift', which makes Glance use Swift for image storage
* 'cinder', which makes Glance use Cinder for image storage

Comment 16 Don Domingo 2016-01-27 04:39:58 UTC
Created attachment 1118677 [details]
updated Heat template for multiple EqualLogic back ends

This [proposed] Heat template contains the following custom resources:

* EqlxConfig - calls the custom puppet manifest cinder-eqlx.pp, where we define the back ends for our deployment.

* CinderRestartConfig - restarts the Cinder service after orchestrating the back end configuration.
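
For illustration only (this is not the actual attachment), the restart resource might be a script-based SoftwareConfig along these lines, slotted into the template's resources section; the service names assume an OSP 7 Controller node:

  CinderRestartConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        # Restart the Cinder services so they pick up the new back-end sections
        systemctl restart openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume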

Comment 17 Don Domingo 2016-01-27 04:43:54 UTC
Created attachment 1118678 [details]
updated puppet manifest for multiple EqualLogic back ends

Aside from the back end definitions, this updated puppet manifest includes the following:

* a regex check to prevent the manifest from running on non-Controller nodes

* a function that passes each back end's name to the 'enabled_backends' parameter in /etc/cinder/cinder.conf (see the sketch below)
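
For illustration only (this is not the actual attachment), the two additions might look roughly like this; the hostname pattern is an assumption, and the enabled_backends list is hardcoded here rather than built by a function:

if $::hostname =~ /controller/ {

  # Back-end definitions (cinder::backend::eqlx resources) would go here.

  # Enable both back ends in the DEFAULT section of /etc/cinder/cinder.conf.
  cinder_config { 'DEFAULT/enabled_backends':
    value => 'eqlx_1,eqlx_2',
  }
}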

