Bug 1266461 - When using dedicated BlockStorage nodes it does not disable Cinder backend creation in the controller [NEEDINFO]
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 12.0 (Pike)
Assigned To: John Trowbridge
QA Contact: Arik Chernetsky
Depends On:
Blocks: 1301859
Reported: 2015-09-25 06:47 EDT by Pedro Navarro
Modified: 2017-11-01 10:10 EDT
CC: 12 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2017-11-01 10:10:42 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
pgrist: needinfo? (jtrowbri)

Attachments

External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 1757613 None None None Never

Description Pedro Navarro 2015-09-25 06:47:47 EDT
Description of problem: When deploying dedicated block storage nodes using the block storage flavor, the deployment does not disable cinder-volume on the controllers.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. openstack flavor create --id auto --ram 4096 --disk 18 --vcpus 2 block
2. openstack flavor set --property "cpu_arch"="x86_64" \
    --property "capabilities:boot_option"="local" \
    --property "capabilities:profile"="block" block
3. ironic node-update 17578838-a0be-4e61-9a3a-38b6d8e86162 replace properties/capabilities='profile:block,boot_option:local'
4. Enable CinderEnableIscsiBackend and disable CinderEnableRbdBackend in the storage-environment file
5. openstack overcloud deploy --templates ~/block_templates/ \
    --ntp-server \
    --control-flavor control --compute-flavor compute --block-storage-flavor block \
    --control-scale 3 --compute-scale 2 --block-storage-scale 1 \
    --neutron-tunnel-types vxlan --neutron-network-type vxlan \
    -e ~/block_templates/environments/storage-environment.yaml \
    -e ~/block_templates/advanced-networking.yaml \
    -e ~/block_templates/firstboot-environment.yaml
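
For reference, step 4 corresponds to settings along these lines in the storage environment file. This is a sketch, not the reporter's actual file; CinderEnableIscsiBackend and CinderEnableRbdBackend are tripleo-heat-templates parameters, and all other settings are omitted:

```yaml
# Sketch of the relevant portion of storage-environment.yaml (step 4).
# Only the two backend toggles from the reproduction steps are shown.
parameter_defaults:
  CinderEnableIscsiBackend: true
  CinderEnableRbdBackend: false
```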

Actual results:
The cinder-volume service is enabled on both the controller nodes and the block storage node.

Expected results:
The cinder-volume service is enabled only on the block storage node.

Additional info:

It would also be good for the block storage node to be included in the Pacemaker cluster; currently only the controller's service is managed:

 openstack-cinder-volume	(systemd:openstack-cinder-volume):	Started overcloud-controller-0
Comment 2 Jaison Raju 2015-09-28 08:10:05 EDT
Hello Team,

Do we have any workaround for this bug?

Jaison R
Comment 3 chris alfonso 2015-09-30 12:21:35 EDT
What is the net effect of having the cinder volume service running on the controller nodes when you've configured block storage?
Comment 5 Pedro Navarro 2015-10-01 08:38:03 EDT
The net effect is that when I create a volume, I expect it to be created only on the block storage node, not on the controller. Moreover, the cinder-volume worker from the block storage node is not in the Pacemaker cluster.
Comment 7 Hugh Brock 2016-02-28 02:51:13 EST
John, I think you were already working on a fix for this, weren't you? If so, can you close this as a duplicate of whatever bug is driving that fix?
Comment 8 John Trowbridge 2016-02-29 07:09:58 EST
The only driver for what I am working on is the trello card for this feature. 

I did immediately notice this behavior, though, and thought it was unlikely to be desired.

I will use this BZ for tracking a fix.
Comment 9 John Trowbridge 2016-02-29 14:11:52 EST
I have investigated this a bit more, and there is a pretty easy workaround. The following can be included in an environment file passed to the deploy:

        cinder::volume::manage_service: false

This will cause the cinder-volume service not to run on the controller nodes.

So far I have only tested this on a single controller setup. I will investigate the HA case today.
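
One way to pass that hiera value through the deploy is via an extra environment file. This is a sketch, not a file from the bug report: the file name is hypothetical, and ControllerExtraConfig is the tripleo-heat-templates parameter that feeds the controller_extraconfig hieradata shown later in this bug:

```yaml
# Hypothetical environment file, e.g. disable-controller-cinder-volume.yaml,
# passed with an extra -e flag on the overcloud deploy command.
parameter_defaults:
  ControllerExtraConfig:
    cinder::volume::manage_service: false
```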
Comment 10 John Trowbridge 2016-03-01 10:17:54 EST
The above heat environment override is actually not working for me on OSP 8 (I had originally tested upstream). Looking at the code for the HA case, this simple fix won't work there either: there are hard-coded constraints referencing cinder-volume on the controller.

As far as the environment not working it seems like either a puppet-cinder or cinder issue, as directly running puppet apply on the controller does not stop the service:

[root@overcloud-controller-0 heat-admin]# cat /etc/hiera.yaml
:backends:
  - json
  - yaml
:json:
  :datadir: /etc/puppet/hieradata
:yaml:
  :datadir: /etc/puppet/hieradata
:hierarchy:
  - controller_extraconfig

[root@overcloud-controller-0 heat-admin]# cat /etc/puppet/hieradata/controller_extraconfig.yaml 
cinder::volume::manage_service: true
cinder::volume::enabled: false
nova::debug: true

[root@overcloud-controller-0 heat-admin]# puppet apply --hiera_config /etc/hiera.yaml /etc/puppet/modules/cinder/manifests/volume.pp
Could not retrieve fact='apache_version', resolution='<anonymous>': undefined method `[]' for nil:NilClass
Could not retrieve fact='apache_version', resolution='<anonymous>': undefined method `[]' for nil:NilClass
Notice: Compiled catalog for overcloud-controller-0.localdomain in environment production in 0.01 seconds
Notice: Finished catalog run in 0.28 seconds

[root@overcloud-controller-0 heat-admin]# systemctl status openstack-cinder-volume
● openstack-cinder-volume.service - Cluster Controlled openstack-cinder-volume
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; disabled; vendor preset: disabled)
  Drop-In: /run/systemd/system/openstack-cinder-volume.service.d
   Active: active (running) since Tue 2016-03-01 14:32:40 UTC; 38min ago
 Main PID: 21188 (cinder-volume)
   CGroup: /system.slice/openstack-cinder-volume.service
           ├─21188 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --conf...
           └─21203 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --conf...
Comment 11 John Trowbridge 2016-03-01 12:16:12 EST
I have confirmed that the behavior above is specific to OSP 8; it works as expected on RDO Liberty. I think that is its own bug, though.

I also confirmed that the HA case is not helped by this simple workaround. Fixing the HA case is not trivial because we have two pacemaker constraints referencing cinder-volume.
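
Those constraints can be located with pcs. A minimal sketch of the filtering step, using hypothetical constraint IDs in place of live `pcs constraint --full` output:

```shell
# On a live controller you would pipe real output:
#   pcs constraint --full | grep cinder-volume
# Here we filter sample text with hypothetical constraint IDs
# to show what is being searched for.
pcs_output='order-cinder-scheduler-clone-cinder-volume-mandatory
colocation-cinder-volume-with-cinder-scheduler-clone-INFINITY
order-haproxy-clone-keystone-clone-mandatory'
printf '%s\n' "$pcs_output" | grep 'cinder-volume'
```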

Comment 12 Mike Burns 2016-04-07 16:50:54 EDT
This bug did not make the OSP 8.0 release.  It is being deferred to OSP 10.
Comment 15 Alan Bishop 2017-11-01 10:10:42 EDT
The bug was filed against OSP 7, which predates support for composable roles. I'm closing this because the target release supports composable roles, and because the customer case has been closed.
