Bug 1266461

Summary: Deploying dedicated BlockStorage nodes does not disable the Cinder backend on the controller
Product: Red Hat OpenStack
Reporter: Pedro Navarro <pnavarro>
Component: rhosp-director
Assignee: John Trowbridge <jtrowbri>
Status: CLOSED CURRENTRELEASE
QA Contact: Arik Chernetsky <achernet>
Severity: medium
Priority: medium
Version: 7.0 (Kilo)
CC: abishop, eharney, hbrock, jcoufal, jraju, jtrowbri, mburns, mori, pgrist, pnavarro, rhel-osp-director-maint, tvignaud
Target Milestone: ---
Target Release: 12.0 (Pike)
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Type: Bug
Last Closed: 2017-11-01 14:10:42 UTC
Bug Blocks: 1301859

Description Pedro Navarro 2015-09-25 10:47:47 UTC
Description of problem: When deploying dedicated block storage nodes using the block storage flavor, the cinder-volume service is not disabled on the controllers.


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. openstack flavor create --id auto --ram 4096 --disk 18 --vcpus 2 block
2. openstack flavor set --property "cpu_arch"="x86_64" \
    --property "capabilities:boot_option"="local" \
    --property "capabilities:profile"="block" block
3. ironic node-update 17578838-a0be-4e61-9a3a-38b6d8e86162 replace properties/capabilities='profile:block,boot_option:local'
4. Enable CinderEnableIscsiBackend and disable CinderEnableRbdBackend in the storage-environment.yaml file
5. openstack overcloud deploy --templates ~/block_templates/ \
    --ntp-server 10.5.26.10 \
    --control-flavor control --compute-flavor compute --block-storage-flavor block \
    --control-scale 3 --compute-scale 2 --block-storage-scale 1 \
    --neutron-tunnel-types vxlan --neutron-network-type vxlan \
    -e ~/block_templates/environments/storage-environment.yaml \
    -e ~/block_templates/advanced-networking.yaml \
    -e ~/block_templates/firstboot-environment.yaml
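
For step 4, the change amounts to a small fragment in the storage environment file, along these lines (a sketch; the exact layout of storage-environment.yaml may vary between releases):

```yaml
# Hypothetical excerpt of storage-environment.yaml for step 4:
# enable the iSCSI/LVM backend and disable the Ceph RBD backend.
parameter_defaults:
  CinderEnableIscsiBackend: true
  CinderEnableRbdBackend: false
```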

Actual results:
The cinder-volume service is enabled on both the controller node and the block storage node

Expected results:
The cinder-volume service is enabled only on the block storage node

Additional info:

It would also be good if the block storage node were included in the pacemaker cluster; the controller is the only node where the resource is enabled:

 openstack-cinder-volume	(systemd:openstack-cinder-volume):	Started overcloud-controller-0

Comment 2 Jaison Raju 2015-09-28 12:10:05 UTC
Hello Team,

Do we have any workaround for this bug ?

Regards,
Jaison R

Comment 3 chris alfonso 2015-09-30 16:21:35 UTC
What is the net effect of having the cinder volume service running on the controller nodes when you've configured block storage?

Comment 5 Pedro Navarro 2015-10-01 12:38:03 UTC
The net effect is that when I create a volume, I expect it to be created only on the block storage node and not on the controller. Moreover, the cinder-volume worker on the block storage node is not part of the pacemaker cluster.

Comment 7 Hugh Brock 2016-02-28 07:51:13 UTC
John, I think you were working on a fix for this already, weren't you? If so, can you close this as a duplicate of whatever bug is driving that fix?

Comment 8 John Trowbridge 2016-02-29 12:09:58 UTC
The only driver for what I am working on is the trello card for this feature. 

I did immediately notice this behavior, though, and thought it was unlikely to be desired.

I will use this BZ for tracking a fix.

Comment 9 John Trowbridge 2016-02-29 19:11:52 UTC
I have investigated this a bit more, and there is a pretty easy workaround. The following can be included in an environment file passed to the deploy:

parameters:
    controllerExtraConfig:
        cinder::volume::manage_service: false

This will cause the cinder-volume service not to run on the controller nodes.

So far I have only tested this on a single controller setup. I will investigate the HA case today.
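
For reference, the workaround above would be saved to its own environment file (the file name here is illustrative):

```yaml
# disable-controller-cinder-volume.yaml (hypothetical file name)
parameters:
    controllerExtraConfig:
        cinder::volume::manage_service: false
```

and then passed with an extra `-e disable-controller-cinder-volume.yaml` on the `openstack overcloud deploy` command line.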

Comment 10 John Trowbridge 2016-03-01 15:17:54 UTC
The above heat environment is actually not working for me on OSP8 (I had originally tested upstream). Looking at the code for the HA case, this simple fix won't work there either: there are hard-coded pacemaker constraints referencing cinder-volume on the controller.

As for the environment not working, it seems like either a puppet-cinder or Cinder issue, since running puppet apply directly on the controller does not stop the service:

[root@overcloud-controller-0 heat-admin]# cat /etc/hiera.yaml 
---
:backends:
  - json
  - yaml
:json:
  :datadir: /etc/puppet/hieradata
:yaml:
  :datadir: /etc/puppet/hieradata
:hierarchy:
  - controller_extraconfig

[root@overcloud-controller-0 heat-admin]# cat /etc/puppet/hieradata/controller_extraconfig.yaml 
cinder::volume::manage_service: true
cinder::volume::enabled: false
nova::debug: true

[root@overcloud-controller-0 heat-admin]# puppet apply --hiera_config /etc/hiera.yaml /etc/puppet/modules/cinder/manifests/volume.pp
Could not retrieve fact='apache_version', resolution='<anonymous>': undefined method `[]' for nil:NilClass
Could not retrieve fact='apache_version', resolution='<anonymous>': undefined method `[]' for nil:NilClass
Notice: Compiled catalog for overcloud-controller-0.localdomain in environment production in 0.01 seconds
Notice: Finished catalog run in 0.28 seconds

[root@overcloud-controller-0 heat-admin]# systemctl status openstack-cinder-volume
● openstack-cinder-volume.service - Cluster Controlled openstack-cinder-volume
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; disabled; vendor preset: disabled)
  Drop-In: /run/systemd/system/openstack-cinder-volume.service.d
           └─50-pacemaker.conf
   Active: active (running) since Tue 2016-03-01 14:32:40 UTC; 38min ago
 Main PID: 21188 (cinder-volume)
   CGroup: /system.slice/openstack-cinder-volume.service
           ├─21188 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --conf...
           └─21203 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --conf...

Comment 11 John Trowbridge 2016-03-01 17:16:12 UTC
I have confirmed that the behavior above is specific to OSP8, and works as expected on RDO Liberty. I think that is its own bug though.

I also confirmed that HA is not helped by this simple workaround. Fixing the HA case is not trivial because we have two pacemaker constraints referencing cinder-volume: 

https://github.com/openstack/tripleo-heat-templates/blob/stable/liberty/puppet/manifests/overcloud_controller_pacemaker.pp#L1110-L1125

Comment 12 Mike Burns 2016-04-07 20:50:54 UTC
This bug did not make the OSP 8.0 release.  It is being deferred to OSP 10.

Comment 15 Alan Bishop 2017-11-01 14:10:42 UTC
The bug was filed against OSP-7, which predates support for composable roles. I'm closing this because the target release supports composable roles and because the customer case has been closed.
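
For context, with composable roles the placement this bug asks for is expressed directly in the roles data: the cinder-volume service appears only in the BlockStorage role's service list. An abridged sketch (not the exact upstream roles_data.yaml):

```yaml
# Abridged sketch: cinder-volume is composed only into BlockStorage.
- name: Controller
  ServicesDefault:
    # ...other controller services, without OS::TripleO::Services::CinderVolume
- name: BlockStorage
  ServicesDefault:
    - OS::TripleO::Services::CinderVolume
```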

Comment 16 Red Hat Bugzilla 2023-09-14 03:05:52 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days