Description of problem:
When deploying dedicated block storage nodes using the block storage flavor, the cinder-volume service is not disabled on the controllers.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. openstack flavor create --id auto --ram 4096 --disk 18 --vcpus 2 block
2. openstack flavor set --property "cpu_arch"="x86_64" \
     --property "capabilities:boot_option"="local" \
     --property "capabilities:profile"="block" block
3. ironic node-update 17578838-a0be-4e61-9a3a-38b6d8e86162 replace properties/capabilities='profile:block,boot_option:local'
4. Enable CinderEnableIscsiBackend and disable CinderEnableRbdBackend in the storage-environment.yaml file
5. openstack overcloud deploy --templates ~/block_templates/ \
     --ntp-server 10.5.26.10 \
     --control-flavor control --compute-flavor compute --block-storage-flavor block \
     --control-scale 3 --compute-scale 2 --block-storage-scale 1 \
     --neutron-tunnel-types vxlan --neutron-network-type vxlan \
     -e ~/block_templates/environments/storage-environment.yaml \
     -e ~/block_templates/advanced-networking.yaml \
     -e ~/block_templates/firstboot-environment.yaml

Actual results:
The cinder-volume service is enabled on the controller nodes as well as on the block storage node.

Expected results:
cinder-volume is enabled only on the block storage node.

Additional info:
It would be good if the block storage node were included in pacemaker too; currently the controller is the only node where the service is cluster-managed:

openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-0
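A quick way to confirm where cinder-volume is actually running (a hedged sketch using the standard clients; credentials file and node names are from this deployment and may differ elsewhere):

# From the undercloud, list cinder services and the host each one runs on;
# with the expected behavior, only the block storage node should appear:
$ source ~/overcloudrc
$ cinder service-list

# On a controller, show where pacemaker has started the service:
[root@overcloud-controller-0 ~]# pcs status | grep cinder-volume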
Hello Team,

Do we have any workaround for this bug?

Regards,
Jaison R
What is the net effect of having the cinder volume service running on the controller nodes when you've configured block storage?
The net effect is that when I create a volume, I expect it to be created only on the block storage node, not on the controller. Moreover, the cinder-volume worker from the block storage node is not in the pacemaker cluster.
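To make that concrete, here is a hedged check (assumes admin credentials; os-vol-host-attr:host is the admin-only attribute that reports which host backs the volume, and the volume name is illustrative):

$ cinder create --name test-vol 1
$ cinder show test-vol | grep os-vol-host-attr:host
# Expected: the block storage node (e.g. overcloud-blockstorage-0 in a
# default deployment; name is illustrative), not overcloud-controller-0.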
John, I think you were working on a fix for this already, weren't you? If so, can you close this as a duplicate of whatever bug is driving that fix?
The only driver for what I am working on is the trello card for this feature. I did immediately find this behavior, though, and thought it was unlikely to be desired. I will use this BZ for tracking a fix.
I have investigated this a bit more, and there is a pretty easy workaround. The following can be included in an environment file passed to the deploy:

parameters:
  controllerExtraConfig:
    cinder::volume::manage_service: false

This will cause the cinder-volume service not to run on the controller nodes. So far I have only tested this on a single-controller setup. I will investigate the HA case today.
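Spelled out end to end (a sketch; the file name disable-controller-cinder-volume.yaml is illustrative):

$ cat > ~/block_templates/disable-controller-cinder-volume.yaml <<'EOF'
parameters:
  controllerExtraConfig:
    cinder::volume::manage_service: false
EOF

# Then append it to the deploy command from the steps above:
$ openstack overcloud deploy --templates ~/block_templates/ \
    ... \
    -e ~/block_templates/disable-controller-cinder-volume.yaml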
The above heat environment is actually not working for me on OSP 8. (I had tested on upstream originally.) Looking at the code for the HA case, this simple fix won't work there either; there are hard-coded constraints referencing cinder-volume on the controller.

As for the environment not working, it seems like either a puppet-cinder or cinder issue, as directly running puppet apply on the controller does not stop the service:

[root@overcloud-controller-0 heat-admin]# cat /etc/hiera.yaml
---
:backends:
  - json
  - yaml
:json:
  :datadir: /etc/puppet/hieradata
:yaml:
  :datadir: /etc/puppet/hieradata
:hierarchy:
  - controller_extraconfig

[root@overcloud-controller-0 heat-admin]# cat /etc/puppet/hieradata/controller_extraconfig.yaml
cinder::volume::manage_service: true
cinder::volume::enabled: false
nova::debug: true

[root@overcloud-controller-0 heat-admin]# puppet apply --hiera_config /etc/hiera.yaml /etc/puppet/modules/cinder/manifests/volume.pp
Could not retrieve fact='apache_version', resolution='<anonymous>': undefined method `[]' for nil:NilClass
Could not retrieve fact='apache_version', resolution='<anonymous>': undefined method `[]' for nil:NilClass
Notice: Compiled catalog for overcloud-controller-0.localdomain in environment production in 0.01 seconds
Notice: Finished catalog run in 0.28 seconds

[root@overcloud-controller-0 heat-admin]# systemctl status openstack-cinder-volume
● openstack-cinder-volume.service - Cluster Controlled openstack-cinder-volume
   Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; disabled; vendor preset: disabled)
  Drop-In: /run/systemd/system/openstack-cinder-volume.service.d
           └─50-pacemaker.conf
   Active: active (running) since Tue 2016-03-01 14:32:40 UTC; 38min ago
 Main PID: 21188 (cinder-volume)
   CGroup: /system.slice/openstack-cinder-volume.service
           ├─21188 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --conf...
           └─21203 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --conf...
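Note the 50-pacemaker.conf drop-in in the systemctl output above: on an HA controller the unit is cluster controlled, so puppet flipping the systemd service alone will not stop it; pacemaker will just keep it running. As a diagnostic only (not a fix, since this disables the resource cluster-wide rather than per-node):

[root@overcloud-controller-0 heat-admin]# pcs resource disable openstack-cinder-volume
[root@overcloud-controller-0 heat-admin]# pcs status | grep cinder-volume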
I have confirmed that the behavior above is specific to OSP 8; it works as expected on RDO Liberty. I think that is its own bug, though. I also confirmed that the HA case is not helped by this simple workaround. Fixing the HA case is not trivial because we have two pacemaker constraints referencing cinder-volume: https://github.com/openstack/tripleo-heat-templates/blob/stable/liberty/puppet/manifests/overcloud_controller_pacemaker.pp#L1110-L1125
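For reference, those constraints can be inspected on a deployed controller (hedged; exact constraint names vary by release, but on Liberty they amount to an ordering and a colocation tying cinder-volume to cinder-scheduler):

[root@overcloud-controller-0 heat-admin]# pcs constraint show | grep -i cinder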
This bug did not make the OSP 8.0 release. It is being deferred to OSP 10.
The bug was filed against OSP 7, which predates support for composable roles. I'm closing this because the target release supports composable roles, and because the customer case has been closed.
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.