Bug 1266461
Summary: | When using dedicated BlockStorage nodes it does not disable Cinder backend creation in the controller | |
---|---|---|---
Product: | Red Hat OpenStack | Reporter: | Pedro Navarro <pnavarro> |
Component: | rhosp-director | Assignee: | John Trowbridge <jtrowbri> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | Arik Chernetsky <achernet> |
Severity: | medium | Docs Contact: | |
Priority: | medium | ||
Version: | 7.0 (Kilo) | CC: | abishop, eharney, hbrock, jcoufal, jraju, jtrowbri, mburns, mori, pgrist, pnavarro, rhel-osp-director-maint, tvignaud |
Target Milestone: | --- | ||
Target Release: | 12.0 (Pike) | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2017-11-01 14:10:42 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1301859 |
Description Pedro Navarro 2015-09-25 10:47:47 UTC
Hello Team,

Do we have any workaround for this bug?

Regards,
Jaison R

What is the net effect of having the cinder-volume service running on the controller nodes when you've configured block storage?

The net effect is that when I create a volume, I expected it to be created only on the block storage node and not on the controller. Moreover, the cinder-volume worker from the block storage node is not in the pacemaker cluster.

John, I think you were working on a fix for this already, weren't you? If so, can you close this as a duplicate of whatever bug is driving that fix?

The only driver for what I am working on is the Trello card for this feature. I did immediately find this behavior and thought it was unlikely to be desired, though. I will use this BZ for tracking a fix.

I have investigated this a bit more, and there is a pretty easy workaround. The following can be included in an environment file passed to the deploy:

    parameters:
      controllerExtraConfig:
        cinder::volume::manage_service: false

This will cause the cinder-volume service not to run on the controller nodes. So far I have only tested this on a single-controller setup. I will investigate the HA case today.

The above heat environment is actually not working for me on OSP 8. (I had tested on upstream originally.) Looking at the code for the HA case, this simple fix won't work there either.
(There are hard-coded constraints referencing cinder-volume on the controller.)

As for the environment not working, it seems like either a puppet-cinder or cinder issue, as directly running puppet apply on the controller does not stop the service:

    [root@overcloud-controller-0 heat-admin]# cat /etc/hiera.yaml
    ---
    :backends:
      - json
      - yaml
    :json:
      :datadir: /etc/puppet/hieradata
    :yaml:
      :datadir: /etc/puppet/hieradata
    :hierarchy:
      - controller_extraconfig

    [root@overcloud-controller-0 heat-admin]# cat /etc/puppet/hieradata/controller_extraconfig.yaml
    cinder::volume::manage_service: true
    cinder::volume::enabled: false
    nova::debug: true

    [root@overcloud-controller-0 heat-admin]# puppet apply --hiera_config /etc/hiera.yaml /etc/puppet/modules/cinder/manifests/volume.pp
    Could not retrieve fact='apache_version', resolution='<anonymous>': undefined method `[]' for nil:NilClass
    Could not retrieve fact='apache_version', resolution='<anonymous>': undefined method `[]' for nil:NilClass
    Notice: Compiled catalog for overcloud-controller-0.localdomain in environment production in 0.01 seconds
    Notice: Finished catalog run in 0.28 seconds

    [root@overcloud-controller-0 heat-admin]# systemctl status openstack-cinder-volume
    ● openstack-cinder-volume.service - Cluster Controlled openstack-cinder-volume
       Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; disabled; vendor preset: disabled)
      Drop-In: /run/systemd/system/openstack-cinder-volume.service.d
               └─50-pacemaker.conf
       Active: active (running) since Tue 2016-03-01 14:32:40 UTC; 38min ago
     Main PID: 21188 (cinder-volume)
       CGroup: /system.slice/openstack-cinder-volume.service
               ├─21188 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --conf...
               └─21203 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --conf...

I have confirmed that the behavior above is specific to OSP 8, and that it works as expected on RDO Liberty. I think that is its own bug, though.
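For reference, the workaround described earlier in the thread can be wired into a deployment roughly as follows. This is a minimal sketch, not part of the original report: the file name `disable-controller-cinder-volume.yaml` is an arbitrary choice of mine; the parameter and hiera key come from the comments above.

```shell
# Write the workaround environment file from the thread above.
# (File name is hypothetical; the contents are the documented workaround.)
cat > disable-controller-cinder-volume.yaml <<'EOF'
parameters:
  controllerExtraConfig:
    cinder::volume::manage_service: false
EOF

# On a real undercloud this would then be passed to the overcloud deploy, e.g.:
#   openstack overcloud deploy --templates -e disable-controller-cinder-volume.yaml
```

Note that, per the later comments, this only helped on non-HA setups; on HA controllers the pacemaker constraints still forced cinder-volume to run.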
I also confirmed that HA is not helped by this simple workaround. Fixing the HA case is not trivial, because we have two pacemaker constraints referencing cinder-volume:

https://github.com/openstack/tripleo-heat-templates/blob/stable/liberty/puppet/manifests/overcloud_controller_pacemaker.pp#L1110-L1125

This bug did not make the OSP 8.0 release. It is being deferred to OSP 10.

The bug was filed against OSP 7, which predates support for composable roles. I'm closing this because the target release supports composable roles, and because the customer case has been closed.

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.
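The closing comment points at composable roles as the supported resolution on the target release. As an illustration only (the exact `roles_data.yaml` layout depends on the tripleo-heat-templates version in use, and the service lists below are abbreviated), a custom roles file achieves the intent of this bug by leaving the cinder-volume service out of the Controller role entirely:

```yaml
# Hypothetical excerpt of a custom roles_data.yaml (Pike-era format assumed).
# The Controller role simply omits OS::TripleO::Services::CinderVolume, so
# cinder-volume runs only on the dedicated BlockStorage nodes.
- name: Controller
  ServicesDefault:
    - OS::TripleO::Services::CinderApi
    - OS::TripleO::Services::CinderScheduler
    # OS::TripleO::Services::CinderVolume intentionally not listed here
    # ... remaining controller services unchanged ...
- name: BlockStorage
  ServicesDefault:
    - OS::TripleO::Services::BlockStorageCinderVolume
    # ... remaining block storage services unchanged ...
```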