| Summary: | After overcloud deployment with a dedicated block-storage node, new cinder volumes are still being created on a controller. | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Omri Hochman <ohochman> |
| Component: | rhosp-director | Assignee: | Elise Gafford <egafford> |
| Status: | CLOSED NOTABUG | QA Contact: | Tzach Shefi <tshefi> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 7.0 (Kilo) | CC: | abishop, egafford, eharney, jcoufal, kbasil, mburns, mcornea, ohochman, pgrist, rhel-osp-director-maint, sasha, sclewis, tonishim, tshefi, tvignaud |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | 12.0 (Pike) | | |
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-11-22 21:01:52 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
Omri Hochman
2016-01-08 21:02:11 UTC
First, cinder volumes are created on a controller with an active openstack-cinder-volume service; after that VG is filled, volumes are created on the blockstorage node.

(In reply to Alexander Chuzhoy from comment #2)
> First the cinder volumes are created on a controller with active
> openstack-cinder-volume and after the VG is filled, the volumes are created
> on the blockstorage node.

By default, Cinder schedules volumes wherever it finds space for them among its volume nodes. To ensure that volumes are created on the blockstorage node, you will have to create a Cinder volume type with extra specs that are only satisfied by that node, such as storage_protocol=ceph or volume_backend_name=cephstorage0. Then set default_volume_type in cinder.conf to that type, or pass it in with the volume create request. I can advise more specifically on how to do this, but I probably need more context to understand the desired behavior in the deployment.

Essentially, this is expected: cinder-volume runs on the controllers regardless of whether you have also deployed block storage nodes.

This bug did not make the OSP 8.0 release. It is being deferred to OSP 10.

No recent progress on this issue. Moving to RHOS 11 for triage.

Will give it a try and update. It might take a while; we lent some of our servers out and are awaiting their return. Keeping the needinfo so we don't forget this.

Omri, I'm confused: you mentioned volumes were also created on your controller rather than on the block storage node, as you checked with lvdisplay. From that I gather Cinder's backend was LVM, correct? I'm asking because I also noticed a Ceph node in your deployment command; did it not serve Cinder, or did you have a multi-backend Cinder configuration for Ceph+LVM?

Paul, on OSP 11 we still hit this. Installed 3 controllers, 1 compute, and 1 block-storage node. Created 3 Cinder volumes; 2 of them were created on the block-storage node as they should be, yet 1 was created on a controller node.
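The volume-type approach described in the comment above can be sketched roughly as follows. This is a hedged example, not a verified procedure from this deployment: the type name `blockstorage` is hypothetical, and `cephstorage0` is the illustrative backend name mentioned in the comment (it must match the `volume_backend_name` configured in the backend section of that node's cinder.conf).

```shell
# Create a volume type whose extra spec is only satisfied by the
# blockstorage node's backend, so the scheduler cannot place these
# volumes on a controller.
openstack volume type create blockstorage
openstack volume type set --property volume_backend_name=cephstorage0 blockstorage

# Either make it the default in /etc/cinder/cinder.conf:
#   [DEFAULT]
#   default_volume_type = blockstorage
# ...or pass the type explicitly with each create request:
openstack volume create --type blockstorage --size 1 myvol
```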
I'll try on Pike and update as soon as Infrared supports the Pike code.

Thanks for the testing, Tzach. As I read through this one, I'm not sure what the priority/severity should be. It seems logical that if you have storage nodes, that's where volumes should go, but any thoughts/comments on how important it is to fix this issue?

I'm not sure about priority/severity either. IMHO, if a customer "went to the trouble" of installing a dedicated storage node, they probably had a good enough reason to do so (probably an LVM deployment) and expect all volumes to be created on it; in an LVM use case this would suggest medium/high priority, since the storage node would hold the disks intended to back the LVM volumes. On the other hand, Cinder's most commonly used backend by far is Ceph (or NFS/EMC/NetApp, etc.), in which case the actual volumes aren't created on the storage node anyway but on a remote storage system, so in non-LVM deployments I'd guess this would be of lower priority.

FYI, I just noticed openstack-cinder-volume.service is also (still) running on all controller nodes, as well as on the dedicated Cinder block-storage node. That would explain how one of the volumes was created on a controller. So this bug might be caused by an incorrect deployment of the standalone Cinder node. I'll continue to research this and update if I find anything else.

This update is a result of discussions in a storage team bug scrub. Closing this as NOTABUG for a few reasons:
- As noted, deploying a blockstorage node does not automatically stop the cinder-volume service from being deployed on the controllers.
- The composable roles feature in recent OSP releases allows removing the cinder-volume service from the controllers, but not as far back as OSP 7.
- Even when volume services are running on both the controller and blockstorage nodes, you can control where volumes are created using Cinder volume types.
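The composable-roles fix mentioned in the closing note could be sketched as below: a trimmed copy of TripleO's roles_data.yaml in which the Controller role no longer lists the cinder-volume service, leaving it only on the BlockStorage role. This is an illustrative excerpt under the assumption of an OSP 10+ deployment, not a complete or verified roles file; the service lists here are heavily abbreviated.

```yaml
# roles_data.yaml (excerpt) -- remove OS::TripleO::Services::CinderVolume
# from the Controller role so only the BlockStorage role runs it.
- name: Controller
  ServicesDefault:
    - OS::TripleO::Services::CinderApi
    - OS::TripleO::Services::CinderScheduler
    # - OS::TripleO::Services::CinderVolume   # removed from controllers
- name: BlockStorage
  ServicesDefault:
    - OS::TripleO::Services::BlockStorageCinderVolume
```

The modified file would then be passed to the overcloud deploy command with `-r roles_data.yaml`.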