Bug 1297046 - After overcloud deployment with dedicated block-storage-node - new cinder volumes are still being created on a controller.
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 7.0 (Kilo)
Hardware: x86_64
OS: Linux
unspecified
unspecified
Target Milestone: ---
: 12.0 (Pike)
Assignee: Elise Gafford
QA Contact: Tzach Shefi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-01-08 21:02 UTC by Omri Hochman
Modified: 2017-11-22 21:01 UTC (History)

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-11-22 21:01:52 UTC
Target Upstream Version:



Description Omri Hochman 2016-01-08 21:02:11 UTC
After overcloud deployment with dedicated block-storage-node - new cinder volumes are still being created on a controller. 


Environment ( 7.2GA ) :
----------------------
python-rdomanager-oscplugin-0.0.10-22.el7ost.noarch
python-cinderclient-1.2.1-1.el7ost.noarch
python-cinder-2015.1.2-5.el7ost.noarch
openstack-cinder-2015.1.2-5.el7ost.noarch
openstack-tripleo-heat-templates-0.8.6-94.el7ost.noarch
openstack-heat-templates-0-0.8.20150605git.el7ost.noarch


Description:
------------
After overcloud deployment with a dedicated block-storage node, when attempting to create new cinder volumes, some of them are created on a controller instead of on the dedicated block-storage node.


deployment command was :
-------------------------
openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 --ceph-storage-scale 1 --block-storage-scale 1 --swift-storage-scale 1 --ntp-server 10.5.26.10 --timeout 90


[stack@undercloud72 ~]$ nova list
+--------------------------------------+---------------------------+--------+------------+-------------+-----------------------+
| ID                                   | Name                      | Status | Task State | Power State | Networks              |
+--------------------------------------+---------------------------+--------+------------+-------------+-----------------------+
| fe0e66e5-761b-421a-8b7e-fd8402080e19 | overcloud-blockstorage-0  | ACTIVE | -          | Running     | ctlplane=192.168.0.10 |
| 7b2babb7-36ea-49f7-8d8c-64479c2e6dc7 | overcloud-cephstorage-0   | ACTIVE | -          | Running     | ctlplane=192.168.0.9  |
| fe94ea23-cf84-408f-83c9-428435b1f357 | overcloud-compute-0       | ACTIVE | -          | Running     | ctlplane=192.168.0.12 |
| 0f160978-d148-49c1-a619-dfd60f6583c5 | overcloud-controller-0    | ACTIVE | -          | Running     | ctlplane=192.168.0.11 |
| 11af0fe0-3c36-49cb-ba05-94c509d8e433 | overcloud-controller-1    | ACTIVE | -          | Running     | ctlplane=192.168.0.13 |
| a15b2066-7228-471d-b69e-7170d334ecb1 | overcloud-controller-2    | ACTIVE | -          | Running     | ctlplane=192.168.0.14 |
| 2e8bc705-c095-4466-a8b6-691856dfd15c | overcloud-objectstorage-0 | ACTIVE | -          | Running     | ctlplane=192.168.0.7  |
+--------------------------------------+---------------------------+--------+------------+-------------+-----------------------+



on the controller-02: 
[root@overcloud-controller-2 ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/cinder-volumes/volume-a630f7a3-3942-47b5-a325-3f286b886b86
  LV Name                volume-a630f7a3-3942-47b5-a325-3f286b886b86
  VG Name                cinder-volumes
  LV UUID                rkZbkI-aMl2-Z3hb-hOsV-Og2j-Zb1U-DfSKKo
  LV Write Access        read/write
  LV Creation host, time overcloud-controller-2.localdomain, 2016-01-08 13:52:15 -0500
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/cinder-volumes/volume-ae79cc0d-6855-462c-832f-4c33d01e9fc5
  LV Name                volume-ae79cc0d-6855-462c-832f-4c33d01e9fc5
  VG Name                cinder-volumes
  LV UUID                BwGeAc-71aY-0hN4-HeKS-dObn-rBWY-I3ozZi
  LV Write Access        read/write
  LV Creation host, time overcloud-controller-2.localdomain, 2016-01-08 14:54:27 -0500
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1

Comment 2 Alexander Chuzhoy 2016-01-08 21:29:53 UTC
First, cinder volumes are created on the controller running the active openstack-cinder-volume service; after that VG is filled, subsequent volumes are created on the blockstorage node.

Comment 4 Eric Harney 2016-02-03 19:19:48 UTC
(In reply to Alexander Chuzhoy from comment #2)
> First the cinder volumes are created on a controller with active
> openstack-cinder-volume and after the VG is filled, the volumes are created
> on the blockstorage node.

By default, Cinder will just schedule volumes to be created wherever it finds space for them among its volume nodes.

To ensure that volumes are created on the blockstorage node, you will have to create a Cinder volume type with extra specs that are only satisfied by that node, such as storage_protocol=ceph or volume_backend_name=cephstorage0, etc.  Then set default_volume_type in cinder.conf to that type, or pass it in with the volume create request.

I can advise more specifically on how to do this but I probably need more context to understand the desired behavior in the deployment.
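The volume-type approach described above can be sketched as follows. This is a hedged illustration: the type name "lvm-block" and the backend name "lvm-blockstorage" are assumptions for the example; check the volume_backend_name actually configured in cinder.conf on the blockstorage node before using it.

```shell
# Sketch only: "lvm-block" and "lvm-blockstorage" are hypothetical names.
# The blockstorage node's cinder.conf backend section would carry something like:
#   [lvm]
#   volume_backend_name = lvm-blockstorage

# Create a volume type pinned to that backend:
cinder type-create lvm-block
cinder type-key lvm-block set volume_backend_name=lvm-blockstorage

# Then either set it as the default in cinder.conf on the API nodes:
#   [DEFAULT]
#   default_volume_type = lvm-block
# or pass it explicitly on each request:
cinder create --volume-type lvm-block --display-name test-vol 1
```

With the extra spec in place, the scheduler's capabilities filter only matches the backend reporting that volume_backend_name, so volumes of that type land on the blockstorage node.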

Comment 7 James Slagle 2016-02-17 17:35:56 UTC
Essentially, this is expected: cinder-volume runs on the controllers regardless of whether you've also deployed block storage nodes.

Comment 8 Mike Burns 2016-04-07 21:03:37 UTC
This bug did not make the OSP 8.0 release.  It is being deferred to OSP 10.

Comment 10 Elise Gafford 2016-11-01 16:58:48 UTC
No recent progress on this issue. Moving to RHOS 11 for triage.

Comment 13 Tzach Shefi 2017-04-06 08:29:37 UTC
Will give it a try and update. 
It might take a while; we lent out some of our servers and are awaiting their return.

Keeping the needinfo so I don't forget this.

Comment 14 Tzach Shefi 2017-04-24 09:29:16 UTC
Omri, 

I'm confused. You mentioned volumes were also created on your controller rather than on the block storage node, as you checked with lvdisplay. 

From that I gather Cinder's backend was LVM, correct? 

I'm asking because I also noticed a Ceph node in your deployment command; did it not serve Cinder, or did you have a multi-backend configuration on Cinder for Ceph+LVM?

Comment 15 Tzach Shefi 2017-04-24 11:15:17 UTC
Paul, 

On OSP11, we still hit this. 

Installed 3 controllers, 1 compute, and 1 block-storage node. 
Created 3 Cinder volumes; 2 of them were created on the block-storage node as expected, yet 1 was created on a controller node. 

I'll try on Pike and update as soon as Infrared supports Pike code.

Comment 16 Paul Grist 2017-04-30 23:55:35 UTC
Thanks for the testing, Tzach.  As I read through this one, I'm not sure what the priority/severity should be. It seems logical that if you have storage nodes, that's where volumes should go, but any thoughts/comments on how important it is to fix this issue?

Comment 17 Tzach Shefi 2017-05-01 08:03:25 UTC
I'm not sure about priority/severity. 

IMHO, if a customer "went to the trouble" of installing a dedicated storage node, they probably had a good reason to do so (probably an LVM deployment) and expect all volumes to be created on it. In the LVM use case this would suggest medium/high priority, since the storage node would typically include the disks intended to back the LVM volumes. 

On the other hand, Cinder's most commonly used backend is by far Ceph (or NFS/EMC/NetApp, etc.), where the actual volumes aren't created on the storage node anyway but on a remote storage system; in non-LVM deployments I'd guess this would be of lower priority.

Comment 18 Tzach Shefi 2017-05-03 11:05:09 UTC
FYI, I just noticed that openstack-cinder-volume.service 
is also (still) running on all controller nodes as well as on the dedicated Cinder block-storage node. 

That would explain how 1 of the volumes was created on a controller. 

So this bug might be caused by an incorrect deployment of the standalone Cinder node. 

I'll continue to research this and update if I find anything else.
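The observation above can also be confirmed from the API side rather than per-host systemctl checks; a hedged sketch, assuming admin credentials (e.g. overcloudrc) are sourced:

```shell
# List only cinder-volume service instances; one row per host running
# the service. If controllers appear alongside the blockstorage node,
# the scheduler is free to place volumes on any of them.
cinder service-list --binary cinder-volume
```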

Comment 19 Alan Bishop 2017-11-22 21:01:52 UTC
This update is a result of discussions in a storage team bug scrub. Closing this as NOTABUG for a few reasons.

- As noted, deploying a blockstorage node does not automatically stop the cinder-volume service from being deployed on the controllers.
- The composable roles feature in recent OSP releases allows removing the cinder-volume service from the controllers, but this does not extend as far back as OSP-7.
- Even when volume services are running on both the controller and blockstorage nodes, you can control where volumes are created using cinder volume types.
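For reference, the composable-roles approach amounts to removing the service from the Controller role in a custom roles_data.yaml. A rough, illustrative excerpt (not a complete role definition; the service list here is abbreviated):

```yaml
# Illustrative excerpt: a Controller role with
# OS::TripleO::Services::CinderVolume removed, so only the
# BlockStorage role runs the volume service.
- name: Controller
  ServicesDefault:
    - OS::TripleO::Services::CinderApi
    - OS::TripleO::Services::CinderScheduler
    # OS::TripleO::Services::CinderVolume deliberately omitted
    - OS::TripleO::Services::Keystone
    # ... (rest of the default controller services)
```

The customized roles file is then passed at deploy time, e.g. `openstack overcloud deploy --templates -r roles_data.yaml ...`.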

