Bug 1844532 - Fixes in glance_store for Cinder NFS volumes
Summary: Fixes in glance_store for Cinder NFS volumes
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-glance-store
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z13
Target Release: 13.0 (Queens)
Assignee: Rajat Dhasmana
QA Contact: Mike Abrams
URL:
Whiteboard:
Depends On: 1807123
Blocks: 1741730
 
Reported: 2020-06-05 16:05 UTC by Luigi Toscano
Modified: 2023-10-17 09:25 UTC
CC List: 11 users

Fixed In Version: python-glance-store-0.23.1-0.20190916165252.cc7ecc1.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1807123
Environment:
Last Closed: 2021-04-07 14:58:29 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker OSP-1950 0 None None None 2023-10-17 09:25:44 UTC

Description Luigi Toscano 2020-06-05 16:05:49 UTC
+++ This bug was initially created as a clone of Bug #1807123 +++

This bug was initially created as a copy of Bug #1741730


Description of problem:

glance_store needs work to support Cinder NFS volumes when using the Cinder backend for Glance.
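
For reference, a minimal glance-api.conf sketch for a Glance deployment that uses the Cinder backend (section and values are illustrative, not taken from this environment):

[glance_store]
# store image data in Cinder volumes instead of a file or Ceph backend
stores = cinder
default_store = cinder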

--- Additional comment from Cyril Roelandt on 2020-04-01 21:00:37 UTC ---

The first patch was merged.

A few questions:

1) Do we have to backport all three patches? The refactoring patch seems "optional" to me.

2) Can Mike test the fix just by setting the cinder_mount_point_base to a valid value, or are there other steps needed?

Thanks.

--- Additional comment from Rajat Dhasmana on 2020-04-02 15:05:49 UTC ---

Hi Cyril,

There is one more patch I've added that prevents NFS mount races per share. As for your questions:

1) There are three patches (709300, 714391, and 716874) that need to be backported for the Glance cinder store (with a Cinder NFS backend) to work on OSP 13.

2) To check all the changes, we can set the cinder_mount_point_base value and create multiple images concurrently (given that Glance is using the cinder store). The expected outcome is: a) all volumes are created under the '<cinder_mount_point_base>/nfs/<share hex uuid>' directory, and b) all images end up in the active state.
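
For illustration, a rough sketch of that setting and the resulting mount layout (the base path is a placeholder; '<share hex uuid>' stands for the hash of the NFS share):

[glance_store]
# base directory under which the Cinder NFS shares are mounted
cinder_mount_point_base = /var/lib/glance/mnt

Expected directory for each share:
  /var/lib/glance/mnt/nfs/<share hex uuid>/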

Thanks
Rajat Dhasmana

Comment 3 Lon Hohberger 2020-06-25 10:48:21 UTC
According to our records, this should be resolved by python-glance-store-0.23.1-0.20190916165252.cc7ecc1.el7ost.  This build is available now.
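
A quick way to confirm that the build is installed on a node (package name as it appears in the verification comment below):

# rpm -q python2-glance-store
python2-glance-store-0.23.1-0.20190916165252.cc7ecc1.el7ost.noarch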

Comment 6 Tzach Shefi 2020-09-22 12:53:32 UTC
Verification steps/results cloned from Cinder's related bz1741730

Verified on:
openstack-cinder-12.0.10-19.el7ost.noarch
python2-glance-store-0.23.1-0.20190916165252.cc7ecc1.el7ost.noarch

On a system with OSP 13, Glance uses Cinder as its backend.
Cinder uses NetApp NFS as its backend.
image_volume_cache is enabled in cinder.conf.
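
For reference, a sketch of the image-volume cache settings in cinder.conf (the backend section name and cache limits are illustrative, not taken from this environment):

[tripleo_netapp]
# enable the Cinder image-volume cache for this backend
image_volume_cache_enabled = True
# optional caps on the cache; these values are examples only
image_volume_cache_max_size_gb = 10
image_volume_cache_max_count = 50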


First test: simultaneously create 10 volumes from the same image:
#for i in {1..10}; do cinder create 1 --image cirros ; done

The resulting 10 volumes are all in the available state, as expected:
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name                                       | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| 11a62820-6745-4876-ab32-71363e42e353 | available | -                                          | 1    | tripleo     | true     |             |
| 2a46f3c7-fa9f-4b72-9dee-4bd4dea7a631 | available | -                                          | 1    | tripleo     | true     |             |
| 3c5b8cfb-75d5-4c09-98b6-5ecfed4c1972 | available | -                                          | 1    | tripleo     | true     |             |
| 4e68f832-bce2-43e9-a547-8d29242e1512 | available | -                                          | 1    | tripleo     | true     |             |
| 5eff40e0-5eda-407b-b084-63604d6c521d | available | -                                          | 1    | tripleo     | true     |             |
| 6c58d022-0ee6-4f64-9bcd-d744612c6d78 | available | -                                          | 1    | tripleo     | true     |             |
| 7eb389b6-ff05-4c83-a6f3-c3f7cebc5274 | available | -                                          | 1    | tripleo     | true     |             |
| aebf32e4-0855-494e-9ae9-fd9aa054d6ff | available | -                                          | 1    | tripleo     | true     |             |
| c7714e66-5a91-45da-a531-d4097f13081b | available | -                                          | 1    | tripleo     | true     |             |
| ccd6fd5c-4738-4363-9364-a680531abadf | available | -                                          | 1    | tripleo     | true     |             |
| edfb3cdb-c608-4b0f-a6ac-0a964772b16f | available | image-1e30b583-a24d-45ea-a9b8-a6f7699f605a | 1    | tripleo     | false    |             |
+--------------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+

Worked as expected, all volumes are available.
Delete all volumes:
#for i in $(cinder list | grep true | awk '{print $2}'); do  cinder delete $i; done


Now let's simulate the original issue by simultaneously booting up 3 instances.
We boot from image and create a volume, so multiple volumes are created from the same image.
#nova boot --flavor tiny --block-device source=image,id=1e30b583-a24d-45ea-a9b8-a6f7699f605a,dest=volume,size=1,shutdown=remove,bootindex=0 ints1  --nic net-id=af6fedd3-2926-4aaf-bcb2-8b1296bb02db --min-count 3

After a while:
(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+---------+--------+----------------------+-------------+-----------------------+
| ID                                   | Name    | Status | Task State           | Power State | Networks              |
+--------------------------------------+---------+--------+----------------------+-------------+-----------------------+
| 16547b6e-fecd-4e88-b9b3-506b6b79cb13 | ints1-1 | BUILD  | block_device_mapping | NOSTATE     | internal=192.168.0.14 |
| 8641cb2b-0d48-4eec-914c-c0f1a54b4e2f | ints1-2 | BUILD  | block_device_mapping | NOSTATE     | internal=192.168.0.19 |
| e5f0e8a4-3c94-450d-9059-56d493927e62 | ints1-3 | BUILD  | block_device_mapping | NOSTATE     | internal=192.168.0.25 |
+--------------------------------------+---------+--------+----------------------+-------------+-----------------------+

After a few more seconds:
(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+---------+--------+------------+-------------+-----------------------+
| ID                                   | Name    | Status | Task State | Power State | Networks              |
+--------------------------------------+---------+--------+------------+-------------+-----------------------+
| 16547b6e-fecd-4e88-b9b3-506b6b79cb13 | ints1-1 | ACTIVE | -          | Running     | internal=192.168.0.14 |
| 8641cb2b-0d48-4eec-914c-c0f1a54b4e2f | ints1-2 | ACTIVE | -          | Running     | internal=192.168.0.19 |
| e5f0e8a4-3c94-450d-9059-56d493927e62 | ints1-3 | ACTIVE | -          | Running     | internal=192.168.0.25 |
+--------------------------------------+---------+--------+------------+-------------+-----------------------+


(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+--------------------------------------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Name                                       | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+--------------------------------------------+------+-------------+----------+--------------------------------------+
| 7358f758-c242-44ac-8dd8-13a304c2b016 | in-use    |                                            | 1    | tripleo     | true     | 8641cb2b-0d48-4eec-914c-c0f1a54b4e2f |
| dc943647-c9c0-436d-b15a-ddceec0bd9f6 | in-use    |                                            | 1    | tripleo     | true     | 16547b6e-fecd-4e88-b9b3-506b6b79cb13 |
| edfb3cdb-c608-4b0f-a6ac-0a964772b16f | available | image-1e30b583-a24d-45ea-a9b8-a6f7699f605a | 1    | tripleo     | false    |                                      |
| fb96d987-69ae-4031-97de-6827f261c117 | in-use    |                                            | 1    | tripleo     | true     | e5f0e8a4-3c94-450d-9059-56d493927e62 |
+--------------------------------------+-----------+--------------------------------------------+------+-------------+----------+--------------------------------------+

We were able to boot 3 instances from the same image and create 3 boot volumes.
What failed before is now working, albeit at a small scale.

Again, due to limited compute resources, I couldn't increase the instance count considerably.
By reducing the Nova flavor's RAM I was able to boot 6 instances/boot volumes simultaneously:

(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+---------+--------+------------+-------------+-----------------------+
| ID                                   | Name    | Status | Task State | Power State | Networks              |
+--------------------------------------+---------+--------+------------+-------------+-----------------------+
| 25fde923-c9f4-4e23-b96a-99f61d95dae3 | ints1-1 | ACTIVE | -          | Running     | internal=192.168.0.18 |
| 0069eb6d-98b2-4502-a0f5-7cefc9417db5 | ints1-2 | ACTIVE | -          | Running     | internal=192.168.0.21 |
| cfcb602c-70ac-49c6-9fbc-d112c9e3971d | ints1-3 | ACTIVE | -          | Running     | internal=192.168.0.26 |
| 9d199d28-bdf6-4b83-8be3-2cfaaacd722d | ints1-4 | ACTIVE | -          | Running     | internal=192.168.0.16 |
| adab78d8-58bd-4405-ae7c-a8ab2521b96f | ints1-5 | ACTIVE | -          | Running     | internal=192.168.0.14 |
| 9990979c-0fba-4297-aa44-98f138fdc682 | ints1-6 | ACTIVE | -          | Running     | internal=192.168.0.27 |
+--------------------------------------+---------+--------+------------+-------------+-----------------------+
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+--------------------------------------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Name                                       | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+--------------------------------------------+------+-------------+----------+--------------------------------------+
| 24c4ba7f-0511-4c2b-b614-ee4a2f867abc | in-use    |                                            | 1    | tripleo     | true     | 25fde923-c9f4-4e23-b96a-99f61d95dae3 |
| 3eb59a2e-cb81-4f11-bdcb-c8a4ae994689 | in-use    |                                            | 1    | tripleo     | true     | 0069eb6d-98b2-4502-a0f5-7cefc9417db5 |
| 40ca43d9-ee31-4f98-b718-3f8dadec06e0 | in-use    |                                            | 1    | tripleo     | true     | cfcb602c-70ac-49c6-9fbc-d112c9e3971d |
| 8ac016bb-68d6-4812-b172-6e53fe457923 | in-use    |                                            | 1    | tripleo     | true     | adab78d8-58bd-4405-ae7c-a8ab2521b96f |
| c14603be-9985-42f5-b901-2e8074679c7d | in-use    |                                            | 1    | tripleo     | true     | 9d199d28-bdf6-4b83-8be3-2cfaaacd722d |
| edfb3cdb-c608-4b0f-a6ac-0a964772b16f | available | image-1e30b583-a24d-45ea-a9b8-a6f7699f605a | 1    | tripleo     | false    |                                      |
| f1c29bf5-30d1-42e6-808a-306fc5b168a0 | in-use    |                                            | 1    | tripleo     | true     | 9990979c-0fba-4297-aa44-98f138fdc682 |
+--------------------------------------+-----------+--------------------------------------------+------+-------------+----------+--------------------------------------+


I also tested another scenario: simultaneously creating a Nova snapshot of all the instances.

$ for i in $(nova list | grep Running  | awk '{print $2}'); do nova image-create  $i snap-$i;done
$ glance image-list
+--------------------------------------+-------------------------------------------+
| ID                                   | Name                                      |
+--------------------------------------+-------------------------------------------+
| 1e30b583-a24d-45ea-a9b8-a6f7699f605a | cirros                                    |
| 8810456d-4876-4bd2-aba7-0dd5eaf7e98e | snap-0069eb6d-98b2-4502-a0f5-7cefc9417db5 |
| b141d4d8-190b-4dac-be66-94447921aaf5 | snap-25fde923-c9f4-4e23-b96a-99f61d95dae3 |
| f08c0411-90fd-4274-9b69-861cb32644b0 | snap-9990979c-0fba-4297-aa44-98f138fdc682 |
| 0e0b6369-8c25-4fdd-83c4-b72df9d756ce | snap-9d199d28-bdf6-4b83-8be3-2cfaaacd722d |
| 5e3db635-5724-47f1-912c-ba5e394d73dc | snap-adab78d8-58bd-4405-ae7c-a8ab2521b96f |
| 898160bc-6bf8-49c6-b83d-0c2f6426a2e9 | snap-cfcb602c-70ac-49c6-9fbc-d112c9e3971d |
+--------------------------------------+-------------------------------------------+

$ cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+--------------------------------------------------------+------+
| ID                                   | Volume ID                            | Status    | Name                                                   | Size |
+--------------------------------------+--------------------------------------+-----------+--------------------------------------------------------+------+
| 0ba24d77-56d4-44ad-aba0-af2cc7525636 | 40ca43d9-ee31-4f98-b718-3f8dadec06e0 | available | snapshot for snap-cfcb602c-70ac-49c6-9fbc-d112c9e3971d | 1    |
| 37afbe74-188b-4dcd-8843-13b5b9179627 | 8ac016bb-68d6-4812-b172-6e53fe457923 | available | snapshot for snap-adab78d8-58bd-4405-ae7c-a8ab2521b96f | 1    |
| 8ba35672-fb7a-4dce-a1bb-e30c2b5d886d | c14603be-9985-42f5-b901-2e8074679c7d | available | snapshot for snap-9d199d28-bdf6-4b83-8be3-2cfaaacd722d | 1    |
| 8dacc788-ee48-4f98-9e6d-e147d65b03df | f1c29bf5-30d1-42e6-808a-306fc5b168a0 | available | snapshot for snap-9990979c-0fba-4297-aa44-98f138fdc682 | 1    |
| 939ba84a-5e7a-4687-a44a-e7b795fc563f | 24c4ba7f-0511-4c2b-b614-ee4a2f867abc | available | snapshot for snap-25fde923-c9f4-4e23-b96a-99f61d95dae3 | 1    |
| e48b6edb-31e0-4ef8-9746-b6a881c33747 | 3eb59a2e-cb81-4f11-bdcb-c8a4ae994689 | available | snapshot for snap-0069eb6d-98b2-4502-a0f5-7cefc9417db5 | 1    |
+--------------------------------------+--------------------------------------+-----------+--------------------------------------------------------+------+
This too works as expected. 

This confirms that simultaneously booting multiple instances from an image, creating persistent boot volumes, works as expected when Glance is backed by Cinder
and Cinder is backed by a NetApp NFS backend.

