Bug 1744099 - Multi-attach readonly volume is readonly on first instance but R/W on second instance
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Jon Bernard
QA Contact: Evelina Shames
Docs Contact: Andy Stillman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-08-21 10:52 UTC by Tzach Shefi
Modified: 2023-07-11 20:49 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-07-11 20:49:50 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
Logs (707.61 KB, application/gzip)
2019-08-21 10:52 UTC, Tzach Shefi


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker OSP-2185 0 None None None 2022-11-24 08:46:45 UTC

Description Tzach Shefi 2019-08-21 10:52:01 UTC
Created attachment 1606457 [details]
Logs

Description of problem:
While testing the multi-attach (MA) RBD backport to OSP13 (bz1700882), I hit a new issue.
Unsure whether it's related to the MA backport or to RBD, or whether it's a Nova bug or a Cinder bug.
A read-only (RO) multi-attach volume is attached to two instances;
on the first instance the volume is indeed RO, however on the second instance the same volume is RW.

Version-Release number of selected component (if applicable):
[root@controller-0 ~]# rpm -qa | grep -e cinder -e nova 
python2-novaclient-10.1.0-1.el7ost.noarch
openstack-nova-common-17.0.10-6.el7ost.noarch
openstack-nova-compute-17.0.10-6.el7ost.noarch
openstack-nova-migration-17.0.10-6.el7ost.noarch
openstack-nova-scheduler-17.0.10-6.el7ost.noarch
openstack-cinder-12.0.7-5.el7ost.noarch
openstack-nova-novncproxy-17.0.10-6.el7ost.noarch
openstack-nova-conductor-17.0.10-6.el7ost.noarch
python2-cinderclient-3.5.0-1.el7ost.noarch
puppet-nova-12.4.0-23.el7ost.noarch
openstack-nova-placement-api-17.0.10-6.el7ost.noarch
openstack-nova-api-17.0.10-6.el7ost.noarch
puppet-cinder-12.4.1-5.el7ost.noarch
python-nova-17.0.10-6.el7ost.noarch
python-cinder-12.0.7-5.el7ost.noarch
openstack-nova-console-17.0.10-6.el7ost.noarch



How reproducible:
Every time

Steps to Reproduce:
1. Create a multi-attach volume, attach it to inst1, create an ext4 filesystem, and create a text file on the disk. Detach the volume from inst1.

2. Set RO flag on volume:
(overcloud) [stack@undercloud-0 ~]$ openstack volume set MA_vol1 --read-only
(overcloud) [stack@undercloud-0 ~]$ cinder show MA_vol1
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attached_servers               | []                                   |
| attachment_ids                 | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2019-08-21T01:04:16.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 14d0c9e2-7dbf-4459-881e-c332daecac98 |
| metadata                       | readonly : True                      |
| migration_status               | None                                 |
| multiattach                    | True                                 |
| name                           | MA_vol1                              |
| os-vol-host-attr:host          | hostgroup@tripleo_ceph#tripleo_ceph  |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 151d730a60664f039ec34dbb1728df72     |
| readonly                       | True                                 |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| updated_at                     | 2019-08-21T08:35:49.000000           |
| user_id                        | eaade82e8aef438aa796530ed7e694aa     |
| volume_type                    | multiattach                          |
+--------------------------------+--------------------------------------+
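The `readonly` flag shown above is stored as string metadata on the volume (`readonly : True`), and an attachment made against such a volume is expected to use read-only mode. A minimal sketch of that derivation, assuming a plain metadata dict as input (`expected_attach_mode` is a hypothetical helper for illustration, not OpenStack code):

```python
# Hedged sketch: derive the expected attach mode from the volume
# metadata shown in `cinder show` above. Note the metadata value is
# the string 'True', not a boolean.

def expected_attach_mode(volume_metadata: dict) -> str:
    """Return 'ro' if the volume metadata marks it read-only, else 'rw'."""
    return "ro" if volume_metadata.get("readonly") == "True" else "rw"

# For the MA_vol1 volume above:
print(expected_attach_mode({"readonly": "True"}))  # ro
print(expected_attach_mode({}))                    # rw
```

Per this report, the bug is that only the first attachment honours this mode; the second attachment behaves as 'rw'.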

3. Reattach the volume, this time to both inst1 and inst3:

(overcloud) [stack@undercloud-0 ~]$ nova volume-attach inst1 14d0c9e2-7dbf-4459-881e-c332daecac98 auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 14d0c9e2-7dbf-4459-881e-c332daecac98 |
| serverId | de218391-f19d-4fb5-af8e-d234c1c0eb2e |
| volumeId | 14d0c9e2-7dbf-4459-881e-c332daecac98 |
+----------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ nova volume-attach inst3 14d0c9e2-7dbf-4459-881e-c332daecac98 auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 14d0c9e2-7dbf-4459-881e-c332daecac98 |
| serverId | 7338f7d4-588a-4339-b658-7838a08d364b |
| volumeId | 14d0c9e2-7dbf-4459-881e-c332daecac98 |
+----------+--------------------------------------+

4. Login to inst1

Inside inst1 we get the correct RO status:
# cat /sys/block/vdb/ro
1  -> meaning the volume is RO
I can't edit the text file; vim reports it as read-only.
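The in-guest check above relies on the kernel's per-device read-only flag: /sys/block/&lt;dev&gt;/ro contains "1" for read-only and "0" for read-write. A small sketch of that interpretation (`parse_ro_flag` and `device_is_readonly` are hypothetical helpers, not part of any OpenStack tooling):

```python
# Hedged sketch: interpret /sys/block/<dev>/ro the way the report does.

def parse_ro_flag(raw: str) -> bool:
    """Return True if the kernel reports the block device as read-only.

    /sys/block/<dev>/ro contains "1" for read-only, "0" for read-write.
    """
    return raw.strip() == "1"

def device_is_readonly(dev: str) -> bool:
    # On a real guest this reads e.g. /sys/block/vdb/ro.
    with open(f"/sys/block/{dev}/ro") as f:
        return parse_ro_flag(f.read())
```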

Also, on the compute node we see:
[root@compute-0 ~]# virsh dumpxml instance-00000083 | grep vdb -C 3
        <host name='172.17.3.26' port='6789'/>
        <host name='172.17.3.33' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
      <readonly/>


5. Login to inst3

On inst3 we get the wrong status:
# cat /sys/block/vdb/ro
0  -> meaning the volume is RW

When I open the text file, I can edit it.

[root@compute-0 ~]# virsh dumpxml instance-00000089 | grep vdb -C 3
        <host name='172.17.3.26' port='6789'/>
        <host name='172.17.3.33' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
      <shareable/>                  <-- no <readonly/> element!
      <serial>14d0c9e2-7dbf-4459-881e-c332daecac98</serial>
      <alias name='virtio-disk1'/>



Actual results:
The RO multi-attach volume is RW on the second instance.

Expected results:
A RO multi-attach volume should be RO on all attached instances.

Additional info:

We tested multi-attach on other backends on OSP15 and OSP14; I don't recall hitting this issue there, and if I remember correctly RO volumes worked fine.

Comment 4 Tzach Shefi 2019-10-31 08:09:30 UTC
Reconfirming this only hits OSP13.

I'd just tested another bz for the OSP14 multi-attach RBD backport.
A RO multi-attach volume attached to two instances is, as expected, indeed RO on both instances.

Comment 5 Luigi Toscano 2020-02-03 14:38:16 UTC
(In reply to Tzach Shefi from comment #4)
> Reconfirming this only hits OSP13. 
> 
> I'd just tested another bz for OSP14 multi-attach RBD backport.
> A RO MA volume attached to two instance, as expected volume is indeed RO on
> both instances.

When you have a 16 environment around, can you please check this there as well just to be sure? So we can retarget this accordingly.

Comment 6 Tzach Shefi 2020-02-09 10:51:00 UTC
Luigi, 
Reconfirmed: tested just now on OSP16 with an LVM multi-attach RO volume attached to two instances.
Both of them, one per compute, show as read-only as expected.


[root@compute-0 ~]#  virsh dumpxml instance-0000000b | grep vdb -C 3 
setlocale: No such file or directory
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/sda'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <readonly/>
      <shareable/>
      <serial>5180721d-01cd-456d-abe6-8869cf8cc411</serial>


[root@compute-1 ~]#  virsh dumpxml instance-00000008 | grep vdb -C 3
setlocale: No such file or directory
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/sda'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <readonly/>
      <shareable/>
      <serial>5180721d-01cd-456d-abe6-8869cf8cc411</serial>

Comment 7 Tzach Shefi 2020-04-08 22:02:16 UTC
Luigi, 
Retested on OSP16 RBD, works fine both instances report RO volume.

Comment 10 Lon Hohberger 2023-07-11 20:49:50 UTC
This issue was targeted to OSP13, which was retired on June 27, 2023. While no fix was made available for that release, it has been fixed in current versions of Red Hat OpenStack Platform.

