Summary:
- No need to backport Launchpad 532702 and 1790840, as they're already in OSP14.
- "Swap volume of multi-attached volume will corrupt data" (Launchpad 1775418) is still in development upstream, so it can't be backported yet.
- We aren't supporting SolidFire as part of this backport, so I didn't backport https://review.openstack.org/#/c/641125/ (it will eventually flow into OSP14 as part of a rebase to 13.0.4).
Verified on: openstack-cinder-13.0.3-2.el7ost.noarch

Create a multiattach volume, then unmanage and re-manage it to check the multiattach flag.

(overcloud) [stack@undercloud-0 ~]$ cinder show e9145dc4-ac90-433a-a3e4-0339b79ec9c8
+--------------------------------+---------------------------------------+
| Property                       | Value                                 |
+--------------------------------+---------------------------------------+
| attached_servers               | []                                    |
| attachment_ids                 | []                                    |
| availability_zone              | nova                                  |
| bootable                       | false                                 |
| consistencygroup_id            | None                                  |
| created_at                     | 2019-04-21T11:34:12.000000            |
| description                    | None                                  |
| encrypted                      | False                                 |
| id                             | e9145dc4-ac90-433a-a3e4-0339b79ec9c8  |
| metadata                       |                                       |
| migration_status               | None                                  |
| multiattach                    | True                                  |
| name                           | None                                  |
| os-vol-host-attr:host          | hostgroup@tripleo_iscsi#tripleo_iscsi |
| os-vol-mig-status-attr:migstat | None                                  |
| os-vol-mig-status-attr:name_id | None                                  |
| os-vol-tenant-attr:tenant_id   | ad8fccbb3def40f694ec9084152bdec5      |
| replication_status             | None                                  |
| size                           | 1                                     |
| snapshot_id                    | None                                  |
| source_volid                   | None                                  |
| status                         | available                             |
| updated_at                     | 2019-04-21T11:34:12.000000            |
| user_id                        | 3d99dcddbc564f00a9df041b0269970f      |
| volume_type                    | lvmulti                               |
+--------------------------------+---------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder unmanage e9145dc4-ac90-433a-a3e4-0339b79ec9c8

(overcloud) [stack@undercloud-0 ~]$ cinder list
+----+--------+------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+----+--------+------+------+-------------+----------+-------------+
+----+--------+------+------+-------------+----------+-------------+

(overcloud) [stack@undercloud-0 ~]$ cinder manage hostgroup@tripleo_iscsi#tripleo_iscsi volume-e9145dc4-ac90-433a-a3e4-0339b79ec9c8 --volume-type lvmulti
+--------------------------------+---------------------------------------+
| Property                       | Value                                 |
+--------------------------------+---------------------------------------+
| attachments                    | []                                    |
| availability_zone              | nova                                  |
| bootable                       | false                                 |
| consistencygroup_id            | None                                  |
| created_at                     | 2019-04-21T11:37:25.000000            |
| description                    | None                                  |
| encrypted                      | False                                 |
| id                             | 059b75b6-9b48-4069-8e9a-a987e35f60eb  |
| metadata                       | {}                                    |
| migration_status               | None                                  |
| multiattach                    | True                                  |
| name                           | None                                  |
| os-vol-host-attr:host          | hostgroup@tripleo_iscsi#tripleo_iscsi |
| os-vol-mig-status-attr:migstat | None                                  |
| os-vol-mig-status-attr:name_id | None                                  |
| os-vol-tenant-attr:tenant_id   | ad8fccbb3def40f694ec9084152bdec5      |
| replication_status             | None                                  |
| size                           | 0                                     |
| snapshot_id                    | None                                  |
| source_volid                   | None                                  |
| status                         | creating                              |
| updated_at                     | 2019-04-21T11:37:25.000000            |
| user_id                        | 3d99dcddbc564f00a9df041b0269970f      |
| volume_type                    | lvmulti                               |
+--------------------------------+---------------------------------------+

We see that after importing (manage), multiattach is True. As a side note, I had a pre-patched system on hand and its import did not report multiattach as True.

https://bugs.launchpad.net/cinder/+bug/1783790 -> covered.
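For reference, the lvmulti multiattach type used throughout this verification was defined before this run; the type-creation commands weren't captured above, so the following is a sketch of the usual way such a type is set up (the type name lvmulti is taken from the output above):

(overcloud) [stack@undercloud-0 ~]$ cinder type-create lvmulti
(overcloud) [stack@undercloud-0 ~]$ cinder type-key lvmulti set multiattach="<is> True"

Any volume created with (or retyped to) this type then reports multiattach | True, which is the flag being checked across the unmanage/manage cycle above.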
Now I've created a non-multiattach volume:

(overcloud) [stack@undercloud-0 ~]$ cinder create 1
+--------------------------------+---------------------------------------+
| Property                       | Value                                 |
+--------------------------------+---------------------------------------+
| attachments                    | []                                    |
| availability_zone              | nova                                  |
| bootable                       | false                                 |
| consistencygroup_id            | None                                  |
| created_at                     | 2019-04-21T11:43:46.000000            |
| description                    | None                                  |
| encrypted                      | False                                 |
| id                             | b1d2b09b-e94c-4efd-bd26-6cb513d950fb  |
| metadata                       | {}                                    |
| migration_status               | None                                  |
| multiattach                    | False                                 |
| name                           | None                                  |
| os-vol-host-attr:host          | hostgroup@tripleo_iscsi#tripleo_iscsi |
| os-vol-mig-status-attr:migstat | None                                  |
| os-vol-mig-status-attr:name_id | None                                  |
| os-vol-tenant-attr:tenant_id   | ad8fccbb3def40f694ec9084152bdec5      |
| replication_status             | None                                  |
| size                           | 1                                     |
| snapshot_id                    | None                                  |
| source_volid                   | None                                  |
| status                         | creating                              |
| updated_at                     | 2019-04-21T11:43:47.000000            |
| user_id                        | 3d99dcddbc564f00a9df041b0269970f      |
| volume_type                    | tripleo                               |
+--------------------------------+---------------------------------------+

I'll retype it to the multiattach type:

# cinder retype b1d2b09b-e94c-4efd-bd26-6cb513d950fb lvmulti --migration-policy on-demand

And as can be seen, it moved to the multiattach type, with the flag set to True:

(overcloud) [stack@undercloud-0 ~]$ cinder show b1d2b09b-e94c-4efd-bd26-6cb513d950fb
+--------------------------------+---------------------------------------+
| Property                       | Value                                 |
+--------------------------------+---------------------------------------+
| attached_servers               | []                                    |
| attachment_ids                 | []                                    |
| availability_zone              | nova                                  |
| bootable                       | false                                 |
| consistencygroup_id            | None                                  |
| created_at                     | 2019-04-21T11:43:46.000000            |
| description                    | None                                  |
| encrypted                      | False                                 |
| id                             | b1d2b09b-e94c-4efd-bd26-6cb513d950fb  |
| metadata                       |                                       |
| migration_status               | None                                  |
| multiattach                    | True                                  |
| name                           | None                                  |
| os-vol-host-attr:host          | hostgroup@tripleo_iscsi#tripleo_iscsi |
| os-vol-mig-status-attr:migstat | None                                  |
| os-vol-mig-status-attr:name_id | None                                  |
| os-vol-tenant-attr:tenant_id   | ad8fccbb3def40f694ec9084152bdec5      |
| replication_status             | None                                  |
| size                           | 1                                     |
| snapshot_id                    | None                                  |
| source_volid                   | None                                  |
| status                         | available                             |
| updated_at                     | 2019-04-21T11:45:07.000000            |
| user_id                        | 3d99dcddbc564f00a9df041b0269970f      |
| volume_type                    | lvmulti                               |
+--------------------------------+---------------------------------------+

https://bugs.launchpad.net/cinder/+bug/1790840 -> also covered.

Retyping back to tripleo (non-multiattach) set the multiattach flag back to False, as expected (see the sketch below). Looking good to verify.
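For completeness, the retype back mentioned above wasn't pasted into the log; presumably it mirrored the earlier retype command, something like:

# cinder retype b1d2b09b-e94c-4efd-bd26-6cb513d950fb tripleo --migration-policy on-demand

after which cinder show on the volume reports multiattach | False again.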
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0946