Bug 1461187 - primary image is allowed to be removed during in-progress up-conversion
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.4
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assigned To: Jonathan Earl Brassow
QA Contact: cluster-qe@redhat.com
Keywords: Regression
Depends On:
Blocks:
 
Reported: 2017-06-13 14:56 EDT by Corey Marthaler
Modified: 2017-08-01 17:54 EDT (History)
8 users

See Also:
Fixed In Version: lvm2-2.02.171-5.el7
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-08-01 17:54:18 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Corey Marthaler 2017-06-13 14:56:33 EDT
Description of problem:
SCENARIO (raid1) - [remove_primary_raid_leg_during_inprogress_upconvert]
Create a linear, attempt to upconvert it to a raid, then attempt to down convert it by removing the primary device (which would corrupt the LV)

host-086: lvcreate  -n remove_outta_sync_primary -L 4G raid_sanity

lvconvert --yes --type raid1 -m 1 raid_sanity/remove_outta_sync_primary &

sleeping a bit...

  Logical volume raid_sanity/remove_outta_sync_primary successfully converted.
current copy percent: 2.06

attempting down convert to remove primary with up convert in progress
lvconvert --yes -m 0 raid_sanity/remove_outta_sync_primary /dev/sda1
should not have been able to downconvert and remove the primary device


[root@host-086 ~]# lvs -a -o +devices
  LV                        VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices       
  remove_outta_sync_primary raid_sanity   -wi-a-----   4.00g                                                     /dev/sda2(1)  
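
For reference, the scenario above can be collected into a small dry-run reproducer. This is a sketch: the VG name raid_sanity and the device /dev/sda1 are taken from this report, and the run wrapper echoes each LVM command instead of executing it, so the sequence can be reviewed without a test machine.

```shell
#!/bin/sh
# Dry-run reproducer sketch for bug 1461187.
# Assumptions: a VG "raid_sanity" with free space; /dev/sda1 holds the
# primary leg. Commands are echoed, not executed; replace the body of
# run() with "$@" to execute them on a real system.
run() { echo "+ $*"; }

VG=raid_sanity
LV=remove_outta_sync_primary

run lvcreate -n "$LV" -L 4G "$VG"                 # linear LV
run lvconvert --yes --type raid1 -m 1 "$VG/$LV"   # start up-convert (recover)
# While Cpy%Sync < 100%, removing the primary leg must be refused:
run lvconvert --yes -m 0 "$VG/$LV" /dev/sda1
```

On the broken build the final command succeeds and leaves a linear LV on the secondary device (see the lvs output above), discarding the only complete copy of the data; on a fixed build it is rejected.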


Version-Release number of selected component (if applicable):
3.10.0-679.el7.bz1443999.x86_64

lvm2-2.02.171-4.el7    BUILT: Wed Jun  7 09:16:17 CDT 2017
lvm2-libs-2.02.171-4.el7    BUILT: Wed Jun  7 09:16:17 CDT 2017
lvm2-cluster-2.02.171-4.el7    BUILT: Wed Jun  7 09:16:17 CDT 2017
device-mapper-1.02.140-4.el7    BUILT: Wed Jun  7 09:16:17 CDT 2017
device-mapper-libs-1.02.140-4.el7    BUILT: Wed Jun  7 09:16:17 CDT 2017
device-mapper-event-1.02.140-4.el7    BUILT: Wed Jun  7 09:16:17 CDT 2017
device-mapper-event-libs-1.02.140-4.el7    BUILT: Wed Jun  7 09:16:17 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 10:15:46 CDT 2017
Comment 2 Corey Marthaler 2017-06-13 15:05:02 EDT
Proper behavior:

3.10.0-679.el7.bz1443999.x86_64
lvm2-2.02.171-2.el7    BUILT: Wed May 24 09:02:34 CDT 2017
lvm2-libs-2.02.171-2.el7    BUILT: Wed May 24 09:02:34 CDT 2017
lvm2-cluster-2.02.171-2.el7    BUILT: Wed May 24 09:02:34 CDT 2017
device-mapper-1.02.140-2.el7    BUILT: Wed May 24 09:02:34 CDT 2017
device-mapper-libs-1.02.140-2.el7    BUILT: Wed May 24 09:02:34 CDT 2017
device-mapper-event-1.02.140-2.el7    BUILT: Wed May 24 09:02:34 CDT 2017
device-mapper-event-libs-1.02.140-2.el7    BUILT: Wed May 24 09:02:34 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 10:15:46 CDT 2017


[root@host-085 ~]#  lvcreate  -n remove_outta_sync_primary -L 4G raid_sanity
  Logical volume "remove_outta_sync_primary" created.
[root@host-085 ~]# lvs -a -o +devices
  LV                        VG          Attr       LSize Cpy%Sync Devices
  remove_outta_sync_primary raid_sanity -wi-a----- 4.00g          /dev/sda1(0)

[root@host-085 ~]# lvconvert --yes --type raid1 -m 1 raid_sanity/remove_outta_sync_primary &
[1] 16561
Logical volume raid_sanity/remove_outta_sync_primary successfully converted.

[root@host-085 ~]# lvs -a -o +devices
  LV                                   VG          Attr       LSize Cpy%Sync Devices
  remove_outta_sync_primary            raid_sanity rwi-a-r--- 4.00g 36.72    remove_outta_sync_primary_rimage_0(0),remove_outta_sync_primary_rimage_1(0)
  [remove_outta_sync_primary_rimage_0] raid_sanity Iwi-aor--- 4.00g          /dev/sda1(0)
  [remove_outta_sync_primary_rimage_1] raid_sanity Iwi-aor--- 4.00g          /dev/sdb1(1)
  [remove_outta_sync_primary_rmeta_0]  raid_sanity ewi-aor--- 4.00m          /dev/sda1(1024)
  [remove_outta_sync_primary_rmeta_1]  raid_sanity ewi-aor--- 4.00m          /dev/sdb1(0)

[root@host-085 ~]#  lvconvert --yes -m 0 raid_sanity/remove_outta_sync_primary /dev/sda1
  Unable to extract primary RAID image while RAID array is not in-sync (use --force option to replace).
  Failed to extract images from raid_sanity/remove_outta_sync_primary.

[root@host-085 ~]# lvs -a -o +devices
  LV                                   VG          Attr       LSize Cpy%Sync Devices
  remove_outta_sync_primary            raid_sanity rwi-a-r--- 4.00g 51.17    remove_outta_sync_primary_rimage_0(0),remove_outta_sync_primary_rimage_1(0)
  [remove_outta_sync_primary_rimage_0] raid_sanity Iwi-aor--- 4.00g          /dev/sda1(0)
  [remove_outta_sync_primary_rimage_1] raid_sanity Iwi-aor--- 4.00g          /dev/sdb1(1)
  [remove_outta_sync_primary_rmeta_0]  raid_sanity ewi-aor--- 4.00m          /dev/sda1(1024)
  [remove_outta_sync_primary_rmeta_1]  raid_sanity ewi-aor--- 4.00m          /dev/sdb1(0)

[root@host-085 ~]#  lvconvert --yes -m 0 --force raid_sanity/remove_outta_sync_primary /dev/sda1
  Unable to extract primary RAID image while RAID array is not in-sync (use --force option to replace).
  Failed to extract images from raid_sanity/remove_outta_sync_primary.
Comment 4 Jonathan Earl Brassow 2017-06-14 09:52:58 EDT
Fix committed upstream:
commit ddb14b6b05e0f75a97ab8ab1ed99091268c239ba
Author: Jonathan Brassow <jbrassow@redhat.com>
Date:   Wed Jun 14 08:41:05 2017 -0500

    lvconvert: Disallow removal of primary when up-converting (recovering)

    This patch ensures that under normal conditions (i.e. not during repair
    operations) that users are prevented from removing devices that would
    cause data loss.

    When a RAID1 is undergoing its initial sync, it is ok to remove all but
    one of the images because they have all existed since creation and
    contain all the data written since the array was created.  OTOH, if the
    RAID1 was created as a result of an up-convert from linear, it is very
    important not to let the user remove the primary image (the source of
    all the data).  They should be allowed to remove any devices they want
    and as many as they want as long as one original (primary) device is left
    during a "recover" (aka up-convert).

    This fixes bug 1461187 and includes the necessary regression tests.


[
PACKAGER BEWARE:
The above commit is preceded by and requires:
4c0e908 RAID (lvconvert/dmeventd):  Cleanly handle primary failure during 'recover' op
d34d206 lvconvert:  Don't require a 'force' option during RAID repair.
c87907d lvconvert:  linear -> raid1 upconvert should cause "recover" not "resync"
]
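
The rule the commit message describes reduces to a small decision table. The sketch below is only an illustration of that policy, not actual lvm2 source: a leg may be removed unless it is the primary (original) image and a linear -> raid1 "recover" up-convert is still in progress.

```shell
# Simplified model of the check the fix adds (illustration only).
# usage: allow_removal <is_primary yes|no> <recover_in_progress yes|no>
# prints "deny" or "allow"
allow_removal() {
  is_primary="$1"
  recovering="$2"
  if [ "$is_primary" = yes ] && [ "$recovering" = yes ]; then
    echo deny    # would discard the only complete copy of the data
  else
    echo allow
  fi
}
```

During a plain resync of a freshly created raid1 every leg already holds the full data, so the primary-image check only applies to the recover (up-convert) case.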
Comment 6 Corey Marthaler 2017-06-19 17:47:57 EDT
Fix verified in the latest rpms. Proper behavior restored.

lvm2-2.02.171-5.el7    BUILT: Wed Jun 14 10:33:32 CDT 2017
lvm2-libs-2.02.171-5.el7    BUILT: Wed Jun 14 10:33:32 CDT 2017
lvm2-cluster-2.02.171-5.el7    BUILT: Wed Jun 14 10:33:32 CDT 2017
device-mapper-1.02.140-5.el7    BUILT: Wed Jun 14 10:33:32 CDT 2017
device-mapper-libs-1.02.140-5.el7    BUILT: Wed Jun 14 10:33:32 CDT 2017
device-mapper-event-1.02.140-5.el7    BUILT: Wed Jun 14 10:33:32 CDT 2017
device-mapper-event-libs-1.02.140-5.el7    BUILT: Wed Jun 14 10:33:32 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 10:15:46 CDT 2017



SCENARIO (raid1) - [remove_primary_raid_leg_during_inprogress_upconvert]                                                                                                               
Create a linear, attempt to upconvert it to a raid, then attempt to down convert it by removing the primary device (which would corrupt the LV)                                        
host-083: lvcreate  -n remove_outta_sync_primary -L 4G raid_sanity                                                                                                                     
lvconvert --yes --type raid1 -m 1 raid_sanity/remove_outta_sync_primary &                                                                                                              
                                                                                                                                                                                       
sleeping a bit...                                                                                                                                                                      
                                                                                                                                                                                       
  Logical volume raid_sanity/remove_outta_sync_primary successfully converted.                                                                                                         
current copy percent: 27.69                                                                                                                                                            
                                                                                                                                                                                       
[root@host-083 ~]# lvs -a -o +devices                                                                                                                                                  
  LV                                   VG          Attr       LSize   Cpy%Sync Devices
  remove_outta_sync_primary            raid_sanity rwi-a-r---   4.00g 27.64    remove_outta_sync_primary_rimage_0(0),remove_outta_sync_primary_rimage_1(0)
  [remove_outta_sync_primary_rimage_0] raid_sanity iwi-aor---   4.00g          /dev/sdf2(0)
  [remove_outta_sync_primary_rimage_1] raid_sanity Iwi-aor---   4.00g          /dev/sdf1(1)
  [remove_outta_sync_primary_rmeta_0]  raid_sanity ewi-aor---   4.00m          /dev/sdf2(1024)
  [remove_outta_sync_primary_rmeta_1]  raid_sanity ewi-aor---   4.00m          /dev/sdf1(0)

attempting down convert to remove primary with up convert in progress
lvconvert --yes -m 0 raid_sanity/remove_outta_sync_primary /dev/sdf2

[root@host-083 ~]# lvconvert --yes -m 0 raid_sanity/remove_outta_sync_primary /dev/sdf2
  Unable to remove all primary source devices
  Failed to extract images from raid_sanity/remove_outta_sync_primary.
Comment 7 errata-xmlrpc 2017-08-01 17:54:18 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2222
