Bug 1514146

Summary: RAID: need better error when conversion is unavailable due to snapshot
Product: Red Hat Enterprise Linux 7
Reporter: Corey Marthaler <cmarthal>
Component: lvm2
Assignee: Heinz Mauelshagen <heinzm>
lvm2 sub component: Mirroring and RAID
QA Contact: cluster-qe <cluster-qe>
Status: CLOSED ERRATA
Severity: medium
Priority: unspecified
CC: agk, heinzm, jbrassow, msnitzer, prajnoha, rbednar, zkabelac
Version: 7.5
Target Milestone: rc
Hardware: x86_64
OS: Linux
Fixed In Version: lvm2-2.02.180-2.el7
Last Closed: 2018-10-30 11:02:26 UTC
Type: Bug

Description Corey Marthaler 2017-11-16 17:51:30 UTC
Description of problem:
This is similar to bug 1439399. There are likely other convert operations that should also be properly disallowed while the RAID volume is under snapshot.


[root@host-117 ~]# lvcreate --type raid1 -m 1 -n raid -L 100M VG
  Logical volume "raid" created.
[root@host-117 ~]# lvcreate -s VG/raid -n snap -L 10M
  Using default stripesize 64.00 KiB.
  Rounding up size to full physical extent 12.00 MiB
  Logical volume "snap" created.

[root@host-117 ~]# lvs -a -o +devices
  LV              VG  Attr       LSize   Pool Origin Data% Cpy%Sync Devices
  raid            VG  owi-a-r--- 100.00m                   100.00   raid_rimage_0(0),raid_rimage_1(0)
  [raid_rimage_0] VG  iwi-aor--- 100.00m                            /dev/sda1(1)
  [raid_rimage_1] VG  iwi-aor--- 100.00m                            /dev/sdb1(1)
  [raid_rmeta_0]  VG  ewi-aor---   4.00m                            /dev/sda1(0)
  [raid_rmeta_1]  VG  ewi-aor---   4.00m                            /dev/sdb1(0)
  snap            VG  swi-a-s---  12.00m      raid   0.00           /dev/sda1(26)

[root@host-117 ~]# lvconvert --yes --thinpool VG/raid
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting VG/raid to thin pool's data volume with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Internal error: #LVs (10) != #visible LVs (2) + #snapshots (1) + #internal LVs (5) in VG VG



#metadata/pv_manip.c:421         /dev/sda1 0:      0      1: raid_tdata_rmeta_0(0:0)
#metadata/pv_manip.c:421         /dev/sda1 1:      1     25: raid_tdata_rimage_0(0:0)
#metadata/pv_manip.c:421         /dev/sda1 2:     26      3: snap(0:0)
#metadata/pv_manip.c:421         /dev/sda1 3:     29      1: lvol0(0:0)
#metadata/pv_manip.c:421         /dev/sda1 4:     30      1: lvol1(0:0)
#metadata/pv_manip.c:421         /dev/sda1 5:     31      1: raid_tmeta(0:0)
#metadata/pv_manip.c:421         /dev/sda1 6:     32      1: lvol2(0:0)
#metadata/pv_manip.c:421         /dev/sda1 7:     33   6364: NULL(0:0)
#metadata/pv_manip.c:421         /dev/sdb1 0:      0      1: raid_tdata_rmeta_1(0:0)
#metadata/pv_manip.c:421         /dev/sdb1 1:      1     25: raid_tdata_rimage_1(0:0)
#metadata/pv_manip.c:421         /dev/sdb1 2:     26   6371: NULL(0:0)
#metadata/pv_manip.c:421         /dev/sdc1 0:      0   6397: NULL(0:0)
#metadata/pv_manip.c:421         /dev/sdd1 0:      0   6397: NULL(0:0)
#metadata/pv_manip.c:421         /dev/sde1 0:      0   6397: NULL(0:0)
#metadata/pv_manip.c:421         /dev/sdf1 0:      0   6397: NULL(0:0)
#metadata/pv_manip.c:421         /dev/sdg1 0:      0   6397: NULL(0:0)
#metadata/pv_manip.c:421         /dev/sdh1 0:      0   6397: NULL(0:0)
#metadata/metadata.c:2512   Internal error: #LVs (12) != #visible LVs (4) + #snapshots (1) + #internal LVs (5) in VG VG
#metadata/metadata.c:2988         <backtrace>
#metadata/lv_manip.c:7773         <backtrace>
#metadata/lv_manip.c:8049         <backtrace>
#metadata/pool_manip.c:660         <backtrace>
#metadata/pool_manip.c:706         <backtrace>



Version-Release number of selected component (if applicable):
3.10.0-772.el7.x86_64

lvm2-2.02.176-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
lvm2-libs-2.02.176-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
lvm2-cluster-2.02.176-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
lvm2-lockd-2.02.176-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
lvm2-python-boom-0.8-3.el7    BUILT: Fri Nov 10 07:16:45 CST 2017
cmirror-2.02.176-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-1.02.145-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-libs-1.02.145-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-event-1.02.145-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-event-libs-1.02.145-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-persistent-data-0.7.3-2.el7    BUILT: Tue Oct 10 04:00:07 CDT 2017

Comment 2 Heinz Mauelshagen 2018-07-23 17:37:24 UTC
Provided a better error message with lvm2 upstream
commit 2214dc12c34890c78b05456f58d0aa5d6dd08f4c

Comment 4 Roman Bednář 2018-08-07 11:24:06 UTC
Verified.

# lvconvert --yes --thinpool vg/raid
  Cannot convert logical volume vg/raid under snapshot.


3.10.0-926.el7.x86_64

lvm2-2.02.180-2.el7    BUILT: Wed Aug  1 18:22:48 CEST 2018

Comment 6 errata-xmlrpc 2018-10-30 11:02:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3193