Bug 1514146 - RAID: need better error when conversion is unavailable due to snapshot
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-16 17:51 UTC by Corey Marthaler
Modified: 2021-09-03 12:40 UTC
CC List: 7 users

Fixed In Version: lvm2-2.02.180-2.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-30 11:02:26 UTC
Target Upstream Version:
Embargoed:


Attachments: None


Links
System                  ID              Private  Priority  Status  Summary  Last Updated
Red Hat Product Errata  RHBA-2018:3193  0        None      None    None     2018-10-30 11:03:20 UTC

Description Corey Marthaler 2017-11-16 17:51:30 UTC
Description of problem:
This is similar to bug 1439399. I'm sure there are other convert operations that should be "properly disallowed" as well while the raid volume is under snapshot.


[root@host-117 ~]# lvcreate --type raid1 -m 1 -n raid -L 100M VG
  Logical volume "raid" created.
[root@host-117 ~]# lvcreate -s VG/raid -n snap -L 10M
  Using default stripesize 64.00 KiB.
  Rounding up size to full physical extent 12.00 MiB
  Logical volume "snap" created.

[root@host-117 ~]# lvs -a -o +devices
  LV              VG  Attr       LSize   Pool Origin Data% Cpy%Sync Devices
  raid            VG  owi-a-r--- 100.00m                   100.00   raid_rimage_0(0),raid_rimage_1(0)
  [raid_rimage_0] VG  iwi-aor--- 100.00m                            /dev/sda1(1)
  [raid_rimage_1] VG  iwi-aor--- 100.00m                            /dev/sdb1(1)
  [raid_rmeta_0]  VG  ewi-aor---   4.00m                            /dev/sda1(0)
  [raid_rmeta_1]  VG  ewi-aor---   4.00m                            /dev/sdb1(0)
  snap            VG  swi-a-s---  12.00m      raid   0.00           /dev/sda1(26)

[root@host-117 ~]# lvconvert --yes --thinpool VG/raid
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting VG/raid to thin pool's data volume with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Internal error: #LVs (10) != #visible LVs (2) + #snapshots (1) + #internal LVs (5) in VG VG



#metadata/pv_manip.c:421         /dev/sda1 0:      0      1: raid_tdata_rmeta_0(0:0)
#metadata/pv_manip.c:421         /dev/sda1 1:      1     25: raid_tdata_rimage_0(0:0)
#metadata/pv_manip.c:421         /dev/sda1 2:     26      3: snap(0:0)
#metadata/pv_manip.c:421         /dev/sda1 3:     29      1: lvol0(0:0)
#metadata/pv_manip.c:421         /dev/sda1 4:     30      1: lvol1(0:0)
#metadata/pv_manip.c:421         /dev/sda1 5:     31      1: raid_tmeta(0:0)
#metadata/pv_manip.c:421         /dev/sda1 6:     32      1: lvol2(0:0)
#metadata/pv_manip.c:421         /dev/sda1 7:     33   6364: NULL(0:0)
#metadata/pv_manip.c:421         /dev/sdb1 0:      0      1: raid_tdata_rmeta_1(0:0)
#metadata/pv_manip.c:421         /dev/sdb1 1:      1     25: raid_tdata_rimage_1(0:0)
#metadata/pv_manip.c:421         /dev/sdb1 2:     26   6371: NULL(0:0)
#metadata/pv_manip.c:421         /dev/sdc1 0:      0   6397: NULL(0:0)
#metadata/pv_manip.c:421         /dev/sdd1 0:      0   6397: NULL(0:0)
#metadata/pv_manip.c:421         /dev/sde1 0:      0   6397: NULL(0:0)
#metadata/pv_manip.c:421         /dev/sdf1 0:      0   6397: NULL(0:0)
#metadata/pv_manip.c:421         /dev/sdg1 0:      0   6397: NULL(0:0)
#metadata/pv_manip.c:421         /dev/sdh1 0:      0   6397: NULL(0:0)
#metadata/metadata.c:2512   Internal error: #LVs (12) != #visible LVs (4) + #snapshots (1) + #internal LVs (5) in VG VG
#metadata/metadata.c:2988         <backtrace>
#metadata/lv_manip.c:7773         <backtrace>
#metadata/lv_manip.c:8049         <backtrace>
#metadata/pool_manip.c:660         <backtrace>
#metadata/pool_manip.c:706         <backtrace>
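
The internal error above comes from lvm2's VG consistency validation: every LV in
the VG must be counted exactly once as visible, a snapshot, or an internal (hidden)
LV, and after the attempted conversion two LVs are unaccounted for (10 != 2 + 1 + 5).
A minimal sketch of that invariant, with hypothetical function and parameter names
rather than lvm2's actual ones:

#include <stdio.h>

/* Illustrative sketch of the invariant behind the "Internal error"
 * above. Function and parameter names are hypothetical. */
static int vg_lv_counts_consistent(unsigned num_lvs, unsigned visible_lvs,
                                   unsigned snapshots, unsigned internal_lvs,
                                   const char *vg_name)
{
        if (num_lvs != visible_lvs + snapshots + internal_lvs) {
                fprintf(stderr,
                        "Internal error: #LVs (%u) != #visible LVs (%u) "
                        "+ #snapshots (%u) + #internal LVs (%u) in VG %s\n",
                        num_lvs, visible_lvs, snapshots, internal_lvs, vg_name);
                return 0;
        }
        return 1;
}

int main(void)
{
        /* Counts from the failing run above: 10 != 2 + 1 + 5. */
        return vg_lv_counts_consistent(10, 2, 1, 5, "VG") ? 0 : 1;
}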



Version-Release number of selected component (if applicable):
3.10.0-772.el7.x86_64

lvm2-2.02.176-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
lvm2-libs-2.02.176-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
lvm2-cluster-2.02.176-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
lvm2-lockd-2.02.176-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
lvm2-python-boom-0.8-3.el7    BUILT: Fri Nov 10 07:16:45 CST 2017
cmirror-2.02.176-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-1.02.145-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-libs-1.02.145-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-event-1.02.145-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-event-libs-1.02.145-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-persistent-data-0.7.3-2.el7    BUILT: Tue Oct 10 04:00:07 CDT 2017

Comment 2 Heinz Mauelshagen 2018-07-23 17:37:24 UTC
Provided a better error message with lvm2 upstream
commit 2214dc12c34890c78b05456f58d0aa5d6dd08f4c.
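
The commit itself is not quoted here, but judging from the message verified in
comment 4, the fix amounts to a guard that refuses pool conversion while the LV is
a snapshot origin (the 'o' in its lvs attr field above). A minimal sketch under that
assumption, with illustrative types and names rather than lvm2's actual code:

#include <stdio.h>

/* Hypothetical model of the guard: refuse to convert an LV that is a
 * snapshot origin, printing the message verified in comment 4. */
struct logical_volume {
        const char *vg_name;
        const char *name;
        int is_snapshot_origin;    /* the 'o' in the lvs attr field */
};

static int convert_lv_to_thin_pool(const struct logical_volume *lv)
{
        if (lv->is_snapshot_origin) {
                fprintf(stderr,
                        "Cannot convert logical volume %s/%s under snapshot.\n",
                        lv->vg_name, lv->name);
                return 0;
        }
        /* ... perform the actual conversion here ... */
        return 1;
}

int main(void)
{
        struct logical_volume raid = { "vg", "raid", 1 };
        return convert_lv_to_thin_pool(&raid) ? 0 : 1;
}

Checking the origin flag up front fails the command cleanly before any metadata is
manipulated, which matches the single-line error seen in comment 4 instead of the
internal-error backtrace above.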

Comment 4 Roman Bednář 2018-08-07 11:24:06 UTC
Verified.

# lvconvert --yes --thinpool vg/raid
  Cannot convert logical volume vg/raid under snapshot.


3.10.0-926.el7.x86_64

lvm2-2.02.180-2.el7    BUILT: Wed Aug  1 18:22:48 CEST 2018

Comment 6 errata-xmlrpc 2018-10-30 11:02:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3193

