Bug 1726524 - lvm2: Can't remove the snap LV of root volume on multipath disk PV
Summary: lvm2: Can't remove the snap LV of root volume on multipath disk PV
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.8
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-07-03 06:04 UTC by Gang He
Modified: 2021-09-03 12:52 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-08-20 15:02:38 UTC
Target Upstream Version:
Embargoed:



Description Gang He 2019-07-03 06:04:53 UTC
Description of problem:
I am using lvm2-2.02.180 on Linux 4.12.14. I cannot remove the snapshot LV of the root volume, which sits on a multipath disk PV.
For example:
linux-kkay:/ # lvremove /dev/system/snap_root
  WARNING: Reading VG system from disk because lvmetad metadata is invalid.
Do you really want to remove active logical volume system/snap_root? [y/n]: y
  device-mapper: reload ioctl on  (254:3) failed: Invalid argument
  Failed to refresh root without snapshot.

However, I can remove the snapshot LV of the data volume successfully, e.g.
linux-kkay:/ # lvremove /dev/system/data_snap
  WARNING: Reading VG system from disk because lvmetad metadata is invalid.
Do you really want to remove active logical volume system/data_snap? [y/n]: y
  Logical volume "data_snap" successfully removed

If I use an ordinary disk as the PV (rather than a multipath disk), I do not encounter this problem (both snapshot LVs can be removed).

The disk layout is as follows:
linux-kkay:/ # lsblk
NAME                       MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda                          8:0    0   30G  0 disk
├─sda1                       8:1    0   30G  0 part
└─mpd_root                 254:0    0   30G  0 mpath
  └─mpd_root-part1         254:1    0   30G  0 part
    ├─system-root-real     254:2    0   10G  0 lvm
    │ ├─system-root        254:3    0   10G  0 lvm   /
    │ └─system-snap_root   254:5    0   10G  0 lvm
    ├─system-snap_root-cow 254:4    0    5G  0 lvm
    │ └─system-snap_root   254:5    0   10G  0 lvm
    ├─system-swap          254:6    0    2G  0 lvm   [SWAP]
    ├─system-data-real     254:8    0    4G  0 lvm
    │ ├─system-data        254:7    0    4G  0 lvm   /data
    │ └─system-data_snap   254:10   0    4G  0 lvm
    └─system-data_snap-cow 254:9    0  840M  0 lvm
      └─system-data_snap   254:10   0    4G  0 lvm
sdb                          8:16   0   30G  0 disk
├─sdb1                       8:17   0   30G  0 part
└─mpd_root                 254:0    0   30G  0 mpath
  └─mpd_root-part1         254:1    0   30G  0 part
    ├─system-root-real     254:2    0   10G  0 lvm
    │ ├─system-root        254:3    0   10G  0 lvm   /
    │ └─system-snap_root   254:5    0   10G  0 lvm
    ├─system-snap_root-cow 254:4    0    5G  0 lvm
    │ └─system-snap_root   254:5    0   10G  0 lvm
    ├─system-swap          254:6    0    2G  0 lvm   [SWAP]
    ├─system-data-real     254:8    0    4G  0 lvm
    │ ├─system-data        254:7    0    4G  0 lvm   /data
    │ └─system-data_snap   254:10   0    4G  0 lvm
    └─system-data_snap-cow 254:9    0  840M  0 lvm
      └─system-data_snap   254:10   0    4G  0 lvm



Version-Release number of selected component (if applicable):
lvm2-2.02.180

How reproducible:
Install an OS with LVM2 as the system volume, with a multipath disk as the underlying PV.
Then create a snapshot LV of the "/" volume and reboot the machine.
Deleting this snapshot LV then produces the failure.


Comment 2 Gang He 2019-07-03 06:31:15 UTC
sles12sp4-multipath:~ # dmsetup table
system-root_snap-cow: 0 10575872 linear 254:1 25167872
system-root-real: 0 20971520 linear 254:1 2048
system-data_snap: 0 8388608 snapshot 254:7 254:9 P 8
mpd_root-part1: 0 62912512 linear 254:0 2048
system-root_snap: 0 20971520 snapshot 254:2 254:4 P 8
mpd_root: 0 62914560 multipath 0 0 2 1 service-time 0 1 2 8:0 1 1 service-time 0 1 2 8:16 1 1
system-data_snap-cow: 0 4235264 linear 254:1 44132352
system-swap: 0 4194304 linear 254:1 20973568
system-root: 0 20971520 snapshot-origin 254:2
system-data-real: 0 8388608 linear 254:1 35743744
system-data: 0 8388608 snapshot-origin 254:7
sles12sp4-multipath:~ # dmsetup ls --tree
system-data_snap (254:10)
 ├─system-data_snap-cow (254:9)
 │  └─mpd_root-part1 (254:1)
 │     └─mpd_root (254:0)
 │        ├─ (8:16)
 │        └─ (8:0)
 └─system-data-real (254:7)
    └─mpd_root-part1 (254:1)
       └─mpd_root (254:0)
          ├─ (8:16)
          └─ (8:0)
system-root_snap (254:5)
 ├─system-root_snap-cow (254:4)
 │  └─mpd_root-part1 (254:1)
 │     └─mpd_root (254:0)
 │        ├─ (8:16)
 │        └─ (8:0)
 └─system-root-real (254:2)
    └─mpd_root-part1 (254:1)
       └─mpd_root (254:0)
          ├─ (8:16)
          └─ (8:0)
system-swap (254:6)
 └─mpd_root-part1 (254:1)
    └─mpd_root (254:0)
       ├─ (8:16)
       └─ (8:0)
system-root (254:3)
 └─system-root-real (254:2)
    └─mpd_root-part1 (254:1)
       └─mpd_root (254:0)
          ├─ (8:16)
          └─ (8:0)
system-data (254:8)
 └─system-data-real (254:7)
    └─mpd_root-part1 (254:1)
       └─mpd_root (254:0)
          ├─ (8:16)
          └─ (8:0)
sles12sp4-multipath:~ # dmsetup info -c
Name                 Maj Min Stat Open Targ Event  UUID
system-root_snap-cow 254   4 L--w    1    1      0 LVM-dEKggFwQUzRB1IUx5Uni6jsS0Jkw8LQkIg6oU0zjuzkj8EtZnhbXfZO1E7T0hamm-cow
system-root-real     254   2 L--w    2    1      0 LVM-dEKggFwQUzRB1IUx5Uni6jsS0Jkw8LQknLvErWbDQBimSdsKjrPYPQs9rsAS2sEX-real
system-data_snap     254  10 L--w    0    1      0 LVM-dEKggFwQUzRB1IUx5Uni6jsS0Jkw8LQkLoAMn9pAfrgLMpFMe0hqbZe96XBRerkx
mpd_root-part1       254   1 L--w    5    1      0 part1-mpath-0QEMU_QEMU_HARDDISK_0001
system-root_snap     254   5 L--w    0    1      0 LVM-dEKggFwQUzRB1IUx5Uni6jsS0Jkw8LQkIg6oU0zjuzkj8EtZnhbXfZO1E7T0hamm
mpd_root             254   0 L--w    1    1      0 mpath-0QEMU_QEMU_HARDDISK_0001
system-data_snap-cow 254   9 L--w    1    1      0 LVM-dEKggFwQUzRB1IUx5Uni6jsS0Jkw8LQkLoAMn9pAfrgLMpFMe0hqbZe96XBRerkx-cow
system-swap          254   6 L--w    2    1      0 LVM-dEKggFwQUzRB1IUx5Uni6jsS0Jkw8LQkF9WxJl28V9oYgRDrl6xfOAZLQHijCF5n
system-root          254   3 L--w    1    1      0 LVM-dEKggFwQUzRB1IUx5Uni6jsS0Jkw8LQknLvErWbDQBimSdsKjrPYPQs9rsAS2sEX
system-data-real     254   7 L--w    2    1      0 LVM-dEKggFwQUzRB1IUx5Uni6jsS0Jkw8LQkeGyNVZidr2BndMBJkHbOt1O4UeyyockZ-real
system-data          254   8 L--w    0    1      0 LVM-dEKggFwQUzRB1IUx5Uni6jsS0Jkw8LQkeGyNVZidr2BndMBJkHbOt1O4UeyyockZ
sles12sp4-multipath:~ # dmsetup status
system-root_snap-cow: 0 10575872 linear
system-root-real: 0 20971520 linear
system-data_snap: 0 8388608 snapshot 136056/4235264 544
mpd_root-part1: 0 62912512 linear
system-root_snap: 0 20971520 snapshot 7760/10575872 40
mpd_root: 0 62914560 multipath 2 0 0 0 2 1 A 0 1 2 8:0 A 0 0 1 E 0 1 2 8:16 A 0 0 1
system-data_snap-cow: 0 4235264 linear
system-swap: 0 4194304 linear
system-root: 0 20971520 snapshot-origin
system-data-real: 0 8388608 linear
system-data: 0 8388608 snapshot-origin
sles12sp4-multipath:~ #

Comment 3 Zdenek Kabelac 2019-07-30 14:43:43 UTC
Can you please provide/attach an 'lvremove -vvvv' trace of the problematic command?

(Ideally interleaved with 'dmesg'.)
(Probably the easiest approach is to log to syslog and set the log verbosity to 7.)

Comment 4 Gang He 2019-07-31 06:25:05 UTC
Hello Zdenek,

I am sorry for the delayed update.
This bug turns out to be specific to the SUSE kernel source: a developer back-ported a kernel patch from upstream to the older SUSE kernel, but the meaning of the function's return value changed between the two kernels. The Red Hat kernel should not have a similar error, so please close this incident.

Thanks a lot.
Gang 

The patch looks like this:

commit e24e1fa638293b6e205334b11c94d0c6698b0ac1

    Fix buggy backport in
    patches.fixes/dax-check-for-queue_flag_dax-in-bdev_dax_supported.patch
    (bsc#1109859)

    suse-commit: 6d44295943bd4473177287661f39f071dacb63c1

diff --git a/drivers/dax/super.c b/drivers/dax/super.c
index b6471297c8c2..5cc340b44bb5 100644
--- a/drivers/dax/super.c
+++ b/drivers/dax/super.c
@@ -104,7 +104,7 @@ int ____bdev_dax_supported(struct block_device *bdev, int blocksize)
        if (!q || !blk_queue_dax(q)) {
                pr_debug("%s: error: request queue doesn't support dax\n",
                                bdevname(bdev, buf));
-               return false;
+               return -EOPNOTSUPP;
        }

        err = bdev_dax_pgoff(bdev, 0, PAGE_SIZE, &pgoff);

Comment 5 Zdenek Kabelac 2019-07-31 10:39:44 UTC
So can we close this BZ, as it's not reproducible with the standard kernel?

Comment 6 Gang He 2019-10-22 02:52:18 UTC
Hello Zdenek,

Yes, you can close it, since this bug was caused by the SUSE kernel (a patch back-port problem).
I believe other kernels should not have the same error.

Thanks a lot.
Gang

