Bug 1511106 - RFE: provide a reason why vdo can't be removed
Summary: RFE: provide a reason why vdo can't be removed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: vdo
Version: 7.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Joseph Chapman
QA Contact: Jakub Krysl
URL:
Whiteboard:
Duplicates: 1527921
Depends On:
Blocks:
 
Reported: 2017-11-08 17:20 UTC by Corey Marthaler
Modified: 2019-03-06 01:08 UTC
CC List: 6 users

Fixed In Version: 6.1.1.114
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-30 09:38:49 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2018:3094 (last updated 2018-10-30 09:39:26 UTC)

Description Corey Marthaler 2017-11-08 17:20:32 UTC
Description of problem:
vdo knows it can't remove an online VDO volume, or a VDO volume with an active LV stacked on top. It would be nice to provide that info/reason to the user.

[root@harding-02 ~]# vdo create --name foo --device /dev/mapper/mpatha
Creating VDO foo
Starting VDO foo
Starting compression on VDO foo
VDO instance 1 volume is ready at /dev/mapper/foo

[root@harding-02 ~]# vgcreate VG /dev/mapper/foo
  Physical volume "/dev/mapper/foo" successfully created.
  Volume group "VG" successfully created

[root@harding-02 ~]# lvcreate -n LV -L 100M VG
  Logical volume "LV" created.

[root@harding-02 ~]# mkfs.ext4 /dev/VG/LV 
mke2fs 1.42.9 (28-Dec-2013)
[...]
[root@harding-02 ~]# mount /dev/VG/LV /mnt/vdo

[root@harding-02 ~]# vdo remove --name foo
Removing VDO foo
Stopping VDO foo
vdo: ERROR - cannot stop VDO service foo

Nov  8 11:11:22 harding-02 vdo: ERROR - cannot stop VDO service foo

[root@harding-02 ~]# umount /mnt/vdo

[root@harding-02 ~]# vdo remove --name foo
Removing VDO foo
Stopping VDO foo
vdo: ERROR - cannot stop VDO service foo

[root@harding-02 ~]# lvchange -an VG
[root@harding-02 ~]# vdo remove --name foo
Removing VDO foo
Stopping VDO foo

Version-Release number of selected component (if applicable):
3.10.0-772.el7.x86_64

lvm2-2.02.176-2.el7    BUILT: Fri Nov  3 07:46:53 CDT 2017
lvm2-libs-2.02.176-2.el7    BUILT: Fri Nov  3 07:46:53 CDT 2017
lvm2-cluster-2.02.176-2.el7    BUILT: Fri Nov  3 07:46:53 CDT 2017
lvm2-lockd-2.02.176-2.el7    BUILT: Fri Nov  3 07:46:53 CDT 2017
lvm2-python-boom-0.8-2.el7    BUILT: Fri Nov  3 07:48:54 CDT 2017
cmirror-2.02.176-2.el7    BUILT: Fri Nov  3 07:46:53 CDT 2017
device-mapper-1.02.145-2.el7    BUILT: Fri Nov  3 07:46:53 CDT 2017
device-mapper-libs-1.02.145-2.el7    BUILT: Fri Nov  3 07:46:53 CDT 2017
device-mapper-event-1.02.145-2.el7    BUILT: Fri Nov  3 07:46:53 CDT 2017
device-mapper-event-libs-1.02.145-2.el7    BUILT: Fri Nov  3 07:46:53 CDT 2017
device-mapper-persistent-data-0.7.3-2.el7    BUILT: Tue Oct 10 04:00:07 CDT 2017
sanlock-3.5.0-1.el7    BUILT: Wed Apr 26 09:37:30 CDT 2017
sanlock-lib-3.5.0-1.el7    BUILT: Wed Apr 26 09:37:30 CDT 2017
vdo-6.1.0.34-8    BUILT: Fri Nov  3 06:58:45 CDT 2017
kmod-kvdo-6.1.0.34-7.el7    BUILT: Fri Nov  3 06:44:06 CDT 2017

Comment 6 Joseph Chapman 2017-12-04 16:32:31 UTC
The remove is done via "dmsetup remove", so there isn't a general way to pass back a detailed error message. The VDO manager does already check whether anything is mounted on the VDO volume before removing it. Looking at adding a check for LVM on top of the device.
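
For illustration, a rough shell equivalent of that existing mount check (a hypothetical sketch; the VDO manager itself is Python and may do this differently):

# findmnt -n --source /dev/mapper/foo

If anything is mounted from /dev/mapper/foo, findmnt prints the mount and exits 0; no output means no mounted filesystem on the volume.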

Comment 7 Joseph Chapman 2017-12-04 16:59:48 UTC
Comment from corwin on the associated Permabit Jira ticket at 2017-11-14 15:33:00:

  It is not at all clear that this is possible.

Comment 8 Jakub Krysl 2017-12-20 13:57:29 UTC
*** Bug 1527921 has been marked as a duplicate of this bug. ***

Comment 9 Jakub Krysl 2017-12-20 14:05:36 UTC
Some more info here: vdo remove tries "dmsetup remove" a few times and then fails because it cannot remove the device:
# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
--snip--
sdc                         8:32   0  5.5T  0 disk
└─md127                     9:127  0 10.9T  0 raid0
  └─vdo_test              253:2    0 10.9T  0 vdo
    └─vg_vdo_test-lv_test 253:3    0 10.9T  0 lvm
sdd                         8:48   0  5.5T  0 disk
└─md127                     9:127  0 10.9T  0 raid0
  └─vdo_test              253:2    0 10.9T  0 vdo
    └─vg_vdo_test-lv_test 253:3    0 10.9T  0 lvm
# vdo remove --all --verbose
Removing VDO vdo_test
Stopping VDO vdo_test
    dmsetup status vdo_test
    mount
    udevadm settle
    dmsetup remove vdo_test
    dmsetup remove vdo_test
    dmsetup remove vdo_test
    dmsetup remove vdo_test
    dmsetup remove vdo_test
    dmsetup remove vdo_test
    dmsetup remove vdo_test
    dmsetup remove vdo_test
    dmsetup remove vdo_test
    dmsetup remove vdo_test
    dmsetup status vdo_test
vdo: ERROR - cannot stop VDO service vdo_test

This is what LVM does in this situation:
# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
--snip--
sdc                         8:32   0  5.5T  0 disk
└─md127                     9:127  0 10.9T  0 raid0
  └─vdo_test              253:2    0 10.9T  0 vdo
    └─vg_vdo_test-lv_test 253:3    0 10.9T  0 lvm
      └─test-test         253:4    0  100G  0 lvm
sdd                         8:48   0  5.5T  0 disk
└─md127                     9:127  0 10.9T  0 raid0
  └─vdo_test              253:2    0 10.9T  0 vdo
    └─vg_vdo_test-lv_test 253:3    0 10.9T  0 lvm
      └─test-test         253:4    0  100G  0 lvm
# vgremove vg_vdo_test
Do you really want to remove volume group "vg_vdo_test" containing 1 logical volumes? [y/n]: y
  Logical volume vg_vdo_test/lv_test is used by another device.
# lvremove vg_vdo_test/lv_test
  Logical volume vg_vdo_test/lv_test is used by another device.

Maybe we can reproduce in VDO a check similar to what LVM does...

Comment 10 Ken Raeburn 2017-12-20 19:16:52 UTC
"lsblk" can display information about the device dependencies.

Comment 12 Joseph Chapman 2018-02-23 17:49:32 UTC
What LVM is doing is checking /sys/dev/block/<major>:<minor>/holders/. If that directory is non-empty, it generates the "device in use" message. (It then checks for a mounted file system, but we already do that check.)

I will implement the same check.
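
Checked by hand against the stack from comment 9, where vdo_test is dm device 253:2, the test would be a non-empty holders directory:

# ls -A /sys/dev/block/253:2/holders

Here the listing would contain dm-3 (the LV at 253:3), so removal should fail with a "device in use" style message; an empty directory means nothing holds the device open.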

Comment 14 Jakub Krysl 2018-07-03 15:43:28 UTC
Tested on:
RHEL-7.6-20180626.0
kernel-3.10.0-915.el7
kmod-vdo-6.1.1.99-1.el7
vdo-6.1.1.99-2.el7

Now removing VDO under active VG+LV results in 'in use':
# lsblk
NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdb                           8:16   0     2T  0 disk
└─vdo                       253:3    0     2T  0 vdo
  └─vg-lv                   253:4    0     2T  0 lvm
# vdo remove --name vdo
Removing VDO vdo
Stopping VDO vdo
vdo: ERROR - cannot stop VDO volume vdo: in use

# lsblk
NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdb                           8:16   0     2T  0 disk
└─vg-lv                     253:3    0     2T  0 lvm
  └─vg2-lv2                 253:4    0     2T  0 lvm
# lvremove vg/lv
  Logical volume vg/lv is used by another device.


There is a difference when creating a target from the device using targetcli:
# targetcli
targetcli shell version 2.1.fb46
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/backstores/block> create test /dev/mapper/vdo
Created block storage object test using /dev/mapper/vdo.
/backstores/block> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup/.
Configuration saved to /etc/target/saveconfig.json
# vdo remove --all --verbose
Removing VDO vdo
Stopping VDO vdo
    dmsetup status vdo
    mount
    udevadm settle
    dmsetup remove vdo
    dmsetup remove vdo
    dmsetup remove vdo
    dmsetup remove vdo
    dmsetup remove vdo
    dmsetup remove vdo
    dmsetup remove vdo
    dmsetup remove vdo
    dmsetup remove vdo
    dmsetup remove vdo
    dmsetup status vdo
vdo: ERROR - cannot stop VDO service vdo


# targetcli
targetcli shell version 2.1.fb46
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/backstores/block> create test /dev/mapper/vg-lv
Created block storage object test using /dev/mapper/vg-lv.
/backstores/block> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup/.
Configuration saved to /etc/target/saveconfig.json
[root@storageqe-74 ~]# vgremove vg -ff
  Logical volume vg/lv in use.

It appears LVM is checking not only the sysfs holders, but also the 'open count' from dmsetup (as not all open devices show up in holders):
# dmsetup info -c /dev/mapper/vg-lv
Name             Maj Min Stat Open Targ Event  UUID                                                      
vg-lv            253   3 L--w    1    1      0 LVM-7aGzcEZ288BrHgHW0eQduEZAsDe5s3Xup2kvbsf6eOyuljOGU6n2Ec8I1bHIcj8O

Please add this check too to be safe.
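
For reference, the open count alone can be queried with dmsetup's column selection (e.g. for the vg-lv device above, where the Open column reads 1):

# dmsetup info -c --noheadings -o open /dev/mapper/vg-lv
1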

Comment 15 Joseph Chapman 2018-07-10 19:05:44 UTC
Here, for reference, is what LVM does:

If there's a sysfs directory set (I think this is configuration):
  If /sys/dev/block/<major>:<minor>/holders/ exists and is not empty:
    -> error "Logical volume %s is used by another device."
  If a file system is mounted on the device:
    -> error "Logical volume %s contains a filesystem in use."
Repeat 25 times:
  check dm open count (which I assume vdo can do with "dmsetup info")
  if the open count > 0 and that was the last retry:
    -> error "Logical volume %s in use."
  sleep 0.2 sec
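
A shell sketch of that retry loop for a VDO volume (hypothetical; the real fix belongs in the Python VDO manager, and the device name vdo_test is just an example):

for i in $(seq 1 25); do
    # column-mode query for the dm open count, stripped of padding
    open=$(dmsetup info -c --noheadings -o open vdo_test | tr -d ' ')
    [ "$open" -eq 0 ] && break        # no openers left; safe to remove
    if [ "$i" -eq 25 ]; then
        echo "vdo: ERROR - cannot stop VDO volume vdo_test: in use" >&2
        exit 1
    fi
    sleep 0.2
done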

Comment 17 Joseph Chapman 2018-07-16 13:56:43 UTC
No, the second fix (after the ticket was reopened) did not make it in before 6.1.1.111. The second fix simply changes the error message if the remove fails due to the device being open.

Comment 18 Jakub Krysl 2018-08-30 15:13:09 UTC
Tested with vdo-6.1.1.120-3.el7:
Now the message provides the reason: "vdo: ERROR - cannot stop VDO service vdo: device in use"

Comment 20 errata-xmlrpc 2018-10-30 09:38:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3094

