
Bug 1686091

Summary: How should native lvm vdo volumes be presented in 'vdo list'
Product: Red Hat Enterprise Linux 8
Reporter: Corey Marthaler <cmarthal>
Component: vdo
Assignee: bjohnsto
Status: CLOSED ERRATA
QA Contact: vdo-qe
Severity: low
Docs Contact:
Priority: unspecified
Version: 8.0
CC: awalsh, bgurney, bjohnsto, rhandlin
Target Milestone: rc
Flags: pm-rhel: mirror+
Target Release: 8.0
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: 6.2.1.83
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-11-05 22:12:27 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Corey Marthaler 2019-03-06 17:26:13 UTC
Description of problem:
I only noticed this when our automated volume cleanup script started failing: it was calling 'vdo remove' on the '*-vpool0' vdo volumes after it had already used lvm to clean up those volumes, so they no longer existed.

It'll be easy to fix the script to ignore "*-vpool" volumes, but it got me wondering whether that display is expected. "*-vpool0" wasn't the actual name given to the VDO volume, although internally LVM may be calling it just that. Feel free to close this NOTABUG if this is all expected.
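A minimal sketch of that "ignore *-vpool" workaround, assuming the cleanup loop simply walks the 'vdo list' output; the loop and variable names are illustrative, not our actual script:

for v in $(vdo list); do
    case "$v" in
        *-vpool*) echo "skipping LVM-managed $v" ;;
        *)        vdo remove -n "$v" ;;
    esac
done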


[root@hayes-03 ~]# vdo create --force --writePolicy async --name vPV1 --device /dev/sdd1
Creating VDO vPV1
Starting VDO vPV1
Starting compression on VDO vPV1
VDO instance 106 volume is ready at /dev/mapper/vPV1
[root@hayes-03 ~]# vdo create --force --writePolicy async --name vPV2 --device /dev/sdd2
Creating VDO vPV2
Starting VDO vPV2
Starting compression on VDO vPV2
VDO instance 107 volume is ready at /dev/mapper/vPV2
[root@hayes-03 ~]# vdo create --force --writePolicy sync --name vPV3 --device /dev/sdc1
Creating VDO vPV3
Starting VDO vPV3
Starting compression on VDO vPV3
VDO instance 108 volume is ready at /dev/mapper/vPV3
[root@hayes-03 ~]# vdo create --force --writePolicy async --name vPV4 --device /dev/sdc2
Creating VDO vPV4
Starting VDO vPV4
Starting compression on VDO vPV4
VDO instance 109 volume is ready at /dev/mapper/vPV4
[root@hayes-03 ~]# vdo list
vPV1
vPV2
vPV3
vPV4

[root@hayes-03 ~]# pvcreate /dev/sd[befg][12]
  Physical volume "/dev/sdb1" successfully created.
  Physical volume "/dev/sdb2" successfully created.
  Physical volume "/dev/sde1" successfully created.
  Physical volume "/dev/sde2" successfully created.
  Physical volume "/dev/sdf1" successfully created.
  Physical volume "/dev/sdf2" successfully created.
  Physical volume "/dev/sdg1" successfully created.
  Physical volume "/dev/sdg2" successfully created.
[root@hayes-03 ~]# vgcreate VG /dev/sd[befg][12]
  Volume group "VG" successfully created
[root@hayes-03 ~]# lvcreate --type vdo -n vdo -L 4G VG
  Logical volume "vdo" created.
[root@hayes-03 ~]# lvs -a -o +devices
  LV             VG Attr       LSize    Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices        
  vdo            VG vwi-a-v--- 1016.00m vpool0        0.00                                    vpool0(0)      
  vpool0         VG dwi-ao----    4.00g               75.05                                   vpool0_vdata(0)
  [vpool0_vdata] VG Dwi-ao----    4.00g                                                       /dev/sdb1(0)   
[root@hayes-03 ~]# vdo list
VG-vpool0
vPV1
vPV2
vPV3
vPV4

Version-Release number of selected component (if applicable):
kernel-4.18.0-75.el8    BUILT: Fri Mar  1 11:37:34 CST 2019
vdo-6.2.0.293-10.el8    BUILT: Fri Dec 14 18:18:47 CST 2018
kmod-kvdo-6.2.0.293-50.el8    BUILT: Mon Feb 25 16:53:12 CST 2019

lvm2-2.03.02-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
lvm2-libs-2.03.02-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
lvm2-dbusd-2.03.02-6.el8    BUILT: Fri Feb 22 04:50:28 CST 2019
device-mapper-1.02.155-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
device-mapper-libs-1.02.155-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
device-mapper-event-1.02.155-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
device-mapper-event-libs-1.02.155-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
device-mapper-persistent-data-0.7.6-1.el8    BUILT: Sun Aug 12 04:21:55 CDT 2018

Comment 1 Corey Marthaler 2019-03-06 17:53:55 UTC
It looks like vdo doesn't truly "know" about these volumes.

[root@hayes-03 ~]# vdo list
pv_shuffle_A-vpool0
pv_shuffle_B-vpool0
vPV1
vPV10
vPV11
[...]

[root@hayes-03 ~]# vdo remove -n pv_shuffle_A-vpool0
vdo: ERROR - VDO volume pv_shuffle_A-vpool0 not found

[root@hayes-03 ~]# vdo remove -n pv_shuffle_B-vpool0
vdo: ERROR - VDO volume pv_shuffle_B-vpool0 not found

I would have expected to see an "in use" error like for normal vdos, not "not found":

[root@hayes-03 ~]# vdo remove -n vPV30
Removing VDO vPV30
Stopping VDO vPV30
vdo: ERROR - cannot stop VDO volume vPV30: in use

Comment 2 Bryan Gurney 2019-03-06 18:09:15 UTC
What do you see if you run "vdo status"?

Comment 3 Corey Marthaler 2019-03-06 18:16:43 UTC
(In reply to Bryan Gurney from comment #2)
> What do you see if you run "vdo status"?

There's no mention in "vdo status" about these vpools

Comment 4 Bryan Gurney 2019-03-06 18:57:14 UTC
Oh, okay, now I realize what's happening.  The vdo command reads from /etc/vdoconf.yml to check for the currently created VDO volumes.  It doesn't detect the "lvmvdo" created VDO volumes, since they're not in /etc/vdoconf.yml.
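A quick way to confirm this by hand (a hedged sketch, assuming the config file sits at its default path /etc/vdoconf.yml and using the volume names from the transcripts above):

grep -c 'VG-vpool0' /etc/vdoconf.yml    # 0 lines: the LVM-created pool is unknown to the vdo manager
grep -c 'vPV1' /etc/vdoconf.yml         # at least 1 line: recorded there by 'vdo create'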

Comment 5 Bryan Gurney 2019-03-06 22:42:19 UTC
Looking closer, "vdo list" gets its information via "dmsetup status", therefore it would display all of the devices under target type "vdo".  However, "vdo status" would only gather information from volumes defined in /etc/vdoconf.yml.
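To approximate the two data sources by hand (a sketch only, not how the vdo script is implemented internally):

# what 'vdo list' reflects: every device-mapper device whose target type is "vdo",
# including the LVM-created VG-vpool0
dmsetup status --target vdo

# what 'vdo status' reflects: only the volumes recorded in the config file,
# which never mentions the LVM-created pool
cat /etc/vdoconf.yml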

Comment 8 Jakub Krysl 2019-08-22 16:06:01 UTC
This change removes any VDO created by LVM from 'vdo list'. A VDO created directly with dmsetup still shows up in the list, because we cannot be sure that such a VDO was not created by vdo manager and is merely missing from the config file. So the 'vdo list' command now shows the VDO volumes that are possibly manageable by vdo manager.
kmod-kvdo-6.2.1.138-56.el8.x86_64
vdo-6.2.1.134-11.el8.x86_64

# vdo list --all
VG-vpool1
vdo1

(VG-vpool0, created by LVM, is not listed; VG-vpool1 was created directly with the dmsetup command.)
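One plausible way to tell the two kinds apart by hand is the device-mapper UUID, since DM devices created by LVM carry an "LVM-" UUID prefix; whether the fix filters on exactly that is not stated in this bug, so treat the sketch below as an assumption:

# list every DM device with a "vdo" target, then print its name and UUID;
# the LVM-created pools show a UUID beginning with "LVM-"
for dev in $(dmsetup ls --target vdo | awk '{print $1}'); do
    dmsetup info -c --noheadings -o name,uuid "$dev"
done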

Comment 11 errata-xmlrpc 2019-11-05 22:12:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3548