Bug 2168703 - vgimportclone.c "Using new VG name" doesn't actually check if name is new and can hang if it's not
Summary: vgimportclone.c "Using new VG name" doesn't actually check if name is new and can hang if it's not
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: lvm2
Version: 9.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: David Teigland
QA Contact: cluster-qe
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-02-09 19:26 UTC by Corey Marthaler
Modified: 2023-11-07 11:27 UTC
CC List: 8 users

Fixed In Version: lvm2-2.03.21-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-11-07 08:53:33 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+




Links
System                  ID               Private  Priority  Status  Summary  Last Updated
Red Hat Issue Tracker   CLUSTERQE-6635   0        None      None    None     2023-04-19 22:22:36 UTC
Red Hat Issue Tracker   RHELPLAN-148218  0        None      None    None     2023-02-09 19:27:20 UTC
Red Hat Product Errata  RHBA-2023:6633   0        None      None    None     2023-11-07 08:53:59 UTC

Description Corey Marthaler 2023-02-09 19:26:56 UTC
Description of problem:
[root@virt-558 ~]# vgs
  VG            #PV #LV #SN Attr   VSize    VFree   
  foo             3   0   0 wz--n- <239.99g <239.99g
  rhel_virt-558   1   2   0 wz--n-   <7.00g       0 

[root@virt-558 ~]# pvscan
  PV /dev/sdb    VG foo             lvm2 [<80.00 GiB / <80.00 GiB free]
  PV /dev/sdc    VG foo             lvm2 [<80.00 GiB / <80.00 GiB free]
  PV /dev/sdd    VG foo             lvm2 [<80.00 GiB / <80.00 GiB free]
  PV /dev/vda2   VG rhel_virt-558   lvm2 [<7.00 GiB / 0    free]
  Total: 4 [246.98 GiB] / in use: 4 [246.98 GiB] / in no VG: 0 [0   ]

[root@virt-558 ~]# vgimportclone -n foo -vvvv /dev/sdb /dev/sdc /dev/sdd
[...]

20:21:39.703100 vgimportclone[613593] vgimportclone.c:288  scan new devs
20:21:39.703182 vgimportclone[613593] label/label.c:640  Scanning 3 devices for VG info
20:21:39.703212 vgimportclone[613593] label/label.c:568  open /dev/sdb ro di 0 fd 6
20:21:39.703411 vgimportclone[613593] label/label.c:568  open /dev/sdc ro di 1 fd 7
20:21:39.703724 vgimportclone[613593] label/label.c:568  open /dev/sdd ro di 2 fd 8
20:21:39.704126 vgimportclone[613593] label/label.c:679  Scanning submitted 3 reads
20:21:39.704205 vgimportclone[613593] label/label.c:713  Processing data from device /dev/sdb 8:16 di 0
20:21:39.704227 vgimportclone[613593] device/dev-io.c:96  /dev/sdb: using cached size 167772160 sectors
20:21:39.704277 vgimportclone[613593] device/dev-io.c:96  /dev/sdb: using cached size 167772160 sectors
20:21:39.704298 vgimportclone[613593] filters/filter-persistent.c:131  filter caching good /dev/sdb
20:21:39.704322 vgimportclone[613593] label/label.c:311  Found label at sector 1 on /dev/sdb
20:21:39.704341 vgimportclone[613593] cache/lvmcache.c:2477  Found PVID Ceh9eHBTEY3yIC6oo5E7RFR3ZTpFbw8B on /dev/sdb
20:21:39.704361 vgimportclone[613593] cache/lvmcache.c:2000  lvmcache /dev/sdb: now in VG #orphans_lvm2 #orphans_lvm2
20:21:39.704517 vgimportclone[613593] format_text/text_label.c:538  Scanning /dev/sdb mda1 summary.
20:21:39.704630 vgimportclone[613593] format_text/format-text.c:196  Reading mda header sector from /dev/sdb at 4096
20:21:39.704663 vgimportclone[613593] format_text/import.c:57  Reading metadata summary from /dev/sdb at 4608 size 1399 (+0)
20:21:39.704722 vgimportclone[613593] format_text/format-text.c:1574  Found metadata summary on /dev/sdb at 4608 size 1399 for VG foo
20:21:39.704743 vgimportclone[613593] cache/lvmcache.c:1919  lvmcache adding vginfo for foo 1E5sO5-0S7q-aAmI-ZXDG-ZaBR-3AE8-upUst2
20:21:39.704763 vgimportclone[613593] cache/lvmcache.c:2000  lvmcache /dev/sdb: now in VG foo 1E5sO50S7qaAmIZXDGZaBR3AE8upUst2
20:21:39.704779 vgimportclone[613593] cache/lvmcache.c:1840  lvmcache /dev/sdb: VG foo: set VGID to 1E5sO50S7qaAmIZXDGZaBR3AE8upUst2.
20:21:39.704794 vgimportclone[613593] cache/lvmcache.c:2196  lvmcache /dev/sdb mda1 VG foo set seqno 1 checksum 6f4e664e mda_size 1399
20:21:39.704810 vgimportclone[613593] cache/lvmcache.c:2034  lvmcache /dev/sdb: VG foo: set creation host to virt-558.cluster-qe.lab.eng.brq.redhat.com.
20:21:39.704825 vgimportclone[613593] format_text/text_label.c:566  Found metadata seqno 1 in mda1 on /dev/sdb
20:21:39.704847 vgimportclone[613593] label/label.c:713  Processing data from device /dev/sdc 8:32 di 1
20:21:39.704862 vgimportclone[613593] device/dev-io.c:96  /dev/sdc: using cached size 167772160 sectors
20:21:39.704910 vgimportclone[613593] device/dev-io.c:96  /dev/sdc: using cached size 167772160 sectors
20:21:39.704931 vgimportclone[613593] filters/filter-persistent.c:131  filter caching good /dev/sdc
20:21:39.704952 vgimportclone[613593] label/label.c:311  Found label at sector 1 on /dev/sdc
20:21:39.704967 vgimportclone[613593] cache/lvmcache.c:2477  Found PVID tyWKiaFop1UoZYlMvoh9fA4wpt1yiC4V on /dev/sdc
20:21:39.704984 vgimportclone[613593] cache/lvmcache.c:2000  lvmcache /dev/sdc: now in VG #orphans_lvm2 #orphans_lvm2
20:21:39.704999 vgimportclone[613593] format_text/text_label.c:538  Scanning /dev/sdc mda1 summary.
20:21:39.705014 vgimportclone[613593] format_text/format-text.c:196  Reading mda header sector from /dev/sdc at 4096
20:21:39.705030 vgimportclone[613593] format_text/format-text.c:1549  Skipping read of already known VG metadata with matching mda checksum on /dev/sdc.
20:21:39.705049 vgimportclone[613593] format_text/format-text.c:1574  Found metadata summary on /dev/sdc at 4608 size 1399 for VG foo
20:21:39.705068 vgimportclone[613593] cache/lvmcache.c:2000  lvmcache /dev/sdc: now in VG foo 1E5sO50S7qaAmIZXDGZaBR3AE8upUst2
20:21:39.705083 vgimportclone[613593] format_text/text_label.c:566  Found metadata seqno 1 in mda1 on /dev/sdc
20:21:39.705100 vgimportclone[613593] label/label.c:713  Processing data from device /dev/sdd 8:48 di 2
20:21:39.705118 vgimportclone[613593] device/dev-io.c:96  /dev/sdd: using cached size 167772160 sectors
20:21:39.705156 vgimportclone[613593] device/dev-io.c:96  /dev/sdd: using cached size 167772160 sectors
20:21:39.705170 vgimportclone[613593] filters/filter-persistent.c:131  filter caching good /dev/sdd
20:21:39.705187 vgimportclone[613593] label/label.c:311  Found label at sector 1 on /dev/sdd
20:21:39.705195 vgimportclone[613593] cache/lvmcache.c:2477  Found PVID p27WgbkIUrHTdCnYUs8gzBOAGadhJain on /dev/sdd
20:21:39.705204 vgimportclone[613593] cache/lvmcache.c:2000  lvmcache /dev/sdd: now in VG #orphans_lvm2 #orphans_lvm2
20:21:39.705213 vgimportclone[613593] format_text/text_label.c:538  Scanning /dev/sdd mda1 summary.
20:21:39.705221 vgimportclone[613593] format_text/format-text.c:196  Reading mda header sector from /dev/sdd at 4096
20:21:39.705230 vgimportclone[613593] format_text/format-text.c:1549  Skipping read of already known VG metadata with matching mda checksum on /dev/sdd.
20:21:39.705238 vgimportclone[613593] format_text/format-text.c:1574  Found metadata summary on /dev/sdd at 4608 size 1399 for VG foo
20:21:39.705246 vgimportclone[613593] cache/lvmcache.c:2000  lvmcache /dev/sdd: now in VG foo 1E5sO50S7qaAmIZXDGZaBR3AE8upUst2
20:21:39.705254 vgimportclone[613593] format_text/text_label.c:566  Found metadata seqno 1 in mda1 on /dev/sdd
20:21:39.705262 vgimportclone[613593] label/label.c:748  Scanned devices: read errors 0 process errors 0 failed 0
20:21:39.705270 vgimportclone[613593] filters/filter-persistent.c:106  /dev/sdb: filter cache using (cached good)
20:21:39.705278 vgimportclone[613593] filters/filter-persistent.c:106  /dev/sdc: filter cache using (cached good)
20:21:39.705286 vgimportclone[613593] filters/filter-persistent.c:106  /dev/sdd: filter cache using (cached good)
20:21:39.705302 vgimportclone[613593] cache/lvmcache.c:2603  Destroy lvmcache content
20:21:39.705318 vgimportclone[613593] vgimportclone.c:375  get other devices
20:21:39.705336 vgimportclone[613593] filters/filter-deviceid.c:40  /dev/sda: Skipping (deviceid)
20:21:39.705356 vgimportclone[613593] filters/filter-deviceid.c:40  /dev/vda: Skipping (deviceid)
20:21:39.705387 vgimportclone[613593] filters/filter-deviceid.c:40  /dev/rhel_virt-558/root: Skipping (deviceid)
20:21:39.705407 vgimportclone[613593] filters/filter-deviceid.c:40  /dev/vda1: Skipping (deviceid)
20:21:39.705426 vgimportclone[613593] filters/filter-deviceid.c:40  /dev/rhel_virt-558/swap: Skipping (deviceid)
20:21:39.705460 vgimportclone[613593] device/dev-io.c:120  /dev/vda2: size is 14678016 sectors
20:21:39.705482 vgimportclone[613593] device/dev-io.c:466  Closed /dev/vda2
20:21:39.705500 vgimportclone[613593] filters/filter-persistent.c:131  filter caching good /dev/vda2
20:21:39.705518 vgimportclone[613593] filters/filter-persistent.c:106  /dev/sdb: filter cache using (cached good)
20:21:39.705536 vgimportclone[613593] filters/filter-persistent.c:106  /dev/sdc: filter cache using (cached good)
20:21:39.705551 vgimportclone[613593] filters/filter-persistent.c:106  /dev/sdd: filter cache using (cached good)
20:21:39.705598 vgimportclone[613593] filters/filter-deviceid.c:40  /dev/sde: Skipping (deviceid)
20:21:39.705626 vgimportclone[613593] device/dev-io.c:120  /dev/sdf: size is 167772160 sectors
20:21:39.705647 vgimportclone[613593] device/dev-io.c:466  Closed /dev/sdf
20:21:39.705694 vgimportclone[613593] filters/filter-persistent.c:131  filter caching good /dev/sdf
20:21:39.705716 vgimportclone[613593] vgimportclone.c:382  scan other devices
20:21:39.705738 vgimportclone[613593] label/label.c:640  Scanning 2 devices for VG info
20:21:39.705752 vgimportclone[613593] label/label.c:568  open /dev/vda2 ro di 0 fd 6
20:21:39.705854 vgimportclone[613593] label/label.c:568  open /dev/sdf ro di 1 fd 7
20:21:39.705955 vgimportclone[613593] label/label.c:679  Scanning submitted 2 reads
20:21:39.706742 vgimportclone[613593] label/label.c:713  Processing data from device /dev/vda2 252:2 di 0
20:21:39.706788 vgimportclone[613593] device/dev-io.c:96  /dev/vda2: using cached size 14678016 sectors
20:21:39.706814 vgimportclone[613593] device/dev-io.c:96  /dev/vda2: using cached size 14678016 sectors
20:21:39.706824 vgimportclone[613593] filters/filter-persistent.c:131  filter caching good /dev/vda2
20:21:39.706835 vgimportclone[613593] label/label.c:311  Found label at sector 1 on /dev/vda2
20:21:39.706844 vgimportclone[613593] cache/lvmcache.c:2477  Found PVID Uqx81Ru436b9l38v7AM4wh0vWLzfc8yq on /dev/vda2
20:21:39.706855 vgimportclone[613593] cache/lvmcache.c:2000  lvmcache /dev/vda2: now in VG #orphans_lvm2 #orphans_lvm2
20:21:39.706864 vgimportclone[613593] format_text/text_label.c:538  Scanning /dev/vda2 mda1 summary.
20:21:39.706873 vgimportclone[613593] format_text/format-text.c:196  Reading mda header sector from /dev/vda2 at 4096
20:21:39.706885 vgimportclone[613593] format_text/import.c:57  Reading metadata summary from /dev/vda2 at 45568 size 1414 (+0)
20:21:39.706924 vgimportclone[613593] format_text/format-text.c:1574  Found metadata summary on /dev/vda2 at 45568 size 1414 for VG rhel_virt-558
20:21:39.706942 vgimportclone[613593] cache/lvmcache.c:1919  lvmcache adding vginfo for rhel_virt-558 vLi1IJ-x4co-PQUS-oTsw-ozTq-usF6-zmaL6u
20:21:39.706962 vgimportclone[613593] cache/lvmcache.c:2000  lvmcache /dev/vda2: now in VG rhel_virt-558 vLi1IJx4coPQUSoTswozTqusF6zmaL6u
20:21:39.706971 vgimportclone[613593] cache/lvmcache.c:1840  lvmcache /dev/vda2: VG rhel_virt-558: set VGID to vLi1IJx4coPQUSoTswozTqusF6zmaL6u.
20:21:39.706979 vgimportclone[613593] cache/lvmcache.c:2196  lvmcache /dev/vda2 mda1 VG rhel_virt-558 set seqno 28 checksum e9778959 mda_size 1414
20:21:39.706993 vgimportclone[613593] cache/lvmcache.c:2034  lvmcache /dev/vda2: VG rhel_virt-558: set creation host to virt-558.cluster-qe.lab.eng.brq.redhat.com.
20:21:39.707010 vgimportclone[613593] format_text/text_label.c:566  Found metadata seqno 28 in mda1 on /dev/vda2
20:21:39.707025 vgimportclone[613593] label/label.c:713  Processing data from device /dev/sdf 8:80 di 1
20:21:39.707041 vgimportclone[613593] device/dev-io.c:96  /dev/sdf: using cached size 167772160 sectors
20:21:39.707089 vgimportclone[613593] device/dev-io.c:96  /dev/sdf: using cached size 167772160 sectors
20:21:39.707110 vgimportclone[613593] filters/filter-persistent.c:131  filter caching good /dev/sdf
20:21:39.707131 vgimportclone[613593] label/label.c:398  /dev/sdf: No lvm label detected
20:21:39.707154 vgimportclone[613593] label/label.c:748  Scanned devices: read errors 0 process errors 0 failed 0
20:21:39.707172 vgimportclone[613593] vgimportclone.c:438  Using new VG name foo.
20:21:39.707192 vgimportclone[613593] cache/lvmcache.c:2603  Destroy lvmcache content
20:21:39.707225 vgimportclone[613593] vgimportclone.c:448  import vg on new devices
20:21:39.707246 vgimportclone[613593] misc/lvm-flock.c:229  Locking /run/lock/lvm/V_foo WB
20:21:39.707597 vgimportclone[613593] device_mapper/libdm-common.c:987  Preparing SELinux context for /run/lock/lvm/V_foo to system_u:object_r:lvm_lock_t:s0.
20:21:39.707663 vgimportclone[613593] misc/lvm-flock.c:113  _do_flock /run/lock/lvm/V_foo:aux WB
20:21:39.707719 vgimportclone[613593] misc/lvm-flock.c:113  _do_flock /run/lock/lvm/V_foo WB
20:21:39.707757 vgimportclone[613593] misc/lvm-flock.c:47  _undo_flock /run/lock/lvm/V_foo:aux
20:21:39.707795 vgimportclone[613593] device_mapper/libdm-common.c:990  Resetting SELinux context to default value.

[DEADLOCK]
^C



^C20:26:02.227409 vgimportclone[613593] misc/lvm-signal.c:50  Interrupted...
20:26:02.227445 vgimportclone[613593] misc/lvm-flock.c:134  Giving up waiting for lock.
20:26:02.227465 vgimportclone[613593] misc/lvm-flock.c:154  <backtrace>
20:26:02.227474 vgimportclone[613593] misc/lvm-flock.c:47  _undo_flock /run/lock/lvm/V_foo:aux
20:26:02.227531 vgimportclone[613593] device_mapper/libdm-common.c:990  Resetting SELinux context to default value.
20:26:02.227610 vgimportclone[613593] misc/lvm-flock.c:244  <backtrace>
20:26:02.227618 vgimportclone[613593] locking/file_locking.c:60  <backtrace>
20:26:02.227627 vgimportclone[613593] locking/locking.c:190  <backtrace>
20:26:02.227638 vgimportclone[613593] vgimportclone.c:456  Can't get lock for VG name foo
20:26:02.227652 vgimportclone[613593] misc/lvm-flock.c:84  Unlocking /run/lock/lvm/P_global
20:26:02.227663 vgimportclone[613593] misc/lvm-flock.c:47  _undo_flock /run/lock/lvm/P_global
20:26:02.227681 vgimportclone[613593] misc/lvm-flock.c:84  Unlocking /run/lock/lvm/V_foo
20:26:02.227689 vgimportclone[613593] misc/lvm-flock.c:47  _undo_flock /run/lock/lvm/V_foo
20:26:02.227711 vgimportclone[613593] device_mapper/libdm-config.c:1085  global/notify_dbus not found in config: defaulting to 1
20:26:02.227776 vgimportclone[613593] notify/lvmnotify.c:110  dbus damon not running, not notifying
20:26:02.227798 vgimportclone[613593] cache/lvmcache.c:2603  Destroy lvmcache content
20:26:02.242595 vgimportclone[613593] lvmcmdline.c:3352  Completed: vgimportclone -n foo -vvvv /dev/sdb /dev/sdc /dev/sdd
20:26:02.246258 vgimportclone[613593] cache/lvmcache.c:2603  Destroy lvmcache content
20:26:02.246317 vgimportclone[613593] metadata/vg.c:80  Freeing VG #orphans_lvm2 at 0x55c738497560.
20:26:02.246384 vgimportclone[613593] activate/fs.c:492  Syncing device names
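
For context on the hang: flock(2) locks belong to the open file description, not the process, so a process that already holds an exclusive lock on a lock file such as /run/lock/lvm/V_foo will block if it opens the same file again and asks for LOCK_EX on the new descriptor. A minimal standalone sketch of that flock behavior (hypothetical path, plain libc, not LVM code):

/* Demo only: a second open() of the same lock file gets its own open file
 * description, so its LOCK_EX request is blocked by the lock this very
 * process already holds on fd1.  LOCK_NB is used so the demo reports
 * EWOULDBLOCK instead of hanging the way the command above does. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/demo.lock";        /* hypothetical lock file */

    int fd1 = open(path, O_CREAT | O_RDWR, 0600);
    if (fd1 < 0 || flock(fd1, LOCK_EX) < 0)
        return 1;
    printf("fd1 holds LOCK_EX on %s\n", path);

    int fd2 = open(path, O_RDWR);               /* new open file description */
    if (fd2 < 0)
        return 1;
    if (flock(fd2, LOCK_EX | LOCK_NB) < 0)
        perror("flock(fd2)");                   /* EWOULDBLOCK: blocked by own lock */

    close(fd2);
    close(fd1);
    return 0;
}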




Version-Release number of selected component (if applicable):
kernel-5.14.0-231.el9    BUILT: Mon Jan  9 08:32:41 PM CET 2023
lvm2-2.03.17-5.el9    BUILT: Thu Jan 26 11:17:14 PM CET 2023
lvm2-libs-2.03.17-5.el9    BUILT: Thu Jan 26 11:17:14 PM CET 2023


How reproducible:
Every time

Comment 1 David Teigland 2023-02-09 23:45:15 UTC
fix in main:
https://sourceware.org/git/?p=lvm2.git;a=commit;h=be124ae81027e8736106e4958bd2dfab971d6764

$ pvs
  PV         VG   Fmt  Attr PSize    PFree  
  /dev/loop0 foo  lvm2 a--     4.00m   4.00m
$ vgimportclone -n foo /dev/loop0
$ pvs
  PV         VG   Fmt  Attr PSize    PFree  
  /dev/loop0 foo1 lvm2 a--     4.00m   4.00m


$ pvs
  PV         VG   Fmt  Attr PSize    PFree  
  /dev/loop0 foo  lvm2 a--     4.00m   4.00m
  /dev/loop1 foo1 lvm2 a--     4.00m   4.00m
$ vgimportclone -n foo /dev/loop0
$ pvs
  PV         VG   Fmt  Attr PSize    PFree  
  /dev/loop0 foo2 lvm2 a--     4.00m   4.00m
  /dev/loop1 foo1 lvm2 a--     4.00m   4.00m
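
Judging from the two runs above, the fixed command treats the requested -n name as a base and falls back to a numeric suffix (foo -> foo1 -> foo2) when a VG with that name is already visible. A rough sketch of that kind of uniqueness check, with a hypothetical name_exists() callback standing in for the real lvmcache lookup (this is not the actual patch):

/* Sketch only: mirror the foo -> foo1 -> foo2 renaming shown above.
 * name_exists() is a hypothetical callback standing in for the real
 * "is this VG name already in use?" lookup. */
#include <stdbool.h>
#include <stdio.h>

int pick_unique_vg_name(const char *base,
                        bool (*name_exists)(const char *name),
                        char *buf, size_t buflen)
{
    if (!name_exists(base)) {
        snprintf(buf, buflen, "%s", base);      /* requested name is free */
        return 0;
    }
    for (unsigned i = 1; i < 100; i++) {
        snprintf(buf, buflen, "%s%u", base, i); /* "foo1", "foo2", ... */
        if (!name_exists(buf))
            return 0;
    }
    return -1;                                  /* give up after 99 tries */
}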

Comment 3 Corey Marthaler 2023-05-16 15:47:06 UTC
Marking Verified:Tested in the latest rpms.

kernel-5.14.0-306.el9    BUILT: Sat Apr 29 05:45:15 PM CEST 2023
lvm2-2.03.21-1.el9    BUILT: Fri Apr 21 02:33:33 PM CEST 2023
lvm2-libs-2.03.21-1.el9    BUILT: Fri Apr 21 02:33:33 PM CEST 2023



[root@grant-01 ~]# pvscan
  PV /dev/sdb1   VG foo             lvm2 [<447.13 GiB / <447.13 GiB free]
  PV /dev/sdc1   VG foo             lvm2 [<447.13 GiB / <447.13 GiB free]
  PV /dev/sdd1   VG foo             lvm2 [<447.13 GiB / <447.13 GiB free]
  Total: 3 [<1.31 TiB] / in use: 3 [<1.31 TiB] / in no VG: 0 [0   ]
[root@grant-01 ~]# vgimportclone -n foo /dev/sdb1 /dev/sdc1 /dev/sdd1
[root@grant-01 ~]# pvscan
  PV /dev/sdb1   VG foo1            lvm2 [<447.13 GiB / <447.13 GiB free]
  PV /dev/sdc1   VG foo1            lvm2 [<447.13 GiB / <447.13 GiB free]
  PV /dev/sdd1   VG foo1            lvm2 [<447.13 GiB / <447.13 GiB free]
  Total: 3 [<1.31 TiB] / in use: 3 [<1.31 TiB] / in no VG: 0 [0   ]

Comment 7 Corey Marthaler 2023-05-26 00:02:24 UTC
Works on the latest build as well. Marking VERIFIED.

kernel-5.14.0-306.el9    BUILT: Sat Apr 29 05:45:15 PM CEST 2023
lvm2-2.03.21-2.el9    BUILT: Thu May 25 12:03:04 AM CEST 2023
lvm2-libs-2.03.21-2.el9    BUILT: Thu May 25 12:03:04 AM CEST 2023


SCENARIO - vgimportclone_vdo_using_existing_vg_name:  Create a vdo volume on a VG and then vgimportclone that VG using the current existing name (bug 2168703)

Creating VG on grant-01.6a2m.lab.eng.bos.redhat.com using PV(s) /dev/nvme0n1p1 /dev/sdd1 /dev/sde1 /dev/sdg1 /dev/sdh1 /dev/sdc1
vgcreate    vdo_sanity /dev/nvme0n1p1 /dev/sdd1 /dev/sde1 /dev/sdg1 /dev/sdh1 /dev/sdc1
  Volume group "vdo_sanity" successfully created
creating VG on grant-01.6a2m.lab.eng.bos.redhat.com using PV(s) /dev/nvme1n1p1
vgcreate    vdo_sanity1 /dev/nvme1n1p1
  Volume group "vdo_sanity1" successfully created

lvcreate --yes --type linear -n vdo_pool  -L 10G vdo_sanity  
Logical volume "vdo_pool" created.
lvconvert --yes --type vdo-pool -n vdo_lv  -V100G vdo_sanity/vdo_pool
The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdo_lv" created.
  Converted vdo_sanity/vdo_pool to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.
WARNING: Converting logical volume vdo_sanity/vdo_pool to VDO pool volume with formatting.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
deactivating LV vdo_lv on grant-01.6a2m.lab.eng.bos.redhat.com
lvchange --yes -an  vdo_sanity/vdo_lv

vgimportclone -n vdo_sanity /dev/nvme0n1p1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdg1 /dev/sdh1
lvremove  -f vdo_sanity/vdo_lv
Logical volume "vdo_lv" successfully removed.

Comment 10 errata-xmlrpc 2023-11-07 08:53:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:6633

