Bug 1685257 - "Internal error: #LVs (8) != #visible LVs (3) + #snapshots (1) + #internal LVs (5) in VG" when trying to uncache or splitcache cache origin volume
Summary: "Internal error: #LVs (8) != #visible LVs (3) + #snapshots (1) + #internal LV...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-03-04 19:17 UTC by Corey Marthaler
Modified: 2021-09-07 11:56 UTC
CC List: 8 users

Fixed In Version: lvm2-2.03.07-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-04-28 16:58:57 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
verbose lvconvert attempt (208.73 KB, text/plain), 2019-03-04 19:18 UTC, Corey Marthaler
another verbose lvconvert attempt (264.27 KB, text/plain), 2019-03-04 19:19 UTC, Corey Marthaler


Links
Red Hat Issue Tracker RHELPLAN-30361 (last updated 2021-09-07 11:55:08 UTC)
Red Hat Product Errata RHEA-2020:1881 (last updated 2020-04-28 16:59:11 UTC)

Description Corey Marthaler 2019-03-04 19:17:20 UTC
Description of problem:
This scenario (stacked on top of VDO PVs) attempts to merge an in-use snapshot, so the merge is deferred until the next activation/reboot. After that activation completes, it tears down the cache origin with either lvconvert --splitcache or lvconvert --uncache.
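
For reference, the setup and failure path condense into the following reproducer (a sketch assembled from the scenario transcript below; the VG/LV names, sizes and @slow/@fast tags are simply the ones used in this run, and the mkfs/mount/umount steps are omitted):

# build a cached LV: origin on the slow PV, pool + metadata on the fast PV
lvcreate --wipesignatures y -L 4G -n corigin cache_sanity @slow
lvcreate -L 2G -n pool cache_sanity @fast
lvcreate -L 12M -n pool_meta cache_sanity @fast
lvconvert --yes --type cache-pool --cachepolicy mq --cachemode writeback -c 64 --poolmetadata cache_sanity/pool_meta cache_sanity/pool
lvconvert --yes --type cache --cachepool cache_sanity/pool cache_sanity/corigin
# snapshot the cached origin and start a merge while the snapshot is in use,
# so the merge is deferred to the next activation
lvcreate -s /dev/cache_sanity/corigin -c 128 -n merge_reboot -L 500M
lvconvert --merge cache_sanity/merge_reboot --yes
# simulate the reboot: deactivate, reactivate, refresh
vgchange -an cache_sanity
vgchange --sysinit -ay cache_sanity
vgchange --refresh cache_sanity
# now try to tear down the cache; this is where the internal error appears
lvconvert --splitcache cache_sanity/corigin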


SCENARIO - [reboot_before_cache_snap_merge_starts]
Attempt to merge an inuse snapshot, then "reboot" the machine before the merge can take place

*** Cache info for this scenario ***
*  origin (slow):  /dev/mapper/vPV10
*  pool (fast):    /dev/mapper/vPV9
************************************

Adding "slow" and "fast" tags to corresponding pvs
Create origin (slow) volume
lvcreate --wipesignatures y  -L 4G -n corigin cache_sanity @slow

Create cache data and cache metadata (fast) volumes
lvcreate  -L 2G -n pool cache_sanity @fast
lvcreate  -L 12M -n pool_meta cache_sanity @fast

Create cache pool volume by combining the cache data and cache metadata (fast) volumes with policy: mq  mode: writeback
lvconvert --yes --type cache-pool --cachepolicy mq --cachemode writeback -c 64 --poolmetadata cache_sanity/pool_meta cache_sanity/pool
  WARNING: Converting cache_sanity/pool and cache_sanity/pool_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Create cached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --yes --type cache --cachemetadataformat 1 --cachepool cache_sanity/pool cache_sanity/corigin

Placing an xfs filesystem on origin volume
Mounting origin volume

Making snapshot of origin volume
lvcreate  -s /dev/cache_sanity/corigin -c 128 -n merge_reboot -L 500M
Mounting snap volume

Attempt to merge snapshot cache_sanity/merge_reboot
lvconvert --merge cache_sanity/merge_reboot --yes

umount and deactivate volume group
vgchange --sysinit -ay cache_sanity
vgchange --refresh cache_sanity

Separating cache pool (lvconvert --splitcache) cache_sanity/corigin from cache origin
couldn't split cache pool volume


#device_mapper/libdm-common.c:1375          cache_sanity-merge_reboot: retaining kernel read ahead of 256 (requested 256)
#metadata/lv_manip.c:819         cache_sanity/corigin:0 is no longer a user of cache_sanity/pool.
#metadata/lv_manip.c:6064          LV pool in VG cache_sanity is now visible.
#metadata/lv_manip.c:6064          LV corigin_corig in VG cache_sanity is now visible.
#metadata/lv_manip.c:6794        Removing layer corigin_corig for corigin
#metadata/lv_manip.c:1392          Dropping snapshot merge of snapshot0 to removed origin corigin.
#metadata/lv_manip.c:1141        Remove cache_sanity/corigin:0[0] from the top of LV cache_sanity/corigin_corig:0.
#metadata/lv_manip.c:819         cache_sanity/corigin:0 is no longer a user of cache_sanity/corigin_corig.
#metadata/lv_manip.c:1241        Stack cache_sanity/corigin_corig:0[0] on LV cache_sanity/corigin:0.
#metadata/lv_manip.c:792         Adding cache_sanity/corigin_corig:0 as an user of cache_sanity/corigin.
#metadata/lv_manip.c:6074          LV pool in VG cache_sanity is now hidden.
#metadata/lv_manip.c:792         Adding cache_sanity/corigin_corig:0 as an user of cache_sanity/pool.
#metadata/lv_manip.c:6492        Updating logical volume cache_sanity/corigin on disk(s).
#metadata/pv_manip.c:417           /dev/mapper/vPV15 0:      0      3: lvol0_pmspare(0:0)
#metadata/pv_manip.c:417           /dev/mapper/vPV15 1:      3    125: merge_reboot(0:0)
#metadata/pv_manip.c:417           /dev/mapper/vPV15 2:    128 112807: NULL(0:0)
#metadata/pv_manip.c:417           /dev/mapper/vPV14 0:      0 112935: NULL(0:0)
#metadata/pv_manip.c:417           /dev/mapper/vPV13 0:      0   1024: corigin(0:0)
#metadata/pv_manip.c:417           /dev/mapper/vPV13 1:   1024 473716: NULL(0:0)
#metadata/pv_manip.c:417           /dev/mapper/vPV12 0:      0    512: pool_cdata(0:0)
#metadata/pv_manip.c:417           /dev/mapper/vPV12 1:    512      3: pool_cmeta(0:0)
#metadata/pv_manip.c:417           /dev/mapper/vPV12 2:    515 474225: NULL(0:0)
#metadata/metadata.c:2390    Internal error: #LVs (8) != #visible LVs (3) + #snapshots (1) + #internal LVs (5) in VG cache_sanity
#metadata/metadata.c:2867          <backtrace>
#metadata/lv_manip.c:6494          <backtrace>
#metadata/cache_manip.c:639           <backtrace>
#lvconvert.c:1849          <backtrace>
#toollib.c:3208          <backtrace>
#toollib.c:3668          <backtrace>
#mm/memlock.c:594           Unlock: Memlock counters: prioritized:0 locked:0 critical:0 daemon:0 suspended:0
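
The check that trips here is lvm2's VG self-consistency validation: the total number of LVs recorded in the VG must equal visible LVs plus snapshots plus internal (hidden) LVs. In this trace the right-hand side is 3 + 1 + 5 = 9 against a total of 8, which suggests that during the split one LV ends up counted in two categories (or the total is not updated), presumably as a side effect of processing the still-pending snapshot merge in the same metadata update. To see how lvm2 classifies the LVs at that point, something like the following can be used (a diagnostic sketch; lv_attr and lv_role are standard lvs reporting fields):

lvs -a -o lv_name,lv_attr,lv_role cache_sanity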


Version-Release number of selected component (if applicable):
4.18.0-75.el8.x86_64

kernel-4.18.0-75.el8    BUILT: Fri Mar  1 11:37:34 CST 2019
lvm2-2.03.02-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
lvm2-libs-2.03.02-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
lvm2-dbusd-2.03.02-6.el8    BUILT: Fri Feb 22 04:50:28 CST 2019
lvm2-lockd-2.03.02-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
boom-boot-0.9-7.el8    BUILT: Mon Jan 14 14:00:54 CST 2019
cmirror-2.03.02-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
device-mapper-1.02.155-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
device-mapper-libs-1.02.155-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
device-mapper-event-1.02.155-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
device-mapper-event-libs-1.02.155-6.el8    BUILT: Fri Feb 22 04:47:54 CST 2019
device-mapper-persistent-data-0.7.6-1.el8    BUILT: Sun Aug 12 04:21:55 CDT 2018
sanlock-3.6.0-5.el8    BUILT: Thu Dec  6 13:31:26 CST 2018
sanlock-lib-3.6.0-5.el8    BUILT: Thu Dec  6 13:31:26 CST 2018
vdo-6.2.0.293-10.el8    BUILT: Fri Dec 14 18:18:47 CST 2018
kmod-kvdo-6.2.0.293-50.el8    BUILT: Mon Feb 25 16:53:12 CST 2019


How reproducible:
Often

Comment 1 Corey Marthaler 2019-03-04 19:18:45 UTC
Created attachment 1540713 [details]
verbose lvconvert attempt

Comment 2 Corey Marthaler 2019-03-04 19:19:42 UTC
Created attachment 1540714 [details]
another verbose lvconvert attempt

Comment 3 Corey Marthaler 2019-03-04 20:28:28 UTC
Why would the "vgchange --sysinit -ay cache_sanity" be causing Buffer I/O errors?


Mar  4 14:25:20 hayes-01 qarshd[9596]: Running cmdline: lvconvert --merge cache_sanity/merge_reboot --yes
Mar  4 14:25:21 hayes-01 systemd[1]: Started qarsh Per-Connection Server (10.15.80.218:49504).
Mar  4 14:25:21 hayes-01 qarshd[9610]: Talking to peer ::ffff:10.15.80.218:49504 (IPv6)
Mar  4 14:25:21 hayes-01 qarshd[9610]: Running cmdline: umount /mnt/merge_reboot /mnt/corigin
Mar  4 14:25:21 hayes-01 kernel: XFS (dm-16): Unmounting Filesystem
Mar  4 14:25:21 hayes-01 kernel: XFS (dm-10): Unmounting Filesystem
Mar  4 14:25:21 hayes-01 systemd[1]: Started qarsh Per-Connection Server (10.15.80.218:49506).
Mar  4 14:25:21 hayes-01 qarshd[9615]: Talking to peer ::ffff:10.15.80.218:49506 (IPv6)
Mar  4 14:25:21 hayes-01 qarshd[9615]: Running cmdline: vgchange -an cache_sanity
Mar  4 14:25:21 hayes-01 lvm[4244]: No longer monitoring snapshot cache_sanity-merge_reboot.
Mar  4 14:25:22 hayes-01 systemd[1]: Started qarsh Per-Connection Server (10.15.80.218:49508).
Mar  4 14:25:22 hayes-01 qarshd[9650]: Talking to peer ::ffff:10.15.80.218:49508 (IPv6)
Mar  4 14:25:22 hayes-01 qarshd[9650]: Running cmdline: vgchange --sysinit -ay cache_sanity
Mar  4 14:25:22 hayes-01 kernel: Buffer I/O error on dev dm-16, logical block 0, async page read
Mar  4 14:25:22 hayes-01 kernel: Buffer I/O error on dev dm-16, logical block 1, async page read
Mar  4 14:25:22 hayes-01 kernel: Buffer I/O error on dev dm-16, logical block 0, async page read
Mar  4 14:25:22 hayes-01 kernel: Buffer I/O error on dev dm-16, logical block 1, async page read
Mar  4 14:25:22 hayes-01 systemd[1]: Started qarsh Per-Connection Server (10.15.80.218:49510).
Mar  4 14:25:22 hayes-01 qarshd[9707]: Talking to peer ::ffff:10.15.80.218:49510 (IPv6)
Mar  4 14:25:23 hayes-01 qarshd[9707]: Running cmdline: lvs -a -o +devices cache_sanity
Mar  4 14:25:23 hayes-01 systemd[1]: Started qarsh Per-Connection Server (10.15.80.218:49512).
Mar  4 14:25:23 hayes-01 qarshd[9713]: Talking to peer ::ffff:10.15.80.218:49512 (IPv6)
Mar  4 14:25:23 hayes-01 qarshd[9713]: Running cmdline: vgchange --refresh cache_sanity
Mar  4 14:25:23 hayes-01 systemd[1]: Started LVM2 poll daemon.
Mar  4 14:25:23 hayes-01 kernel: Buffer I/O error on dev dm-16, logical block 0, async page read
Mar  4 14:25:23 hayes-01 kernel: Buffer I/O error on dev dm-16, logical block 1, async page read
Mar  4 14:25:23 hayes-01 systemd[1]: Started qarsh Per-Connection Server (10.15.80.218:49514).
Mar  4 14:25:23 hayes-01 qarshd[9755]: Talking to peer ::ffff:10.15.80.218:49514 (IPv6)
Mar  4 14:25:24 hayes-01 qarshd[9755]: Running cmdline: lvs -a -o +devices cache_sanity
Mar  4 14:25:24 hayes-01 systemd[1]: Started qarsh Per-Connection Server (10.15.80.218:49516).
Mar  4 14:25:24 hayes-01 qarshd[9760]: Talking to peer ::ffff:10.15.80.218:49516 (IPv6)
Mar  4 14:25:24 hayes-01 qarshd[9760]: Running cmdline: lvs --noheadings -a -o lv_name --select pool_lv=pool
Mar  4 14:25:24 hayes-01 systemd[1]: Started qarsh Per-Connection Server (10.15.80.218:49518).
Mar  4 14:25:24 hayes-01 qarshd[9765]: Talking to peer ::ffff:10.15.80.218:49518 (IPv6)
Mar  4 14:25:24 hayes-01 qarshd[9765]: Running cmdline: lvconvert -vvvv --splitcache /dev/cache_sanity/corigin
Mar  4 14:25:25 hayes-01 kernel: Buffer I/O error on dev dm-16, logical block 0, async page read
Mar  4 14:25:25 hayes-01 kernel: Buffer I/O error on dev dm-16, logical block 1, async page read
Mar  4 14:25:40 hayes-01 lvmpolld[9723]: W: #011LVPOLL: PID 9748: STDERR: '  WARNING: This metadata update is NOT backed up.'
Mar  4 14:26:22 hayes-01 kernel: Buffer I/O error on dev dm-12, logical block 0, async page read
Mar  4 14:26:22 hayes-01 kernel: Buffer I/O error on dev dm-12, logical block 1, async page read
Mar  4 14:26:22 hayes-01 kernel: Buffer I/O error on dev dm-11, logical block 0, async page read
Mar  4 14:26:22 hayes-01 kernel: Buffer I/O error on dev dm-11, logical block 1, async page read
Mar  4 14:26:22 hayes-01 kernel: Buffer I/O error on dev dm-10, logical block 0, async page read
Mar  4 14:26:22 hayes-01 kernel: Buffer I/O error on dev dm-10, logical block 1, async page read

Comment 5 Zdenek Kabelac 2019-03-06 14:41:01 UTC
I assume I have fixes for this in my tree for upstreaming; I need to extend the test suite to cover the described case to be sure of that.

Comment 6 Corey Marthaler 2019-10-11 19:23:56 UTC
FWIW, still hitting this in final 8.1 regression testing.

kernel-4.18.0-147.4.el8    BUILT: Thu Oct  3 15:38:54 CDT 2019
lvm2-2.03.05-5.el8    BUILT: Thu Sep 26 01:40:57 CDT 2019
lvm2-libs-2.03.05-5.el8    BUILT: Thu Sep 26 01:40:57 CDT 2019
lvm2-dbusd-2.03.05-5.el8    BUILT: Thu Sep 26 01:43:33 CDT 2019
device-mapper-1.02.163-5.el8    BUILT: Thu Sep 26 01:40:57 CDT 2019
device-mapper-libs-1.02.163-5.el8    BUILT: Thu Sep 26 01:40:57 CDT 2019
device-mapper-event-1.02.163-5.el8    BUILT: Thu Sep 26 01:40:57 CDT 2019
device-mapper-event-libs-1.02.163-5.el8    BUILT: Thu Sep 26 01:40:57 CDT 2019
device-mapper-persistent-data-0.8.5-2.el8    BUILT: Wed Jun  5 10:28:04 CDT 2019
vdo-6.2.1.134-11.el8    BUILT: Fri Aug  2 10:39:03 CDT 2019
kmod-kvdo-6.2.1.138-57.el8    BUILT: Fri Sep 13 11:00:16 CDT 2019


SCENARIO - [reboot_before_cache_snap_merge_starts]
Attempt to merge an inuse snapshot, then "reboot" the machine before the merge can take place

*** Cache info for this scenario ***
*  origin (slow):  /dev/mapper/vPV14
*  pool (fast):    /dev/mapper/vPV13
************************************

Adding "slow" and "fast" tags to corresponding pvs
Create origin (slow) volume
lvcreate --wipesignatures y  -L 4G -n corigin cache_sanity @slow

Create cache data and cache metadata (fast) volumes
lvcreate  -L 2G -n pool cache_sanity @fast
lvcreate  -L 12M -n pool_meta cache_sanity @fast

Create cache pool volume by combining the cache data and cache metadata (fast) volumes with policy: mq  mode: writethrough
lvconvert --yes --type cache-pool --cachepolicy mq --cachemode writethrough -c 64 --poolmetadata cache_sanity/pool_meta cache_sanity/pool
  WARNING: Converting cache_sanity/pool and cache_sanity/pool_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Create cached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --yes --type cache --cachemetadataformat 1 --cachepool cache_sanity/pool cache_sanity/corigin

Placing an xfs filesystem on origin volume
Mounting origin volume

Making snapshot of origin volume
lvcreate  -s /dev/cache_sanity/corigin -c 128 -n merge_reboot -L 500M
Mounting snap volume

Attempt to merge snapshot cache_sanity/merge_reboot
lvconvert --merge cache_sanity/merge_reboot --yes

umount and deactivate volume group
vgchange --sysinit -ay cache_sanity
vgchange --refresh cache_sanity

Separating cache pool (lvconvert --splitcache) cache_sanity/corigin from cache origin
  Internal error: #LVs (8) != #visible LVs (3) + #snapshots (1) + #internal LVs (5) in VG cache_sanity
couldn't split cache pool volume

Comment 9 Corey Marthaler 2019-12-06 16:38:00 UTC
Marking verified in the latest rpms. 

kernel-4.18.0-151.el8    BUILT: Fri Nov 15 13:14:53 CST 2019
lvm2-2.03.07-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
lvm2-libs-2.03.07-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
lvm2-dbusd-2.03.07-1.el8    BUILT: Mon Dec  2 00:12:23 CST 2019
lvm2-lockd-2.03.07-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
device-mapper-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
device-mapper-libs-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
device-mapper-event-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
device-mapper-event-libs-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
device-mapper-persistent-data-0.8.5-2.el8    BUILT: Wed Jun  5 10:28:04 CDT 2019
vdo-6.2.2.24-11.el8    BUILT: Wed Oct 30 21:22:06 CDT 2019
kmod-kvdo-6.2.2.24-60.el8    BUILT: Mon Nov 11 16:14:12 CST 2019


Both "lvconvert --splitcache" and "lvconvert --uncache" no longer fail when attempted on top of cache volumes stacked on VDO PVs.


hayes-02: pvcreate  /dev/mapper/vPV15 /dev/mapper/vPV14 /dev/mapper/vPV13 /dev/mapper/vPV12
hayes-02: vgcreate   cache_sanity /dev/mapper/vPV15 /dev/mapper/vPV14 /dev/mapper/vPV13 /dev/mapper/vPV12

============================================================
Iteration 1 of 1 started at Fri Dec  6 09:48:19 CST 2019
============================================================
SCENARIO - [reboot_before_cache_snap_merge_starts]
Attempt to merge an inuse snapshot, then "reboot" the machine before the merge can take place

*** Cache info for this scenario ***
*  origin (slow):  /dev/mapper/vPV15
*  pool (fast):    /dev/mapper/vPV14
************************************

Adding "slow" and "fast" tags to corresponding pvs
Create origin (slow) volume
lvcreate --wipesignatures y  -L 4G -n corigin cache_sanity @slow

Create cache data and cache metadata (fast) volumes
lvcreate  -L 2G -n pool cache_sanity @fast
lvcreate  -L 12M -n pool_meta cache_sanity @fast

Create cache pool volume by combining the cache data and cache metadata (fast) volumes with policy: smq  mode: writeback
lvconvert --yes --type cache-pool --cachepolicy smq --cachemode writeback -c 32 --poolmetadata cache_sanity/pool_meta cache_sanity/pool
  WARNING: Converting cache_sanity/pool and cache_sanity/pool_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Create cached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --yes --type cache --cachemetadataformat 2 --cachepool cache_sanity/pool cache_sanity/corigin

Placing an xfs filesystem on origin volume
Mounting origin volume

Making snapshot of origin volume
lvcreate  -s /dev/cache_sanity/corigin -c 128 -n merge_reboot -L 500M
Mounting snap volume

Attempt to merge snapshot cache_sanity/merge_reboot
lvconvert --merge cache_sanity/merge_reboot --yes

umount and deactivate volume group
vgchange --sysinit -ay cache_sanity
vgchange --refresh cache_sanity


[root@hayes-02 ~]# lvs -a -o +devices
  LV                 VG           Attr       LSize  Pool         Origin          Data%  Meta%  Move Log Cpy%Sync Convert Devices                
  corigin            cache_sanity Cwi-a-C---  4.00g [pool_cpool] [corigin_corig] 0.56   6.71            0.00             corigin_corig(0)       
  [corigin_corig]    cache_sanity owi-aoC---  4.00g                                                                      /dev/mapper/vPV15(0)   
  [lvol0_pmspare]    cache_sanity ewi------- 12.00m                                                                      /dev/mapper/vPV15(1024)
  [pool_cpool]       cache_sanity Cwi---C---  2.00g                              0.56   6.71            0.00             pool_cpool_cdata(0)    
  [pool_cpool_cdata] cache_sanity Cwi-ao----  2.00g                                                                      /dev/mapper/vPV14(0)   
  [pool_cpool_cmeta] cache_sanity ewi-ao---- 12.00m                                                                      /dev/mapper/vPV14(512) 

[root@hayes-02 ~]# lvconvert --splitcache /dev/cache_sanity/corigin
  Flushing 0 blocks for cache cache_sanity/corigin.
  Logical volume cache_sanity/corigin is not cached and cache_sanity/pool is unused.
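
As a usage note, the split leaves cache_sanity/pool behind as an unused cache pool; it can be re-attached to the origin with the same conversion used during setup (taken from the scenario commands earlier in this report):

lvconvert --yes --type cache --cachepool cache_sanity/pool cache_sanity/corigin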



============================================================
Iteration 12 of 13 started at Fri Dec  6 10:28:06 CST 2019
============================================================
SCENARIO - [reboot_before_cache_snap_merge_starts]
Attempt to merge an inuse snapshot, then "reboot" the machine before the merge can take place

*** Cache info for this scenario ***
*  origin (slow):  /dev/mapper/vPV13
*  pool (fast):    /dev/mapper/vPV12
************************************

Adding "slow" and "fast" tags to corresponding pvs
Create origin (slow) volume
lvcreate --wipesignatures y  -L 4G -n corigin cache_sanity @slow
WARNING: xfs signature detected on /dev/cache_sanity/corigin at offset 0. Wipe it? [y/n]: [n]
  Aborted wiping of xfs.
  1 existing signature left on the device.

Create cache data and cache metadata (fast) volumes
lvcreate  -L 2G -n pool cache_sanity @fast
lvcreate  -L 12M -n pool_meta cache_sanity @fast

Create cache pool volume by combining the cache data and cache metadata (fast) volumes with policy: mq  mode: writeback
lvconvert --yes --type cache-pool --cachepolicy mq --cachemode writeback -c 64 --poolmetadata cache_sanity/pool_meta cache_sanity/pool
  WARNING: Converting cache_sanity/pool and cache_sanity/pool_meta to cache pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Create cached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --yes --type cache --cachemetadataformat 1 --cachepool cache_sanity/pool cache_sanity/corigin

Placing an xfs filesystem on origin volume
Mounting origin volume

Making snapshot of origin volume
lvcreate  -s /dev/cache_sanity/corigin -c 128 -n merge_reboot -L 500M
Mounting snap volume

Attempt to merge snapshot cache_sanity/merge_reboot
lvconvert --merge cache_sanity/merge_reboot --yes

umount and deactivate volume group
vgchange --sysinit -ay cache_sanity
vgchange --refresh cache_sanity

Uncaching cache origin (lvconvert --uncache) cache_sanity/corigin from cache origin
Removing cache origin volume cache_sanity/corigin
lvremove -f /dev/cache_sanity/corigin

Comment 11 errata-xmlrpc 2020-04-28 16:58:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:1881

