Bug 2064802
Summary: | vdo filesystem deadlock while doing looping online resizing during I/O | |||
---|---|---|---|---|
Product: | Red Hat Enterprise Linux 9 | Reporter: | Corey Marthaler <cmarthal> | |
Component: | kmod-kvdo | Assignee: | Ken Raeburn <raeburn> | |
Status: | CLOSED ERRATA | QA Contact: | Filip Suba <fsuba> | |
Severity: | high | Docs Contact: | Petr Hybl <phybl> | |
Priority: | high | |||
Version: | 9.0 | CC: | abhide, agk, awalsh, cwei, fsuba, gfialova, heinzm, jbrassow, kslaveyk, phybl, prajnoha, raeburn, zkabelac | |
Target Milestone: | rc | Keywords: | Triaged | |
Target Release: | --- | |||
Hardware: | x86_64 | |||
OS: | Linux | |||
Whiteboard: | ||||
Fixed In Version: | 8.2.0.2 | Doc Type: | Bug Fix | |
Doc Text: |
.VDO journal writes no longer stall after a suspend and resume cycle
Previously, when a device-mapper suspend operation was performed on a VDO device and the device was then resumed, some recovery journal blocks could still be marked as waiting for metadata updates before they could be reused, even though those updates had already been made. When enough journal entries were written for the journal to wrap around to such a block, the block never became available, and journal writes stopped while waiting for it. Consequently, operations that included a suspend and resume cycle could leave the device frozen after some further journal updates. How many updates it took was unpredictable, because it depended on previous allocation patterns within VDO and on the incoming write and discard patterns. With this update, the internal data structure state is reset after the suspend cycle saves data to storage, and the lockups no longer occur. (A simplified illustration of the failure mode follows the field table below.)
|
Story Points: | --- | |
Clone Of: | ||||
: | 2109047 2119143 | Environment: ||
Last Closed: | 2022-11-15 11:18:52 UTC | Type: | Bug | |
Regression: | --- | Mount Type: | --- | |
Documentation: | --- | CRM: | ||
Verified Versions: | Category: | --- | ||
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
Cloudforms Team: | --- | Target Upstream Version: | ||
Embargoed: | ||||
Bug Depends On: | ||||
Bug Blocks: | 2109047, 2119143 |
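
The following is a minimal, illustrative sketch of the failure mode described in the Doc Text above. It is not VDO's actual recovery-journal code; the class names, the ring size, and the lock-count bookkeeping are simplifications invented for this example. It only shows why a stale "waiting for metadata" count on one journal block stalls all journal writes once the ring wraps back around to that block, and why resetting that state after a suspend (which has already flushed everything to storage) avoids the stall.

```python
# Illustrative sketch only (not VDO's actual code): a ring of recovery-journal
# blocks where each block must wait for its outstanding metadata updates
# ("locks" dropping to zero) before it can be reused.  A suspend/resume that
# flushes everything to storage but fails to clear the stale lock counts
# leaves one block permanently "busy", so the journal stalls as soon as it
# wraps back around to that block.

RING_SIZE = 4

class JournalBlock:
    def __init__(self, index):
        self.index = index
        self.locks = 0          # outstanding metadata updates referencing this block

    def reusable(self):
        return self.locks == 0

class RecoveryJournal:
    def __init__(self):
        self.blocks = [JournalBlock(i) for i in range(RING_SIZE)]
        self.head = 0

    def append_entries(self, metadata_updates):
        block = self.blocks[self.head % RING_SIZE]
        if not block.reusable():
            # In the real driver the writer would wait for the block instead of
            # raising; with a stale lock count that wait never ends.
            raise RuntimeError(f"journal stalled waiting on block {block.index}")
        block.locks = metadata_updates
        self.head += 1
        return block

    def metadata_written(self, block):
        block.locks -= 1

    def suspend_resume(self, reset_lock_state):
        # A suspend flushes all outstanding metadata to storage, so every lock
        # could be released; the fix makes the in-memory state reflect that.
        if reset_lock_state:
            for block in self.blocks:
                block.locks = 0

journal = RecoveryJournal()
b = journal.append_entries(metadata_updates=2)
journal.metadata_written(b)                       # one of the two updates completes
journal.suspend_resume(reset_lock_state=False)    # buggy behaviour: stale lock remains

try:
    for _ in range(RING_SIZE):                    # wrap the ring back around to block 0
        journal.append_entries(metadata_updates=0)
except RuntimeError as err:
    print(err)                                    # journal stalled waiting on block 0

journal.suspend_resume(reset_lock_state=True)     # fixed behaviour: state is reset
journal.append_entries(metadata_updates=0)        # now succeeds
print("journal advanced after lock state was reset")
```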
Description
Corey Marthaler
2022-03-16 15:18:59 UTC
Here’s a deadlock and trace with an XFS filesystem being unmounted.

kernel-4.18.0-378.el8 BUILT: Fri Apr 1 17:57:23 CDT 2022
lvm2-2.03.14-3.el8 BUILT: Tue Jan 4 14:54:16 CST 2022
lvm2-libs-2.03.14-3.el8 BUILT: Tue Jan 4 14:54:16 CST 2022
vdo-6.2.6.14-14.el8 BUILT: Fri Feb 11 14:43:08 CST 2022
kmod-kvdo-6.2.6.14-84.el8 BUILT: Tue Mar 22 07:41:18 CDT 2022

Apr 12 13:58:25 hayes-03 qarshd[39247]: Running cmdline: umount /mnt/vdo_lv
Apr 12 13:58:25 hayes-03 systemd[1]: mnt-vdo_lv.mount: Succeeded.
Apr 12 14:01:49 hayes-03 kernel: INFO: task kworker/u81:0:37761 blocked for more than 120 seconds.
Apr 12 14:01:49 hayes-03 kernel: Tainted: G O --------- - - 4.18.0-378.el8.x86_64 #1
Apr 12 14:01:49 hayes-03 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 12 14:01:49 hayes-03 kernel: task:kworker/u81:0 state:D stack: 0 pid:37761 ppid: 2 flags:0x80004080
Apr 12 14:01:49 hayes-03 kernel: Workqueue: writeback wb_workfn (flush-253:2)
Apr 12 14:01:49 hayes-03 kernel: Call Trace:
Apr 12 14:01:49 hayes-03 kernel: __schedule+0x2d1/0x830
Apr 12 14:01:49 hayes-03 kernel: ? finish_wait+0x80/0x80
Apr 12 14:01:49 hayes-03 kernel: schedule+0x35/0xa0
Apr 12 14:01:49 hayes-03 kernel: io_schedule+0x12/0x40
Apr 12 14:01:49 hayes-03 kernel: limiterWaitForOneFree+0xbc/0xf0 [kvdo]
Apr 12 14:01:49 hayes-03 kernel: ? finish_wait+0x80/0x80
Apr 12 14:01:49 hayes-03 kernel: kvdoMapBio+0xc8/0x2a0 [kvdo]
Apr 12 14:01:49 hayes-03 kernel: __map_bio+0x4c/0x210 [dm_mod]
Apr 12 14:01:49 hayes-03 kernel: __split_and_process_non_flush+0x1d8/0x240 [dm_mod]
Apr 12 14:01:49 hayes-03 kernel: dm_make_request+0x12c/0x380 [dm_mod]
Apr 12 14:01:49 hayes-03 kernel: generic_make_request+0x25b/0x350
Apr 12 14:01:49 hayes-03 kernel: submit_bio+0x3c/0x160
Apr 12 14:01:49 hayes-03 kernel: iomap_writepage_map+0x509/0x670
Apr 12 14:01:49 hayes-03 kernel: write_cache_pages+0x197/0x420
Apr 12 14:01:49 hayes-03 kernel: ? iomap_invalidatepage+0xe0/0xe0
Apr 12 14:01:49 hayes-03 kernel: ? blk_queue_enter+0xdf/0x1f0
Apr 12 14:01:49 hayes-03 kernel: iomap_writepages+0x1c/0x40
Apr 12 14:01:49 hayes-03 kernel: xfs_vm_writepages+0x7e/0xb0 [xfs]
Apr 12 14:01:49 hayes-03 kernel: do_writepages+0xc2/0x1c0
Apr 12 14:01:49 hayes-03 kernel: __writeback_single_inode+0x39/0x2f0
Apr 12 14:01:49 hayes-03 kernel: writeback_sb_inodes+0x1e6/0x450
Apr 12 14:01:50 hayes-03 kernel: __writeback_inodes_wb+0x5f/0xc0
Apr 12 14:01:50 hayes-03 kernel: wb_writeback+0x247/0x2e0
Apr 12 14:01:50 hayes-03 kernel: ? get_nr_inodes+0x35/0x50
Apr 12 14:01:50 hayes-03 kernel: wb_workfn+0x37c/0x4d0
Apr 12 14:01:50 hayes-03 kernel: ? __switch_to_asm+0x35/0x70
Apr 12 14:01:50 hayes-03 kernel: ? __switch_to_asm+0x41/0x70
Apr 12 14:01:50 hayes-03 kernel: ? __switch_to_asm+0x35/0x70
Apr 12 14:01:50 hayes-03 kernel: ? __switch_to_asm+0x41/0x70
Apr 12 14:01:50 hayes-03 kernel: ? __switch_to_asm+0x35/0x70
Apr 12 14:01:50 hayes-03 kernel: ? __switch_to_asm+0x41/0x70
Apr 12 14:01:50 hayes-03 kernel: ? __switch_to_asm+0x35/0x70
Apr 12 14:01:50 hayes-03 kernel: ? __switch_to_asm+0x41/0x70
Apr 12 14:01:50 hayes-03 kernel: process_one_work+0x1a7/0x360
Apr 12 14:01:50 hayes-03 kernel: ? create_worker+0x1a0/0x1a0
Apr 12 14:01:50 hayes-03 kernel: worker_thread+0x30/0x390
Apr 12 14:01:50 hayes-03 kernel: ? create_worker+0x1a0/0x1a0
Apr 12 14:01:50 hayes-03 kernel: kthread+0x10a/0x120
Apr 12 14:01:50 hayes-03 kernel: ? set_kthread_struct+0x40/0x40
Apr 12 14:01:50 hayes-03 kernel: ret_from_fork+0x35/0x40
Apr 12 14:01:50 hayes-03 kernel: INFO: task kworker/36:0:38305 blocked for more than 120 seconds.
Apr 12 14:01:50 hayes-03 kernel: Tainted: G O --------- - - 4.18.0-378.el8.x86_64 #1
Apr 12 14:01:50 hayes-03 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 12 14:01:50 hayes-03 kernel: task:kworker/36:0 state:D stack: 0 pid:38305 ppid: 2 flags:0x80004080
Apr 12 14:01:50 hayes-03 kernel: Workqueue: xfs-sync/dm-2 xfs_log_worker [xfs]
Apr 12 14:01:50 hayes-03 kernel: Call Trace:
Apr 12 14:01:50 hayes-03 kernel: __schedule+0x2d1/0x830
Apr 12 14:01:50 hayes-03 kernel: ? __switch_to_asm+0x41/0x70
Apr 12 14:01:50 hayes-03 kernel: ? crc_22+0x1e/0x1e [crc32c_intel]
Apr 12 14:01:50 hayes-03 kernel: ? finish_wait+0x80/0x80
Apr 12 14:01:50 hayes-03 kernel: schedule+0x35/0xa0
Apr 12 14:01:50 hayes-03 kernel: io_schedule+0x12/0x40
Apr 12 14:01:50 hayes-03 kernel: limiterWaitForOneFree+0xbc/0xf0 [kvdo]
Apr 12 14:01:50 hayes-03 kernel: ? finish_wait+0x80/0x80
Apr 12 14:01:50 hayes-03 kernel: kvdoMapBio+0xc8/0x2a0 [kvdo]
Apr 12 14:01:50 hayes-03 kernel: __map_bio+0x4c/0x210 [dm_mod]
Apr 12 14:01:50 hayes-03 kernel: __split_and_process_non_flush+0x1d8/0x240 [dm_mod]
Apr 12 14:01:50 hayes-03 kernel: dm_make_request+0x12c/0x380 [dm_mod]
Apr 12 14:01:50 hayes-03 kernel: generic_make_request+0x25b/0x350
Apr 12 14:01:50 hayes-03 kernel: ? bio_add_page+0x42/0x50
Apr 12 14:01:50 hayes-03 kernel: submit_bio+0x3c/0x160
Apr 12 14:01:50 hayes-03 kernel: xlog_state_release_iclog+0x6e/0x80 [xfs]
Apr 12 14:01:50 hayes-03 kernel: xfs_log_force+0x129/0x1c0 [xfs]
Apr 12 14:01:50 hayes-03 kernel: xfs_log_worker+0x35/0x60 [xfs]
Apr 12 14:01:50 hayes-03 kernel: process_one_work+0x1a7/0x360
Apr 12 14:01:50 hayes-03 kernel: ? create_worker+0x1a0/0x1a0
Apr 12 14:01:50 hayes-03 kernel: worker_thread+0x30/0x390
Apr 12 14:01:50 hayes-03 kernel: ? create_worker+0x1a0/0x1a0
Apr 12 14:01:50 hayes-03 kernel: kthread+0x10a/0x120
Apr 12 14:01:50 hayes-03 kernel: ? set_kthread_struct+0x40/0x40
Apr 12 14:01:50 hayes-03 kernel: ret_from_fork+0x35/0x40
Apr 12 14:01:50 hayes-03 kernel: INFO: task umount:39248 blocked for more than 120 seconds.
Apr 12 14:01:50 hayes-03 kernel: Tainted: G O --------- - - 4.18.0-378.el8.x86_64 #1
Apr 12 14:01:50 hayes-03 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 12 14:01:50 hayes-03 kernel: task:umount state:D stack: 0 pid:39248 ppid: 39247 flags:0x00004080
Apr 12 14:01:50 hayes-03 kernel: Call Trace:
Apr 12 14:01:50 hayes-03 kernel: __schedule+0x2d1/0x830
Apr 12 14:01:50 hayes-03 kernel: ? cpumask_next+0x17/0x20
Apr 12 14:01:50 hayes-03 kernel: ? mnt_get_count+0x39/0x50
Apr 12 14:01:50 hayes-03 kernel: schedule+0x35/0xa0
Apr 12 14:01:50 hayes-03 kernel: rwsem_down_write_slowpath+0x308/0x5c0
Apr 12 14:01:50 hayes-03 kernel: ? fsnotify_grab_connector+0x3c/0x60
Apr 12 14:01:50 hayes-03 kernel: deactivate_super+0x43/0x50
Apr 12 14:01:50 hayes-03 kernel: cleanup_mnt+0x3b/0x70
Apr 12 14:01:50 hayes-03 kernel: task_work_run+0x8a/0xb0
Apr 12 14:01:50 hayes-03 kernel: exit_to_usermode_loop+0xeb/0xf0
Apr 12 14:01:50 hayes-03 kernel: do_syscall_64+0x198/0x1a0
Apr 12 14:01:50 hayes-03 kernel: entry_SYSCALL_64_after_hwframe+0x65/0xca
Apr 12 14:01:50 hayes-03 kernel: RIP: 0033:0x7f2d8ea10dab
Apr 12 14:01:50 hayes-03 kernel: Code: Unable to access opcode bytes at RIP 0x7f2d8ea10d81.
Apr 12 14:01:50 hayes-03 kernel: RSP: 002b:00007fffb7b9cac8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
Apr 12 14:01:50 hayes-03 kernel: RAX: 0000000000000000 RBX: 00005560393e9460 RCX: 00007f2d8ea10dab
Apr 12 14:01:50 hayes-03 kernel: RDX: 0000000000000001 RSI: 0000000000000000 RDI: 00005560393e9640
Apr 12 14:01:50 hayes-03 kernel: RBP: 0000000000000000 R08: 00005560393e9660 R09: 00007f2d8eb57820
Apr 12 14:01:50 hayes-03 kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00005560393e9640
Apr 12 14:01:50 hayes-03 kernel: R13: 00007f2d8f881184 R14: 0000000000000000 R15: 00000000ffffffff
Apr 12 14:03:52 hayes-03 kernel: INFO: task kworker/u81:0:37761 blocked for more than 120 seconds.

If there’s been a lot of data written into cache memory, and the file system flushes it out all at once, sometimes it takes a while for VDO to process all the data being sent to it, especially if the VDO configuration hasn’t been tuned for performance on the particular hardware. Threads waiting for the data to finish writing, like the umount process, will block, and if there’s enough data, they may block for long enough that the kernel’s 120-second timer fires and logs the complaints you see here.

If you use “iostat” and “top” while this is happening, do they indicate activity at the VDO level? If so, I would expect the umount to complete if you wait long enough. Or is this a real, permanent lockup with no system activity?

Can you tell me something about the system you’re running the test on? How much memory has it got? Are the drives you’re using flash storage or spinning HDDs?

Adding a note that this still exists in our testing, since this bug is marked MODIFIED with a Fixed In version listed.

kernel-5.14.0-130.el9 BUILT: Fri Jul 15 07:31:56 AM CDT 2022
lvm2-2.03.16-2.el9 BUILT: Thu Jul 14 11:45:18 AM CDT 2022
lvm2-libs-2.03.16-2.el9 BUILT: Thu Jul 14 11:45:18 AM CDT 2022
vdo-8.1.1.360-1.el9 BUILT: Sat Feb 12 11:34:09 PM CST 2022
kmod-kvdo-8.1.1.371-41.el9 BUILT: Sat Jul 16 03:39:21 PM CDT 2022

Jul 26 11:24:54 hayes-03 kernel: INFO: task kworker/18:0:192809 blocked for more than 122 seconds.
Jul 26 11:24:54 hayes-03 kernel: Tainted: G O --------- --- 5.14.0-130.el9.x86_64 #1
Jul 26 11:24:54 hayes-03 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 26 11:24:54 hayes-03 kernel: task:kworker/18:0 state:D stack: 0 pid:192809 ppid: 2 flags:0x00004000
Jul 26 11:24:54 hayes-03 kernel: Workqueue: xfs-conv/dm-3 xfs_end_io [xfs]
Jul 26 11:24:54 hayes-03 kernel: Call Trace:
Jul 26 11:24:54 hayes-03 kernel: __schedule+0x206/0x580
Jul 26 11:24:54 hayes-03 kernel: schedule+0x43/0xa0
Jul 26 11:24:54 hayes-03 kernel: rwsem_down_write_slowpath+0x27b/0x4c0

Yes, the hopefully-fixed version is kmod-kvdo-8.2.0.2 or later.
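
Regarding the limiterWaitForOneFree frames in the traces above: the sketch below is a simplified, illustrative stand-in, not kvdo's implementation (the names and the limit value are made up), for the kind of cap on in-flight requests the earlier comment describes. Waiters pile up during a large flush and can look hung, but they drain as long as completions keep freeing slots; in this bug the stalled journal apparently meant in-flight requests never completed, so the waiters never woke.

```python
# Illustrative sketch (not kvdo's code): submitters block until one of a fixed
# number of in-flight slots frees.  Under a burst the submitters sleep (the
# io_schedule frames in the traces), but everything completes as long as the
# device keeps finishing I/O and releasing slots.

import threading
import time

class Limiter:
    def __init__(self, limit):
        self._slots = threading.Semaphore(limit)

    def wait_for_one_free(self):
        self._slots.acquire()        # blocks until an in-flight request completes

    def release(self):
        self._slots.release()

limiter = Limiter(limit=4)
completed = []

def submit_bio(i):
    limiter.wait_for_one_free()
    time.sleep(0.01)                 # stand-in for the device doing the write
    completed.append(i)
    limiter.release()                # frees a slot, waking one waiter

threads = [threading.Thread(target=submit_bio, args=(i,)) for i in range(32)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{len(completed)} bios completed despite submitters briefly blocking")
```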
These test scenarios now run without issue on the latest rpms. Marking VERIFIED.

kernel-5.14.0-138.el9 BUILT: Sun Jul 31 06:20:38 AM CDT 2022
vdo-8.2.0.2-1.el9 BUILT: Tue Jul 19 02:28:15 PM CDT 2022
kmod-kvdo-8.2.0.2-41.el9 BUILT: Thu Jul 28 05:24:49 PM CDT 2022
lvm2-2.03.16-3.el9 BUILT: Mon Aug 1 04:42:35 AM CDT 2022
lvm2-libs-2.03.16-3.el9 BUILT: Mon Aug 1 04:42:35 AM CDT 2022

============================================================
Iteration 14 of 14 started at Wed Aug 10 14:13:03 2022
============================================================

SCENARIO - open_ext4_fsadm_vdo_resize: Create an EXT4 filesysem on VDO, mount it, and then attempt multiple online fsadm resizes with data checking (bug 2064802)
adding entry to the devices file for /dev/sde1
creating PV on hayes-03 using device /dev/sde1
pvcreate --yes -ff /dev/sde1
Physical volume "/dev/sde1" successfully created.
adding entry to the devices file for /dev/sdd1
creating PV on hayes-03 using device /dev/sdd1
pvcreate --yes -ff /dev/sdd1
Physical volume "/dev/sdd1" successfully created.
adding entry to the devices file for /dev/sdh1
creating PV on hayes-03 using device /dev/sdh1
pvcreate --yes -ff /dev/sdh1
Physical volume "/dev/sdh1" successfully created.
adding entry to the devices file for /dev/sdi1
creating PV on hayes-03 using device /dev/sdi1
pvcreate --yes -ff /dev/sdi1
Physical volume "/dev/sdi1" successfully created.
creating VG on hayes-03 using PV(s) /dev/sde1 /dev/sdd1 /dev/sdh1 /dev/sdi1
vgcreate vdo_sanity /dev/sde1 /dev/sdd1 /dev/sdh1 /dev/sdi1
Volume group "vdo_sanity" successfully created
lvcreate --yes --type linear -n vdo_pool -L 50G vdo_sanity
Wiping vdo signature on /dev/vdo_sanity/vdo_pool.
Logical volume "vdo_pool" created.
lvconvert --yes --type vdo-pool -n vdo_lv -V 100G vdo_sanity/vdo_pool
The VDO volume can address 46 GB in 23 data slabs, each 2 GB.
It can grow to address at most 16 TB of physical storage in 8192 slabs.
If a larger maximum size might be needed, use bigger slabs.
Logical volume "vdo_lv" created.
Converted vdo_sanity/vdo_pool to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.
WARNING: Converting logical volume vdo_sanity/vdo_pool to VDO pool volume with formating.
THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
mkfs --type ext4 -F /dev/vdo_sanity/vdo_lv
mount /dev/vdo_sanity/vdo_lv /mnt/vdo_lv
Writing files to /mnt/vdo_lv
/usr/tests/sts-rhel8.7/bin/checkit -w /mnt/vdo_lv -f /tmp/Filesystem.1314273 -n 5000
/usr/tests/sts-rhel8.7/bin/checkit -w /mnt/vdo_lv -f /tmp/Filesystem.1314273 -v
Starting dd io to vdo fs to be resized
Attempt to resize the open vdo filesystem multiple times with lvextend/fsadm on hayes-03
+++ itr 1 +++
[...]
+++ itr 4 +++
Adding additional space to vdo_sanity/vdo_lv on hayes-03
lvextend --yes --resizefs -L +500M vdo_sanity/vdo_lv
Size of logical volume vdo_sanity/vdo_lv changed from 101.46 GiB (25975 extents) to 101.95 GiB (26100 extents).
Logical volume vdo_sanity/vdo_lv successfully resized.
Filesystem at /dev/mapper/vdo_sanity-vdo_lv is mounted on /mnt/vdo_lv; on-line resizing required
old_desc_blocks = 13, new_desc_blocks = 13
The filesystem on /dev/mapper/vdo_sanity-vdo_lv is now 26726400 (4k) blocks long.
resize2fs 1.46.5 (30-Dec-2021)
PRE:106393600.0 POST:106905600.0
PRE:{'104137560'} POST:{'104641336'}
Checking files from /mnt/vdo_lv
/usr/tests/sts-rhel8.7/bin/checkit -w /mnt/vdo_lv -f /tmp/Filesystem.1314273 -v
checkit starting with: VERIFY
Verify XIOR Stream: /tmp/Filesystem.1314273
Working dir: /mnt/vdo_lv
umount /mnt/vdo_lv
lvremove -f vdo_sanity/vdo_lv
Logical volume "vdo_lv" successfully removed.
removing vg vdo_sanity from hayes-03
Volume group "vdo_sanity" successfully removed
removing pv /dev/sde1 on hayes-03
Labels on physical volume "/dev/sde1" successfully wiped.
removing entry from the devices file for /dev/sde1
removing pv /dev/sdd1 on hayes-03
Labels on physical volume "/dev/sdd1" successfully wiped.
removing entry from the devices file for /dev/sdd1
removing pv /dev/sdh1 on hayes-03
Labels on physical volume "/dev/sdh1" successfully wiped.
removing entry from the devices file for /dev/sdh1
removing pv /dev/sdi1 on hayes-03
Labels on physical volume "/dev/sdi1" successfully wiped.
removing entry from the devices file for /dev/sdi1
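
As a quick cross-check of the resize figures in the run above, the small illustrative script below works through the arithmetic; the 4 MiB extent size and 4 KiB filesystem block size are inferred from the numbers printed by lvextend, resize2fs, and the test harness rather than stated in the log.

```python
# Illustrative consistency check of the resize numbers reported above.
MiB = 1024 * 1024
extent = 4 * MiB          # inferred: 25975 extents = 101.46 GiB
fs_block = 4 * 1024       # resize2fs reports "(4k) blocks"

# lvextend -L +500M grew the LV from 25975 to 26100 extents:
added_extents = 26100 - 25975
assert added_extents * extent == 500 * MiB

# 500 MiB is 128000 additional 4 KiB blocks, and the new filesystem size
# matches the new LV size of 101.95 GiB:
assert (500 * MiB) // fs_block == 128000
assert 26100 * extent == 26726400 * fs_block

# The harness's PRE/POST figures are in KiB and differ by the same 500 MiB:
assert (106905600 - 106393600) * 1024 == 500 * MiB

# lvconvert's slab message is also self-consistent (2 GB slabs):
assert 23 * 2 == 46            # "can address 46 GB in 23 data slabs"
assert 8192 * 2 == 16 * 1024   # "at most 16 TB ... in 8192 slabs"
print("resize arithmetic is consistent")
```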
SCENARIO - open_xfs_fsadm_vdo_resize: Create an XFS filesysem on VDO, mount it, and then attempt multiple online fsadm resizes with data checking (bug 2064802)
adding entry to the devices file for /dev/sde1
creating PV on hayes-03 using device /dev/sde1
pvcreate --yes -ff /dev/sde1
Physical volume "/dev/sde1" successfully created.
adding entry to the devices file for /dev/sdd1
creating PV on hayes-03 using device /dev/sdd1
pvcreate --yes -ff /dev/sdd1
Physical volume "/dev/sdd1" successfully created.
adding entry to the devices file for /dev/sdh1
creating PV on hayes-03 using device /dev/sdh1
pvcreate --yes -ff /dev/sdh1
Physical volume "/dev/sdh1" successfully created.
adding entry to the devices file for /dev/sdi1
creating PV on hayes-03 using device /dev/sdi1
pvcreate --yes -ff /dev/sdi1
Physical volume "/dev/sdi1" successfully created.
creating VG on hayes-03 using PV(s) /dev/sde1 /dev/sdd1 /dev/sdh1 /dev/sdi1
vgcreate vdo_sanity /dev/sde1 /dev/sdd1 /dev/sdh1 /dev/sdi1
Volume group "vdo_sanity" successfully created
lvcreate --yes --type linear -n vdo_pool -L 50G vdo_sanity
Wiping vdo signature on /dev/vdo_sanity/vdo_pool.
Logical volume "vdo_pool" created.
lvconvert --yes --type vdo-pool -n vdo_lv -V 100G vdo_sanity/vdo_pool
The VDO volume can address 46 GB in 23 data slabs, each 2 GB.
It can grow to address at most 16 TB of physical storage in 8192 slabs.
If a larger maximum size might be needed, use bigger slabs.
Logical volume "vdo_lv" created.
Converted vdo_sanity/vdo_pool to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.
WARNING: Converting logical volume vdo_sanity/vdo_pool to VDO pool volume with formating.
THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
mkfs --type xfs -f /dev/vdo_sanity/vdo_lv
mount /dev/vdo_sanity/vdo_lv /mnt/vdo_lv
Writing files to /mnt/vdo_lv
/usr/tests/sts-rhel8.7/bin/checkit -w /mnt/vdo_lv -f /tmp/Filesystem.1314273 -n 5000
/usr/tests/sts-rhel8.7/bin/checkit -w /mnt/vdo_lv -f /tmp/Filesystem.1314273 -v
Starting dd io to vdo fs to be resized
Attempt to resize the open vdo filesystem multiple times with lvextend/fsadm on hayes-03
+++ itr 1 +++
[...]
+++ itr 4 +++
Adding additional space to vdo_sanity/vdo_lv on hayes-03
lvextend --yes --resizefs -L +500M vdo_sanity/vdo_lv
40000+0 records in
40000+0 records out
41943040000 bytes (42 GB, 39 GiB) copied, 186.615 s, 225 MB/s
Size of logical volume vdo_sanity/vdo_lv changed from 101.46 GiB (25975 extents) to 101.95 GiB (26100 extents).
Logical volume vdo_sanity/vdo_lv successfully resized.
meta-data=/dev/mapper/vdo_sanity-vdo_lv isize=512    agcount=5, agsize=6553600 blks
         =                              sectsz=4096  attr=2, projid32bit=1
         =                              crc=1        finobt=1, sparse=1, rmapbt=0
         =                              reflink=1    bigtime=1 inobtcount=1
data     =                              bsize=4096   blocks=26598400, imaxpct=25
         =                              sunit=0      swidth=0 blks
naming   =version 2                     bsize=4096   ascii-ci=0, ftype=1
log      =internal log                  bsize=4096   blocks=12800, version=2
         =                              sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                          extsz=4096   blocks=0, rtextents=0
data blocks changed from 26598400 to 26726400
PRE:106393600.0 POST:106905600.0
PRE:{'106342400'} POST:{'106854400'}
Checking files from /mnt/vdo_lv
/usr/tests/sts-rhel8.7/bin/checkit -w /mnt/vdo_lv -f /tmp/Filesystem.1314273 -v
checkit starting with: VERIFY
Verify XIOR Stream: /tmp/Filesystem.1314273
Working dir: /mnt/vdo_lv
umount /mnt/vdo_lv
lvremove -f vdo_sanity/vdo_lv
Logical volume "vdo_lv" successfully removed.
removing vg vdo_sanity from hayes-03
Volume group "vdo_sanity" successfully removed
removing pv /dev/sde1 on hayes-03
Labels on physical volume "/dev/sde1" successfully wiped.
removing entry from the devices file for /dev/sde1
removing pv /dev/sdd1 on hayes-03
Labels on physical volume "/dev/sdd1" successfully wiped.
removing entry from the devices file for /dev/sdd1
removing pv /dev/sdh1 on hayes-03
Labels on physical volume "/dev/sdh1" successfully wiped.
removing entry from the devices file for /dev/sdh1
removing pv /dev/sdi1 on hayes-03
Labels on physical volume "/dev/sdi1" successfully wiped.
removing entry from the devices file for /dev/sdi1

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (kmod-kvdo bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:8333