Description of problem:
It's likely this is just a dup of bug 1959626. However, in this test scenario, writecache volumes were used, and --uncache was used to attempt to tear them apart afterwards. (A condensed reproducer sketch follows the version info below.)

SCENARIO - [open_EXT4_on_writecache_origin_fsadm_reduce_attempts]
Create a writecached EXT4 filesystem, mount it, snapshot it, and shrink the ORIGIN while online

*** Writecache info for this scenario ***
*  origin (slow): /dev/sdi1
*  pool (fast):   /dev/sdh1
*****************************************

Adding "slow" and "fast" tags to corresponding pvs

Create origin (slow) volume
lvcreate --yes --wipesignatures y -L 4G -n cworigin writecache_sanity @slow

Create writecache cvol (fast) volume
lvcreate --yes -L 4G -n pool writecache_sanity @fast

Deactivate *ONLY* fast pool before conversion to writecache (SEE bug 1185347)

Create writecached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --yes --type writecache --cachesettings 'low_watermark=8 high_watermark=83 writeback_jobs=2466 autocommit_blocks=2667 autocommit_time=2385' --cachevol writecache_sanity/pool writecache_sanity/cworigin

Placing an EXT filesystem on origin volume
mke2fs 1.45.6 (20-Mar-2020)
Mounting origin volume
Writing files to /mnt/cworigin
Checking files on /mnt/cworigin

Making snapshot of origin volume
lvcreate --yes -s /dev/writecache_sanity/cworigin -c 16 -n online_resize -L 500M

Resize the open snapshotted ORIGIN EXT4 filesystem multiple times with lvreduce/fsadm on hayes-01
RESIZING CACHE VOLUMES IS NOW SUPPORTED IN 8.5

(lvreduce --yes -L -120M -r /dev/writecache_sanity/cworigin)
resize2fs 1.45.6 (20-Mar-2020)
FS: 3776 <= 3650  LV: 4194304.00 lt 4071424.00
Checking files on /mnt/cworigin

(lvreduce --yes -L -120M -r /dev/writecache_sanity/cworigin)
resize2fs 1.45.6 (20-Mar-2020)
FS: 3650 <= 3526  LV: 4071424.00 lt 3948544.00
Checking files on /mnt/cworigin
[...]

Removing snap volume writecache_sanity/online_resize
lvremove -f /dev/writecache_sanity/online_resize

Uncaching cache origin (lvconvert --yes --uncache writecache_sanity/cworigin) from cache origin
May 13 12:39:40 hayes-01 kernel: INFO: task kworker/31:2:31612 blocked for more than 120 seconds.
May 13 12:39:40 hayes-01 kernel:       Not tainted 4.18.0-305.2.el8.x86_64 #1
May 13 12:39:40 hayes-01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 13 12:39:40 hayes-01 kernel: task:kworker/31:2    state:D stack:    0 pid:31612 ppid:     2 flags:0x80004080
May 13 12:39:40 hayes-01 kernel: Workqueue: writecache-writeback writecache_writeback [dm_writecache]
May 13 12:39:40 hayes-01 kernel: Call Trace:
May 13 12:39:40 hayes-01 kernel:  __schedule+0x2c4/0x700
May 13 12:39:40 hayes-01 kernel:  schedule+0x37/0xa0
May 13 12:39:40 hayes-01 kernel:  io_schedule+0x12/0x40
May 13 12:39:40 hayes-01 kernel:  writecache_wait_on_freelist+0x70/0xa0 [dm_writecache]
May 13 12:39:40 hayes-01 kernel:  ? finish_wait+0x80/0x80
May 13 12:39:40 hayes-01 kernel:  __writeback_throttle.isra.39+0x31/0x50 [dm_writecache]
May 13 12:39:40 hayes-01 kernel:  __writecache_writeback_ssd+0x1d9/0x210 [dm_writecache]
May 13 12:39:40 hayes-01 kernel:  writecache_writeback+0x78f/0x850 [dm_writecache]
May 13 12:39:40 hayes-01 kernel:  ? blk_finish_plug+0x21/0x2e
May 13 12:39:40 hayes-01 kernel:  process_one_work+0x1a7/0x360
May 13 12:39:40 hayes-01 kernel:  worker_thread+0x30/0x390
May 13 12:39:40 hayes-01 kernel:  ? create_worker+0x1a0/0x1a0
May 13 12:39:40 hayes-01 kernel:  kthread+0x116/0x130
May 13 12:39:40 hayes-01 kernel:  ? kthread_flush_work_fn+0x10/0x10
May 13 12:39:40 hayes-01 kernel:  ret_from_fork+0x35/0x40

[the identical hung-task message and call trace for kworker/31:2:31612 repeats at May 13 12:41:43]

[root@hayes-01 ~]# ps -ef | grep lvconvert
root     33402 33401  0 12:32 ?        00:00:00 lvconvert --yes --uncache writecache_sanity/cworigin
root     34102  2582  0 13:53 pts/1    00:00:00 grep --color=auto lvconvert
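While it's stuck like this, the writeback side can also be watched without strace. A minimal sketch, assuming the usual <vg>-<lv> device-mapper name mangling and that this lvm2 build reports cache usage fields for writecache LVs (availability may vary by version):

# dm-writecache status line for the stuck origin (dm name is VG-LV):
dmsetup status writecache_sanity-cworigin

# Data% on the writecache LV should fall toward 0 while cleaning progresses:
lvs -a -o lv_name,segtype,data_percent,devices writecache_sanity

In this run it never appears to drain (the lvconvert has been polling since 12:32), which matches the writecache_wait_on_freelist stack above.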
[root@hayes-01 ~]# strace -p 33402
strace: Process 33402 attached
restart_syscall(<... resuming interrupted nanosleep ...>) = 0
rt_sigprocmask(SIG_SETMASK, ~[RTMIN RT_1], [], 8) = 0
openat(AT_FDCWD, "/proc/thread-self/attr/fscreate", O_RDWR|O_CLOEXEC) = 11
write(11, "system_u:object_r:lvm_lock_t:s0\0", 32) = 32
close(11) = 0
openat(AT_FDCWD, "/run/lock/lvm/V_writecache_sanity:aux", O_RDWR|O_CREAT|O_APPEND, 0777) = 11
rt_sigprocmask(SIG_BLOCK, NULL, ~[KILL STOP RTMIN RT_1], 8) = 0
rt_sigaction(SIGINT, NULL, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fbbb2ca6400}, 8) = 0
rt_sigaction(SIGINT, {sa_handler=0x5652955cdc90, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fbbb2ca6400}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fbbb2ca6400}, 8) = 0
rt_sigaction(SIGTERM, NULL, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fbbb2ca6400}, 8) = 0
rt_sigaction(SIGTERM, {sa_handler=0x5652955cdc90, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fbbb2ca6400}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fbbb2ca6400}, 8) = 0
rt_sigprocmask(SIG_SETMASK, ~[INT KILL TERM STOP RTMIN RT_1], NULL, 8) = 0
flock(11, LOCK_EX) = 0
rt_sigprocmask(SIG_BLOCK, NULL, ~[INT KILL TERM STOP RTMIN RT_1], 8) = 0
rt_sigprocmask(SIG_SETMASK, ~[KILL STOP RTMIN RT_1], NULL, 8) = 0
rt_sigaction(SIGINT, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fbbb2ca6400}, NULL, 8) = 0
rt_sigaction(SIGTERM, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fbbb2ca6400}, NULL, 8) = 0
stat("/run/lock/lvm/V_writecache_sanity:aux", {st_mode=S_IFREG|0700, st_size=0, ...}) = 0
fstat(11, {st_mode=S_IFREG|0700, st_size=0, ...}) = 0
openat(AT_FDCWD, "/run/lock/lvm/V_writecache_sanity", O_RDWR|O_CREAT|O_APPEND, 0777) = 13
rt_sigprocmask(SIG_BLOCK, NULL, ~[KILL STOP RTMIN RT_1], 8) = 0
rt_sigaction(SIGINT, NULL, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fbbb2ca6400}, 8) = 0
rt_sigaction(SIGINT, {sa_handler=0x5652955cdc90, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fbbb2ca6400}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fbbb2ca6400}, 8) = 0
rt_sigaction(SIGTERM, NULL, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fbbb2ca6400}, 8) = 0
rt_sigaction(SIGTERM, {sa_handler=0x5652955cdc90, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fbbb2ca6400}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fbbb2ca6400}, 8) = 0
rt_sigprocmask(SIG_SETMASK, ~[INT KILL TERM STOP RTMIN RT_1], NULL, 8) = 0
flock(13, LOCK_EX) = 0
rt_sigprocmask(SIG_BLOCK, NULL, ~[INT KILL TERM STOP RTMIN RT_1], 8) = 0
rt_sigprocmask(SIG_SETMASK, ~[KILL STOP RTMIN RT_1], NULL, 8) = 0
rt_sigaction(SIGINT, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fbbb2ca6400}, NULL, 8) = 0
rt_sigaction(SIGTERM, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fbbb2ca6400}, NULL, 8) = 0
stat("/run/lock/lvm/V_writecache_sanity", {st_mode=S_IFREG|0700, st_size=0, ...}) = 0
fstat(13, {st_mode=S_IFREG|0700, st_size=0, ...}) = 0
flock(11, LOCK_EX|LOCK_NB) = 0
stat("/run/lock/lvm/V_writecache_sanity:aux", {st_mode=S_IFREG|0700, st_size=0, ...}) = 0
fstat(11, {st_mode=S_IFREG|0700, st_size=0, ...}) = 0
unlink("/run/lock/lvm/V_writecache_sanity:aux") = 0
close(11) = 0
openat(AT_FDCWD, "/proc/thread-self/attr/fscreate", O_RDWR|O_CLOEXEC) = 11
write(11, NULL, 0) = 0
close(11) = 0
io_submit(0x7fbbb4a98000, 1, [{aio_data=0, aio_lio_opcode=IOCB_CMD_PREAD, aio_fildes=12, aio_buf=0x5652976e4000, aio_nbytes=131072, aio_offset=0}]) = 1
io_getevents(0x7fbbb4a98000, 1, 64, [{data=0, obj=0x565296fbde50, res=131072, res2=0}], NULL) = 1
io_submit(0x7fbbb4a98000, 1, [{aio_data=0, aio_lio_opcode=IOCB_CMD_PREAD, aio_fildes=3, aio_buf=0x565297704000, aio_nbytes=131072, aio_offset=0}]) = 1
io_getevents(0x7fbbb4a98000, 1, 64, [{data=0, obj=0x565296fbde50, res=131072, res2=0}], NULL) = 1
io_submit(0x7fbbb4a98000, 1, [{aio_data=0, aio_lio_opcode=IOCB_CMD_PREAD, aio_fildes=4, aio_buf=0x565297724000, aio_nbytes=131072, aio_offset=0}]) = 1
io_getevents(0x7fbbb4a98000, 1, 64, [{data=0, obj=0x565296fbde50, res=131072, res2=0}], NULL) = 1
io_submit(0x7fbbb4a98000, 1, [{aio_data=0, aio_lio_opcode=IOCB_CMD_PREAD, aio_fildes=5, aio_buf=0x565297744000, aio_nbytes=131072, aio_offset=0}]) = 1
io_getevents(0x7fbbb4a98000, 1, 64, [{data=0, obj=0x565296fbde50, res=131072, res2=0}], NULL) = 1
io_submit(0x7fbbb4a98000, 1, [{aio_data=0, aio_lio_opcode=IOCB_CMD_PREAD, aio_fildes=6, aio_buf=0x565297764000, aio_nbytes=131072, aio_offset=0}]) = 1
io_getevents(0x7fbbb4a98000, 1, 64, [{data=0, obj=0x565296fbde50, res=131072, res2=0}], NULL) = 1
io_submit(0x7fbbb4a98000, 1, [{aio_data=0, aio_lio_opcode=IOCB_CMD_PREAD, aio_fildes=7, aio_buf=0x565297784000, aio_nbytes=131072, aio_offset=0}]) = 1
io_getevents(0x7fbbb4a98000, 1, 64, [{data=0, obj=0x565296fbde50, res=131072, res2=0}], NULL) = 1
io_submit(0x7fbbb4a98000, 1, [{aio_data=0, aio_lio_opcode=IOCB_CMD_PREAD, aio_fildes=8, aio_buf=0x5652977a4000, aio_nbytes=131072, aio_offset=0}]) = 1
io_getevents(0x7fbbb4a98000, 1, 64, [{data=0, obj=0x565296fbde50, res=131072, res2=0}], NULL) = 1
io_submit(0x7fbbb4a98000, 1, [{aio_data=0, aio_lio_opcode=IOCB_CMD_PREAD, aio_fildes=9, aio_buf=0x5652977c4000, aio_nbytes=131072, aio_offset=0}]) = 1
io_getevents(0x7fbbb4a98000, 1, 64, [{data=0, obj=0x565296fbde50, res=131072, res2=0}], NULL) = 1
io_submit(0x7fbbb4a98000, 1, [{aio_data=0, aio_lio_opcode=IOCB_CMD_PREAD, aio_fildes=10, aio_buf=0x5652977e4000, aio_nbytes=131072, aio_offset=0}]) = 1
io_getevents(0x7fbbb4a98000, 1, 64, [{data=0, obj=0x565296fbde50, res=131072, res2=0}], NULL) = 1
ioctl(12, BLKGETSIZE64, [1999844106240]) = 0
ioctl(3, BLKGETSIZE64, [1999844106240]) = 0
ioctl(4, BLKGETSIZE64, [1999844106240]) = 0
ioctl(5, BLKGETSIZE64, [1999844106240]) = 0
ioctl(6, BLKGETSIZE64, [1999844106240]) = 0
ioctl(7, BLKGETSIZE64, [1999844106240]) = 0
ioctl(8, BLKGETSIZE64, [1999844106240]) = 0
ioctl(9, BLKGETSIZE64, [1999844106240]) = 0
ioctl(10, BLKGETSIZE64, [1999844106240]) = 0
stat("/dev/mapper/control", {st_mode=S_IFCHR|0600, st_rdev=makedev(0xa, 0xec), ...}) = 0
openat(AT_FDCWD, "/dev/mapper/control", O_RDWR) = 11
ioctl(11, DM_DEV_STATUS, {version=4.0.0, data_size=2048, uuid="LVM-qR2lYr89s0ecRzpHkb0fUdPp86qY09f5dNuk5C5ER3v890AqkjX1UHg6GOjNv9we", flags=DM_EXISTS_FLAG|DM_SKIP_BDGET_FLAG} => {version=4.43.0, data_size=305, dev=makedev(0xfd, 0), name="writecache_sanity-cworigin", uuid="LVM-qR2lYr89s0ecRzpHkb0fUdPp86qY09f5dNuk5C5ER3v890AqkjX1UHg6GOjNv9we", target_count=1, open_count=0, event_nr=0, flags=DM_EXISTS_FLAG|DM_ACTIVE_PRESENT_FLAG|DM_SKIP_BDGET_FLAG}) = 0
ioctl(11, DM_TABLE_STATUS, {version=4.0.0, data_size=16384, data_start=312, uuid="LVM-qR2lYr89s0ecRzpHkb0fUdPp86qY09f5dNuk5C5ER3v890AqkjX1UHg6GOjNv9we", flags=DM_EXISTS_FLAG|DM_SKIP_BDGET_FLAG|DM_NOFLUSH_FLAG} => {version=4.43.0, data_size=372, data_start=312, dev=makedev(0xfd, 0), name="writecache_sanity-cworigin", uuid="LVM-qR2lYr89s0ecRzpHkb0fUdPp86qY09f5dNuk5C5ER3v890AqkjX1UHg6GOjNv9we", target_count=1, open_count=0, event_nr=0, flags=DM_EXISTS_FLAG|DM_ACTIVE_PRESENT_FLAG|DM_SKIP_BDGET_FLAG|DM_NOFLUSH_FLAG, ...}) = 0
close(11) = 0
flock(13, LOCK_NB|LOCK_UN) = 0
flock(13, LOCK_EX|LOCK_NB) = 0
stat("/run/lock/lvm/V_writecache_sanity", {st_mode=S_IFREG|0700, st_size=0, ...}) = 0
fstat(13, {st_mode=S_IFREG|0700, st_size=0, ...}) = 0
unlink("/run/lock/lvm/V_writecache_sanity") = 0
close(13) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
write(1, " Detaching writecache cleaning "..., 46) = 46
write(1, " This command can be cancelled "..., 73) = 73
nanosleep({tv_sec=5, tv_nsec=0}, ^Cstrace: Process 33402 detached

Version-Release number of selected component (if applicable):
kernel-4.18.0-305.2.el8    BUILT: Wed May 5 10:35:03 CDT 2021
lvm2-2.03.12-0.1.20210426git4dc5d4a.el8    BUILT: Mon Apr 26 08:23:33 CDT 2021
lvm2-libs-2.03.12-0.1.20210426git4dc5d4a.el8    BUILT: Mon Apr 26 08:23:33 CDT 2021
lvm2-dbusd-2.03.12-0.1.20210426git4dc5d4a.el8    BUILT: Mon Apr 26 08:23:33 CDT 2021
lvm2-lockd-2.03.12-0.1.20210426git4dc5d4a.el8    BUILT: Mon Apr 26 08:23:33 CDT 2021
boom-boot-1.3-1.el8    BUILT: Sat Jan 30 02:31:18 CST 2021
device-mapper-1.02.177-0.1.20210426git4dc5d4a.el8    BUILT: Mon Apr 26 08:23:33 CDT 2021
device-mapper-libs-1.02.177-0.1.20210426git4dc5d4a.el8    BUILT: Mon Apr 26 08:23:33 CDT 2021
device-mapper-event-1.02.177-0.1.20210426git4dc5d4a.el8    BUILT: Mon Apr 26 08:23:33 CDT 2021
device-mapper-event-libs-1.02.177-0.1.20210426git4dc5d4a.el8    BUILT: Mon Apr 26 08:23:33 CDT 2021
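For reference, a condensed reproducer sketch pieced together from the scenario output above. The VG creation and PV tagging lines are assumptions (the test harness does that setup itself); the remaining commands are taken from the log, and the specific --cachesettings values should not be required to hit the hang:

vgcreate writecache_sanity /dev/sdi1 /dev/sdh1      # assumed VG layout
pvchange --addtag slow /dev/sdi1                    # origin (slow) PV
pvchange --addtag fast /dev/sdh1                    # cachevol (fast) PV
lvcreate --yes --wipesignatures y -L 4G -n cworigin writecache_sanity @slow
lvcreate --yes -L 4G -n pool writecache_sanity @fast
lvchange -an writecache_sanity/pool                 # deactivate only the fast pool (bug 1185347)
lvconvert --yes --type writecache --cachevol writecache_sanity/pool writecache_sanity/cworigin

mkfs.ext4 /dev/writecache_sanity/cworigin
mount /dev/writecache_sanity/cworigin /mnt/cworigin
lvcreate --yes -s /dev/writecache_sanity/cworigin -c 16 -n online_resize -L 500M
lvreduce --yes -L -120M -r /dev/writecache_sanity/cworigin   # repeated several times
lvremove -f /dev/writecache_sanity/online_resize

lvconvert --yes --uncache writecache_sanity/cworigin         # hangs here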
commit 5ec24dfb0bb0c9e48d59d6fbb83262a7087964e1 enabled resizing LVs with a cache or writecache attached, but was it meant only to allow extending such LVs? I suspect lvreduce of LVs with a cache|writecache attached should still be prevented.
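If reducing while a writecache is attached was never meant to be allowed, the guard would presumably force a detach-first ordering. A sketch of that ordering with the volumes from this report, assuming --splitcache behaves for writecache as documented (flushes the cache, detaches the cachevol, and keeps it for reuse):

lvconvert --splitcache writecache_sanity/cworigin    # flush and detach the cachevol first
lvreduce --yes -L -120M -r /dev/writecache_sanity/cworigin
lvconvert --yes --type writecache --cachevol writecache_sanity/pool writecache_sanity/cworigin   # re-attach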
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.