| Summary: | LVM cache: allow abandoning cachepool (lvconvert --uncache) if it has failed, remaining scenarios | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
| lvm2 sub component: | Cache Logical Volumes | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED WONTFIX | Docs Contact: | |
| Severity: | medium | ||
| Priority: | unspecified | CC: | agk, as-rsi, heinzm, jbrassow, msnitzer, prajnoha, rocketraman, tbskyd, zkabelac |
| Version: | 7.3 | ||
| Target Milestone: | rc | ||
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | If docs needed, set a value | |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2020-12-15 07:46:10 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Bug Depends On: | 1379472 | ||
| Bug Blocks: ||||
Description
Corey Marthaler
2016-09-20 19:01:57 UTC
*** Bug 1379413 has been marked as a duplicate of this bug. ***

I can confirm this bug still exists! Can't disable or remove the cache, and can't activate the volume.
root@vmm:~# lvs --version
LVM version: 2.02.176(2) (2017-11-03)
Library version: 1.02.145 (2017-11-03)
Driver version: 4.37.0
Configuration: ./configure --build=x86_64-linux-gnu --prefix=/usr --includedir=${prefix}/include --mandir=${prefix}/share/man --infodir=${prefix}/share/info --sysconfdir=/etc --localstatedir=/var --disable-silent-rules --libdir=${prefix}/lib/x86_64-linux-gnu --libexecdir=${prefix}/lib/x86_64-linux-gnu --runstatedir=/run --disable-maintainer-mode --disable-dependency-tracking --exec-prefix= --bindir=/bin --libdir=/lib/x86_64-linux-gnu --sbindir=/sbin --with-usrlibdir=/usr/lib/x86_64-linux-gnu --with-optimisation=-O2 --with-cache=internal --with-clvmd=corosync --with-cluster=internal --with-device-uid=0 --with-device-gid=6 --with-device-mode=0660 --with-default-pid-dir=/run --with-default-run-dir=/run/lvm --with-default-locking-dir=/run/lock/lvm --with-thin=internal --with-thin-check=/usr/sbin/thin_check --with-thin-dump=/usr/sbin/thin_dump --with-thin-repair=/usr/sbin/thin_repair --enable-applib --enable-blkid_wiping --enable-cmdlib --enable-cmirrord --enable-dmeventd --enable-dbus-service --enable-lvmetad --enable-lvmlockd-dlm --enable-lvmlockd-sanlock --enable-lvmpolld --enable-notify-dbus --enable-pkgconfig --enable-readline --enable-udev_rules --enable-udev_sync
root@vmm:~# lvs -a -o -move_pv,mirror_log,copy_percent -o +devices | grep wotan
wotan VM_SysDevs Cwi---C--- 16.00g [wotan_cache] [wotan_corig] wotan_corig(0)
[wotan_cache] VM_SysDevs Cwi---C--- 8.00g wotan_cache_cdata(0)
[wotan_cache_cdata] VM_SysDevs Cwi------- 8.00g /dev/sdj2(0)
[wotan_cache_cmeta] VM_SysDevs ewi------- 12.00m /dev/md127(51712)
[wotan_corig] VM_SysDevs owi---C--- 16.00g
root@vmm:~# lvchange -ay -v VM_SysDevs/wotan
Activating logical volume VM_SysDevs/wotan exclusively.
activation/volume_list configuration setting not defined: Checking only host tags for VM_SysDevs/wotan.
Creating VM_SysDevs-wotan_cache_cdata
Loading VM_SysDevs-wotan_cache_cdata table (253:5)
Resuming VM_SysDevs-wotan_cache_cdata (253:5)
Creating VM_SysDevs-wotan_cache_cmeta
Loading VM_SysDevs-wotan_cache_cmeta table (253:7)
Resuming VM_SysDevs-wotan_cache_cmeta (253:7)
Creating VM_SysDevs-wotan_corig
Loading VM_SysDevs-wotan_corig table (253:8)
Resuming VM_SysDevs-wotan_corig (253:8)
Executing: /usr/sbin/cache_check -q /dev/mapper/VM_SysDevs-wotan_cache_cmeta
/usr/sbin/cache_check failed: 1
Check of pool VM_SysDevs/wotan_cache failed (status:1). Manual repair required!
Removing VM_SysDevs-wotan_corig (253:8)
Removing VM_SysDevs-wotan_cache_cmeta (253:7)
Removing VM_SysDevs-wotan_cache_cdata (253:5)
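Activation aborts here because lvm runs `cache_check -q`, which suppresses the tool's diagnostics, so all that surfaces is "status:1". On LVM builds that allow component activation of hidden sub-LVs, the metadata sub-LV can be activated on its own and checked by hand to see what `cache_check` is actually objecting to. A sketch, with sub-LV and device names taken from the log above; on older builds the cmeta device may instead have to be mapped manually with dmsetup:

```shell
# Activate just the hidden metadata sub-LV (needs component-activation support).
lvchange -ay VM_SysDevs/wotan_cache_cmeta

# Run the check without -q so the real error is printed instead of only "status:1".
cache_check /dev/mapper/VM_SysDevs-wotan_cache_cmeta

# Deactivate the sub-LV again when done.
lvchange -an VM_SysDevs/wotan_cache_cmeta
```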
root@vmm:~# lvconvert --splitcache --force --yes --verbose VM_SysDevs/wotan
Archiving volume group "VM_SysDevs" metadata (seqno 1015).
activation/volume_list configuration setting not defined: Checking only host tags for VM_SysDevs/wotan.
Creating VM_SysDevs-wotan_cache_cdata
Loading VM_SysDevs-wotan_cache_cdata table (253:5)
Resuming VM_SysDevs-wotan_cache_cdata (253:5)
Creating VM_SysDevs-wotan_cache_cmeta
Loading VM_SysDevs-wotan_cache_cmeta table (253:7)
Resuming VM_SysDevs-wotan_cache_cmeta (253:7)
Creating VM_SysDevs-wotan_corig
Loading VM_SysDevs-wotan_corig table (253:8)
Resuming VM_SysDevs-wotan_corig (253:8)
Executing: /usr/sbin/cache_check -q /dev/mapper/VM_SysDevs-wotan_cache_cmeta
/usr/sbin/cache_check failed: 1
Check of pool VM_SysDevs/wotan_cache failed (status:1). Manual repair required!
Removing VM_SysDevs-wotan_corig (253:8)
Removing VM_SysDevs-wotan_cache_cmeta (253:7)
Removing VM_SysDevs-wotan_cache_cdata (253:5)
Failed to active cache locally VM_SysDevs/wotan.
root@vmm:~# lvconvert --uncache --force --yes --verbose VM_SysDevs/wotan
Archiving volume group "VM_SysDevs" metadata (seqno 1015).
activation/volume_list configuration setting not defined: Checking only host tags for VM_SysDevs/wotan.
Creating VM_SysDevs-wotan_cache_cdata
Loading VM_SysDevs-wotan_cache_cdata table (253:5)
Resuming VM_SysDevs-wotan_cache_cdata (253:5)
Creating VM_SysDevs-wotan_cache_cmeta
Loading VM_SysDevs-wotan_cache_cmeta table (253:7)
Resuming VM_SysDevs-wotan_cache_cmeta (253:7)
Creating VM_SysDevs-wotan_corig
Loading VM_SysDevs-wotan_corig table (253:8)
Resuming VM_SysDevs-wotan_corig (253:8)
Executing: /usr/sbin/cache_check -q /dev/mapper/VM_SysDevs-wotan_cache_cmeta
/usr/sbin/cache_check failed: 1
Check of pool VM_SysDevs/wotan_cache failed (status:1). Manual repair required!
Removing VM_SysDevs-wotan_corig (253:8)
Removing VM_SysDevs-wotan_cache_cmeta (253:7)
Removing VM_SysDevs-wotan_cache_cdata (253:5)
Failed to active cache locally VM_SysDevs/wotan.
With a bad cache device there is no way to access the volume at all.
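When `lvconvert --uncache`/`--splitcache` refuse to run like this, the metadata can sometimes be salvaged offline with `cache_repair` from device-mapper-persistent-data (the same package that ships `cache_check`). A hedged sketch of that path; the scratch LV name and size here are illustrative, and whether `lvconvert --repair` accepts a cache pool depends on the LVM version:

```shell
# 1. Create a scratch LV at least as large as the broken metadata LV (12m above).
lvcreate -L 16m -n wotan_cmeta_fixed VM_SysDevs

# 2. Attempt an offline repair, writing the rebuilt metadata to the scratch LV.
cache_repair -i /dev/mapper/VM_SysDevs-wotan_cache_cmeta \
             -o /dev/VM_SysDevs/wotan_cmeta_fixed

# 3. If cache_repair succeeds, try the in-place repair so lvm swaps the
#    metadata itself (version-dependent for cache pools).
lvconvert --repair VM_SysDevs/wotan_cache
```

If the metadata is beyond repair, note that with a writethrough cache the origin still holds a consistent copy of the data; being able to drop the cache mapping anyway is exactly what this bug asks `lvconvert --uncache` to allow.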
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.