Bug 1881056
Summary:             Can't remove lvmcache device
Product:             [Community] LVM and device-mapper
Component:           lvm2
lvm2 sub component:  Cache Logical Volumes
Status:              NEW
Severity:            high
Priority:            unspecified
Version:             unspecified
Hardware:            x86_64
OS:                  Linux
Type:                Bug
Reporter:            Roy Sigurd Karlsbakk <roy>
Assignee:            Zdenek Kabelac <zkabelac>
QA Contact:          cluster-qe <cluster-qe>
CC:                  agk, heinzm, jbrassow, msnitzer, prajnoha, roy, thornber, zkabelac
Flags:               pm-rhel: lvm-technical-solution?  pm-rhel: lvm-test-coverage?
Description
Roy Sigurd Karlsbakk  2020-09-21 13:21:50 UTC

Please attach the resulting file from the command 'lvmdump -a'.

Created attachment 1715544 [details]
lvmdump -a

lvmdump -a as requested.

Hmm - can we get a longer history of the 'device-mapper' messages from the kernel log (from the journal, /var/log/syslog, dmesg... whatever fits)? Then we can see which devices in the device stack are failing and which are correct.

From dmesg -T:

[ma. sep. 21 18:11:25 2020] device-mapper: cache: Origin device (dm-8) discard unsupported: Disabling discard passdown.
[ma. sep. 21 18:11:28 2020] device-mapper: cache: Origin device (dm-8) discard unsupported: Disabling discard passdown.

Created attachment 1717580 [details]
Removed usage of cache
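The kernel-log history requested above can be collected with standard tools; a minimal sketch (no bug-specific assumptions, works on both plain-syslog and systemd hosts):

```shell
# Pull device-mapper messages from the current kernel ring buffer,
# with human-readable timestamps (same style as the dmesg -T output above).
# '|| true' keeps the exit status clean when there are no matches.
dmesg -T 2>/dev/null | grep -i 'device-mapper' || true

# On systemd hosts, the journal also keeps history across reboots:
journalctl -k --no-pager 2>/dev/null | grep -i 'device-mapper' || true
```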
Until we provide a proper 'lvm2 solution' for this error case, the way to fix your setup is to try vgcfgrestore with the attached modified metadata (the cache has been dropped, and your origin is kept as the 'data' LV with its original UUID).
# vgcfgrestore -f data_new data
This should give you these LVs:
  LV              VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  [data]          data -wi-------  13,67t
  [lvol0_pmspare] data ewi-------   1,00g
  vmtest          data -wi------- 100,00g
You can 'lvremove' lvol0_pmspare later on.
Assuming you had the one-failing-sector problem, you should probably run 'fsck' anyway.
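The repair steps above can be sketched as one sequence. This is an illustrative outline, not the exact procedure from the bug: it assumes 'data' is the VG name and 'data_new' is the metadata file attached to this report, and it adds a precautionary backup step that the comment does not mention.

```shell
# ASSUMPTIONS: VG is named 'data'; 'data_new' is the edited metadata
# file from attachment 1717580. Adjust both for your system.

# 1. Keep a copy of the current (broken) metadata before touching anything.
vgcfgbackup -f /root/data_before_fix.vg data

# 2. Restore the edited metadata (cache dropped, origin kept as 'data' LV).
vgcfgrestore -f data_new data

# 3. Activate the VG and verify the resulting LVs.
vgchange -ay data
lvs data

# 4. The spare pool-metadata LV is no longer needed.
lvremove data/lvol0_pmspare

# 5. Given the suspected bad sector, check the filesystem before mounting.
fsck -f /dev/data/data
```

These commands require root and operate on real block devices, so run them only after confirming the VG and file names match your system.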
Thanks. But is this safe? Is there a way to roll back if it fails?

Yep - you have a full archive of the previous lvm2 metadata in /etc/lvm/archive.
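The rollback path mentioned above can be sketched as follows; the archive filename shown is illustrative (vgcfgrestore --list prints the real ones), and 'data' is the VG name assumed from this bug:

```shell
# LVM archives every metadata change automatically under /etc/lvm/archive.
# List the archived versions available for the VG 'data':
vgcfgrestore --list data

# Roll back to a specific archived version if the repair goes wrong.
# (The path below is a placeholder; use one printed by --list.)
vgcfgrestore -f /etc/lvm/archive/data_00042-1234567890.vg data
```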