Bug 1549272
Summary: | "Failed to lock logical volume" during raid scrub check after partial activation
---|---
Product: | Red Hat Enterprise Linux 7
Component: | lvm2
lvm2 sub component: | Mirroring and RAID
Version: | 7.5
Hardware: | x86_64
OS: | Linux
Status: | CLOSED WONTFIX
Severity: | medium
Priority: | unspecified
Target Milestone: | rc
Target Release: | ---
Reporter: | Corey Marthaler <cmarthal>
Assignee: | Heinz Mauelshagen <heinzm>
QA Contact: | cluster-qe <cluster-qe>
Docs Contact: |
CC: | agk, heinzm, jbrassow, msnitzer, prajnoha, zkabelac
Whiteboard: |
Fixed In Version: |
Doc Type: | If docs needed, set a value
Doc Text: |
Story Points: | ---
Clone Of: |
Clones: | 1886597 (view as bug list)
Environment: |
Last Closed: | 2021-02-15 07:35:28 UTC
Type: | Bug
Regression: | ---
Mount Type: | ---
Documentation: | ---
CRM: |
Verified Versions: |
Category: | ---
oVirt Team: | ---
RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | ---
Target Upstream Version: |
Embargoed: |
Bug Depends On: |
Bug Blocks: | 1886597
Attachments: | verbose lvchange attempt (attachment 1401029)
Description
Corey Marthaler, 2018-02-26 21:00:05 UTC
Created attachment 1401029 [details]
verbose lvchange attempt
This is an issue in the current rhel8.3 as well (this test has been turned off since 2018). We can open a new bug to track and fix it (if warranted) in rhel8, and close this issue.

kernel-4.18.0-234.el8.x86_64

[root@hayes-03 ~]# lvs -a -o +devices
  LV                         VG          Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  missing_pv_raid            raid_sanity Rwi-a-r-r- 100.00m                                  100.00           missing_pv_raid_rimage_0(0),missing_pv_raid_rimage_1(0)
  [missing_pv_raid_rimage_0] raid_sanity Iwi-aor-r- 100.00m                                                   /dev/sdb1(1)
  [missing_pv_raid_rimage_1] raid_sanity iwi-aor--- 100.00m                                                   /dev/sdc1(1)
  [missing_pv_raid_rmeta_0]  raid_sanity ewi-aor-r-   4.00m                                                   /dev/sdb1(0)
  [missing_pv_raid_rmeta_1]  raid_sanity ewi-aor---   4.00m                                                   /dev/sdc1(0)

[root@hayes-03 ~]# lvchange --syncaction check raid_sanity/missing_pv_raid
[root@hayes-03 ~]# echo $?
0

Oct 8 12:11:00 hayes-03 kernel: md: mdX: data-check done.
Oct 8 12:11:00 hayes-03 lvm[546077]: WARNING: Device #0 of raid1 array, raid_sanity-missing_pv_raid, has failed.
Oct 8 12:11:00 hayes-03 kernel: device-mapper: raid: Failed to read superblock of device at position 0
Oct 8 12:11:00 hayes-03 kernel: device-mapper: raid: Device 1 specified for rebuild; clearing superblock
Oct 8 12:11:00 hayes-03 kernel: md: pers->run() failed ...
Oct 8 12:11:00 hayes-03 kernel: device-mapper: table: 253:6: raid: Failed to run raid array
Oct 8 12:11:00 hayes-03 kernel: device-mapper: ioctl: error adding target to table
Oct 8 12:11:00 hayes-03 lvm[546077]: device-mapper: reload ioctl on (253:6) failed: Invalid argument
Oct 8 12:11:00 hayes-03 lvm[546077]: Failed to suspend logical volume raid_sanity/missing_pv_raid.
Oct 8 12:11:00 hayes-03 lvm[546077]: Failed to replace faulty devices in raid_sanity/missing_pv_raid.
Oct 8 12:11:00 hayes-03 lvm[546077]: Repair of RAID device raid_sanity-missing_pv_raid failed.
Oct 8 12:11:00 hayes-03 lvm[546077]: Failed to process event for raid_sanity-missing_pv_raid.

After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.
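The exact reproduction commands from the raid_sanity test are not included in this report, so the following is only a minimal sketch of one way the scenario could be set up: a two-leg raid1 LV whose first leg's PV goes missing, followed by partial activation and a scrub check. The VG and LV names match the output above, but the spare partitions, the LV size, and the sysfs offline step used to fake the missing PV are illustrative assumptions, not details taken from this bug.

    # Minimal reproduction sketch (assumed devices and fault injection; the
    # actual steps used by the raid_sanity test are not shown in this report).
    pvcreate /dev/sdb1 /dev/sdc1
    vgcreate raid_sanity /dev/sdb1 /dev/sdc1
    lvcreate --type raid1 -m 1 -L 100M -n missing_pv_raid raid_sanity
    vgchange -an raid_sanity

    # Fake the missing PV behind leg 0, e.g. by offlining the SCSI device.
    echo offline > /sys/block/sdb/device/state

    # Partial activation brings the array up degraded.
    vgchange -ay --activationmode partial raid_sanity

    # Scrub check against the partially activated array; in this report the
    # lock/suspend and dmeventd repair failures were logged after this step.
    lvchange --syncaction check raid_sanity/missing_pv_raid

    # Inspect sync state and kernel/lvm/dmeventd messages.
    lvs -a -o +devices,raid_sync_action raid_sanity
    journalctl --since "10 minutes ago"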