Bug 871630
| Summary: | DM RAID: kernel panic when attempting to activate partial RAID LV (i.e. an array that has missing devices) | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Jonathan Earl Brassow <jbrassow> |
| Component: | kernel | Assignee: | Jonathan Earl Brassow <jbrassow> |
| Status: | CLOSED ERRATA | QA Contact: | Red Hat Kernel QE team <kernel-qe> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 6.3 | CC: | cmarthal |
| Target Milestone: | rc | Keywords: | Regression |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | kernel-2.6.32-339.el6 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-02-21 06:54:41 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description (Jonathan Earl Brassow, 2012-10-30 22:07:22 UTC)
This bug was not present in 6.3 - it has turned up in 6.4 testing.

This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux release for currently deployed products. This request is not yet committed for inclusion in a release.

Issue is not present in upstream kernel (3.7.0-rc2). Turns out, we've been over this problem upstream already.
Here's the upstream commit that fixed this problem:

```
commit a9ad8526bb1af0741a5c0e01155dac08e7bdde60
Author: Jonathan Brassow <jbrassow>
Date:   Tue Apr 24 10:23:13 2012 +1000

    DM RAID: Use safe version of rdev_for_each

    Fix segfault caused by using rdev_for_each instead of rdev_for_each_safe

    Commit dafb20fa34320a472deb7442f25a0c086e0feb33 mistakenly replaced a safe
    iterator with an unsafe one when making some macro changes.

    Signed-off-by: Jonathan Brassow <jbrassow>
    Signed-off-by: NeilBrown <neilb>
```
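
For context, the failure mode the commit describes is the classic one of freeing the current element while walking a linked list with an iterator that reads the next pointer only after the loop body has run. Below is a minimal, self-contained userspace sketch (not the kernel code; the list, the macros, and the `present` flag are illustrative stand-ins for `rdev_for_each`/`rdev_for_each_safe` walking `mddev->disks`) showing why the "safe" form, which caches the next element before the body executes, is required when the body may remove the current entry:

```c
/* Sketch only: demonstrates the unsafe vs. safe iteration pattern behind
 * the dm-raid fix.  All names here are illustrative, not kernel APIs. */
#include <stdio.h>
#include <stdlib.h>

struct rdev {              /* stand-in for an array member on mddev->disks */
    int present;           /* 0 = backing device is missing */
    struct rdev *next;
};

/* UNSAFE: evaluates cur->next after the body, which may have freed cur. */
#define for_each_rdev(cur, head) \
    for ((cur) = (head); (cur); (cur) = (cur)->next)

/* SAFE: remembers the next element before the body runs. */
#define for_each_rdev_safe(cur, tmp, head) \
    for ((cur) = (head), (tmp) = (cur) ? (cur)->next : NULL; \
         (cur); (cur) = (tmp), (tmp) = (cur) ? (cur)->next : NULL)

static struct rdev *push(struct rdev *head, int present)
{
    struct rdev *r = malloc(sizeof(*r));
    r->present = present;
    r->next = head;
    return r;
}

int main(void)
{
    struct rdev *cur, *tmp, **link;
    /* Three "devices"; the middle one is missing, as in a partial array. */
    struct rdev *head = push(push(push(NULL, 1), 0), 1);

    /* Drop entries whose device is missing.  Doing this inside
     * for_each_rdev() would dereference freed memory in the increment
     * step; the safe variant is fine because tmp was captured first.
     * (The linear unlink search is just to keep the sketch short.) */
    for_each_rdev_safe(cur, tmp, head) {
        if (!cur->present) {
            for (link = &head; *link != cur; link = &(*link)->next)
                ;
            *link = cur->next;
            free(cur);
        }
    }

    for_each_rdev(cur, head)
        printf("rdev present=%d\n", cur->present);
    return 0;
}
```

With the unsafe macro, the increment expression touches the just-freed node; in kernel context that use-after-free is what surfaces as the panic reported here when dm-raid discards rdevs for the missing devices of a partial RAID LV.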
FWIW, this can be reproduced with the following:

```
./raid_sanity -t raid10 -e vgcfgrestore_raid_with_missing_pv
```

Patch(es) available on kernel-2.6.32-339.el6.

This has been verified fixed in the latest kernel (2.6.32-339.el6.x86_64).

```
SCENARIO (raid10) - [vgcfgrestore_raid_with_missing_pv]
Create a raid, force remove a leg, and then restore its VG
taft-01: lvcreate --type raid10 -i 2 -n missing_pv_raid -L 100M --nosync raid_sanity
  WARNING: New raid10 won't be synchronised. Don't read what you didn't write!
Deactivating missing_pv_raid raid
Backup the VG config
taft-01 vgcfgbackup -f /tmp/raid_sanity.bkup.6320 raid_sanity
Force removing PV /dev/sdc2 (used in this raid)
taft-01: 'echo y | pvremove -ff /dev/sdc2'
Really WIPE LABELS from physical volume "/dev/sdc2" of volume group "raid_sanity" [y/n]?
  WARNING: Wiping physical volume label from /dev/sdc2 of volume group "raid_sanity"
Verifying that this VG is now corrupt
  No physical volume label read from /dev/sdc2
  Failed to read physical volume "/dev/sdc2"
Attempt to restore the VG back to its original state (should not segfault)
taft-01 vgcfgrestore -f /tmp/raid_sanity.bkup.6320 raid_sanity
  Couldn't find device with uuid yRBOXP-7IVO-3yeH-dvtr-wG8H-KZJo-Ah4yRs.
  Cannot restore Volume Group raid_sanity with 1 PVs marked as missing.
  Restore failed.
Checking syslog to see if vgcfgrestore segfaulted
Activating VG in partial readonly mode
taft-01 vgchange -ay --partial raid_sanity
  PARTIAL MODE. Incomplete logical volumes will be processed.
  Couldn't find device with uuid yRBOXP-7IVO-3yeH-dvtr-wG8H-KZJo-Ah4yRs.
Recreating PV using its old uuid
taft-01 pvcreate --norestorefile --uuid "yRBOXP-7IVO-3yeH-dvtr-wG8H-KZJo-Ah4yRs" /dev/sdc2
Restoring the VG back to its original state
taft-01 vgcfgrestore -f /tmp/raid_sanity.bkup.6320 raid_sanity
Reactivating VG
Deactivating raid missing_pv_raid... and removing
```

*** Bug 867644 has been marked as a duplicate of this bug. ***

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-0496.html