Bug 1354646
Summary: | vgreduce --removemissing of partial activated raid0 volume segfaults | |
---|---|---|---
Product: | Red Hat Enterprise Linux 7 | Reporter: | Corey Marthaler <cmarthal>
Component: | lvm2 | Assignee: | Heinz Mauelshagen <heinzm>
lvm2 sub component: | Mirroring and RAID | QA Contact: | cluster-qe <cluster-qe>
Status: | CLOSED ERRATA | Docs Contact: |
Severity: | unspecified | |
Priority: | unspecified | CC: | agk, heinzm, jbrassow, msnitzer, prajnoha, prockai, rbednar, zkabelac
Version: | 7.3 | |
Target Milestone: | rc | |
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | lvm2-2.02.161-1.el7 | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2016-11-04 04:15:31 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Corey Marthaler
2016-07-11 20:33:16 UTC
The check for the MetaLV was missing in raid_manip, so a reference to it in a log_debug() call caused the segfault. Fix verified in the latest rpms.

3.10.0-480.el7.x86_64
lvm2-2.02.161-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
lvm2-libs-2.02.161-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
lvm2-cluster-2.02.161-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-1.02.131-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-libs-1.02.131-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-event-1.02.131-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-event-libs-1.02.131-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016
cmirror-2.02.161-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
sanlock-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
sanlock-lib-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
lvm2-lockd-2.02.161-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016

============================================================
Iteration 10 of 10 started at Thu Jul 28 16:18:27 CDT 2016
============================================================
SCENARIO (raid1) - [partial_raid_activation_replace_missing_segment]
Create a raid, corrupt an image, and then reactivate it partially with an error dm target
Recreating PVs/VG with smaller sizes
pvcreate --setphysicalvolumesize 200M /dev/sdb1 /dev/sdb2 /dev/sdd1 /dev/sdd2 /dev/sdf1
vgcreate raid_sanity /dev/sdb1 /dev/sdb2 /dev/sdd1 /dev/sdd2 /dev/sdf1
host-078: lvcreate --type raid1 -m 1 -n partial_activation -L 188M raid_sanity /dev/sdb1 /dev/sdb2
Waiting until all mirror|raid volumes become fully syncd...
1/1 mirror(s) are fully synced: ( 100.00% )
Sleeping 15 sec
Deactivating volume group
vgchange -an raid_sanity
host-078: dd if=/dev/zero of=/dev/sdb1 bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0181949 s, 57.6 MB/s
pvscan --cache /dev/sdb1
Verify there's an unknown device where the corrupt PV used to be
WARNING: Device for PV 1z5Jta-Edi9-wYia-9k7R-h7HE-CVUG-4qoa0O not found or rejected by a filter.
Activating VG in partial readonly mode
vgchange -ay --partial raid_sanity
PARTIAL MODE. Incomplete logical volumes will be processed.
WARNING: Device for PV 1z5Jta-Edi9-wYia-9k7R-h7HE-CVUG-4qoa0O not found or rejected by a filter.
Verify an error target now exists for the corrupted image
Restoring VG to default extent size
vgreduce --removemissing --force raid_sanity
WARNING: Device for PV 1z5Jta-Edi9-wYia-9k7R-h7HE-CVUG-4qoa0O not found or rejected by a filter.
Remove -missing_0_0 images
perform raid scrubbing (lvchange --syncaction repair) on raid raid_sanity/partial_activation
Waiting until all mirror|raid volumes become fully syncd...
1/1 mirror(s) are fully synced: ( 100.00% )
Sleeping 15 sec
Deactivating raid partial_activation... and removing
Restoring VG back to default parameters
vgremove --yes raid_sanity
pvremove --yes /dev/sdb2 /dev/sdd1 /dev/sdd2 /dev/sdf1
pvcreate /dev/sdb1 /dev/sdb2 /dev/sdd1 /dev/sdd2 /dev/sdf1 /dev/sdf2 /dev/sdg1 /dev/sdg2 /dev/sdh1 /dev/sdh2
vgcreate raid_sanity /dev/sdb1 /dev/sdb2 /dev/sdd1 /dev/sdd2 /dev/sdf1 /dev/sdf2 /dev/sdg1 /dev/sdg2 /dev/sdh1 /dev/sdh2

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1445.html