Bug 1030130

Summary: failing multiple raid1 volumes can lead to repair failures and _extracted images
Product: Red Hat Enterprise Linux 6
Component: lvm2
Version: 6.5
Hardware: x86_64
OS: Linux
Status: CLOSED WORKSFORME
Severity: high
Priority: unspecified
Target Milestone: rc
Target Release: ---
Reporter: Corey Marthaler <cmarthal>
Assignee: Heinz Mauelshagen <heinzm>
QA Contact: cluster-qe <cluster-qe>
CC: agk, dwysocha, heinzm, jbrassow, msnitzer, prajnoha, prockai, thornber, zkabelac
Type: Bug
Doc Type: Bug Fix
Clones: 1073666 (view as bug list)
Bug Blocks: 1073666
Last Closed: 2016-03-01 16:40:02 UTC

Description Corey Marthaler 2013-11-14 00:43:25 UTC
Description of problem:
================================================================================
Iteration 0.1 started at Wed Nov 13 16:47:11 CST 2013
================================================================================
Scenario kill_multiple_synced_raid1_4legs: Kill multiple legs of synced 4 leg raid1 volume(s)

********* RAID hash info for this scenario *********
* names:              synced_multiple_raid1_4legs_1
* sync:               1
* type:               raid1
* -m |-i value:       4
* leg devices:        /dev/sdf1 /dev/sda1 /dev/sdc1 /dev/sde1 /dev/sdh1
* failpv(s):          /dev/sde1 /dev/sdc1
* failnode(s):        virt-004.cluster-qe.lab.eng.brq.redhat.com
* additional snap:    /dev/sdf1
* lvmetad:             0
* raid fault policy:   allocate
******************************************************
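
The "raid fault policy: allocate" line above maps to the raid_fault_policy setting in lvm.conf; a minimal sketch of the relevant stanza (the value shown is simply the one this scenario uses):

# /etc/lvm/lvm.conf
activation {
    # "allocate" lets dmeventd replace a failed raid image from spare PVs
    # in the VG; "warn" would only log the failure and leave the LV degraded.
    raid_fault_policy = "allocate"
}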

Creating raids(s) on virt-004.cluster-qe.lab.eng.brq.redhat.com...
virt-004.cluster-qe.lab.eng.brq.redhat.com: lvcreate --type raid1 -m 4 -n synced_multiple_raid1_4legs_1 -L 500M black_bird /dev/sdf1:0-2000 /dev/sda1:0-2000 /dev/sdc1:0-2000 /dev/sde1:0-2000 /dev/sdh1:0-2000

Current mirror/raid device structure(s):
  LV                                       Attr       LSize   Cpy%Sync Devices
  synced_multiple_raid1_4legs_1            rwi-a-r--- 500.00m     0.00 synced_multiple_raid1_4legs_1_rimage_0(0),synced_multiple_raid1_4legs_1_rimage_1(0),synced_multiple_raid1_4legs_1_rimage_2(0),synced_multiple_raid1_4legs_1_rimage_3(0),synced_multiple_raid1_4legs_1_rimage_4(0)
  [synced_multiple_raid1_4legs_1_rimage_0] Iwi-aor--- 500.00m          /dev/sdf1(1)
  [synced_multiple_raid1_4legs_1_rimage_1] Iwi-aor--- 500.00m          /dev/sda1(1)
  [synced_multiple_raid1_4legs_1_rimage_2] Iwi-aor--- 500.00m          /dev/sdc1(1)
  [synced_multiple_raid1_4legs_1_rimage_3] Iwi-aor--- 500.00m          /dev/sde1(1)
  [synced_multiple_raid1_4legs_1_rimage_4] Iwi-aor--- 500.00m          /dev/sdh1(1)
  [synced_multiple_raid1_4legs_1_rmeta_0]  ewi-aor---   4.00m          /dev/sdf1(0)
  [synced_multiple_raid1_4legs_1_rmeta_1]  ewi-aor---   4.00m          /dev/sda1(0)
  [synced_multiple_raid1_4legs_1_rmeta_2]  ewi-aor---   4.00m          /dev/sdc1(0)
  [synced_multiple_raid1_4legs_1_rmeta_3]  ewi-aor---   4.00m          /dev/sde1(0)
  [synced_multiple_raid1_4legs_1_rmeta_4]  ewi-aor---   4.00m          /dev/sdh1(0)
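
The structure listing above is the sort of output lvs gives when asked for the hidden sub-LVs and their backing devices; a sketch of an equivalent query (the exact column selection used by the harness is an assumption):

# Show the raid1 LV plus its hidden _rimage_N/_rmeta_N sub-LVs and their PVs
lvs -a -o lv_name,lv_attr,lv_size,copy_percent,devices black_bird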

/dev/sda1 IS in the mirror
/dev/sdb1 is NOT in the mirror
/dev/sdc1 IS in the mirror
/dev/sde1 IS in the mirror
/dev/sdf1 IS in the mirror
/dev/sdg1 is NOT in the mirror
/dev/sdh1 IS in the mirror
AVAIL:2 - NEEDED:2

Waiting until all mirror|raid volumes become fully syncd...
   1/1 mirror(s) are fully synced: ( 100.00% )

Creating ext on top of mirror(s) on virt-004.cluster-qe.lab.eng.brq.redhat.com...
mke2fs 1.41.12 (17-May-2010)
Mounting mirrored ext filesystems on virt-004.cluster-qe.lab.eng.brq.redhat.com...
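
The filesystem step itself is not logged beyond the mke2fs banner; a plausible sketch, with the filesystem type and mount point being assumptions:

# Put an ext filesystem on the raid1 LV and mount it for the checkit I/O
# (mount point name is hypothetical)
mkfs.ext4 /dev/black_bird/synced_multiple_raid1_4legs_1
mkdir -p /mnt/synced_multiple_raid1_4legs_1
mount /dev/black_bird/synced_multiple_raid1_4legs_1 /mnt/synced_multiple_raid1_4legs_1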

PV=/dev/sde1
        synced_multiple_raid1_4legs_1_rimage_3: 1.0
        synced_multiple_raid1_4legs_1_rmeta_3: 1.0
PV=/dev/sdc1
        synced_multiple_raid1_4legs_1_rimage_2: 1.0
        synced_multiple_raid1_4legs_1_rmeta_2: 1.0

Creating a snapshot volume of each of the raids
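
The snapshot step corresponds to an lvcreate -s call; a sketch consistent with the bb_snap1 volume that shows up in the later listing (252 MiB on /dev/sdf1), with the exact options being an assumption:

# Snapshot of the raid1 origin, placed on the "additional snap" PV
lvcreate -s -n bb_snap1 -L 252M black_bird/synced_multiple_raid1_4legs_1 /dev/sdf1
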
Writing verification files (checkit) to mirror(s) on...
        ---- virt-004.cluster-qe.lab.eng.brq.redhat.com ----

Sleeping 15 seconds to get some outstanding EXT I/O locks before the failure
Verifying files (checkit) on mirror(s) on...
        ---- virt-004.cluster-qe.lab.eng.brq.redhat.com ----

Disabling device sde on virt-004.cluster-qe.lab.eng.brq.redhat.com
Disabling device sdc on virt-004.cluster-qe.lab.eng.brq.redhat.com
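
How the harness disables the devices is not shown in the log; a common way to simulate this kind of failure (an assumption about the harness, not taken from the output) is to offline the SCSI devices through sysfs:

# Simulate disk failures by taking the block devices away from the kernel
echo offline > /sys/block/sde/device/state
echo offline > /sys/block/sdc/device/state
# (alternatively: echo 1 > /sys/block/sdX/device/delete)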

Getting recovery check start time from /var/log/messages: Nov 13 23:48
Attempting I/O to cause mirror down conversion(s) on virt-004.cluster-qe.lab.eng.brq.redhat.com
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.459083 s, 91.4 MB/s
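
The dd figures (10 records, 41943040 bytes = 10 x 4 MiB) are consistent with a command along these lines; the target path and flags are assumptions:

# Push writes through the degraded raid1 so dmeventd notices the failed legs
# (file path is hypothetical)
dd if=/dev/zero of=/mnt/synced_multiple_raid1_4legs_1/ddfile bs=4M count=10 oflag=sync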

Verifying current sanity of lvm after the failure

Current mirror/raid device structure(s):
  Couldn't find device with uuid dq67H3-2s2K-O2Xl-0Hfv-euYB-zcWN-mfyXas.
  Couldn't find device with uuid PShnmk-v4DV-OgUy-qota-AsxV-WMsp-Kca2Iy.
  LV                                               Attr       LSize   Cpy%Sync Devices
  bb_snap1                                         swi-a-s--- 252.00m          /dev/sdf1(126)
  synced_multiple_raid1_4legs_1                    owi-aor--- 500.00m   100.00 synced_multiple_raid1_4legs_1_rimage_0(0),synced_multiple_raid1_4legs_1_rimage_1(0),synced_multiple_raid1_4legs_1_rimage_2(0),synced_multiple_raid1_4legs_1_rimage_5(0),synced_multiple_raid1_4legs_1_rimage_4(0)
  [synced_multiple_raid1_4legs_1_rimage_0]         iwi-aor--- 500.00m          /dev/sdf1(1)
  [synced_multiple_raid1_4legs_1_rimage_1]         iwi-aor--- 500.00m          /dev/sda1(1)
  [synced_multiple_raid1_4legs_1_rimage_2]         iwi-aor--- 500.00m          /dev/sdg1(1)
  synced_multiple_raid1_4legs_1_rimage_3_extracted -wi-----p- 500.00m          unknown device(1)
  [synced_multiple_raid1_4legs_1_rimage_4]         iwi-aor--- 500.00m          /dev/sdh1(1)
  [synced_multiple_raid1_4legs_1_rimage_5]         iwi-aor--- 500.00m          /dev/sdb1(1)
  [synced_multiple_raid1_4legs_1_rmeta_0]          ewi-aor---   4.00m          /dev/sdf1(0)
  [synced_multiple_raid1_4legs_1_rmeta_1]          ewi-aor---   4.00m          /dev/sda1(0)
  [synced_multiple_raid1_4legs_1_rmeta_2]          ewi-aor---   4.00m          /dev/sdg1(0)
  synced_multiple_raid1_4legs_1_rmeta_3_extracted  -wi-----p-   4.00m          unknown device(0)
  [synced_multiple_raid1_4legs_1_rmeta_4]          ewi-aor---   4.00m          /dev/sdh1(0)
  [synced_multiple_raid1_4legs_1_rmeta_5]          ewi-aor---   4.00m          /dev/sdb1(0)

Verifying FAILED device /dev/sde1 is *NOT* in the volume(s)
Verifying FAILED device /dev/sdc1 is *NOT* in the volume(s)
Verifying IMAGE device /dev/sdf1 *IS* in the volume(s)
Verifying IMAGE device /dev/sda1 *IS* in the volume(s)
Verifying IMAGE device /dev/sdh1 *IS* in the volume(s)
verify the rimage/rmeta dm devices remain after the failures

Checking EXISTENCE and STATE of synced_multiple_raid1_4legs_1_rimage_3 on: virt-004.cluster-qe.lab.eng.brq.redhat.com 
synced_multiple_raid1_4legs_1_rimage_3 on virt-004.cluster-qe.lab.eng.brq.redhat.com should still exist
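
When the automatic repair only partially succeeds like this, the failed legs are renamed to *_extracted LVs (rimage_3/rmeta_3 above) and left behind in the VG. A rough manual-cleanup sketch once the bug has been hit, using the LV names from this run (the exact sequence needed depends on the VG state):

# Drop the missing PVs from the VG metadata
vgreduce --removemissing black_bird
# Remove the leftover extracted image/metadata LVs
lvremove black_bird/synced_multiple_raid1_4legs_1_rimage_3_extracted
lvremove black_bird/synced_multiple_raid1_4legs_1_rmeta_3_extracted
# Re-run the repair to get the LV back to the requested number of images
lvconvert --repair black_bird/synced_multiple_raid1_4legs_1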


Version-Release number of selected component (if applicable):
2.6.32-425.el6.x86_64
lvm2-2.02.100-8.el6    BUILT: Wed Oct 30 09:10:56 CET 2013
lvm2-libs-2.02.100-8.el6    BUILT: Wed Oct 30 09:10:56 CET 2013
lvm2-cluster-2.02.100-8.el6    BUILT: Wed Oct 30 09:10:56 CET 2013
udev-147-2.51.el6    BUILT: Thu Oct 17 13:14:34 CEST 2013
device-mapper-1.02.79-8.el6    BUILT: Wed Oct 30 09:10:56 CET 2013
device-mapper-libs-1.02.79-8.el6    BUILT: Wed Oct 30 09:10:56 CET 2013
device-mapper-event-1.02.79-8.el6    BUILT: Wed Oct 30 09:10:56 CET 2013
device-mapper-event-libs-1.02.79-8.el6    BUILT: Wed Oct 30 09:10:56 CET 2013
device-mapper-persistent-data-0.2.8-2.el6    BUILT: Mon Oct 21 16:14:25 CEST 2013
cmirror-2.02.100-8.el6    BUILT: Wed Oct 30 09:10:56 CET 2013


How reproducible:
Often

Comment 2 RHEL Program Management 2013-11-17 01:25:16 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

Comment 6 Corey Marthaler 2016-03-01 16:40:02 UTC
The test was unable to reproduce this bug with the current rpms last night after running many iterations. Closing; will reopen if it is ever seen again.


2.6.32-616.el6.x86_64
lvm2-2.02.143-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
lvm2-libs-2.02.143-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
lvm2-cluster-2.02.143-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
udev-147-2.71.el6    BUILT: Wed Feb 10 07:07:17 CST 2016
device-mapper-1.02.117-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-libs-1.02.117-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-event-1.02.117-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-event-libs-1.02.117-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-persistent-data-0.6.2-0.1.rc5.el6    BUILT: Wed Feb 24 07:07:09 CST 2016
cmirror-2.02.143-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016