Bug 1281922 - multiple leg failures to raid volumes can cause '__extracted' images
Status: ASSIGNED
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.2
Platform: x86_64 Linux
Severity: high
Target Milestone: rc
Assigned To: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
Depends On:
Blocks: 1403540

Reported: 2015-11-13 15:07 EST by Corey Marthaler
Modified: 2017-11-01 20:03 EDT
CC: 10 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1403540
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Corey Marthaler 2015-11-13 15:07:45 EST
Description of problem:

[...]
================================================================================
Iteration 0.36 started at Thu Nov 12 14:59:14 CST 2015
================================================================================
Scenario kill_two_synced_raid10_3legs: Kill two legs (none of which share the same stripe leg) of synced 3 leg raid10 volume(s)

********* RAID hash info for this scenario *********
* names:              synced_two_raid10_3legs_1
* sync:               1
* type:               raid10
* -m |-i value:       3
* leg devices:        /dev/sda1 /dev/sde1 /dev/sdf1 /dev/sdd1 /dev/sdb1 /dev/sdg1
* spanned legs:        0
* failpv(s):          /dev/sda1 /dev/sdf1
* additional snap:    /dev/sde1
* failnode(s):        host-112.virt.lab.msp.redhat.com
* lvmetad:            0
* raid fault policy:  allocate
******************************************************
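For context, the "raid fault policy: allocate" line above refers to dmeventd's automatic RAID repair policy, set in /etc/lvm/lvm.conf. A minimal excerpt (semantics per lvm.conf(5)):

```
activation {
	# How dmeventd reacts to a failed RAID leg:
	#   "warn"     - log the failure only; leave repair to the admin
	#   "allocate" - try to replace the failed image with free space in the VG
	raid_fault_policy = "allocate"
}
```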

Creating raid(s) on host-112.virt.lab.msp.redhat.com...
host-112.virt.lab.msp.redhat.com: lvcreate  --type raid10 -i 3 -n synced_two_raid10_3legs_1 -L 500M black_bird /dev/sda1:0-2400 /dev/sde1:0-2400 /dev/sdf1:0-2400 /dev/sdd1:0-2400 /dev/sdb1:0-2400 /dev/sdg1:0-2400

Current mirror/raid device structure(s):
  LV                                   Attr       LSize   Cpy%Sync Devices
  synced_two_raid10_3legs_1            rwi-a-r--- 504.00m 100.00   synced_two_raid10_3legs_1_rimage_0(0),synced_two_raid10_3legs_1_rimage_1(0),synced_two_raid10_3legs_1_rimage_2(0),synced_two_raid10_3legs_1_rimage_3(0),synced_two_raid10_3legs_1_rimage_4(0),synced_two_raid10_3legs_1_rimage_5(0)
  [synced_two_raid10_3legs_1_rimage_0] iwi-aor--- 168.00m          /dev/sda1(1)
  [synced_two_raid10_3legs_1_rimage_1] iwi-aor--- 168.00m          /dev/sde1(1)
  [synced_two_raid10_3legs_1_rimage_2] iwi-aor--- 168.00m          /dev/sdf1(1)
  [synced_two_raid10_3legs_1_rimage_3] iwi-aor--- 168.00m          /dev/sdd1(1)
  [synced_two_raid10_3legs_1_rimage_4] iwi-aor--- 168.00m          /dev/sdb1(1)
  [synced_two_raid10_3legs_1_rimage_5] iwi-aor--- 168.00m          /dev/sdg1(1)
  [synced_two_raid10_3legs_1_rmeta_0]  ewi-aor---   4.00m          /dev/sda1(0)
  [synced_two_raid10_3legs_1_rmeta_1]  ewi-aor---   4.00m          /dev/sde1(0)
  [synced_two_raid10_3legs_1_rmeta_2]  ewi-aor---   4.00m          /dev/sdf1(0)
  [synced_two_raid10_3legs_1_rmeta_3]  ewi-aor---   4.00m          /dev/sdd1(0)
  [synced_two_raid10_3legs_1_rmeta_4]  ewi-aor---   4.00m          /dev/sdb1(0)
  [synced_two_raid10_3legs_1_rmeta_5]  ewi-aor---   4.00m          /dev/sdg1(0)

* NOTE: not enough available devices for allocation fault policies to fully work *
(well technically, since we have 1, some allocation should work)

Waiting until all mirror|raid volumes become fully synced...
   1/1 mirror(s) are fully synced: ( 100.00% )
Sleeping 15 sec
 
Creating ext on top of mirror(s) on host-112.virt.lab.msp.redhat.com...
mke2fs 1.42.9 (28-Dec-2013)
Mounting mirrored ext filesystems on host-112.virt.lab.msp.redhat.com...

PV=/dev/sdf1
     synced_two_raid10_3legs_1_rimage_2: 1.P
     synced_two_raid10_3legs_1_rmeta_2: 1.P
PV=/dev/sda1
     synced_two_raid10_3legs_1_rimage_0: 1.P
     synced_two_raid10_3legs_1_rmeta_0: 1.P

Creating a snapshot volume of each of the raids
Writing verification files (checkit) to mirror(s) on...
     ---- host-112.virt.lab.msp.redhat.com ----

<start name="host-112.virt.lab.msp.redhat.com_synced_two_raid10_3legs_1" pid="3253" time="Thu Nov 12 14:59:57 2015" type="cmd" />
Sleeping 15 seconds to get some outstanding I/O locks before the failure
Verifying files (checkit) on mirror(s) on...
     ---- host-112.virt.lab.msp.redhat.com ----

Disabling device sda on host-112.virt.lab.msp.redhat.com
Disabling device sdf on host-112.virt.lab.msp.redhat.com
 
Getting recovery check start time from /var/log/messages: Nov 12 14:48
Attempting I/O to cause mirror down conversion(s) on host-112.virt.lab.msp.redhat.com
dd if=/dev/zero of=/mnt/synced_two_raid10_3legs_1/ddfile count=10 bs=4M
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.0892571 s, 470 MB/s

Verifying current sanity of lvm after the failure

Current mirror/raid device structure(s):
  Couldn't find device with uuid AL67Ze-viXZ-Va60-YiZu-M4CX-a2x6-W6cfvk.
  Couldn't find device with uuid HV2T11-2mnS-49DZ-8YQd-dodY-H7pe-usByHS.
  LV                                            Attr       LSize   Cpy%Sync Devices
  bb_snap1                                      swi-a-s--- 252.00m          /dev/sde1(43)
  synced_two_raid10_3legs_1                     owi-aor-p- 504.00m 100.00   synced_two_raid10_3legs_1_rimage_6(0),synced_two_raid10_3legs_1_rimage_1(0),synced_two_raid10_3legs_1_rimage_2(0),synced_two_raid10_3legs_1_rimage_3(0),synced_two_raid10_3legs_1_rimage_4(0),synced_two_raid10_3legs_1_rimage_5(0)
  synced_two_raid10_3legs_1_rimage_0__extracted -wi-----p- 168.00m          unknown device(1)
  [synced_two_raid10_3legs_1_rimage_1]          iwi-aor--- 168.00m          /dev/sde1(1)
  [synced_two_raid10_3legs_1_rimage_2]          iwi-a-r-p- 168.00m          unknown device(1)
  [synced_two_raid10_3legs_1_rimage_3]          iwi-aor--- 168.00m          /dev/sdd1(1)
  [synced_two_raid10_3legs_1_rimage_4]          iwi-aor--- 168.00m          /dev/sdb1(1)
  [synced_two_raid10_3legs_1_rimage_5]          iwi-aor--- 168.00m          /dev/sdg1(1)
  [synced_two_raid10_3legs_1_rimage_6]          iwi-aor--- 504.00m          /dev/sdc1(1)
  synced_two_raid10_3legs_1_rmeta_0__extracted  -wi-----p-   4.00m          unknown device(0)
  [synced_two_raid10_3legs_1_rmeta_1]           ewi-aor---   4.00m          /dev/sde1(0)
  [synced_two_raid10_3legs_1_rmeta_2]           ewi-a-r-p-   4.00m          unknown device(0)
  [synced_two_raid10_3legs_1_rmeta_3]           ewi-aor---   4.00m          /dev/sdd1(0)
  [synced_two_raid10_3legs_1_rmeta_4]           ewi-aor---   4.00m          /dev/sdb1(0)
  [synced_two_raid10_3legs_1_rmeta_5]           ewi-aor---   4.00m          /dev/sdg1(0)
  [synced_two_raid10_3legs_1_rmeta_6]           ewi-aor---   4.00m          /dev/sdc1(0)

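The structure above shows the symptom: `synced_two_raid10_3legs_1_rimage_0__extracted` and `synced_two_raid10_3legs_1_rmeta_0__extracted` left behind as top-level LVs. A hypothetical filter for spotting such leftovers is sketched below; on a real system the input would come from `lvs -a -o lv_name,lv_attr --noheadings`, but here a captured sample is fed in so the filter itself can be shown:

```shell
#!/bin/sh
# Hypothetical helper: scan lvs output for leftover '__extracted' images.
# Real input:  lvs -a -o lv_name,lv_attr --noheadings
# Sample input (from the structure above) used here for illustration:
lvs_output='
  synced_two_raid10_3legs_1_rimage_0__extracted -wi-----p-
  [synced_two_raid10_3legs_1_rimage_1]          iwi-aor---
  synced_two_raid10_3legs_1_rmeta_0__extracted  -wi-----p-
'
# Print only the names of extracted images (first whitespace-separated field)
printf '%s\n' "$lvs_output" | awk '/__extracted/ { print $1 }'
```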
 
Verifying FAILED device /dev/sda1 is *NOT* in the volume(s)
Verifying FAILED device /dev/sdf1 is *NOT* in the volume(s)
Verifying IMAGE device /dev/sde1 *IS* in the volume(s)
Verifying IMAGE device /dev/sdd1 *IS* in the volume(s)
Verifying IMAGE device /dev/sdb1 *IS* in the volume(s)
Verifying IMAGE device /dev/sdg1 *IS* in the volume(s)

Verify the rimage/rmeta dm devices remain after the failures
Checking EXISTENCE and STATE of synced_two_raid10_3legs_1_rimage_2 on: host-112.virt.lab.msp.redhat.com 
Checking EXISTENCE and STATE of synced_two_raid10_3legs_1_rmeta_2 on: host-112.virt.lab.msp.redhat.com 
Checking EXISTENCE and STATE of synced_two_raid10_3legs_1_rimage_0 on: host-112.virt.lab.msp.redhat.com 

synced_two_raid10_3legs_1_rimage_0 on host-112.virt.lab.msp.redhat.com should still exist

Version-Release number of selected component (if applicable):
3.10.0-327.el7.x86_64
lvm2-2.02.130-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
lvm2-libs-2.02.130-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
lvm2-cluster-2.02.130-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-1.02.107-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-libs-1.02.107-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-event-1.02.107-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-event-libs-1.02.107-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-persistent-data-0.5.5-1.el7    BUILT: Thu Aug 13 09:58:10 CDT 2015
cmirror-2.02.130-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015


How reproducible:
A few times now
Comment 1 Corey Marthaler 2015-11-13 15:14:56 EST
Another one...

================================================================================
Iteration 0.7 started at Fri Nov 13 10:36:10 CST 2015
================================================================================
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
Scenario kill_three_synced_raid10_3legs: Kill three legs (none of which share the same stripe leg) of synced 3 leg raid10 volume(s)

********* RAID hash info for this scenario *********
* names:              synced_three_raid10_3legs_1
* sync:               1
* type:               raid10
* -m |-i value:       3
* leg devices:        /dev/sdi1 /dev/sde1 /dev/sdc1 /dev/sdg1 /dev/sdd1 /dev/sda1
* spanned legs:        0
* failpv(s):          /dev/sdi1 /dev/sdc1 /dev/sdd1
* failnode(s):        host-111.virt.lab.msp.redhat.com
* lvmetad:            1
* raid fault policy:  allocate
******************************************************

Creating raid(s) on host-111.virt.lab.msp.redhat.com...
host-111.virt.lab.msp.redhat.com: lvcreate  --type raid10 -i 3 -n synced_three_raid10_3legs_1 -L 500M black_bird /dev/sdi1:0-2400 /dev/sde1:0-2400 /dev/sdc1:0-2400 /dev/sdg1:0-2400 /dev/sdd1:0-2400 /dev/sda1:0-2400
 
Current mirror/raid device structure(s):
 WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
 LV                                     Attr       LSize   Cpy%Sync Devices
 synced_three_raid10_3legs_1            rwi-a-r--- 504.00m 100.00   synced_three_raid10_3legs_1_rimage_0(0),synced_three_raid10_3legs_1_rimage_1(0),synced_three_raid10_3legs_1_rimage_2(0),synced_three_raid10_3legs_1_rimage_3(0),synced_three_raid10_3legs_1_rimage_4(0),synced_three_raid10_3legs_1_rimage_5(0)
 [synced_three_raid10_3legs_1_rimage_0] iwi-aor--- 168.00m          /dev/sdi1(1)
 [synced_three_raid10_3legs_1_rimage_1] iwi-aor--- 168.00m          /dev/sde1(1)
 [synced_three_raid10_3legs_1_rimage_2] iwi-aor--- 168.00m          /dev/sdc1(1)
 [synced_three_raid10_3legs_1_rimage_3] iwi-aor--- 168.00m          /dev/sdg1(1)
 [synced_three_raid10_3legs_1_rimage_4] iwi-aor--- 168.00m          /dev/sdd1(1)
 [synced_three_raid10_3legs_1_rimage_5] iwi-aor--- 168.00m          /dev/sda1(1)
 [synced_three_raid10_3legs_1_rmeta_0]  ewi-aor---   4.00m          /dev/sdi1(0)
 [synced_three_raid10_3legs_1_rmeta_1]  ewi-aor---   4.00m          /dev/sde1(0)
 [synced_three_raid10_3legs_1_rmeta_2]  ewi-aor---   4.00m          /dev/sdc1(0)
 [synced_three_raid10_3legs_1_rmeta_3]  ewi-aor---   4.00m          /dev/sdg1(0)
 [synced_three_raid10_3legs_1_rmeta_4]  ewi-aor---   4.00m          /dev/sdd1(0)
 [synced_three_raid10_3legs_1_rmeta_5]  ewi-aor---   4.00m          /dev/sda1(0)

 * NOTE: not enough available devices for allocation fault policies to fully work *
 (well technically, since we have 1, some allocation should work)
 
 Waiting until all mirror|raid volumes become fully synced...
    1/1 mirror(s) are fully synced: ( 100.00% )
 Sleeping 15 sec
 
 Creating ext on top of mirror(s) on host-111.virt.lab.msp.redhat.com...
 mke2fs 1.42.9 (28-Dec-2013)
 Mounting mirrored ext filesystems on host-111.virt.lab.msp.redhat.com...

 PV=/dev/sdc1
    synced_three_raid10_3legs_1_rimage_2: 1.P
    synced_three_raid10_3legs_1_rmeta_2: 1.P
 PV=/dev/sdi1
    synced_three_raid10_3legs_1_rimage_0: 1.P
    synced_three_raid10_3legs_1_rmeta_0: 1.P
 PV=/dev/sdd1
    synced_three_raid10_3legs_1_rimage_4: 1.P
    synced_three_raid10_3legs_1_rmeta_4: 1.P
 
 Writing verification files (checkit) to mirror(s) on...
    ---- host-111.virt.lab.msp.redhat.com ----
 
 <start name="host-111.virt.lab.msp.redhat.com_synced_three_raid10_3legs_1" pid="10054" time="Fri Nov 13 10:36:57 2015" type="cmd" />
 Sleeping 15 seconds to get some outstanding I/O locks before the failure
 Verifying files (checkit) on mirror(s) on...
      ---- host-111.virt.lab.msp.redhat.com ----
 
 Disabling device sdi on host-111.virt.lab.msp.redhat.com, rescan device...
 Disabling device sdc on host-111.virt.lab.msp.redhat.com, rescan device...
 Disabling device sdd on host-111.virt.lab.msp.redhat.com
 
 Getting recovery check start time from /var/log/messages: Nov 13 10:37
 Attempting I/O to cause mirror down conversion(s) on host-111.virt.lab.msp.redhat.com
 dd if=/dev/zero of=/mnt/synced_three_raid10_3legs_1/ddfile count=10 bs=4M
 10+0 records in
 10+0 records out
 41943040 bytes (42 MB) copied, 0.0890728 s, 471 MB/s
 
 Verifying current sanity of lvm after the failure
 
 Current mirror/raid device structure(s):
 WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
 Couldn't find device with uuid oIT8zt-ddrl-wD03-9G1X-uGwf-WsAD-XGzClZ.
 Couldn't find device with uuid unfIvc-DqM6-TZn2-b1wd-B1Mv-KtUW-n9tcVL.
 Couldn't find device with uuid 69LtIe-OqMc-qlTr-bYUr-0dFc-zc9W-AqTFlg.
 LV                                              Attr       LSize   Cpy%Sync Devices
  synced_three_raid10_3legs_1                     rwi-aor-p- 504.00m 100.00   synced_three_raid10_3legs_1_rimage_0(0),synced_three_raid10_3legs_1_rimage_1(0),synced_three_raid10_3legs_1_rimage_6(0),synced_three_raid10_3legs_1_rimage_3(0),synced_three_raid10_3legs_1_rimage_4(0),synced_three_raid10_3legs_1_rimage_5(0)
  [synced_three_raid10_3legs_1_rimage_0]          iwi-a-r-p- 168.00m          unknown device(1)
  [synced_three_raid10_3legs_1_rimage_1]          iwi-aor--- 168.00m          /dev/sde1(1)
  synced_three_raid10_3legs_1_rimage_2__extracted -wi-----p- 168.00m          unknown device(1)
  [synced_three_raid10_3legs_1_rimage_3]          iwi-aor--- 168.00m          /dev/sdg1(1)
  [synced_three_raid10_3legs_1_rimage_4]          iwi-a-r-p- 168.00m          unknown device(1)
  [synced_three_raid10_3legs_1_rimage_5]          iwi-aor--- 168.00m          /dev/sda1(1)
  [synced_three_raid10_3legs_1_rimage_6]          iwi-aor--- 504.00m          /dev/sdh1(1)
  [synced_three_raid10_3legs_1_rmeta_0]           ewi-a-r-p-   4.00m          unknown device(0)
  [synced_three_raid10_3legs_1_rmeta_1]           ewi-aor---   4.00m          /dev/sde1(0)
  synced_three_raid10_3legs_1_rmeta_2__extracted  -wi-----p-   4.00m          unknown device(0)
  [synced_three_raid10_3legs_1_rmeta_3]           ewi-aor---   4.00m          /dev/sdg1(0)
  [synced_three_raid10_3legs_1_rmeta_4]           ewi-a-r-p-   4.00m          unknown device(0)
  [synced_three_raid10_3legs_1_rmeta_5]           ewi-aor---   4.00m          /dev/sda1(0)
  [synced_three_raid10_3legs_1_rmeta_6]           ewi-aor---   4.00m          /dev/sdh1(0)
 
 Verifying FAILED device /dev/sdi1 is *NOT* in the volume(s)
 Verifying FAILED device /dev/sdc1 is *NOT* in the volume(s)
 Verifying FAILED device /dev/sdd1 is *NOT* in the volume(s)
 Verifying IMAGE device /dev/sde1 *IS* in the volume(s)
 Verifying IMAGE device /dev/sdg1 *IS* in the volume(s)
 Verifying IMAGE device /dev/sda1 *IS* in the volume(s)
 Verify the rimage/rmeta dm devices remain after the failures
 Checking EXISTENCE and STATE of synced_three_raid10_3legs_1_rimage_2 on: host-111.virt.lab.msp.redhat.com 
 synced_three_raid10_3legs_1_rimage_2 on host-111.virt.lab.msp.redhat.com should still exist
Comment 2 Corey Marthaler 2015-11-16 10:47:02 EST
Finally reproduced this on non-raid10 volumes as well.

[root@host-111 ~]# lvs -a -o +devices
  Couldn't find device with uuid vuitVI-DqWw-A79F-6s5s-1bo2-Q67C-Z3gAnS.
  Couldn't find device with uuid piL16B-6j6c-TOBM-5GH2-VIXZ-rE5f-dm1uam.
  LV                                                VG          Attr       LSize   Origin                        Data%  Cpy%Sync Devices
  bb_snap1                                          black_bird  swi-a-s--- 252.00m synced_multiple_raid1_5legs_1 37.07           /dev/sde1(126)
  synced_multiple_raid1_5legs_1                     black_bird  owi-aor-p- 500.00m                                      100.00   synced_multiple_raid1_5legs_1_rimage_0(0),synced_multiple_raid1_5legs_1_rimage_1(0),synced_multiple_raid1_5legs_1_rimage_2(0),synced_multiple_raid1_5legs_1_rimage_3(0),synced_multiple_raid1_5legs_1_rimage_6(0),synced_multiple_raid1_5legs_1_rimage_5(0)
  [synced_multiple_raid1_5legs_1_rimage_0]          black_bird  iwi-a-r-p- 500.00m                                               unknown device(1)
  [synced_multiple_raid1_5legs_1_rimage_1]          black_bird  iwi-aor--- 500.00m                                               /dev/sde1(1)
  [synced_multiple_raid1_5legs_1_rimage_2]          black_bird  iwi-aor--- 500.00m                                               /dev/sdg1(1)
  [synced_multiple_raid1_5legs_1_rimage_3]          black_bird  iwi-aor--- 500.00m                                               /dev/sda1(1)
  synced_multiple_raid1_5legs_1_rimage_4__extracted black_bird  -wi-----p- 500.00m                                               unknown device(1)
  [synced_multiple_raid1_5legs_1_rimage_5]          black_bird  iwi-aor--- 500.00m                                               /dev/sdd1(1)
  [synced_multiple_raid1_5legs_1_rimage_6]          black_bird  iwi-aor--- 500.00m                                               /dev/sdf1(1)
  [synced_multiple_raid1_5legs_1_rmeta_0]           black_bird  ewi-a-r-p-   4.00m                                               unknown device(0)
  [synced_multiple_raid1_5legs_1_rmeta_1]           black_bird  ewi-aor---   4.00m                                               /dev/sde1(0)
  [synced_multiple_raid1_5legs_1_rmeta_2]           black_bird  ewi-aor---   4.00m                                               /dev/sdg1(0)
  [synced_multiple_raid1_5legs_1_rmeta_3]           black_bird  ewi-aor---   4.00m                                               /dev/sda1(0)
  synced_multiple_raid1_5legs_1_rmeta_4__extracted  black_bird  -wi-----p-   4.00m                                               unknown device(0)
  [synced_multiple_raid1_5legs_1_rmeta_5]           black_bird  ewi-aor---   4.00m                                               /dev/sdd1(0)
  [synced_multiple_raid1_5legs_1_rmeta_6]           black_bird  ewi-aor---   4.00m                                               /dev/sdf1(0)



3.10.0-327.el7.x86_64
lvm2-2.02.130-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
lvm2-libs-2.02.130-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
lvm2-cluster-2.02.130-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-1.02.107-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-libs-1.02.107-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-event-1.02.107-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-event-libs-1.02.107-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-persistent-data-0.5.5-1.el7    BUILT: Thu Aug 13 09:58:10 CDT 2015
cmirror-2.02.130-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
sanlock-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
sanlock-lib-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
lvm2-lockd-2.02.130-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
Comment 3 Corey Marthaler 2016-05-27 11:49:40 EDT
This may not require multiple images in one raid to fail. I am seeing this with multiple raid volumes each experiencing just one device failure.

lvm2-2.02.152-2.el7    BUILT: Thu May  5 02:33:28 CDT 2016
lvm2-libs-2.02.152-2.el7    BUILT: Thu May  5 02:33:28 CDT 2016
lvm2-cluster-2.02.152-2.el7    BUILT: Thu May  5 02:33:28 CDT 2016
device-mapper-1.02.124-2.el7    BUILT: Thu May  5 02:33:28 CDT 2016
device-mapper-libs-1.02.124-2.el7    BUILT: Thu May  5 02:33:28 CDT 2016
device-mapper-event-1.02.124-2.el7    BUILT: Thu May  5 02:33:28 CDT 2016
device-mapper-event-libs-1.02.124-2.el7    BUILT: Thu May  5 02:33:28 CDT 2016
device-mapper-persistent-data-0.6.2-0.1.rc8.el7    BUILT: Wed May  4 02:56:34 CDT 2016



ACTUAL LEG ORDER: /dev/sda1 /dev/sdb1 /dev/sdf1 /dev/sdh1 /dev/sde1 /dev/sdg1
Scenario kill_non_primary_leg: Kill non primary leg
********* Mirror info for this scenario *********
* raids            foobar_1 foobar_2
* leg devices:        /dev/sda1 /dev/sdb1 /dev/sdf1 /dev/sdh1 /dev/sde1 /dev/sdg1
* failpv(s):          /dev/sdf1
* failnode(s):        host-075
* lvmetad:            1
* leg fault policy:   allocate
*************************************************

Current mirror/raid device structure(s):
  LV                  Attr       LSize   Cpy%Sync Devices
  foobar_1            rwi-aor---   2.00g 100.00   foobar_1_rimage_0(0),foobar_1_rimage_1(0),foobar_1_rimage_2(0),foobar_1_rimage_3(0),foobar_1_rimage_4(0),foobar_1_rimage_5(0)
  [foobar_1_rimage_0] iwi-aor--- 684.00m          /dev/sda1(1)
  [foobar_1_rimage_1] iwi-aor--- 684.00m          /dev/sdb1(1)
  [foobar_1_rimage_2] iwi-aor--- 684.00m          /dev/sdf1(1)
  [foobar_1_rimage_3] iwi-aor--- 684.00m          /dev/sdh1(1)
  [foobar_1_rimage_4] iwi-aor--- 684.00m          /dev/sde1(1)
  [foobar_1_rimage_5] iwi-aor--- 684.00m          /dev/sdg1(173)
  [foobar_1_rmeta_0]  ewi-aor---   4.00m          /dev/sda1(0)
  [foobar_1_rmeta_1]  ewi-aor---   4.00m          /dev/sdb1(0)
  [foobar_1_rmeta_2]  ewi-aor---   4.00m          /dev/sdf1(0)
  [foobar_1_rmeta_3]  ewi-aor---   4.00m          /dev/sdh1(0)
  [foobar_1_rmeta_4]  ewi-aor---   4.00m          /dev/sde1(0)
  [foobar_1_rmeta_5]  ewi-aor---   4.00m          /dev/sdg1(172)
  foobar_2            rwi-aor---   2.00g 100.00   foobar_2_rimage_0(0),foobar_2_rimage_1(0),foobar_2_rimage_2(0),foobar_2_rimage_3(0),foobar_2_rimage_4(0),foobar_2_rimage_5(0)
  [foobar_2_rimage_0] iwi-aor--- 684.00m          /dev/sda1(173)
  [foobar_2_rimage_1] iwi-aor--- 684.00m          /dev/sdb1(173)
  [foobar_2_rimage_2] iwi-aor--- 684.00m          /dev/sdf1(173)
  [foobar_2_rimage_3] iwi-aor--- 684.00m          /dev/sdh1(173)
  [foobar_2_rimage_4] iwi-aor--- 684.00m          /dev/sde1(173)
  [foobar_2_rimage_5] iwi-aor--- 684.00m          /dev/sdg1(1)
  [foobar_2_rmeta_0]  ewi-aor---   4.00m          /dev/sda1(172)
  [foobar_2_rmeta_1]  ewi-aor---   4.00m          /dev/sdb1(172)
  [foobar_2_rmeta_2]  ewi-aor---   4.00m          /dev/sdf1(172)
  [foobar_2_rmeta_3]  ewi-aor---   4.00m          /dev/sdh1(172)
  [foobar_2_rmeta_4]  ewi-aor---   4.00m          /dev/sde1(172)
  [foobar_2_rmeta_5]  ewi-aor---   4.00m          /dev/sdg1(0)

PV=/dev/sdf1
        foobar_1_rimage_2: 1.0
        foobar_1_rmeta_2: 1.0
        foobar_2_rimage_2: 1.0
        foobar_2_rmeta_2: 1.0

Writing verification files (checkit) to mirror(s) on...
        ---- host-075 ----

<start name="host-075_foobar_1"  pid="13240" time="Fri May 27 09:58:11 2016" type="cmd" />
<start name="host-075_foobar_2"  pid="13241" time="Fri May 27 09:58:11 2016" type="cmd" />
Sleeping 15 seconds to get some outstanding I/O locks before the failure
Verifying files (checkit) on mirror(s) on...
        ---- host-075 ----


Disabling device sdf on host-075, rescan device (ALWAYS FOR NOW)...
  /dev/sdf1: read failed after 0 of 4096 at 26838958080: Input/output error
  /dev/sdf1: read failed after 0 of 4096 at 26839048192: Input/output error
  /dev/sdf1: read failed after 0 of 4096 at 0: Input/output error
  /dev/sdf1: read failed after 0 of 4096 at 4096: Input/output error


Getting recovery check start time from /var/log/messages: May 27 09:58
Attempting I/O to cause mirror down conversion(s) on host-075
dd if=/dev/zero of=/mnt/foobar_1/ddfile count=10 bs=4M
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.637608 s, 65.8 MB/s
dd if=/dev/zero of=/mnt/foobar_2/ddfile count=10 bs=4M
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.711129 s, 59.0 MB/s

Verifying current sanity of lvm after the failure

Current mirror/raid device structure(s):
  WARNING: Device for PV 2P26UM-HdiU-psQZ-YMOL-Icbf-I21Z-oKzZLJ not found or rejected by a filter.
  WARNING: Couldn't find all devices for LV revolution_1/foobar_1_rmeta_2 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV revolution_1/foobar_1_rimage_2 while checking used and assumed devices.
  LV                  Attr       LSize   Cpy%Sync Devices
  foobar_1            rwi-aor-p-   2.00g 100.00   foobar_1_rimage_0(0),foobar_1_rimage_1(0),foobar_1_rimage_2(0),foobar_1_rimage_3(0),foobar_1_rimage_4(0),foobar_1_rimage_5(0)
  [foobar_1_rimage_0] iwi-aor--- 684.00m          /dev/sda1(1)
  [foobar_1_rimage_1] iwi-aor--- 684.00m          /dev/sdb1(1)
  [foobar_1_rimage_2] iwi-aor-p- 684.00m          [unknown](1)
  [foobar_1_rimage_3] iwi-aor--- 684.00m          /dev/sdh1(1)
  [foobar_1_rimage_4] iwi-aor--- 684.00m          /dev/sde1(1)
  [foobar_1_rimage_5] iwi-aor--- 684.00m          /dev/sdg1(173)
  [foobar_1_rmeta_0]  ewi-aor---   4.00m          /dev/sda1(0)
  [foobar_1_rmeta_1]  ewi-aor---   4.00m          /dev/sdb1(0)
  [foobar_1_rmeta_2]  ewi-aor-p-   4.00m          [unknown](0)
  [foobar_1_rmeta_3]  ewi-aor---   4.00m          /dev/sdh1(0)
  [foobar_1_rmeta_4]  ewi-aor---   4.00m          /dev/sde1(0)
  [foobar_1_rmeta_5]  ewi-aor---   4.00m          /dev/sdg1(172)
  foobar_2            rwi-aor---   2.00g 100.00   foobar_2_rimage_0(0),foobar_2_rimage_1(0),foobar_2_rimage_2(0),foobar_2_rimage_3(0),foobar_2_rimage_4(0),foobar_2_rimage_5(0)
  [foobar_2_rimage_0] iwi-aor--- 684.00m          /dev/sda1(173)
  [foobar_2_rimage_1] iwi-aor--- 684.00m          /dev/sdb1(173)
  [foobar_2_rimage_2] iwi-aor--- 684.00m          /dev/sdd1(1)
  [foobar_2_rimage_3] iwi-aor--- 684.00m          /dev/sdh1(173)
  [foobar_2_rimage_4] iwi-aor--- 684.00m          /dev/sde1(173)
  [foobar_2_rimage_5] iwi-aor--- 684.00m          /dev/sdg1(1)
  [foobar_2_rmeta_0]  ewi-aor---   4.00m          /dev/sda1(172)
  [foobar_2_rmeta_1]  ewi-aor---   4.00m          /dev/sdb1(172)
  [foobar_2_rmeta_2]  ewi-aor---   4.00m          /dev/sdd1(0)
  [foobar_2_rmeta_3]  ewi-aor---   4.00m          /dev/sdh1(172)
  [foobar_2_rmeta_4]  ewi-aor---   4.00m          /dev/sde1(172)
  [foobar_2_rmeta_5]  ewi-aor---   4.00m          /dev/sdg1(0)

[...]
lvm[10047]: WARNING: Device for PV 2P26UM-HdiU-psQZ-YMOL-Icbf-I21Z-oKzZLJ not found or rejected by a filter.
lvm[10047]: WARNING: Couldn't find all devices for LV revolution_1/foobar_1_rmeta_2 while checking used and assumed devices.
lvm[10047]: WARNING: Couldn't find all devices for LV revolution_1/foobar_1_rimage_2 while checking used and assumed devices.
lvm[10047]: WARNING: Device for PV 2P26UM-HdiU-psQZ-YMOL-Icbf-I21Z-oKzZLJ already missing, skipping.
lvm[10047]: WARNING: Device for PV 2P26UM-HdiU-psQZ-YMOL-Icbf-I21Z-oKzZLJ not found or rejected by a filter.
lvm[10047]: WARNING: Couldn't find all devices for LV revolution_1/foobar_1_rmeta_2 while checking used and assumed devices.
lvm[10047]: WARNING: Couldn't find all devices for LV revolution_1/foobar_1_rimage_2 while checking used and assumed devices.
qarshd[14256]: Running cmdline: sync
lvm[10047]: device-mapper: create ioctl on revolution_1-foobar_1_rmeta_6LVM-sUChMeIh04NQpWrw2cnmuvhD5F1iXawDEulbQOQiSJ3pnhkX8tgyjpELWbXjwgTT failed: Device or resource busy
lvm[10047]: Failed to lock logical volume revolution_1/foobar_1.
multipathd: dm-32: remove map (uevent)
lvm[10047]: Failed to replace faulty devices in revolution_1/foobar_1.
lvm[10047]: Failed to process event for revolution_1-foobar_1.
lvm[10047]: Device #0 of raid10 array, revolution_1-foobar_1, has failed.
lvm[10047]: WARNING: Device for PV 2P26UM-HdiU-psQZ-YMOL-Icbf-I21Z-oKzZLJ not found or rejected by a filter.
lvm[10047]: WARNING: Couldn't find all devices for LV revolution_1/foobar_1_rmeta_2 while checking used and assumed devices.
lvm[10047]: WARNING: Couldn't find all devices for LV revolution_1/foobar_1_rimage_2 while checking used and assumed devices.
lvm[10047]: WARNING: Device for PV 2P26UM-HdiU-psQZ-YMOL-Icbf-I21Z-oKzZLJ already missing, skipping.
lvm[10047]: WARNING: Device for PV 2P26UM-HdiU-psQZ-YMOL-Icbf-I21Z-oKzZLJ not found or rejected by a filter.
lvm[10047]: WARNING: Couldn't find all devices for LV revolution_1/foobar_1_rmeta_2 while checking used and assumed devices.
lvm[10047]: WARNING: Couldn't find all devices for LV revolution_1/foobar_1_rimage_2 while checking used and assumed devices.
lvm[10047]: device-mapper: create ioctl on revolution_1-foobar_1_rmeta_6LVM-sUChMeIh04NQpWrw2cnmuvhD5F1iXawDyCs0DYriZmE8qJkKdIIY7vPOnlSfpLgi failed: Device or resource busy
lvm[10047]: Failed to lock logical volume revolution_1/foobar_1.
multipathd: dm-32: remove map (uevent)
lvm[10047]: Failed to replace faulty devices in revolution_1/foobar_1.
lvm[10047]: Failed to process event for revolution_1-foobar_1.
lvm[10047]: Device #2 of raid10 array, revolution_1-foobar_1, has failed.
[...]


[root@host-075 ~]# dmsetup ls | grep _extract
revolution_1-foobar_1_rimage_2__extracted-missing_0_0   (253:30)
revolution_1-foobar_1_rmeta_2__extracted-missing_0_0    (253:31)
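These stale `*__extracted-missing_*` maps have to be torn down by hand once the bug hits. A dry-run sketch of that cleanup is below: it only prints the `dmsetup remove` commands for review rather than executing them, and the input is the captured `dmsetup ls | grep _extract` output from above (on a live system you would pipe `dmsetup ls` in directly):

```shell
#!/bin/sh
# Dry-run cleanup sketch: turn leftover '__extracted' device-mapper maps
# into 'dmsetup remove' commands. Review the output before running any of it.
# Real input:  dmsetup ls | grep _extracted
dm_list='revolution_1-foobar_1_rimage_2__extracted-missing_0_0   (253:30)
revolution_1-foobar_1_rmeta_2__extracted-missing_0_0    (253:31)'
# Emit one removal command per map name (first field); does not execute them
printf '%s\n' "$dm_list" | awk '{ print "dmsetup remove " $1 }'
```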
Comment 4 Heinz Mauelshagen 2016-08-05 10:46:00 EDT
Corey, can't reproduce here with:
3.10.0-472.el7.x86_64
  LVM version:     2.02.161(2)-RHEL7 (2016-07-20)
  Library version: 1.02.131-RHEL7 (2016-07-20)
  Driver version:  4.34.0
Comment 5 Corey Marthaler 2016-08-11 12:49:58 EDT
Testing for this bug is blocked until there is a fix for bug 1361328.
Comment 6 Corey Marthaler 2016-08-31 18:34:59 EDT
Now that bug 1361328 has been fixed/verified, I can reproduce this issue on the latest rpms/kernel.

3.10.0-497.el7.x86_64
lvm2-2.02.164-4.el7    BUILT: Wed Aug 31 08:47:09 CDT 2016
lvm2-libs-2.02.164-4.el7    BUILT: Wed Aug 31 08:47:09 CDT 2016
lvm2-cluster-2.02.164-4.el7    BUILT: Wed Aug 31 08:47:09 CDT 2016
device-mapper-1.02.133-4.el7    BUILT: Wed Aug 31 08:47:09 CDT 2016
device-mapper-libs-1.02.133-4.el7    BUILT: Wed Aug 31 08:47:09 CDT 2016
device-mapper-event-1.02.133-4.el7    BUILT: Wed Aug 31 08:47:09 CDT 2016
device-mapper-event-libs-1.02.133-4.el7    BUILT: Wed Aug 31 08:47:09 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016




================================================================================
Iteration 2.3 started at Wed Aug 31 15:31:57 CDT 2016
================================================================================
ACTUAL LEG ORDER: /dev/sdd1 /dev/sdb1 /dev/sdg1 /dev/sdc1 /dev/sde1 /dev/sda1
Scenario kill_multiple_legs: Kill multiple raid images (rimage_0,rimage_2 and rmeta_0,rmeta_2)
********* Mirror info for this scenario *********
* raids             raid_1 raid_2
* leg devices:      /dev/sdd1 /dev/sdb1 /dev/sdg1 /dev/sdc1 /dev/sde1 /dev/sda1
* failpv(s):        /dev/sdd1 /dev/sdg1
* failnode(s):      host-119
* lvmetad:          1
* leg fault policy: allocate
*************************************************


Current mirror/raid device structure(s):
  LV                Attr       LSize   Cpy%Sync Devices
  raid_1            rwi-aor---   2.00g 100.00   raid_1_rimage_0(0),raid_1_rimage_1(0),raid_1_rimage_2(0),raid_1_rimage_3(0),raid_1_rimage_4(0),raid_1_rimage_5(0)
  [raid_1_rimage_0] iwi-aor--- 684.00m          /dev/sdd1(173)
  [raid_1_rimage_1] iwi-aor--- 684.00m          /dev/sdb1(1)
  [raid_1_rimage_2] iwi-aor--- 684.00m          /dev/sdg1(1)
  [raid_1_rimage_3] iwi-aor--- 684.00m          /dev/sdc1(173)
  [raid_1_rimage_4] iwi-aor--- 684.00m          /dev/sde1(1)
  [raid_1_rimage_5] iwi-aor--- 684.00m          /dev/sda1(1)
  [raid_1_rmeta_0]  ewi-aor---   4.00m          /dev/sdd1(172)
  [raid_1_rmeta_1]  ewi-aor---   4.00m          /dev/sdb1(0)
  [raid_1_rmeta_2]  ewi-aor---   4.00m          /dev/sdg1(0)
  [raid_1_rmeta_3]  ewi-aor---   4.00m          /dev/sdc1(172)
  [raid_1_rmeta_4]  ewi-aor---   4.00m          /dev/sde1(0)
  [raid_1_rmeta_5]  ewi-aor---   4.00m          /dev/sda1(0)
  raid_2            rwi-aor---   2.00g 100.00   raid_2_rimage_0(0),raid_2_rimage_1(0),raid_2_rimage_2(0),raid_2_rimage_3(0),raid_2_rimage_4(0),raid_2_rimage_5(0)
  [raid_2_rimage_0] iwi-aor--- 684.00m          /dev/sdd1(1)
  [raid_2_rimage_1] iwi-aor--- 684.00m          /dev/sdb1(173)
  [raid_2_rimage_2] iwi-aor--- 684.00m          /dev/sdg1(173)
  [raid_2_rimage_3] iwi-aor--- 684.00m          /dev/sdc1(1)
  [raid_2_rimage_4] iwi-aor--- 684.00m          /dev/sde1(173)
  [raid_2_rimage_5] iwi-aor--- 684.00m          /dev/sda1(173)
  [raid_2_rmeta_0]  ewi-aor---   4.00m          /dev/sdd1(0)
  [raid_2_rmeta_1]  ewi-aor---   4.00m          /dev/sdb1(172)
  [raid_2_rmeta_2]  ewi-aor---   4.00m          /dev/sdg1(172)
  [raid_2_rmeta_3]  ewi-aor---   4.00m          /dev/sdc1(0)
  [raid_2_rmeta_4]  ewi-aor---   4.00m          /dev/sde1(172)
  [raid_2_rmeta_5]  ewi-aor---   4.00m          /dev/sda1(172)

PV=/dev/sdg1
        raid_1_rimage_2: 1.0
        raid_1_rmeta_2: 1.0
        raid_2_rimage_2: 1.0
        raid_2_rmeta_2: 1.0
PV=/dev/sdd1
        raid_1_rimage_0: 1.0
        raid_1_rmeta_0: 1.0
        raid_2_rimage_0: 1.0
        raid_2_rmeta_0: 1.0

Writing verification files (checkit) to mirror(s) on...
        ---- host-119 ----

Sleeping 15 seconds to get some outstanding I/O locks before the failure 
Verifying files (checkit) on mirror(s) on...
        ---- host-119 ----


Disabling device sdd on host-119, rescan device...
Disabling device sdg on host-119, rescan device...

Getting recovery check start time from /var/log/messages: Aug 31 15:32
Attempting I/O to cause mirror down conversion(s) on host-119
dd if=/dev/zero of=/mnt/raid_1/ddfile count=10 bs=4M
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.523368 s, 80.1 MB/s
dd if=/dev/zero of=/mnt/raid_2/ddfile count=10 bs=4M
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.605639 s, 69.3 MB/s

Verifying current sanity of lvm after the failure

Current mirror/raid device structure(s):
  WARNING: Device for PV 6QUyK5-5PsN-vJXd-VGt6-RPXo-NO6e-mSLVW2 not found or rejected by a filter.
  WARNING: Device for PV HZQCP4-xhEz-VOoZ-hbRH-V01S-ei3K-00Jsqx not found or rejected by a filter.
  WARNING: Reading VG revolution_1 from disk because lvmetad metadata is invalid.
  WARNING: Couldn't find all devices for LV revolution_1/raid_2_rmeta_2 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV revolution_1/raid_2_rimage_2 while checking used and assumed devices.
  LV                Attr       LSize   Cpy%Sync Devices
   raid_1            rwi-aor---   2.00g 27.10    raid_1_rimage_0(0),raid_1_rimage_1(0),raid_1_rimage_2(0),raid_1_rimage_3(0),raid_1_rimage_4(0),raid_1_rimage_5(0)
   [raid_1_rimage_0] Iwi-aor--- 684.00m          /dev/sdf1(173)
   [raid_1_rimage_1] iwi-aor--- 684.00m          /dev/sdb1(1)
   [raid_1_rimage_2] Iwi-aor--- 684.00m          /dev/sdh1(1)
   [raid_1_rimage_3] iwi-aor--- 684.00m          /dev/sdc1(173)
   [raid_1_rimage_4] iwi-aor--- 684.00m          /dev/sde1(1)
   [raid_1_rimage_5] iwi-aor--- 684.00m          /dev/sda1(1)
   [raid_1_rmeta_0]  ewi-aor---   4.00m          /dev/sdf1(172)
   [raid_1_rmeta_1]  ewi-aor---   4.00m          /dev/sdb1(0)
   [raid_1_rmeta_2]  ewi-aor---   4.00m          /dev/sdh1(0)
   [raid_1_rmeta_3]  ewi-aor---   4.00m          /dev/sdc1(172)
   [raid_1_rmeta_4]  ewi-aor---   4.00m          /dev/sde1(0)
   [raid_1_rmeta_5]  ewi-aor---   4.00m          /dev/sda1(0)
   raid_2            rwi-aor-p-   2.00g 100.00   raid_2_rimage_0(0),raid_2_rimage_1(0),raid_2_rimage_2(0),raid_2_rimage_3(0),raid_2_rimage_4(0),raid_2_rimage_5(0)
   [raid_2_rimage_0] iwi-aor--- 684.00m          /dev/sdf1(1)
   [raid_2_rimage_1] iwi-aor--- 684.00m          /dev/sdb1(173)
   [raid_2_rimage_2] iwi-a-r-p- 684.00m          [unknown](173)
   [raid_2_rimage_3] iwi-aor--- 684.00m          /dev/sdc1(1)
   [raid_2_rimage_4] iwi-aor--- 684.00m          /dev/sde1(173)
   [raid_2_rimage_5] iwi-aor--- 684.00m          /dev/sda1(173)
   [raid_2_rmeta_0]  ewi-aor---   4.00m          /dev/sdf1(0)
   [raid_2_rmeta_1]  ewi-aor---   4.00m          /dev/sdb1(172)
   [raid_2_rmeta_2]  ewi-a-r-p-   4.00m          [unknown](172)
   [raid_2_rmeta_3]  ewi-aor---   4.00m          /dev/sdc1(0)
   [raid_2_rmeta_4]  ewi-aor---   4.00m          /dev/sde1(172)
   [raid_2_rmeta_5]  ewi-aor---   4.00m          /dev/sda1(172)

Verify that each of the raid repairs finished successfully

Verifying FAILED device /dev/sdd1 is *NOT* in the volume(s)
Verifying FAILED device /dev/sdg1 is *NOT* in the volume(s)
Verifying LEG device /dev/sdb1 *IS* in the volume(s)
Verifying LEG device /dev/sdc1 *IS* in the volume(s)
Verifying LEG device /dev/sde1 *IS* in the volume(s)
Verifying LEG device /dev/sda1 *IS* in the volume(s)
verify the newly allocated dm devices were added as a result of the failures
Checking EXISTENCE and STATE of raid_1_rimage_2 on:  host-119  WARNING: Device for PV 6QUyK5-5PsN-vJXd-VGt6-RPXo-NO6e-mSLVW2 not found or rejected by a filter.
Checking EXISTENCE and STATE of raid_1_rmeta_2 on:  host-119  WARNING: Device for PV 6QUyK5-5PsN-vJXd-VGt6-RPXo-NO6e-mSLVW2 not found or rejected by a filter.
Checking EXISTENCE and STATE of raid_2_rimage_2 on:  host-119  WARNING: Device for PV 6QUyK5-5PsN-vJXd-VGt6-RPXo-NO6e-mSLVW2 not found or rejected by a filter.

There is no pv associated w/ raid_2_rimage_2

[root@host-119 ~]# lvs -a -o +devices
  WARNING: Device for PV 6QUyK5-5PsN-vJXd-VGt6-RPXo-NO6e-mSLVW2 not found or rejected by a filter.
  WARNING: Device for PV HZQCP4-xhEz-VOoZ-hbRH-V01S-ei3K-00Jsqx not found or rejected by a filter.
  WARNING: Reading VG revolution_1 from disk because lvmetad metadata is invalid.
  WARNING: Couldn't find all devices for LV revolution_1/raid_2_rmeta_2 while checking used and assumed devices.
  WARNING: Couldn't find all devices for LV revolution_1/raid_2_rimage_2 while checking used and assumed devices.
  LV                VG            Attr       LSize   Cpy%Sync Devices
  raid_1            revolution_1  rwi-aor---   2.00g 100.00   raid_1_rimage_0(0),raid_1_rimage_1(0),raid_1_rimage_2(0),raid_1_rimage_3(0),raid_1_rimage_4(0),raid_1_rimage_5(0)
  [raid_1_rimage_0] revolution_1  iwi-aor--- 684.00m          /dev/sdf1(173)
  [raid_1_rimage_1] revolution_1  iwi-aor--- 684.00m          /dev/sdb1(1)
  [raid_1_rimage_2] revolution_1  iwi-aor--- 684.00m          /dev/sdh1(1)
  [raid_1_rimage_3] revolution_1  iwi-aor--- 684.00m          /dev/sdc1(173)
  [raid_1_rimage_4] revolution_1  iwi-aor--- 684.00m          /dev/sde1(1)
  [raid_1_rimage_5] revolution_1  iwi-aor--- 684.00m          /dev/sda1(1)
  [raid_1_rmeta_0]  revolution_1  ewi-aor---   4.00m          /dev/sdf1(172)
  [raid_1_rmeta_1]  revolution_1  ewi-aor---   4.00m          /dev/sdb1(0)
  [raid_1_rmeta_2]  revolution_1  ewi-aor---   4.00m          /dev/sdh1(0)
  [raid_1_rmeta_3]  revolution_1  ewi-aor---   4.00m          /dev/sdc1(172)
  [raid_1_rmeta_4]  revolution_1  ewi-aor---   4.00m          /dev/sde1(0)
  [raid_1_rmeta_5]  revolution_1  ewi-aor---   4.00m          /dev/sda1(0)
  raid_2            revolution_1  rwi-aor-p-   2.00g 100.00   raid_2_rimage_0(0),raid_2_rimage_1(0),raid_2_rimage_2(0),raid_2_rimage_3(0),raid_2_rimage_4(0),raid_2_rimage_5(0)
  [raid_2_rimage_0] revolution_1  iwi-aor--- 684.00m          /dev/sdf1(1)
  [raid_2_rimage_1] revolution_1  iwi-aor--- 684.00m          /dev/sdb1(173)
  [raid_2_rimage_2] revolution_1  iwi-a-r-p- 684.00m          [unknown](173)
  [raid_2_rimage_3] revolution_1  iwi-aor--- 684.00m          /dev/sdc1(1)
  [raid_2_rimage_4] revolution_1  iwi-aor--- 684.00m          /dev/sde1(173)
  [raid_2_rimage_5] revolution_1  iwi-aor--- 684.00m          /dev/sda1(173)
  [raid_2_rmeta_0]  revolution_1  ewi-aor---   4.00m          /dev/sdf1(0)
  [raid_2_rmeta_1]  revolution_1  ewi-aor---   4.00m          /dev/sdb1(172)
  [raid_2_rmeta_2]  revolution_1  ewi-a-r-p-   4.00m          [unknown](172)
  [raid_2_rmeta_3]  revolution_1  ewi-aor---   4.00m          /dev/sdc1(0)
  [raid_2_rmeta_4]  revolution_1  ewi-aor---   4.00m          /dev/sde1(172)
  [raid_2_rmeta_5]  revolution_1  ewi-aor---   4.00m          /dev/sda1(172)


[root@host-119 ~]# dmsetup ls
revolution_1-raid_1_rimage_0    (253:9)
revolution_1-raid_2_rmeta_3     (253:19)
revolution_1-raid_2_rmeta_2     (253:15)
revolution_1-raid_2_rmeta_1     (253:17)
revolution_1-raid_2_rmeta_0     (253:6)
revolution_1-raid_2_rmeta_2__extracted-missing_0_0      (253:29)
revolution_1-raid_2_rimage_6    (253:3)
revolution_1-raid_1_rmeta_5     (253:25)
revolution_1-raid_2_rimage_5    (253:13)
revolution_1-raid_2     (253:27)
revolution_1-raid_1_rmeta_4     (253:10)
revolution_1-raid_1_rmeta_3     (253:21)
revolution_1-raid_2_rimage_4    (253:24)
revolution_1-raid_1     (253:14)
revolution_1-raid_1_rimage_5    (253:26)
revolution_1-raid_1_rmeta_2     (253:32)
revolution_1-raid_2_rimage_3    (253:20)
revolution_1-raid_1_rimage_4    (253:11)
revolution_1-raid_2_rimage_2    (253:16)
revolution_1-raid_1_rmeta_1     (253:4)
revolution_1-raid_2_rmeta_6     (253:2)
revolution_1-raid_1_rimage_3    (253:22)
revolution_1-raid_1_rmeta_0     (253:8)
revolution_1-raid_2_rimage_1    (253:18)
revolution_1-raid_1_rimage_2    (253:33)
revolution_1-raid_2_rmeta_5     (253:12)
revolution_1-raid_2_rimage_2__extracted-missing_0_0     (253:28)
revolution_1-raid_2_rimage_0    (253:7)
revolution_1-raid_2_rmeta_4     (253:23)
revolution_1-raid_1_rimage_1    (253:5)



Aug 31 15:32:47 host-119 lvm[1754]: Faulty devices in revolution_1/raid_1 successfully replaced.
Aug 31 15:32:48 host-119 lvm[1754]: WARNING: Device for PV 6QUyK5-5PsN-vJXd-VGt6-RPXo-NO6e-mSLVW2 not found or rejected by a filter.
Aug 31 15:32:48 host-119 lvm[1754]: WARNING: Device for PV HZQCP4-xhEz-VOoZ-hbRH-V01S-ei3K-00Jsqx not found or rejected by a filter.
Aug 31 15:32:48 host-119 lvm[1754]: WARNING: Couldn't find all devices for LV revolution_1/raid_2_rmeta_2 while checking used and assumed devices.
Aug 31 15:32:48 host-119 lvm[1754]: WARNING: Couldn't find all devices for LV revolution_1/raid_2_rimage_2 while checking used and assumed devices.
Aug 31 15:32:48 host-119 lvm[1754]: WARNING: Device for PV HZQCP4-xhEz-VOoZ-hbRH-V01S-ei3K-00Jsqx already missing, skipping.
Aug 31 15:32:48 host-119 lvm[1754]: WARNING: Device for PV 6QUyK5-5PsN-vJXd-VGt6-RPXo-NO6e-mSLVW2 not found or rejected by a filter.
Aug 31 15:32:48 host-119 lvm[1754]: WARNING: Device for PV HZQCP4-xhEz-VOoZ-hbRH-V01S-ei3K-00Jsqx not found or rejected by a filter.
Aug 31 15:32:48 host-119 lvm[1754]: WARNING: Couldn't find all devices for LV revolution_1/raid_2_rmeta_2 while checking used and assumed devices.
Aug 31 15:32:48 host-119 lvm[1754]: WARNING: Couldn't find all devices for LV revolution_1/raid_2_rimage_2 while checking used and assumed devices.
Aug 31 15:32:48 host-119 lvm[1754]: Using default stripesize 64.00 KiB.
Aug 31 15:32:50 host-119 kernel: device-mapper: raid: Device 2 specified for rebuild; clearing superblock
Aug 31 15:32:50 host-119 kernel: md/raid10:mdX: active with 5 out of 6 devices
Aug 31 15:32:50 host-119 kernel: created bitmap (3 pages) for device mdX
Aug 31 15:32:50 host-119 lvm[1754]: device-mapper: create ioctl on revolution_1-raid_2_rimage_2__extracted-missing_0_0LVM-EXXdF1UO3WnYhy2tk4wsJFRqEfBKVV0tNDexCrNSjmQIr2d0EeOvMMYTfwXd9sZt-missing_0_0 failed: Device or resource busy
Aug 31 15:32:50 host-119 lvm[1754]: Failed to lock logical volume revolution_1/raid_2.
Aug 31 15:32:50 host-119 multipathd: dm-30: remove map (uevent)
Aug 31 15:32:50 host-119 lvm[1754]: Failed to replace faulty devices in revolution_1/raid_2.
Aug 31 15:32:50 host-119 lvm[1754]: Failed to process event for revolution_1-raid_2.
Aug 31 15:32:57 host-119 kernel: md: mdX: recovery done.
Aug 31 15:32:58 host-119 lvm[1754]: Device #0 of raid10 array, revolution_1-raid_1, has failed.
Aug 31 15:32:58 host-119 lvm[1754]: WARNING: Device for PV 6QUyK5-5PsN-vJXd-VGt6-RPXo-NO6e-mSLVW2 not found or rejected by a filter.
Aug 31 15:32:58 host-119 lvm[1754]: WARNING: Device for PV HZQCP4-xhEz-VOoZ-hbRH-V01S-ei3K-00Jsqx not found or rejected by a filter.
Aug 31 15:32:58 host-119 lvm[1754]: WARNING: Reading VG revolution_1 from disk because lvmetad metadata is invalid.
Aug 31 15:32:58 host-119 lvm[1754]: WARNING: Couldn't find all devices for LV revolution_1/raid_2_rmeta_2 while checking used and assumed devices.
Aug 31 15:32:58 host-119 lvm[1754]: WARNING: Couldn't find all devices for LV revolution_1/raid_2_rimage_2 while checking used and assumed devices.
Aug 31 15:32:58 host-119 lvm[1754]: WARNING: Device for PV 6QUyK5-5PsN-vJXd-VGt6-RPXo-NO6e-mSLVW2 not found or rejected by a filter.
Aug 31 15:32:58 host-119 lvm[1754]: WARNING: Device for PV HZQCP4-xhEz-VOoZ-hbRH-V01S-ei3K-00Jsqx not found or rejected by a filter.
Aug 31 15:32:58 host-119 lvm[1754]: WARNING: Reading VG revolution_1 from disk because lvmetad metadata is invalid.
Aug 31 15:32:58 host-119 lvm[1754]: WARNING: Couldn't find all devices for LV revolution_1/raid_2_rmeta_2 while checking used and assumed devices.
Aug 31 15:32:58 host-119 lvm[1754]: WARNING: Couldn't find all devices for LV revolution_1/raid_2_rimage_2 while checking used and assumed devices.
Aug 31 15:32:58 host-119 lvm[1754]: Using default stripesize 64.00 KiB.
Aug 31 15:32:58 host-119 lvm[1754]: revolution_1/raid_1 does not contain devices specified to replace
Aug 31 15:32:58 host-119 lvm[1754]: Faulty devices in revolution_1/raid_1 successfully replaced.
Comment 8 Roman Bednář 2017-01-02 09:57:18 EST
I was able to reproduce this with the latest 7.3.z rpms using a similar scenario.

3.10.0-514.el7.x86_64

lvm2-2.02.166-1.el7    BUILT: Wed Sep 28 09:26:52 CEST 2016
lvm2-libs-2.02.166-1.el7    BUILT: Wed Sep 28 09:26:52 CEST 2016
lvm2-cluster-2.02.166-1.el7    BUILT: Wed Sep 28 09:26:52 CEST 2016
device-mapper-1.02.135-1.el7    BUILT: Wed Sep 28 09:26:52 CEST 2016
device-mapper-libs-1.02.135-1.el7    BUILT: Wed Sep 28 09:26:52 CEST 2016
device-mapper-event-1.02.135-1.el7    BUILT: Wed Sep 28 09:26:52 CEST 2016
device-mapper-event-libs-1.02.135-1.el7    BUILT: Wed Sep 28 09:26:52 CEST 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 12:29:13 CEST 2016


The test name is provided in the QA Whiteboard field.

The device is disabled by:
'echo offline > /sys/block/dev/sdd1/device/state'

===========================================================================

[2017-01-02 10:41:31.610119]  Current mirror/raid device structure(s):
[2017-01-02 10:41:31.771201]    LV                                             Attr       LSize   Cpy%Sync Devices                                                                                                                                                                                        
[2017-01-02 10:41:31.771859]     synced_primary_raid10_2legs_1                  twi-aotz-- 504.00m          synced_primary_raid10_2legs_1_tdata(0)                                                                                                                                                         
[2017-01-02 10:41:31.772310]     [synced_primary_raid10_2legs_1_tdata]          rwi-aor--- 504.00m 100.00   synced_primary_raid10_2legs_1_tdata_rimage_0(0),synced_primary_raid10_2legs_1_tdata_rimage_1(0),synced_primary_raid10_2legs_1_tdata_rimage_2(0),synced_primary_raid10_2legs_1_tdata_rimage_3(0)
[2017-01-02 10:41:31.772875]     [synced_primary_raid10_2legs_1_tdata_rimage_0] iwi-aor--- 252.00m          /dev/sdd1(1)                                                                                                                                                                                   
[2017-01-02 10:41:31.773368]     [synced_primary_raid10_2legs_1_tdata_rimage_1] iwi-aor--- 252.00m          /dev/sdh1(1)                                                                                                                                                                                   
[2017-01-02 10:41:31.773825]     [synced_primary_raid10_2legs_1_tdata_rimage_2] iwi-aor--- 252.00m          /dev/sdc1(1)                                                                                                                                                                                   
[2017-01-02 10:41:31.774262]     [synced_primary_raid10_2legs_1_tdata_rimage_3] iwi-aor--- 252.00m          /dev/sda1(1)                                                                                                                                                                                   
[2017-01-02 10:41:31.774773]     [synced_primary_raid10_2legs_1_tdata_rmeta_0]  ewi-aor---   4.00m          /dev/sdd1(0)                                                                                                                                                                                   
[2017-01-02 10:41:31.775218]     [synced_primary_raid10_2legs_1_tdata_rmeta_1]  ewi-aor---   4.00m          /dev/sdh1(0)                                                                                                                                                                                   
[2017-01-02 10:41:31.775705]     [synced_primary_raid10_2legs_1_tdata_rmeta_2]  ewi-aor---   4.00m          /dev/sdc1(0)                                                                                                                                                                                   
[2017-01-02 10:41:31.776142]     [synced_primary_raid10_2legs_1_tdata_rmeta_3]  ewi-aor---   4.00m          /dev/sda1(0)                                                                                                                                                                                   
[2017-01-02 10:41:31.776711]     [synced_primary_raid10_2legs_1_tmeta]          ewi-ao---- 500.00m          /dev/sda1(64)                                                                                                                                                                                  
[2017-01-02 10:41:31.777388]     virt_synced_primary_raid10_2legs_1             Vwi-aotz-- 500.00m                                                                                                                                                                                                         
[2017-01-02 10:41:31.777886]     root                                           -wi-ao----   6.38g          /dev/vda2(210)                                                                                                                                                                                 
[2017-01-02 10:41:31.778387]     swap                                           -wi-ao---- 840.00m          /dev/vda2(0)                                                                                                                                                                                   
[2017-01-02 10:41:31.778506]  
[2017-01-02 10:41:31.778621]  
[2017-01-02 10:41:32.437956]  PV=/dev/sdd1
[2017-01-02 10:41:32.438491]  	synced_primary_raid10_2legs_1_tdata_rimage_0: 1.0
[2017-01-02 10:41:32.438750]  	synced_primary_raid10_2legs_1_tdata_rmeta_0: 1.0
[2017-01-02 10:41:32.438933]  
[2017-01-02 10:41:32.439187]  Writing verification files (checkit) to mirror(s) on...
[2017-01-02 10:41:32.439468]  	---- virt-249 ----
[2017-01-02 10:41:33.529880]  
[2017-01-02 10:41:33.536823]  <start name="virt-249_synced_primary_raid10_2legs_1"  pid="14674" time="Mon Jan  2 10:41:33 2017" type="cmd" />
[2017-01-02 10:41:35.538579]  Sleeping 15 seconds to get some outstanding I/O locks before the failure 
[2017-01-02 10:41:50.542750]  
[2017-01-02 10:41:50.543075]  lvcreate -k n -s /dev/black_bird/virt_synced_primary_raid10_2legs_1 -n snap1_synced_primary_raid10_2legs_1
[2017-01-02 10:41:50.756399]    WARNING: Sum of all thin volume sizes (1000.00 MiB) exceeds the size of thin pool black_bird/synced_primary_raid10_2legs_1 (504.00 MiB)!
[2017-01-02 10:41:51.545143]  lvcreate -k n -s /dev/black_bird/virt_synced_primary_raid10_2legs_1 -n snap2_synced_primary_raid10_2legs_1
[2017-01-02 10:41:51.752541]    WARNING: Sum of all thin volume sizes (1.46 GiB) exceeds the size of thin pool black_bird/synced_primary_raid10_2legs_1 (504.00 MiB)!
[2017-01-02 10:41:52.315314]  lvcreate -k n -s /dev/black_bird/virt_synced_primary_raid10_2legs_1 -n snap3_synced_primary_raid10_2legs_1
[2017-01-02 10:41:52.522674]    WARNING: Sum of all thin volume sizes (1.95 GiB) exceeds the size of thin pool black_bird/synced_primary_raid10_2legs_1 (504.00 MiB)!
[2017-01-02 10:41:53.202640]  
[2017-01-02 10:41:54.204000]  Verifying files (checkit) on mirror(s) on...
[2017-01-02 10:41:54.204238]  	---- virt-249 ----
[2017-01-02 10:41:57.428730]  
[2017-01-02 10:41:57.428938]  
[2017-01-02 10:41:57.429056]  
[2017-01-02 10:41:57.549595]  Disabling device sdd on virt-249, rescan device...
[2017-01-02 10:41:57.751017]    /dev/sdd1: read failed after 0 of 1024 at 42944036864: Input/output error
[2017-01-02 10:41:57.751382]    /dev/sdd1: read failed after 0 of 1024 at 42944143360: Input/output error
[2017-01-02 10:41:57.751597]    /dev/sdd1: read failed after 0 of 1024 at 0: Input/output error
[2017-01-02 10:41:57.751796]    /dev/sdd1: read failed after 0 of 1024 at 4096: Input/output error
[2017-01-02 10:41:57.751988]    /dev/sdd1: read failed after 0 of 2048 at 0: Input/output error
[2017-01-02 10:41:57.752102]  
[2017-01-02 10:41:57.752214]  
[2017-01-02 10:42:03.141658]  Getting recovery check start time from /var/log/messages: Jan  2 10:42
[2017-01-02 10:42:03.141955]  Attempting I/O to cause mirror down conversion(s) on virt-249
[2017-01-02 10:42:03.142166]  dd if=/dev/zero of=/mnt/synced_primary_raid10_2legs_1/ddfile count=10 bs=4M
[2017-01-02 10:42:03.431511]  10+0 records in
[2017-01-02 10:42:03.431738]  10+0 records out
[2017-01-02 10:42:03.431932]  41943040 bytes (42 MB) copied, 0.141419 s, 297 MB/s
[2017-01-02 10:42:03.920678]  
[2017-01-02 10:42:09.921838]  Verifying current sanity of lvm after the failure
[2017-01-02 10:42:10.139466]  
[2017-01-02 10:42:10.139732]  Current mirror/raid device structure(s):
[2017-01-02 10:42:10.313878]    WARNING: Device for PV CgqqOh-mpSN-hwQw-TzNH-p6mj-SReH-7CSF5W not found or rejected by a filter.
[2017-01-02 10:42:10.316820]    LV                                                      Attr       LSize   Cpy%Sync Devices                                                                                                                                                                                        
[2017-01-02 10:42:10.317395]     snap1_synced_primary_raid10_2legs_1                     Vwi-a-tz-- 500.00m                                                                                                                                                                                                         
[2017-01-02 10:42:10.317878]     snap2_synced_primary_raid10_2legs_1                     Vwi-a-tz-- 500.00m                                                                                                                                                                                                         
[2017-01-02 10:42:10.318396]     snap3_synced_primary_raid10_2legs_1                     Vwi-a-tz-- 500.00m                                                                                                                                                                                                         
[2017-01-02 10:42:10.318878]     synced_primary_raid10_2legs_1                           twi-aotz-- 504.00m          synced_primary_raid10_2legs_1_tdata(0)                                                                                                                                                         
[2017-01-02 10:42:10.319394]     [synced_primary_raid10_2legs_1_tdata]                   rwi-aor-r- 504.00m 100.00   synced_primary_raid10_2legs_1_tdata_rimage_4(0),synced_primary_raid10_2legs_1_tdata_rimage_1(0),synced_primary_raid10_2legs_1_tdata_rimage_2(0),synced_primary_raid10_2legs_1_tdata_rimage_3(0)
[2017-01-02 10:42:10.319913]     synced_primary_raid10_2legs_1_tdata_rimage_0__extracted -wi-ao--p- 252.00m          [unknown](1)                                                                                                                                                                                   
[2017-01-02 10:42:10.320535]     [synced_primary_raid10_2legs_1_tdata_rimage_1]          iwi-aor--- 252.00m          /dev/sdh1(1)                                                                                                                                                                                   
[2017-01-02 10:42:10.321026]     [synced_primary_raid10_2legs_1_tdata_rimage_2]          iwi-aor--- 252.00m          /dev/sdc1(1)                                                                                                                                                                                   
[2017-01-02 10:42:10.321563]     [synced_primary_raid10_2legs_1_tdata_rimage_3]          iwi-aor--- 252.00m          /dev/sda1(1)                                                                                                                                                                                   
[2017-01-02 10:42:10.322071]     [synced_primary_raid10_2legs_1_tdata_rimage_4]          Iwi---r--- 252.00m          /dev/sdf1(1)                                                                                                                                                                                   
[2017-01-02 10:42:10.322612]     synced_primary_raid10_2legs_1_tdata_rmeta_0__extracted  -wi-ao--p-   4.00m          [unknown](0)                                                                                                                                                                                   
[2017-01-02 10:42:10.323082]     [synced_primary_raid10_2legs_1_tdata_rmeta_1]           ewi-aor---   4.00m          /dev/sdh1(0)                                                                                                                                                                                   
[2017-01-02 10:42:10.323615]     [synced_primary_raid10_2legs_1_tdata_rmeta_2]           ewi-aor---   4.00m          /dev/sdc1(0)                                                                                                                                                                                   
[2017-01-02 10:42:10.324083]     [synced_primary_raid10_2legs_1_tdata_rmeta_3]           ewi-aor---   4.00m          /dev/sda1(0)                                                                                                                                                                                   
[2017-01-02 10:42:10.324609]     [synced_primary_raid10_2legs_1_tdata_rmeta_4]           ewi---r---   4.00m          /dev/sdf1(0)                                                                                                                                                                                   
[2017-01-02 10:42:10.325090]     [synced_primary_raid10_2legs_1_tmeta]                   ewi-ao---- 500.00m          /dev/sda1(64)                                                                                                                                                                                  
[2017-01-02 10:42:10.325614]     virt_synced_primary_raid10_2legs_1                      Vwi-aotz-- 500.00m                                                                                                                                                                                                         
[2017-01-02 10:42:10.326156]     root                                                    -wi-ao----   6.38g          /dev/vda2(210)                                                                                                                                                                                 
[2017-01-02 10:42:10.326732]     swap                                                    -wi-ao---- 840.00m          /dev/vda2(0)                                                                                                                                                                                    
[2017-01-02 10:42:10.327792]  
[2017-01-02 10:42:10.328159]  Verifying FAILED device /dev/sdd1 is *NOT* in the volume(s)
[2017-01-02 10:42:10.496268]    WARNING: Device for PV CgqqOh-mpSN-hwQw-TzNH-p6mj-SReH-7CSF5W not found or rejected by a filter.
[2017-01-02 10:42:10.498245]  Verifying IMAGE device /dev/sdh1 *IS* in the volume(s)
[2017-01-02 10:42:10.687200]    WARNING: Device for PV CgqqOh-mpSN-hwQw-TzNH-p6mj-SReH-7CSF5W not found or rejected by a filter.
[2017-01-02 10:42:10.689506]  Verifying IMAGE device /dev/sdc1 *IS* in the volume(s)
[2017-01-02 10:42:10.868284]    WARNING: Device for PV CgqqOh-mpSN-hwQw-TzNH-p6mj-SReH-7CSF5W not found or rejected by a filter.
[2017-01-02 10:42:10.871817]  Verifying IMAGE device /dev/sda1 *IS* in the volume(s)
[2017-01-02 10:42:11.043151]    WARNING: Device for PV CgqqOh-mpSN-hwQw-TzNH-p6mj-SReH-7CSF5W not found or rejected by a filter.
[2017-01-02 10:45:11.045369]  Verify the rimage/rmeta dm devices remain after the failures
[2017-01-02 10:45:11.422674]  Checking EXISTENCE and STATE of synced_primary_raid10_2legs_1_tdata_rimage_0 on: virt-249 
[2017-01-02 10:45:11.422898]  
[2017-01-02 10:45:11.423189]  (ALLOCATE POLICY) there should not be an 'unknown' device associated with synced_primary_raid10_2legs_1_tdata_rimage_0 on virt-249
[2017-01-02 10:45:11.589522]  
[2017-01-02 10:45:11.590056]    synced_primary_raid10_2legs_1_tdata_rimage_0__extracted          [unknown](1)                                                                                                                                                                                   
[2017-01-02 10:45:11.590177]  
[2017-01-02 10:45:11.590387]  Attempt to trigger automatic repair again...
[2017-01-02 10:45:11.590577]  Attempting I/O to cause mirror down conversion(s) on virt-249
[2017-01-02 10:45:11.590802]  dd if=/dev/zero of=/mnt/synced_primary_raid10_2legs_1/ddfile count=10 bs=4M
[2017-01-02 10:45:11.934389]  10+0 records in
[2017-01-02 10:45:11.934631]  10+0 records out
[2017-01-02 10:45:11.934830]  41943040 bytes (42 MB) copied, 0.197525 s, 212 MB/s
[2017-01-02 10:45:12.235885]  
[2017-01-02 10:46:42.503674]  
[2017-01-02 10:46:42.504009]  	[unknown] device(s) still exist in raid that should have been repaired by now
[2017-01-02 10:46:42.504176]  FI_engine: recover() method failed
Comment 9 Jonathan Earl Brassow 2017-05-10 11:29:00 EDT
Not enough time to fix this issue in 7.4.
Low priority - the workaround is to simply run 'dmsetup remove...' on the leftover '__extracted' devices.
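For reference, a minimal sketch of that workaround, assuming the leftover entries match the '__extracted' naming seen in the 'dmsetup ls' output above (e.g. revolution_1-raid_2_rimage_2__extracted-missing_0_0). The name pattern and loop are illustrative only; verify each candidate with 'dmsetup info <name>' before removing it on a real system:

```shell
# Find leftover '__extracted' device-mapper entries and remove each one.
# Pattern is based on the names shown by 'dmsetup ls' in this bug; check
# each name with 'dmsetup info' first, since 'dmsetup remove' is destructive.
dmsetup ls | awk '/__extracted/ {print $1}' | while read -r dev; do
    dmsetup remove "$dev"
done
```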
