Bug 1073666 - failing multiple raid volumes can lead to repair failures and _extracted images
Summary: failing multiple raid volumes can lead to repair failures and _extracted images
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Jonathan Earl Brassow
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Duplicates: 1067229
Depends On: 1030130
Blocks: 1205796
 
Reported: 2014-03-06 22:29 UTC by Corey Marthaler
Modified: 2023-03-08 07:26 UTC
CC List: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1030130
Environment:
Last Closed: 2015-07-06 20:48:08 UTC
Target Upstream Version:
Embargoed:



Comment 1 Corey Marthaler 2014-03-06 22:36:06 UTC
================================================================================
Iteration 1.4 started at Thu Mar  6 15:55:48 CST 2014
================================================================================
Scenario kill_multiple_synced_raid6_3legs: Kill multiple legs of synced 3 leg raid6 volume(s)

********* RAID hash info for this scenario *********
* names:              synced_multiple_raid6_3legs_1 synced_multiple_raid6_3legs_2
* sync:               1
* type:               raid6
* -m |-i value:       3
* leg devices:        /dev/sdg1 /dev/sdh1 /dev/sda1 /dev/sde1 /dev/sdb1
* failpv(s):          /dev/sdg1 /dev/sdh1
* failnode(s):        host-050.virt.lab.msp.redhat.com
* lvmetad:             1
* raid fault policy:   allocate
******************************************************
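For reference, the "raid fault policy: allocate" setting above corresponds to the dmeventd repair policy in lvm.conf; a minimal sketch of the assumed configuration on the test host:

  # /etc/lvm/lvm.conf (activation section), assumed for this run:
  activation {
      # "allocate" lets dmeventd replace a failed raid leg with free space
      # on another PV in the same VG; "warn" would only log the failure.
      raid_fault_policy = "allocate"
  }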

Creating raids(s) on host-050.virt.lab.msp.redhat.com...
host-050.virt.lab.msp.redhat.com: lvcreate --type raid6 -i 3 -n synced_multiple_raid6_3legs_1 -L 500M black_bird /dev/sdg1:0-2000 /dev/sdh1:0-2000 /dev/sda1:0-2000 /dev/sde1:0-2000 /dev/sdb1:0-2000
WARNING: ext3 signature detected on /dev/black_bird/synced_multiple_raid6_3legs_1 at offset 1080. Wipe it? [y/n] 
  1 existing signature left on the device.
host-050.virt.lab.msp.redhat.com: lvcreate --type raid6 -i 3 -n synced_multiple_raid6_3legs_2 -L 500M black_bird /dev/sdg1:0-2000 /dev/sdh1:0-2000 /dev/sda1:0-2000 /dev/sde1:0-2000 /dev/sdb1:0-2000
WARNING: ext3 signature detected on /dev/black_bird/synced_multiple_raid6_3legs_2 at offset 1080. Wipe it? [y/n] 
  1 existing signature left on the device.

Current mirror/raid device structure(s):
  LV                                       Attr       LSize   Cpy%Sync Devices
   synced_multiple_raid6_3legs_1            rwi-a-r--- 504.00m    88.89 synced_multiple_raid6_3legs_1_rimage_0(0),synced_multiple_raid6_3legs_1_rimage_1(0),synced_multiple_raid6_3legs_1_rimage_2(0),synced_multiple_raid6_3legs_1_rimage_3(0),synced_multiple_raid6_3legs_1_rimage_4(0)
   [synced_multiple_raid6_3legs_1_rimage_0] Iwi-aor--- 168.00m          /dev/sdg1(1)
   [synced_multiple_raid6_3legs_1_rimage_1] Iwi-aor--- 168.00m          /dev/sdh1(1)
   [synced_multiple_raid6_3legs_1_rimage_2] Iwi-aor--- 168.00m          /dev/sda1(1)
   [synced_multiple_raid6_3legs_1_rimage_3] Iwi-aor--- 168.00m          /dev/sde1(1)
   [synced_multiple_raid6_3legs_1_rimage_4] Iwi-aor--- 168.00m          /dev/sdb1(1)
   [synced_multiple_raid6_3legs_1_rmeta_0]  ewi-aor---   4.00m          /dev/sdg1(0)
   [synced_multiple_raid6_3legs_1_rmeta_1]  ewi-aor---   4.00m          /dev/sdh1(0)
   [synced_multiple_raid6_3legs_1_rmeta_2]  ewi-aor---   4.00m          /dev/sda1(0)
   [synced_multiple_raid6_3legs_1_rmeta_3]  ewi-aor---   4.00m          /dev/sde1(0)
   [synced_multiple_raid6_3legs_1_rmeta_4]  ewi-aor---   4.00m          /dev/sdb1(0)

   synced_multiple_raid6_3legs_2            rwi-aor--- 504.00m     5.56 synced_multiple_raid6_3legs_2_rimage_0(0),synced_multiple_raid6_3legs_2_rimage_1(0),synced_multiple_raid6_3legs_2_rimage_2(0),synced_multiple_raid6_3legs_2_rimage_3(0),synced_multiple_raid6_3legs_2_rimage_4(0)
   [synced_multiple_raid6_3legs_2_rimage_0] Iwi-aor--- 168.00m          /dev/sdg1(44)
   [synced_multiple_raid6_3legs_2_rimage_1] Iwi-aor--- 168.00m          /dev/sdh1(44)
   [synced_multiple_raid6_3legs_2_rimage_2] Iwi-aor--- 168.00m          /dev/sda1(44)
   [synced_multiple_raid6_3legs_2_rimage_3] Iwi-aor--- 168.00m          /dev/sde1(44)
   [synced_multiple_raid6_3legs_2_rimage_4] Iwi-aor--- 168.00m          /dev/sdb1(44)
   [synced_multiple_raid6_3legs_2_rmeta_0]  ewi-aor---   4.00m          /dev/sdg1(43)
   [synced_multiple_raid6_3legs_2_rmeta_1]  ewi-aor---   4.00m          /dev/sdh1(43)
   [synced_multiple_raid6_3legs_2_rmeta_2]  ewi-aor---   4.00m          /dev/sda1(43)
   [synced_multiple_raid6_3legs_2_rmeta_3]  ewi-aor---   4.00m          /dev/sde1(43)
   [synced_multiple_raid6_3legs_2_rmeta_4]  ewi-aor---   4.00m          /dev/sdb1(43)

* NOTE: not enough available devices for allocation fault policies to fully work *
(well, technically, since we have 1, some allocation should work)
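A quick way (assumed, not part of the harness output) to confirm how much spare capacity the allocate policy has to work with before the failure:

  # Hypothetical pre-failure check of spare space in the black_bird VG:
  pvs -o pv_name,vg_name,pv_free | grep black_bird
  vgs -o vg_name,pv_count,vg_free black_bird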

Waiting until all mirror|raid volumes become fully synced...
   2/2 mirror(s) are fully synced: ( 100.00% 100.00% )

Creating ext on top of mirror(s) on host-050.virt.lab.msp.redhat.com...
mke2fs 1.42.9 (28-Dec-2013)
mke2fs 1.42.9 (28-Dec-2013)
Mounting mirrored ext filesystems on host-050.virt.lab.msp.redhat.com...

PV=/dev/sdh1
        synced_multiple_raid6_3legs_1_rimage_1: 1.P
        synced_multiple_raid6_3legs_1_rmeta_1: 1.P
        synced_multiple_raid6_3legs_2_rimage_1: 1.P
        synced_multiple_raid6_3legs_2_rmeta_1: 1.P
PV=/dev/sdg1
        synced_multiple_raid6_3legs_1_rimage_0: 1.P
        synced_multiple_raid6_3legs_1_rmeta_0: 1.P
        synced_multiple_raid6_3legs_2_rimage_0: 1.P
        synced_multiple_raid6_3legs_2_rmeta_0: 1.P

Writing verification files (checkit) to mirror(s) on...
        ---- host-050.virt.lab.msp.redhat.com ----

Sleeping 15 seconds to get some outstanding EXT I/O locks before the failure 
Verifying files (checkit) on mirror(s) on...
        ---- host-050.virt.lab.msp.redhat.com ----


Disabling device sdg on host-050.virt.lab.msp.redhat.com  /dev/sdg1: read failed after 0 of 2048 at 0: Input/output error
Disabling device sdh on host-050.virt.lab.msp.redhat.com  /dev/sdh1: read failed after 0 of 2048 at 0: Input/output error
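The mechanism the harness uses to disable the devices is not shown in the log; a common way to fail a SCSI disk for this kind of test (an assumption, not necessarily what black_bird does) is:

  # Take the backing disks offline so any I/O to them returns EIO,
  # matching the "read failed ... Input/output error" messages above:
  echo offline > /sys/block/sdg/device/state
  echo offline > /sys/block/sdh/device/state
  # (echo running > .../device/state restores a disk later)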

Getting recovery check start time from /var/log/messages: Mar  6 15:57
Attempting I/O to cause mirror down conversion(s) on host-050.virt.lab.msp.redhat.com
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 1.35408 s, 31.0 MB/s
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 1.30658 s, 32.1 MB/s

Verifying current sanity of lvm after the failure

Current mirror/raid device structure(s):
  PV 1443aU-ncU8-OMxB-Lw8S-vS0m-8UlE-yI4B9n not recognised. Is the device missing?
  LV                                               Attr       LSize   Cpy%Sync Devices
   synced_multiple_raid6_3legs_1                    rwi-aor-p- 504.00m   100.00 synced_multiple_raid6_3legs_1_rimage_0(0),synced_multiple_raid6_3legs_1_rimage_1(0),synced_multiple_raid6_3legs_1_rimage_2(0),synced_multiple_raid6_3legs_1_rimage_3(0),synced_multiple_raid6_3legs_1_rimage_4(0)
   [synced_multiple_raid6_3legs_1_rimage_0]         iwi-aor--- 168.00m          /dev/sdd1(1)
   [synced_multiple_raid6_3legs_1_rimage_1]         iwi-aor-p- 168.00m          unknown device(1)
   [synced_multiple_raid6_3legs_1_rimage_2]         iwi-aor--- 168.00m          /dev/sda1(1)
   [synced_multiple_raid6_3legs_1_rimage_3]         iwi-aor--- 168.00m          /dev/sde1(1)
   [synced_multiple_raid6_3legs_1_rimage_4]         iwi-aor--- 168.00m          /dev/sdb1(1)
   [synced_multiple_raid6_3legs_1_rmeta_0]          ewi-aor---   4.00m          /dev/sdd1(0)
   [synced_multiple_raid6_3legs_1_rmeta_1]          ewi-aor-p-   4.00m          unknown device(0)
   [synced_multiple_raid6_3legs_1_rmeta_2]          ewi-aor---   4.00m          /dev/sda1(0)
   [synced_multiple_raid6_3legs_1_rmeta_3]          ewi-aor---   4.00m          /dev/sde1(0)
   [synced_multiple_raid6_3legs_1_rmeta_4]          ewi-aor---   4.00m          /dev/sdb1(0)

   synced_multiple_raid6_3legs_2                    rwi-aor-p- 504.00m   100.00 synced_multiple_raid6_3legs_2_rimage_5(0),synced_multiple_raid6_3legs_2_rimage_1(0),synced_multiple_raid6_3legs_2_rimage_2(0),synced_multiple_raid6_3legs_2_rimage_3(0),synced_multiple_raid6_3legs_2_rimage_4(0)
   synced_multiple_raid6_3legs_2_rimage_0_extracted -wi-----p- 168.00m          unknown device(44)
   [synced_multiple_raid6_3legs_2_rimage_1]         iwi-aor-p- 168.00m          unknown device(44)
   [synced_multiple_raid6_3legs_2_rimage_2]         iwi-aor--- 168.00m          /dev/sda1(44)
   [synced_multiple_raid6_3legs_2_rimage_3]         iwi-aor--- 168.00m          /dev/sde1(44)
   [synced_multiple_raid6_3legs_2_rimage_4]         iwi-aor--- 168.00m          /dev/sdb1(44)
   [synced_multiple_raid6_3legs_2_rimage_5]         iwi-aor--- 168.00m          /dev/sdd1(44)
   synced_multiple_raid6_3legs_2_rmeta_0_extracted  -wi-----p-   4.00m          unknown device(43)
   [synced_multiple_raid6_3legs_2_rmeta_1]          ewi-aor-p-   4.00m          unknown device(43)
   [synced_multiple_raid6_3legs_2_rmeta_2]          ewi-aor---   4.00m          /dev/sda1(43)
   [synced_multiple_raid6_3legs_2_rmeta_3]          ewi-aor---   4.00m          /dev/sde1(43)
   [synced_multiple_raid6_3legs_2_rmeta_4]          ewi-aor---   4.00m          /dev/sdb1(43)
   [synced_multiple_raid6_3legs_2_rmeta_5]          ewi-aor---   4.00m          /dev/sdd1(43)

Verifying FAILED device /dev/sdg1 is *NOT* in the volume(s)
Verifying FAILED device /dev/sdh1 is *NOT* in the volume(s)
Verifying IMAGE device /dev/sda1 *IS* in the volume(s)
Verifying IMAGE device /dev/sde1 *IS* in the volume(s)
Verifying IMAGE device /dev/sdb1 *IS* in the volume(s)
verify the rimage/rmeta dm devices remain after the failures

Checking EXISTENCE and STATE of synced_multiple_raid6_3legs_1_rimage_1 on: host-050.virt.lab.msp.redhat.com 
Checking EXISTENCE and STATE of synced_multiple_raid6_3legs_1_rmeta_1 on: host-050.virt.lab.msp.redhat.com 
Checking EXISTENCE and STATE of synced_multiple_raid6_3legs_2_rimage_1 on: host-050.virt.lab.msp.redhat.com 
Checking EXISTENCE and STATE of synced_multiple_raid6_3legs_2_rmeta_1 on: host-050.virt.lab.msp.redhat.com 
Checking EXISTENCE and STATE of synced_multiple_raid6_3legs_1_rimage_0 on: host-050.virt.lab.msp.redhat.com 
Checking EXISTENCE and STATE of synced_multiple_raid6_3legs_1_rmeta_0 on: host-050.virt.lab.msp.redhat.com 
Checking EXISTENCE and STATE of synced_multiple_raid6_3legs_2_rimage_0 on: host-050.virt.lab.msp.redhat.com 
synced_multiple_raid6_3legs_2_rimage_0 on host-050.virt.lab.msp.redhat.com should still exist
        Looks like this is BUG 1030130 (_extracted), moving on...

Checking EXISTENCE and STATE of synced_multiple_raid6_3legs_2_rmeta_0 on: host-050.virt.lab.msp.redhat.com 
synced_multiple_raid6_3legs_2_rmeta_0 on host-050.virt.lab.msp.redhat.com should still exist
        Looks like this is BUG 1030130 (_extracted), moving on...
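When the allocate policy kicks in, the failed sub-LVs are split out and renamed with an _extracted suffix; per bug 1030130 they can be left behind as visible LVs. A possible manual inspection and cleanup (a sketch, not part of the harness):

  # List any leftover extracted images in the VG:
  lvs -a -o lv_name,lv_attr,devices black_bird | grep _extracted
  # Possible cleanup once the failure has been dealt with (order and the need
  # for --force may vary): remove the leftovers, then drop the missing PVs.
  lvremove black_bird/synced_multiple_raid6_3legs_2_rimage_0_extracted
  lvremove black_bird/synced_multiple_raid6_3legs_2_rmeta_0_extracted
  vgreduce --removemissing black_bird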

Verify the raid image order is what's expected based on raid fault policy
EXPECTED LEG ORDER: unknown unknown /dev/sda1 /dev/sde1 /dev/sdb1
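The order check itself is not shown in the log; the current leg order can be inspected (a sketch) from the LV's device list or the device-mapper table:

  # Leg order as reported by LVM (rimage devices listed in positional order):
  lvs --noheadings -a -o lv_name,devices black_bird | grep synced_multiple_raid6_3legs_1
  # Or directly from the dm table, which lists rmeta/rimage pairs per leg:
  dmsetup table black_bird-synced_multiple_raid6_3legs_1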

Comment 4 Jonathan Earl Brassow 2014-03-11 16:53:32 UTC
*** Bug 1067229 has been marked as a duplicate of this bug. ***

Comment 7 Jonathan Earl Brassow 2015-06-30 17:02:54 UTC
Currently running:
$> ./black_bird -o bp-01 -e kill_multiple_synced_raid1_4legs,kill_multiple_synced_raid6_3legs

No results yet.  If there are any others that I should be running, please let me know.  If I fail to repro, then I will switch to using all tests.

Comment 8 Jonathan Earl Brassow 2015-06-30 21:08:20 UTC
15 iterations of:
 kill_multiple_synced_raid1_4legs
 kill_multiple_synced_raid6_3legs

Can you retest?  I'll be running the full suite overnight also.
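The exact way the 15 iterations were driven is not shown; a simple loop along these lines (an assumption, the harness may provide its own repeat option) would do it:

  # Hypothetical driver for repeated runs of the two scenarios:
  for i in $(seq 1 15); do
      ./black_bird -o bp-01 -e kill_multiple_synced_raid1_4legs,kill_multiple_synced_raid6_3legs || break
  done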

Comment 9 Jonathan Earl Brassow 2015-07-06 20:48:08 UTC
No _extracted images after running tests.  I'm closing this bug as WORKSFORME.  It might be more appropriate to close it as {CURRENT|NEXT}RELEASE, but I've not been able to reproduce it after trying for two consecutive releases.
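A quick post-run check for the symptom tracked here (a sketch, not quoted from the original comment):

  # Exit status 1 from grep (nothing found) is the expected clean result:
  lvs -a --noheadings -o lv_name black_bird | grep _extracted || echo "no _extracted images"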

Comment 10 Corey Marthaler 2015-11-13 15:37:25 UTC
It appears I may have hit this bug again in 7.2. However, I'm going to leave this closed until I have more to go on...

3.10.0-327.el7.x86_64
lvm2-2.02.130-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
lvm2-libs-2.02.130-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
lvm2-cluster-2.02.130-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-1.02.107-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-libs-1.02.107-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-event-1.02.107-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-event-libs-1.02.107-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-persistent-data-0.5.5-1.el7    BUILT: Thu Aug 13 09:58:10 CDT 2015
cmirror-2.02.130-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
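For reference, the kernel and package information above can be collected with something like the following (the query format is an assumption, not taken from the report):

  uname -r
  rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}    BUILT: %{BUILDTIME:date}\n' \
      'lvm2*' 'device-mapper*' 'cmirror*' | sort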


[...]
================================================================================
Iteration 0.36 started at Thu Nov 12 14:59:14 CST 2015
================================================================================
Scenario kill_two_synced_raid10_3legs: Kill two legs (none of which share the same stripe leg) of synced 3 leg raid10 volume(s)

********* RAID hash info for this scenario *********
* names:              synced_two_raid10_3legs_1
* sync:               1
* type:               raid10
* -m |-i value:       3
* leg devices:        /dev/sda1 /dev/sde1 /dev/sdf1 /dev/sdd1 /dev/sdb1 /dev/sdg1
* spanned legs:        0
* failpv(s):          /dev/sda1 /dev/sdf1
* additional snap:    /dev/sde1
* failnode(s):        host-112.virt.lab.msp.redhat.com
* lvmetad:            0
* raid fault policy:  allocate
******************************************************

Creating raids(s) on host-112.virt.lab.msp.redhat.com...
host-112.virt.lab.msp.redhat.com: lvcreate  --type raid10 -i 3 -n synced_two_raid10_3legs_1 -L 500M black_bird /dev/sda1:0-2400 /dev/sde1:0-2400 /dev/sdf1:0-2400 /dev/sdd1:0-2400 /dev/sdb1:0-2400 /dev/sdg1:0-2400

Current mirror/raid device structure(s):
  LV                                   Attr       LSize   Cpy%Sync Devices
  synced_two_raid10_3legs_1            rwi-a-r--- 504.00m 100.00   synced_two_raid10_3legs_1_rimage_0(0),synced_two_raid10_3legs_1_rimage_1(0),synced_two_raid10_3legs_1_rimage_2(0),synced_two_raid10_3legs_1_rimage_3(0),synced_two_raid10_3legs_1_rimage_4(0),synced_two_raid10_3legs_1_rimage_5(0)
  [synced_two_raid10_3legs_1_rimage_0] iwi-aor--- 168.00m          /dev/sda1(1)
  [synced_two_raid10_3legs_1_rimage_1] iwi-aor--- 168.00m          /dev/sde1(1)
  [synced_two_raid10_3legs_1_rimage_2] iwi-aor--- 168.00m          /dev/sdf1(1)
  [synced_two_raid10_3legs_1_rimage_3] iwi-aor--- 168.00m          /dev/sdd1(1)
  [synced_two_raid10_3legs_1_rimage_4] iwi-aor--- 168.00m          /dev/sdb1(1)
  [synced_two_raid10_3legs_1_rimage_5] iwi-aor--- 168.00m          /dev/sdg1(1)
  [synced_two_raid10_3legs_1_rmeta_0]  ewi-aor---   4.00m          /dev/sda1(0)
  [synced_two_raid10_3legs_1_rmeta_1]  ewi-aor---   4.00m          /dev/sde1(0)
  [synced_two_raid10_3legs_1_rmeta_2]  ewi-aor---   4.00m          /dev/sdf1(0)
  [synced_two_raid10_3legs_1_rmeta_3]  ewi-aor---   4.00m          /dev/sdd1(0)
  [synced_two_raid10_3legs_1_rmeta_4]  ewi-aor---   4.00m          /dev/sdb1(0)
  [synced_two_raid10_3legs_1_rmeta_5]  ewi-aor---   4.00m          /dev/sdg1(0)

* NOTE: not enough available devices for allocation fault policies to fully work *
(well, technically, since we have 1, some allocation should work)

Waiting until all mirror|raid volumes become fully synced...
   1/1 mirror(s) are fully synced: ( 100.00% )
Sleeping 15 sec
 
Creating ext on top of mirror(s) on host-112.virt.lab.msp.redhat.com...
mke2fs 1.42.9 (28-Dec-2013)
Mounting mirrored ext filesystems on host-112.virt.lab.msp.redhat.com...

PV=/dev/sdf1
     synced_two_raid10_3legs_1_rimage_2: 1.P
     synced_two_raid10_3legs_1_rmeta_2: 1.P
PV=/dev/sda1
     synced_two_raid10_3legs_1_rimage_0: 1.P
     synced_two_raid10_3legs_1_rmeta_0: 1.P

Creating a snapshot volume of each of the raids
Writing verification files (checkit) to mirror(s) on...
     ---- host-112.virt.lab.msp.redhat.com ----

<start name="host-112.virt.lab.msp.redhat.com_synced_two_raid10_3legs_1" pid="3253" time="Thu Nov 12 14:59:57 2015" type="cmd" />
Sleeping 15 seconds to get some outstanding I/O locks before the failure 
Verifying files (checkit) on mirror(s) on...
     ---- host-112.virt.lab.msp.redhat.com ----

Disabling device sda on host-112.virt.lab.msp.redhat.com
Disabling device sdf on host-112.virt.lab.msp.redhat.com
 
Getting recovery check start time from /var/log/messages: Nov 12 14:48
Attempting I/O to cause mirror down conversion(s) on host-112.virt.lab.msp.redhat.com
dd if=/dev/zero of=/mnt/synced_two_raid10_3legs_1/ddfile count=10 bs=4M
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.0892571 s, 470 MB/s

Verifying current sanity of lvm after the failure

Current mirror/raid device structure(s):
  Couldn't find device with uuid AL67Ze-viXZ-Va60-YiZu-M4CX-a2x6-W6cfvk.
  Couldn't find device with uuid HV2T11-2mnS-49DZ-8YQd-dodY-H7pe-usByHS.
  LV                                            Attr       LSize   Cpy%Sync Devices
  bb_snap1                                      swi-a-s--- 252.00m          /dev/sde1(43)
  synced_two_raid10_3legs_1                     owi-aor-p- 504.00m 100.00   synced_two_raid10_3legs_1_rimage_6(0),synced_two_raid10_3legs_1_rimage_1(0),synced_two_raid10_3legs_1_rimage_2(0),synced_two_raid10_3legs_1_rimage_3(0),synced_two_raid10_3legs_1_rimage_4(0),synced_two_raid10_3legs_1_rimage_5(0)
  synced_two_raid10_3legs_1_rimage_0__extracted -wi-----p- 168.00m          unknown device(1)
  [synced_two_raid10_3legs_1_rimage_1]          iwi-aor--- 168.00m          /dev/sde1(1)
  [synced_two_raid10_3legs_1_rimage_2]          iwi-a-r-p- 168.00m          unknown device(1)
  [synced_two_raid10_3legs_1_rimage_3]          iwi-aor--- 168.00m          /dev/sdd1(1)
  [synced_two_raid10_3legs_1_rimage_4]          iwi-aor--- 168.00m          /dev/sdb1(1)
  [synced_two_raid10_3legs_1_rimage_5]          iwi-aor--- 168.00m          /dev/sdg1(1)
  [synced_two_raid10_3legs_1_rimage_6]          iwi-aor--- 504.00m          /dev/sdc1(1)
  synced_two_raid10_3legs_1_rmeta_0__extracted  -wi-----p-   4.00m          unknown device(0)
  [synced_two_raid10_3legs_1_rmeta_1]           ewi-aor---   4.00m          /dev/sde1(0)
  [synced_two_raid10_3legs_1_rmeta_2]           ewi-a-r-p-   4.00m          unknown device(0)
  [synced_two_raid10_3legs_1_rmeta_3]           ewi-aor---   4.00m          /dev/sdd1(0)
  [synced_two_raid10_3legs_1_rmeta_4]           ewi-aor---   4.00m          /dev/sdb1(0)
  [synced_two_raid10_3legs_1_rmeta_5]           ewi-aor---   4.00m          /dev/sdg1(0)
  [synced_two_raid10_3legs_1_rmeta_6]           ewi-aor---   4.00m          /dev/sdc1(0)

 
Verifying FAILED device /dev/sda1 is *NOT* in the volume(s)
Verifying FAILED device /dev/sdf1 is *NOT* in the volume(s)
Verifying IMAGE device /dev/sde1 *IS* in the volume(s)
Verifying IMAGE device /dev/sdd1 *IS* in the volume(s)
Verifying IMAGE device /dev/sdb1 *IS* in the volume(s)
Verifying IMAGE device /dev/sdg1 *IS* in the volume(s)

Verify the rimage/rmeta dm devices remain after the failures
Checking EXISTENCE and STATE of synced_two_raid10_3legs_1_rimage_2 on: host-112.virt.lab.msp.redhat.com 
Checking EXISTENCE and STATE of synced_two_raid10_3legs_1_rmeta_2 on: host-112.virt.lab.msp.redhat.com 
Checking EXISTENCE and STATE of synced_two_raid10_3legs_1_rimage_0 on: host-112.virt.lab.msp.redhat.com 

synced_two_raid10_3legs_1_rimage_0 on host-112.virt.lab.msp.redhat.com should still exist

Comment 11 Corey Marthaler 2015-11-13 17:32:41 UTC
Reproduced the "_extracted" image issue again. However, like the scenario in comment #10 above, and *unlike* the original report, this appears so far to happen only with raid10 volumes and does not require multiple raid volumes. So, opening a new bug...
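For completeness, one possible manual recovery path for an LV left in this degraded state (an assumption, not something attempted in the report; it needs a spare PV in the VG):

  # Replace the failed images, drop the missing PVs, then confirm whether
  # any leftover extracted images remain:
  lvconvert --repair black_bird/synced_two_raid10_3legs_1
  vgreduce --removemissing black_bird
  lvs -a black_bird | grep _extracted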

