Bug 1130329
| Summary: | lvconvert --repair won't reuse physical volumes if the device isn't specified | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | Heinz Mauelshagen <heinzm> |
| lvm2 sub component: | Mirroring and RAID (RHEL6) | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | medium | ||
| Priority: | unspecified | CC: | agk, heinzm, jbrassow, msnitzer, prajnoha, prockai, tlavigne, zkabelac |
| Version: | 6.6 | ||
| Target Milestone: | rc | ||
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | lvm2-2.02.143-2.el6 | Doc Type: | Bug Fix |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2016-05-11 01:15:41 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
|
Description
Corey Marthaler
2014-08-14 21:48:54 UTC
Here's the output from the entire test case attempt (in this case with a fault policy of warn):
Scenario kill_spanned_primary_synced_raid1_2legs: Kill primary leg of synced 2 leg raid1 volume(s)
********* RAID hash info for this scenario *********
* names: synced_spanned_primary_raid1_2legs_1
* sync: 1
* type: raid1
* -m |-i value: 1
* leg devices: /dev/sdg1 /dev/sdb1 /dev/sdd1 /dev/sde1
* spanned legs: 1
* failpv(s): /dev/sdd1
* additional snap: /dev/sdg1
* failnode(s): host-025.virt.lab.msp.redhat.com
* lvmetad: 0
* raid fault policy: warn
******************************************************
Creating raids(s) on host-025.virt.lab.msp.redhat.com...
host-025.virt.lab.msp.redhat.com: lvcreate --type raid1 -m 1 -n synced_spanned_primary_raid1_2legs_1 -L 500M black_bird /dev/sdg1:0-62 /dev/sdb1:0-62 /dev/sdd1:0-62 /dev/sde1:0-62
Current mirror/raid device structure(s):
LV Attr LSize Cpy%Sync Devices
synced_spanned_primary_raid1_2legs_1 rwi-a-r--- 500.00m 0.00 synced_spanned_primary_raid1_2legs_1_rimage_0(0),synced_spanned_primary_raid1_2legs_1_rimage_1(0)
[synced_spanned_primary_raid1_2legs_1_rimage_0] Iwi-aor--- 500.00m /dev/sdg1(1)
[synced_spanned_primary_raid1_2legs_1_rimage_0] Iwi-aor--- 500.00m /dev/sdd1(0)
[synced_spanned_primary_raid1_2legs_1_rimage_1] Iwi-aor--- 500.00m /dev/sdb1(1)
[synced_spanned_primary_raid1_2legs_1_rimage_1] Iwi-aor--- 500.00m /dev/sde1(0)
[synced_spanned_primary_raid1_2legs_1_rmeta_0] ewi-aor--- 4.00m /dev/sdg1(0)
[synced_spanned_primary_raid1_2legs_1_rmeta_1] ewi-aor--- 4.00m /dev/sdb1(0)
Waiting until all mirror|raid volumes become fully syncd...
1/1 mirror(s) are fully synced: ( 100.00% )
Creating ext on top of mirror(s) on host-025.virt.lab.msp.redhat.com...
mke2fs 1.41.12 (17-May-2010)
Mounting mirrored ext filesystems on host-025.virt.lab.msp.redhat.com...
PV=/dev/sdd1
synced_spanned_primary_raid1_2legs_1_rimage_0: 2
Creating a snapshot volume of each of the raids
Writing verification files (checkit) to mirror(s) on...
---- host-025.virt.lab.msp.redhat.com ----
Sleeping 15 seconds to get some outstanding EXT I/O locks before the failure
Verifying files (checkit) on mirror(s) on...
---- host-025.virt.lab.msp.redhat.com ----
Disabling device sdd on host-025.virt.lab.msp.redhat.com
Getting recovery check start time from /var/log/messages: Aug 14 15:57
Attempting I/O to cause mirror down conversion(s) on host-025.virt.lab.msp.redhat.com
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.274696 s, 153 MB/s
Verifying current sanity of lvm after the failure
Current mirror/raid device structure(s):
/dev/sdd1: read failed after 0 of 2048 at 0: Input/output error
[...]
/dev/sdd1: read failed after 0 of 512 at 4096: Input/output error
Couldn't find device with uuid mPAkzT-uXZL-03I7-eUJ3-vHTD-LNhh-hrGoP9.
LV Attr LSize Cpy%Sync Devices
bb_snap1 swi-a-s--- 252.00m /dev/sdg1(63)
synced_spanned_primary_raid1_2legs_1 owi-aor-p- 500.00m 100.00 synced_spanned_primary_raid1_2legs_1_rimage_0(0),synced_spanned_primary_raid1_2legs_1_rimage_1(0)
[synced_spanned_primary_raid1_2legs_1_rimage_0] iwi-aor-p- 500.00m /dev/sdg1(1)
[synced_spanned_primary_raid1_2legs_1_rimage_0] iwi-aor-p- 500.00m unknown device(0)
[synced_spanned_primary_raid1_2legs_1_rimage_1] iwi-aor--- 500.00m /dev/sdb1(1)
[synced_spanned_primary_raid1_2legs_1_rimage_1] iwi-aor--- 500.00m /dev/sde1(0)
[synced_spanned_primary_raid1_2legs_1_rmeta_0] ewi-aor-r- 4.00m /dev/sdg1(0)
[synced_spanned_primary_raid1_2legs_1_rmeta_1] ewi-aor--- 4.00m /dev/sdb1(0)
Verifying FAILED device /dev/sdd1 is *NOT* in the volume(s)
Verifying IMAGE device /dev/sdg1 *IS* in the volume(s)
Verifying IMAGE device /dev/sdb1 *IS* in the volume(s)
Verifying IMAGE device /dev/sde1 *IS* in the volume(s)
Verify the rimage/rmeta dm devices remain after the failures
Checking EXISTENCE and STATE of synced_spanned_primary_raid1_2legs_1_rimage_0 on: host-025.virt.lab.msp.redhat.com
Verify the raid image order is what's expected based on raid fault policy
EXPECTED LEG ORDER: /dev/sdg1 unknown /dev/sdb1 /dev/sde1
ACTUAL LEG ORDER: /dev/sdg1 unknown /dev/sdb1 /dev/sde1
Waiting until all mirror|raid volumes become fully syncd...
1/1 mirror(s) are fully synced: ( 100.00% )
Fault policy is warn, manually repairing failed raid volumes
host-025.virt.lab.msp.redhat.com: 'lvconvert --yes --repair black_bird/synced_spanned_primary_raid1_2legs_1'
[root@host-025 ~]# lvs -a -o +devices
/dev/sdd1: read failed after 0 of 2048 at 0: Input/output error
[...]
Couldn't find device with uuid mPAkzT-uXZL-03I7-eUJ3-vHTD-LNhh-hrGoP9.
LV Attr LSize Cpy%Sync Devices
bb_snap1 swi-a-s--- 252.00m /dev/sdg1(63)
synced_spanned_primary_raid1_2legs_1 owi-aor-p- 500.00m 100.00 synced_spanned_primary_raid1_2legs_1_rimage_0(0),synced_spanned_primary_raid1_2legs_1_rimage_1(0)
[synced_spanned_primary_raid1_2legs_1_rimage_0] iwi-aor-p- 500.00m /dev/sdg1(1)
[synced_spanned_primary_raid1_2legs_1_rimage_0] iwi-aor-p- 500.00m unknown device(0)
[synced_spanned_primary_raid1_2legs_1_rimage_1] iwi-aor--- 500.00m /dev/sdb1(1)
[synced_spanned_primary_raid1_2legs_1_rimage_1] iwi-aor--- 500.00m /dev/sde1(0)
[synced_spanned_primary_raid1_2legs_1_rmeta_0] ewi-aor-r- 4.00m /dev/sdg1(0)
[synced_spanned_primary_raid1_2legs_1_rmeta_1] ewi-aor--- 4.00m /dev/sdb1(0)
[root@host-025 ~]# lvconvert --yes --repair black_bird/synced_spanned_primary_raid1_2legs_1
/dev/sdd1: read failed after 0 of 2048 at 0: Input/output error
[...]
/dev/sdd1: read failed after 0 of 512 at 4096: Input/output error
Couldn't find device with uuid mPAkzT-uXZL-03I7-eUJ3-vHTD-LNhh-hrGoP9.
Unable to extract enough images to satisfy request
Failed to remove the specified images from black_bird/synced_spanned_primary_raid1_2legs_1
Failed to replace faulty devices in black_bird/synced_spanned_primary_raid1_2legs_1.
# THIS WORKS WHEN YOU SPECIFY THE DEVICE TO USE (like what was verified in bug 877221)
[root@host-025 ~]# lvconvert --yes --repair black_bird/synced_spanned_primary_raid1_2legs_1 /dev/sdg1
/dev/sdd1: read failed after 0 of 2048 at 0: Input/output error
[...]
/dev/sdd1: read failed after 0 of 512 at 4096: Input/output error
Couldn't find device with uuid mPAkzT-uXZL-03I7-eUJ3-vHTD-LNhh-hrGoP9.
Insufficient suitable allocatable extents for logical volume : 126 more required
Faulty devices in black_bird/synced_spanned_primary_raid1_2legs_1 successfully replaced.
[root@host-025 ~]# lvs -a -o +devices
/dev/sdd1: open failed: No such device or address
Couldn't find device with uuid mPAkzT-uXZL-03I7-eUJ3-vHTD-LNhh-hrGoP9.
LV Attr LSize Cpy%Sync Devices
bb_snap1 swi-a-s--- 252.00m /dev/sdg1(63)
synced_spanned_primary_raid1_2legs_1 owi-aor--- 500.00m 100.00 synced_spanned_primary_raid1_2legs_1_rimage_0(0),synced_spanned_primary_raid1_2legs_1_rimage_1(0)
[synced_spanned_primary_raid1_2legs_1_rimage_0] iwi-aor--- 500.00m /dev/sdg1(127)
[synced_spanned_primary_raid1_2legs_1_rimage_1] iwi-aor--- 500.00m /dev/sdb1(1)
[synced_spanned_primary_raid1_2legs_1_rimage_1] iwi-aor--- 500.00m /dev/sde1(0)
[synced_spanned_primary_raid1_2legs_1_rmeta_0] ewi-aor--- 4.00m /dev/sdg1(126)
[synced_spanned_primary_raid1_2legs_1_rmeta_1] ewi-aor--- 4.00m /dev/sdb1(0)
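The "warn" fault policy exercised in this scenario is configured in lvm.conf; a minimal fragment for the relevant setting (as documented for the RHEL 6 lvm2 packages) looks like the following — with "warn" the failure is only logged and repair is left to the administrator, while "allocate" lets dmeventd replace the failed leg automatically from free extents in the VG:

```
# lvm.conf — activation section (sketch; other settings omitted)
activation {
    # "warn"     : log the RAID device failure, repair manually
    # "allocate" : dmeventd attempts automatic replacement
    raid_fault_policy = "warn"
}
```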
The exact same failure occurs with the latest rpms, and *without* lvmetad running.
2.6.32-621.el6.x86_64
lvm2-2.02.143-1.el6 BUILT: Wed Feb 24 07:59:50 CST 2016
lvm2-libs-2.02.143-1.el6 BUILT: Wed Feb 24 07:59:50 CST 2016
lvm2-cluster-2.02.143-1.el6 BUILT: Wed Feb 24 07:59:50 CST 2016
udev-147-2.71.el6 BUILT: Wed Feb 10 07:07:17 CST 2016
device-mapper-1.02.117-1.el6 BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-libs-1.02.117-1.el6 BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-event-1.02.117-1.el6 BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-event-libs-1.02.117-1.el6 BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-persistent-data-0.6.2-0.1.rc5.el6 BUILT: Wed Feb 24 07:07:09 CST 2016
cmirror-2.02.143-1.el6 BUILT: Wed Feb 24 07:59:50 CST 2016
================================================================================
Iteration 0.1 started at Thu Mar 3 10:33:49 CST 2016
================================================================================
Scenario kill_second_spanned_primary_synced_raid1_2legs: Kill primary leg of synced 2 leg raid1 volume(s)
********* RAID hash info for this scenario *********
* names: synced_spanned_primary_raid1_2legs_1
* sync: 1
* type: raid1
* -m |-i value: 1
* leg devices: /dev/sdb1 /dev/sde1 /dev/sdg1 /dev/sdh1
* spanned legs: 1
* failpv(s): /dev/sdg1
* failnode(s): host-137.virt.lab.msp.redhat.com
* lvmetad: 0
* raid fault policy: warn
******************************************************
Creating raids(s) on host-137.virt.lab.msp.redhat.com...
host-137.virt.lab.msp.redhat.com: lvcreate --type raid1 -m 1 -n synced_spanned_primary_raid1_2legs_1 -L 500M black_bird /dev/sdb1:0-62 /dev/sde1:0-62 /dev/sdg1:0-62 /dev/sdh1:0-62
Current mirror/raid device structure(s):
LV Attr LSize Cpy%Sync Devices
synced_spanned_primary_raid1_2legs_1 rwi-a-r--- 500.00m 0.00 synced_spanned_primary_raid1_2legs_1_rimage_0(0),synced_spanned_primary_raid1_2legs_1_rimage_1(0)
[synced_spanned_primary_raid1_2legs_1_rimage_0] Iwi-aor--- 500.00m /dev/sdb1(1)
[synced_spanned_primary_raid1_2legs_1_rimage_0] Iwi-aor--- 500.00m /dev/sdg1(0)
[synced_spanned_primary_raid1_2legs_1_rimage_1] Iwi-aor--- 500.00m /dev/sde1(1)
[synced_spanned_primary_raid1_2legs_1_rimage_1] Iwi-aor--- 500.00m /dev/sdh1(0)
[synced_spanned_primary_raid1_2legs_1_rmeta_0] ewi-aor--- 4.00m /dev/sdb1(0)
[synced_spanned_primary_raid1_2legs_1_rmeta_1] ewi-aor--- 4.00m /dev/sde1(0)
Waiting until all mirror|raid volumes become fully syncd...
1/1 mirror(s) are fully synced: ( 100.00% )
Creating ext on top of mirror(s) on host-137.virt.lab.msp.redhat.com...
mke2fs 1.41.12 (17-May-2010)
Mounting mirrored ext filesystems on host-137.virt.lab.msp.redhat.com...
PV=/dev/sdg1
synced_spanned_primary_raid1_2legs_1_rimage_0: 2
Writing verification files (checkit) to mirror(s) on...
---- host-137.virt.lab.msp.redhat.com ----
Sleeping 15 seconds to get some outstanding I/O locks before the failure
Verifying files (checkit) on mirror(s) on...
---- host-137.virt.lab.msp.redhat.com ----
Disabling device sdg on host-137.virt.lab.msp.redhat.com
Getting recovery check start time from /var/log/messages: Mar 3 10:34
Attempting I/O to cause mirror down conversion(s) on host-137.virt.lab.msp.redhat.com
dd if=/dev/zero of=/mnt/synced_spanned_primary_raid1_2legs_1/ddfile count=10 bs=4M
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.344544 s, 122 MB/s
dd if=/dev/zero of=/mnt/synced_spanned_primary_raid1_2legs_1/ddfile seek=200 count=50 bs=1M
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 0.387311 s, 135 MB/s
Verifying current sanity of lvm after the failure
Current mirror/raid device structure(s):
/dev/sdg1: read failed after 0 of 2048 at 0: Input/output error
/dev/sdg1: read failed after 0 of 512 at 21467824128: Input/output error
/dev/sdg1: read failed after 0 of 512 at 21467938816: Input/output error
/dev/sdg1: read failed after 0 of 512 at 0: Input/output error
/dev/sdg1: read failed after 0 of 512 at 4096: Input/output error
Couldn't find device with uuid CEJvEl-72JI-CFZo-qwaU-l9Eh-g4g4-uACQXM.
LV Attr LSize Cpy%Sync Devices
synced_spanned_primary_raid1_2legs_1 rwi-aor-p- 500.00m 100.00 synced_spanned_primary_raid1_2legs_1_rimage_0(0),synced_spanned_primary_raid1_2legs_1_rimage_1(0)
[synced_spanned_primary_raid1_2legs_1_rimage_0] iwi-aor-p- 500.00m /dev/sdb1(1)
[synced_spanned_primary_raid1_2legs_1_rimage_0] iwi-aor-p- 500.00m unknown device(0)
[synced_spanned_primary_raid1_2legs_1_rimage_1] iwi-aor--- 500.00m /dev/sde1(1)
[synced_spanned_primary_raid1_2legs_1_rimage_1] iwi-aor--- 500.00m /dev/sdh1(0)
[synced_spanned_primary_raid1_2legs_1_rmeta_0] ewi-aor-r- 4.00m /dev/sdb1(0)
[synced_spanned_primary_raid1_2legs_1_rmeta_1] ewi-aor--- 4.00m /dev/sde1(0)
Verifying FAILED device /dev/sdg1 is *NOT* in the volume(s)
Verifying IMAGE device /dev/sdb1 *IS* in the volume(s)
Verifying IMAGE device /dev/sde1 *IS* in the volume(s)
Verifying IMAGE device /dev/sdh1 *IS* in the volume(s)
Verify the rimage/rmeta dm devices remain after the failures
Checking EXISTENCE and STATE of synced_spanned_primary_raid1_2legs_1_rimage_0 on: host-137.virt.lab.msp.redhat.com
Verify the raid image order is what's expected based on raid fault policy
SPAN / WARN
EXPECTED SPAN LEG ORDER: /dev/sdb1 unknown /dev/sde1 /dev/sdh1
ACTUAL LEG ORDER: /dev/sdb1 unknown /dev/sde1 /dev/sdh1
Waiting until all mirror|raid volumes become fully syncd...
1/1 mirror(s) are fully synced: ( 100.00% )
Fault policy is warn, manually repairing failed raid volumes
host-137.virt.lab.msp.redhat.com: 'lvconvert --yes --repair black_bird/synced_spanned_primary_raid1_2legs_1'
/dev/sdg1: read failed after 0 of 2048 at 0: Input/output error
/dev/sdg1: read failed after 0 of 512 at 21467824128: Input/output error
/dev/sdg1: read failed after 0 of 512 at 21467938816: Input/output error
/dev/sdg1: read failed after 0 of 512 at 0: Input/output error
/dev/sdg1: read failed after 0 of 512 at 4096: Input/output error
Couldn't find device with uuid CEJvEl-72JI-CFZo-qwaU-l9Eh-g4g4-uACQXM.
Unable to extract enough images to satisfy request
Failed to remove the specified images from black_bird/synced_spanned_primary_raid1_2legs_1
Failed to replace faulty devices in black_bird/synced_spanned_primary_raid1_2legs_1.
lvconvert repair failed for black_bird/synced_spanned_primary_raid1_2legs_1 on host-137.virt.lab.msp.redhat.com
[root@host-137 ~]# lvs -a -o +devices
/dev/sdg1: read failed after 0 of 2048 at 0: Input/output error
/dev/sdg1: read failed after 0 of 512 at 21467824128: Input/output error
/dev/sdg1: read failed after 0 of 512 at 21467938816: Input/output error
/dev/sdg1: read failed after 0 of 512 at 0: Input/output error
/dev/sdg1: read failed after 0 of 512 at 4096: Input/output error
Couldn't find device with uuid CEJvEl-72JI-CFZo-qwaU-l9Eh-g4g4-uACQXM.
LV VG Attr LSize Cpy%Sync Devices
synced_spanned_primary_raid1_2legs_1 black_bird rwi-aor-p- 500.00m 100.00 synced_spanned_primary_raid1_2legs_1_rimage_0(0),synced_spanned_primary_raid1_2legs_1_rimage_1(0)
[synced_spanned_primary_raid1_2legs_1_rimage_0] black_bird iwi-aor-p- 500.00m /dev/sdb1(1)
[synced_spanned_primary_raid1_2legs_1_rimage_0] black_bird iwi-aor-p- 500.00m unknown device(0)
[synced_spanned_primary_raid1_2legs_1_rimage_1] black_bird iwi-aor--- 500.00m /dev/sde1(1)
[synced_spanned_primary_raid1_2legs_1_rimage_1] black_bird iwi-aor--- 500.00m /dev/sdh1(0)
[synced_spanned_primary_raid1_2legs_1_rmeta_0] black_bird ewi-aor-r- 4.00m /dev/sdb1(0)
[synced_spanned_primary_raid1_2legs_1_rmeta_1] black_bird ewi-aor--- 4.00m /dev/sde1(0)
[root@host-137 ~]# ps -ef | grep lvmetad
root 12283 2320 0 10:41 pts/0 00:00:00 grep lvmetad
# Without the device specified, the repair fails
[root@host-137 ~]# lvconvert --yes --repair black_bird/synced_spanned_primary_raid1_2legs_1
/dev/sdg1: read failed after 0 of 2048 at 0: Input/output error
/dev/sdg1: read failed after 0 of 512 at 21467824128: Input/output error
/dev/sdg1: read failed after 0 of 512 at 21467938816: Input/output error
/dev/sdg1: read failed after 0 of 512 at 0: Input/output error
/dev/sdg1: read failed after 0 of 512 at 4096: Input/output error
Couldn't find device with uuid CEJvEl-72JI-CFZo-qwaU-l9Eh-g4g4-uACQXM.
Unable to extract enough images to satisfy request
Failed to remove the specified images from black_bird/synced_spanned_primary_raid1_2legs_1
Failed to replace faulty devices in black_bird/synced_spanned_primary_raid1_2legs_1.
# With the device specified, the repair works as expected
[root@host-137 ~]# lvconvert --yes --repair black_bird/synced_spanned_primary_raid1_2legs_1 /dev/sdb1
/dev/sdg1: read failed after 0 of 2048 at 0: Input/output error
/dev/sdg1: read failed after 0 of 512 at 21467824128: Input/output error
/dev/sdg1: read failed after 0 of 512 at 21467938816: Input/output error
/dev/sdg1: read failed after 0 of 512 at 0: Input/output error
/dev/sdg1: read failed after 0 of 512 at 4096: Input/output error
Couldn't find device with uuid CEJvEl-72JI-CFZo-qwaU-l9Eh-g4g4-uACQXM.
Insufficient free space: 126 extents needed, but only 0 available
Faulty devices in black_bird/synced_spanned_primary_raid1_2legs_1 successfully replaced.
[root@host-137 ~]# lvs -a -o +devices
/dev/sdg1: open failed: No such device or address
Couldn't find device with uuid CEJvEl-72JI-CFZo-qwaU-l9Eh-g4g4-uACQXM.
LV VG Attr LSize Cpy%Sync Devices
synced_spanned_primary_raid1_2legs_1 black_bird rwi-aor--- 500.00m 100.00 synced_spanned_primary_raid1_2legs_1_rimage_0(0),synced_spanned_primary_raid1_2legs_1_rimage_1(0)
[synced_spanned_primary_raid1_2legs_1_rimage_0] black_bird iwi-aor--- 500.00m /dev/sdb1(2)
[synced_spanned_primary_raid1_2legs_1_rimage_1] black_bird iwi-aor--- 500.00m /dev/sde1(1)
[synced_spanned_primary_raid1_2legs_1_rimage_1] black_bird iwi-aor--- 500.00m /dev/sdh1(0)
[synced_spanned_primary_raid1_2legs_1_rmeta_0] black_bird ewi-aor--- 4.00m /dev/sdb1(1)
[synced_spanned_primary_raid1_2legs_1_rmeta_1] black_bird ewi-aor--- 4.00m /dev/sde1(0)
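The failing and working invocations above condense into a short sketch (VG, LV, and PV names follow the log above; this requires root and the real failed array, so it is not runnable in a sandbox):

```shell
# Sketch of the two repair paths exercised above.
LV=black_bird/synced_spanned_primary_raid1_2legs_1

# 1. Without a PV list, lvconvert must choose replacement space itself;
#    on the affected builds this fails with
#    "Unable to extract enough images to satisfy request".
lvconvert --yes --repair "$LV"

# 2. Workaround: name a PV that already holds healthy extents of the
#    spanned leg so the allocator reuses it.
lvconvert --yes --repair "$LV" /dev/sdb1

# 3. Confirm the failed leg is no longer referenced.
lvs -a -o +devices black_bird
```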
Got reproducer script.

Fixed in upstream:
https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=18cf5e8e6758db59c9413bd9c9abcc183c49293d

*** Bug 1215156 has been marked as a duplicate of this bug. ***

Marking verified in the latest rpms. Repair now works both manually and automatically, even if the "smart choice" existing device isn't being used (new rfe for that to be opened...).

# Manual attempt with allocate warn fault policy
[root@host-115 ~]# lvs -a -o +devices
/dev/sdd1: read failed after 0 of 4096 at 0: Input/output error
/dev/sdd1: read failed after 0 of 4096 at 26838958080: Input/output error
/dev/sdd1: read failed after 0 of 4096 at 26839048192: Input/output error
/dev/sdd1: read failed after 0 of 4096 at 4096: Input/output error
Couldn't find device with uuid xTPPbl-TQIN-o2xA-7qhV-rv1p-Vxb2-qAsVTE.
LV VG Attr LSize Cpy%Sync Devices
synced_spanned_primary_raid1_2legs_1 black_bird rwi-aor-p- 500.00m 100.00 synced_spanned_primary_raid1_2legs_1_rimage_0(0),synced_spanned_primary_raid1_2legs_1_rimage_1(0)
[synced_spanned_primary_raid1_2legs_1_rimage_0] black_bird iwi-aor-p- 500.00m /dev/sdc1(1)
[synced_spanned_primary_raid1_2legs_1_rimage_0] black_bird iwi-aor-p- 500.00m unknown device(0)
[synced_spanned_primary_raid1_2legs_1_rimage_1] black_bird iwi-aor--- 500.00m /dev/sdg1(1)
[synced_spanned_primary_raid1_2legs_1_rimage_1] black_bird iwi-aor--- 500.00m /dev/sda1(0)
[synced_spanned_primary_raid1_2legs_1_rmeta_0] black_bird ewi-aor-r- 4.00m /dev/sdc1(0)
[synced_spanned_primary_raid1_2legs_1_rmeta_1] black_bird ewi-aor--- 4.00m /dev/sdg1(0)
[root@host-115 ~]# lvconvert --yes --repair black_bird/synced_spanned_primary_raid1_2legs_1
/dev/sdd1: read failed after 0 of 4096 at 0: Input/output error
/dev/sdd1: read failed after 0 of 4096 at 26838958080: Input/output error
/dev/sdd1: read failed after 0 of 4096 at 26839048192: Input/output error
/dev/sdd1: read failed after 0 of 4096 at 4096: Input/output error
Couldn't find device with uuid xTPPbl-TQIN-o2xA-7qhV-rv1p-Vxb2-qAsVTE.
Faulty devices in black_bird/synced_spanned_primary_raid1_2legs_1 successfully replaced.
[root@host-115 ~]# lvs -a -o +devices
/dev/sdd1: open failed: No such device or address
Couldn't find device with uuid xTPPbl-TQIN-o2xA-7qhV-rv1p-Vxb2-qAsVTE.
LV VG Attr LSize Cpy%Sync Devices
synced_spanned_primary_raid1_2legs_1 black_bird rwi-aor--- 500.00m 100.00 synced_spanned_primary_raid1_2legs_1_rimage_0(0),synced_spanned_primary_raid1_2legs_1_rimage_1(0)
[synced_spanned_primary_raid1_2legs_1_rimage_0] black_bird iwi-aor--- 500.00m /dev/sdb1(1)
[synced_spanned_primary_raid1_2legs_1_rimage_1] black_bird iwi-aor--- 500.00m /dev/sdg1(1)
[synced_spanned_primary_raid1_2legs_1_rimage_1] black_bird iwi-aor--- 500.00m /dev/sda1(0)
[synced_spanned_primary_raid1_2legs_1_rmeta_0] black_bird ewi-aor--- 4.00m /dev/sdb1(0)
[synced_spanned_primary_raid1_2legs_1_rmeta_1] black_bird ewi-aor--- 4.00m /dev/sdg1(0)

# Automatic attempt with allocate fault policy
Mar 16 14:55:59 host-113 lvm[4084]: Device #0 of raid1 array, black_bird-synced_spanned_primary_raid1_2legs_1-real, has failed.
Mar 16 14:55:59 host-113 lvm[4084]: Couldn't find device with uuid AD2Zyr-DvX3-foct-puu0-UMkE-P6k2-1ZfLyH.
Mar 16 14:55:59 host-113 lvm[4084]: Couldn't find device with uuid AD2Zyr-DvX3-foct-puu0-UMkE-P6k2-1ZfLyH.
Mar 16 14:55:59 host-113 lvm[4084]: Faulty devices in black_bird/synced_spanned_primary_raid1_2legs_1 successfully replaced.

2.6.32-616.el6.x86_64
lvm2-2.02.143-2.el6 BUILT: Wed Mar 16 08:30:42 CDT 2016
lvm2-libs-2.02.143-2.el6 BUILT: Wed Mar 16 08:30:42 CDT 2016
lvm2-cluster-2.02.143-2.el6 BUILT: Wed Mar 16 08:30:42 CDT 2016
udev-147-2.72.el6 BUILT: Tue Mar 1 06:14:05 CST 2016
device-mapper-1.02.117-2.el6 BUILT: Wed Mar 16 08:30:42 CDT 2016
device-mapper-libs-1.02.117-2.el6 BUILT: Wed Mar 16 08:30:42 CDT 2016
device-mapper-event-1.02.117-2.el6 BUILT: Wed Mar 16 08:30:42 CDT 2016
device-mapper-event-libs-1.02.117-2.el6 BUILT: Wed Mar 16 08:30:42 CDT 2016
device-mapper-persistent-data-0.6.2-0.1.rc5.el6 BUILT: Wed Feb 24 07:07:09 CST 2016
cmirror-2.02.143-2.el6 BUILT: Wed Mar 16 08:30:42 CDT 2016

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0964.html