Bug 1418832 - possible data corruption after the transient failure of raid volume
Summary: possible data corruption after the transient failure of raid volume
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.9
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-02-02 20:45 UTC by Corey Marthaler
Modified: 2017-12-06 12:07 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-06 12:07:29 UTC
Target Upstream Version:
Embargoed:



Description Corey Marthaler 2017-02-02 20:45:31 UTC
Description of problem:
In this scenario, all files were verified before a transient failure of one of the two devices that made up the first raid5 leg. The device was then re-enabled and the volume refreshed, after which checkit reported data corruption in one of the verification files.
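For anyone retracing this by hand, the sequence the harness drives can be approximated roughly as follows. This is a minimal sketch distilled from the log below; the LV name, mount point, checksum file, and the sysfs offline/running trick for simulating the transient device failure are assumptions, not the harness's exact mechanism:

# Create a synced raid5 LV in VG black_bird, put an ext filesystem on it, and mount it
lvcreate --type raid5 -i 2 -n bug_repro_lv -L 500M black_bird
mkfs.ext4 /dev/black_bird/bug_repro_lv          # the harness used mke2fs; any ext variant should do
mount /dev/black_bird/bug_repro_lv /mnt/bug_repro

# Write known data and record a checksum, then simulate a transient failure of one leg PV
dd if=/dev/urandom of=/mnt/bug_repro/verify.dat bs=1M count=50 conv=fsync
md5sum /mnt/bug_repro/verify.dat > /tmp/verify.md5
echo offline > /sys/block/sdd/device/state      # assumed failure-injection method

# Further I/O marks the image dead ('D' in dmsetup status); then deactivate/reactivate the LV
dd if=/dev/zero of=/mnt/bug_repro/ddfile bs=4M count=10
umount /mnt/bug_repro
lvchange -an black_bird/bug_repro_lv
lvchange -ay black_bird/bug_repro_lv

# Bring the device back, let LVM re-read metadata, and refresh the raid LV
echo running > /sys/block/sdd/device/state      # assumed re-enable method
vgs
lvchange --refresh black_bird/bug_repro_lv

# Re-mount and compare against the recorded checksum; a mismatch reproduces the corruption
mount /dev/black_bird/bug_repro_lv /mnt/bug_repro
md5sum -c /tmp/verify.md5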


./black_bird -T -e kill_second_spanned_primary_synced_raid5_2legs

================================================================================
Iteration 0.6 started at Thu Feb  2 11:52:51 CST 2017
================================================================================
Scenario kill_second_spanned_primary_synced_raid5_2legs: Kill primary leg of synced 2 leg raid5 volume(s)

********* RAID hash info for this scenario *********
* names:              synced_spanned_primary_raid5_2legs_1
* sync:               1
* type:               raid5
* -m |-i value:       2
* leg devices:        /dev/sdg1 /dev/sda1 /dev/sde1 /dev/sdd1 /dev/sdb1 /dev/sdf1
* spanned legs:       1
* manual repair:      0
* failpv(s):          /dev/sdd1
* failnode(s):        host-112
* lvmetad:            0
* raid fault policy:  warn
******************************************************

Creating raids(s) on host-112...
host-112: lvcreate --type raid5 -i 2 -n synced_spanned_primary_raid5_2legs_1 -L 500M black_bird /dev/sdg1:0-62 /dev/sda1:0-62 /dev/sde1:0-62 /dev/sdd1:0-62 /dev/sdb1:0-62 /dev/sdf1:0-62

Current mirror/raid device structure(s):
  LV                                              Attr       LSize   Cpy%Sync Devices
  synced_spanned_primary_raid5_2legs_1            rwi-a-r--- 504.00m 0.00     synced_spanned_primary_raid5_2legs_1_rimage_0(0),synced_spanned_primary_raid5_2legs_1_rimage_1(0),synced_spanned_primary_raid5_2legs_1_rimage_2(0)
  [synced_spanned_primary_raid5_2legs_1_rimage_0] Iwi-aor--- 252.00m          /dev/sdg1(1)
  [synced_spanned_primary_raid5_2legs_1_rimage_0] Iwi-aor--- 252.00m          /dev/sdd1(0)
  [synced_spanned_primary_raid5_2legs_1_rimage_1] Iwi-aor--- 252.00m          /dev/sda1(1)
  [synced_spanned_primary_raid5_2legs_1_rimage_1] Iwi-aor--- 252.00m          /dev/sdb1(0)
  [synced_spanned_primary_raid5_2legs_1_rimage_2] Iwi-aor--- 252.00m          /dev/sde1(1)
  [synced_spanned_primary_raid5_2legs_1_rimage_2] Iwi-aor--- 252.00m          /dev/sdf1(0)
  [synced_spanned_primary_raid5_2legs_1_rmeta_0]  ewi-aor---   4.00m          /dev/sdg1(0)
  [synced_spanned_primary_raid5_2legs_1_rmeta_1]  ewi-aor---   4.00m          /dev/sda1(0)
  [synced_spanned_primary_raid5_2legs_1_rmeta_2]  ewi-aor---   4.00m          /dev/sde1(0)


Waiting until all mirror|raid volumes become fully synced...
   1/1 mirror(s) are fully synced: ( 100.00% )

Creating ext on top of mirror(s) on host-112...
mke2fs 1.41.12 (17-May-2010)
Mounting mirrored ext filesystems on host-112...

PV=/dev/sdd1
        synced_spanned_primary_raid5_2legs_1_rimage_0: 2

Writing verification files (checkit) to mirror(s) on...
        ---- host-112 ----

<start name="host-112_synced_spanned_primary_raid5_2legs_1"  pid="28749" time="Thu Feb  2 11:53:31 2017 -0600" type="cmd" />
Sleeping 15 seconds to get some outstanding I/O locks before the failure


# Data was verified before failure

Verifying files (checkit) on mirror(s) on...
        ---- host-112 ----


Disabling device sdd on host-112

Attempting I/O to cause mirror down conversion(s) on host-112
dd if=/dev/zero of=/mnt/synced_spanned_primary_raid5_2legs_1/ddfile count=10 bs=4M
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.199886 s, 210 MB/s
dd if=/dev/zero of=/mnt/synced_spanned_primary_raid5_2legs_1/ddfile seek=200 count=50 bs=1M
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 0.33839 s, 155 MB/s

HACK TO KILL XDOIO...
<fail name="host-112_synced_spanned_primary_raid5_2legs_1"  pid="28749" time="Thu Feb  2 11:54:01 2017 -0600" type="cmd" duration="30" ec="143" />
ALL STOP!
Unmounting ext and removing mnt point on host-112...

Verifying proper "D"ead kernel status state for failed raid images(s)
black_bird-synced_spanned_primary_raid5_2legs_1: 0 1032192 raid raid5_ls 3 DAA 516096/516096 idle 0

Reactivating the raids containing transiently failed raid images
lvchange -an black_bird/synced_spanned_primary_raid5_2legs_1
  /dev/sdd1: read failed after 0 of 1024 at 22545367040: Input/output error
  /dev/sdd1: read failed after 0 of 1024 at 22545448960: Input/output error
  /dev/sdd1: read failed after 0 of 1024 at 0: Input/output error
  /dev/sdd1: read failed after 0 of 1024 at 4096: Input/output error
  /dev/sdd1: read failed after 0 of 2048 at 0: Input/output error
  Couldn't find device with uuid 8YLcFi-GNWz-os8B-8PEx-EYyX-oD2f-LMcPYc.
  Couldn't find device for segment belonging to black_bird/synced_spanned_primary_raid5_2legs_1_rimage_0 while checking used and assumed devices.


lvchange -ay  black_bird/synced_spanned_primary_raid5_2legs_1
  Couldn't find device with uuid 8YLcFi-GNWz-os8B-8PEx-EYyX-oD2f-LMcPYc.

Verifying proper kernel table state of failed image(s)
Verifying proper "D"ead kernel status state for failed raid images(s)
No dead "D" kernel state was found for this raid image
This is a known issue where triggering a repair in raid4|5 spanned volumes is difficult and inconsistent, moving on...


Enabling device sdd on host-112
Running vgs to make LVM update metadata version if possible (will restore a-m PVs)

Refreshing raids now that transiently failed raid images should be back
lvchange --refresh black_bird/synced_spanned_primary_raid5_2legs_1

Verifying current sanity of lvm after the failure
Verifying proper "A"ctive kernel status state for raid image(s)

Current mirror/raid device structure(s):
  LV                                              Attr       LSize   Cpy%Sync Devices
  synced_spanned_primary_raid5_2legs_1            rwi-a-r--- 504.00m 100.00   synced_spanned_primary_raid5_2legs_1_rimage_0(0),synced_spanned_primary_raid5_2legs_1_rimage_1(0),synced_spanned_primary_raid5_2legs_1_rimage_2(0)
  [synced_spanned_primary_raid5_2legs_1_rimage_0] iwi-aor--- 252.00m          /dev/sdg1(1)
  [synced_spanned_primary_raid5_2legs_1_rimage_0] iwi-aor--- 252.00m          /dev/sdd1(0)
  [synced_spanned_primary_raid5_2legs_1_rimage_1] iwi-aor--- 252.00m          /dev/sda1(1)
  [synced_spanned_primary_raid5_2legs_1_rimage_1] iwi-aor--- 252.00m          /dev/sdb1(0)
  [synced_spanned_primary_raid5_2legs_1_rimage_2] iwi-aor--- 252.00m          /dev/sde1(1)
  [synced_spanned_primary_raid5_2legs_1_rimage_2] iwi-aor--- 252.00m          /dev/sdf1(0)
  [synced_spanned_primary_raid5_2legs_1_rmeta_0]  ewi-aor---   4.00m          /dev/sdg1(0)
  [synced_spanned_primary_raid5_2legs_1_rmeta_1]  ewi-aor---   4.00m          /dev/sda1(0)
  [synced_spanned_primary_raid5_2legs_1_rmeta_2]  ewi-aor---   4.00m          /dev/sde1(0)


Checking for leftover '-missing_0_0' or 'unknown devices'
Verifying FAILED device /dev/sdd1 *IS BACK* in the volume(s)
Verifying IMAGE device /dev/sdg1 *IS* in the volume(s)
Verifying IMAGE device /dev/sda1 *IS* in the volume(s)
Verifying IMAGE device /dev/sde1 *IS* in the volume(s)
Verifying IMAGE device /dev/sdb1 *IS* in the volume(s)
Verifying IMAGE device /dev/sdf1 *IS* in the volume(s)
Verify the rimage/rmeta dm devices remain after the failures
DM:synced_spanned_primary_raid5_2legs_1_rimage_0
Verify the rimage/rmeta dm devices remain after the failures
Checking EXISTENCE and STATE of synced_spanned_primary_raid5_2legs_1_rimage_0 on: host-112 

Verify the raid image order is what's expected based on raid fault policy
EXPECTED SPAN LEG ORDER: /dev/sdg1 unknown /dev/sda1 /dev/sdb1 /dev/sde1 /dev/sdf1
SPAN EXPECTED ORDER: /dev/sdg1 unknown /dev/sda1 /dev/sdb1 /dev/sde1 /dev/sdf1
ACTUAL LEG ORDER: /dev/sdg1 /dev/sdd1 /dev/sda1 /dev/sdb1 /dev/sde1 /dev/sdf1

Verifying proper kernel table state of failed image(s)

Mounting mirrored ext filesystems on host-112...
Verifying files (checkit) on mirror(s) on...
        ---- host-112 ----
*** DATA COMPARISON ERROR [file:oekjufkpcufhjveluqwblwpboqqqrmmiswrnefggcnnlmry] ***
Corrupt regions follow - unprintable chars are represented as '.'
-----------------------------------------------------------------
corrupt bytes starting at file offset 274432
    1st 32 expected bytes:  88888888888888888888888888888888
    1st 32 actual bytes:    ................................

checkit write verify failed


Version-Release number of selected component (if applicable):
2.6.32-688.el6.x86_64

lvm2-2.02.143-12.el6    BUILT: Wed Jan 11 09:35:04 CST 2017
lvm2-libs-2.02.143-12.el6    BUILT: Wed Jan 11 09:35:04 CST 2017
lvm2-cluster-2.02.143-12.el6    BUILT: Wed Jan 11 09:35:04 CST 2017
udev-147-2.73.el6_8.2    BUILT: Tue Aug 30 08:17:19 CDT 2016
device-mapper-1.02.117-12.el6    BUILT: Wed Jan 11 09:35:04 CST 2017
device-mapper-libs-1.02.117-12.el6    BUILT: Wed Jan 11 09:35:04 CST 2017
device-mapper-event-1.02.117-12.el6    BUILT: Wed Jan 11 09:35:04 CST 2017
device-mapper-event-libs-1.02.117-12.el6    BUILT: Wed Jan 11 09:35:04 CST 2017
device-mapper-persistent-data-0.6.2-0.1.rc7.el6    BUILT: Tue Mar 22 08:58:09 CDT 2016


How reproducible:
Only once so far

Comment 2 Corey Marthaler 2017-02-08 22:35:46 UTC
Looks like this doesn't require a raid image spanning multiple devices. Here, I saw corruption with a regular in-sync raid10 volume experiencing a "transient" primary leg failure.
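For reference, the per-image health characters the harness checks below ("DAAA") come straight from the device-mapper raid status line. A minimal way to watch this by hand, using the LV name from the log that follows (the meaning of the characters is per the dm-raid kernel documentation):

dmsetup status black_bird-synced_primary_raid10_2legs_1
# e.g.: 0 1032192 raid raid10 4 DAAA 1032192/1032192 idle 0
# one character per image: 'A' = alive and in sync, 'a' = alive but not in sync, 'D' = dead/failed

After the refresh every image reports 'A' again, yet, as the end of this comment shows, the file data read back can still be corrupt.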


================================================================================
Iteration 0.15 started at Wed Feb  8 16:11:13 CST 2017
================================================================================
Scenario kill_primary_synced_raid10_2legs: Kill primary leg of synced 2 leg raid10 volume(s)

********* RAID hash info for this scenario *********
* names:              synced_primary_raid10_2legs_1
* sync:               1
* type:               raid10
* -m |-i value:       2
* leg devices:        /dev/sdd1 /dev/sdc1 /dev/sda1 /dev/sdg1
* spanned legs:       0
* manual repair:      0
* failpv(s):          /dev/sdd1
* failnode(s):        host-077
* lvmetad:            0
* raid fault policy:  warn
******************************************************

  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
Creating raids(s) on host-077...
host-077: lvcreate --type raid10 -i 2 -n synced_primary_raid10_2legs_1 -L 500M black_bird /dev/sdd1:0-2400 /dev/sdc1:0-2400 /dev/sda1:0-2400 /dev/sdg1:0-2400
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.

Current mirror/raid device structure(s):
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  LV                                       Attr       LSize   Cpy%Sync Devices
   synced_primary_raid10_2legs_1            rwi-a-r--- 504.00m 0.00     synced_primary_raid10_2legs_1_rimage_0(0),synced_primary_raid10_2legs_1_rimage_1(0),synced_primary_raid10_2legs_1_rimage_2(0),synced_primary_raid10_2legs_1_rimage_3(0)
   [synced_primary_raid10_2legs_1_rimage_0] Iwi-aor--- 252.00m          /dev/sdd1(1)
   [synced_primary_raid10_2legs_1_rimage_1] Iwi-aor--- 252.00m          /dev/sdc1(1)
   [synced_primary_raid10_2legs_1_rimage_2] Iwi-aor--- 252.00m          /dev/sda1(1)
   [synced_primary_raid10_2legs_1_rimage_3] Iwi-aor--- 252.00m          /dev/sdg1(1)
   [synced_primary_raid10_2legs_1_rmeta_0]  ewi-aor---   4.00m          /dev/sdd1(0)
   [synced_primary_raid10_2legs_1_rmeta_1]  ewi-aor---   4.00m          /dev/sdc1(0)
   [synced_primary_raid10_2legs_1_rmeta_2]  ewi-aor---   4.00m          /dev/sda1(0)
   [synced_primary_raid10_2legs_1_rmeta_3]  ewi-aor---   4.00m          /dev/sdg1(0)


  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
Waiting until all mirror|raid volumes become fully synced...
   1/1 mirror(s) are fully synced: ( 100.00% )

Creating ext on top of mirror(s) on host-077...
mke2fs 1.41.12 (17-May-2010)
Mounting mirrored ext filesystems on host-077...

  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
PV=/dev/sdd1
        synced_primary_raid10_2legs_1_rimage_0: 2
        synced_primary_raid10_2legs_1_rmeta_0: 2

Writing verification files (checkit) to mirror(s) on...
        ---- host-077 ----

<start name="host-077_synced_primary_raid10_2legs_1"  pid="28769" time="Wed Feb  8 16:11:54 2017 -0600" type="cmd" />
Sleeping 15 seconds to get some outstanding I/O locks before the failure
Verifying files (checkit) on mirror(s) on...
        ---- host-077 ----



Disabling device sdd on host-077

Attempting I/O to cause mirror down conversion(s) on host-077
dd if=/dev/zero of=/mnt/synced_primary_raid10_2legs_1/ddfile count=10 bs=4M
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.307287 s, 136 MB/s

HACK TO KILL XDOIO...
<fail name="host-077_synced_primary_raid10_2legs_1"  pid="28769" time="Wed Feb  8 16:12:24 2017 -0600" type="cmd" duration="30" ec="143" />
ALL STOP!
Unmounting ext and removing mnt point on host-077...

Verifying proper "D"ead kernel status state for failed raid images(s)
black_bird-synced_primary_raid10_2legs_1: 0 1032192 raid raid10 4 DAAA 1032192/1032192 idle 0

Reactivating the raids containing transiently failed raid images
lvchange -an black_bird/synced_primary_raid10_2legs_1
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /dev/sdd1: read failed after 0 of 1024 at 22545367040: Input/output error
  /dev/sdd1: read failed after 0 of 1024 at 22545448960: Input/output error
  /dev/sdd1: read failed after 0 of 1024 at 0: Input/output error
  /dev/sdd1: read failed after 0 of 1024 at 4096: Input/output error
  /dev/sdd1: read failed after 0 of 2048 at 0: Input/output error
  Couldn't find device with uuid 5P3zIl-P9Uf-aFQo-yPEJ-Mgvj-GFWo-w5dRTI.
  Couldn't find device for segment belonging to black_bird/synced_primary_raid10_2legs_1_rimage_0 while checking used and assumed devices.


lvchange -ay  black_bird/synced_primary_raid10_2legs_1
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  Couldn't find device with uuid 5P3zIl-P9Uf-aFQo-yPEJ-Mgvj-GFWo-w5dRTI.

Verifying proper kernel table state of failed image(s)
Verifying proper "D"ead kernel status state for failed raid images(s)
black_bird-synced_primary_raid10_2legs_1: 0 1032192 raid raid10 4 DAAA 1032192/1032192 idle 0

Enabling device sdd on host-077
Running vgs to make LVM update metadata version if possible (will restore a-m PVs)
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.

Refreshing raids now that transiently failed raid images should be back
lvchange --refresh black_bird/synced_primary_raid10_2legs_1
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
HACK: additional refresh as outlined in bug 1265191#c22: lvchange --refresh black_bird/synced_primary_raid10_2legs_1
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.

Verifying current sanity of lvm after the failure
Waiting until all mirror|raid volumes become fully synced...
   1/1 mirror(s) are fully synced: ( 100.00% )

Verifying proper "A"ctive kernel status state for raid image(s) after refresh

Current mirror/raid device structure(s):
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  LV                                       Attr       LSize   Cpy%Sync Devices
  synced_primary_raid10_2legs_1            rwi-a-r--- 504.00m 100.00   synced_primary_raid10_2legs_1_rimage_0(0),synced_primary_raid10_2legs_1_rimage_1(0),synced_primary_raid10_2legs_1_rimage_2(0),synced_primary_raid10_2legs_1_rimage_3(0)
  [synced_primary_raid10_2legs_1_rimage_0] iwi-aor--- 252.00m          /dev/sdd1(1)
  [synced_primary_raid10_2legs_1_rimage_1] iwi-aor--- 252.00m          /dev/sdc1(1)
  [synced_primary_raid10_2legs_1_rimage_2] iwi-aor--- 252.00m          /dev/sda1(1)
  [synced_primary_raid10_2legs_1_rimage_3] iwi-aor--- 252.00m          /dev/sdg1(1)
  [synced_primary_raid10_2legs_1_rmeta_0]  ewi-aor---   4.00m          /dev/sdd1(0)
  [synced_primary_raid10_2legs_1_rmeta_1]  ewi-aor---   4.00m          /dev/sdc1(0)
  [synced_primary_raid10_2legs_1_rmeta_2]  ewi-aor---   4.00m          /dev/sda1(0)
  [synced_primary_raid10_2legs_1_rmeta_3]  ewi-aor---   4.00m          /dev/sdg1(0)


Checking for leftover '-missing_0_0' or 'unknown devices'
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
Verifying FAILED device /dev/sdd1 *IS BACK* in the volume(s)
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
Verifying IMAGE device /dev/sdc1 *IS* in the volume(s)
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
Verifying IMAGE device /dev/sda1 *IS* in the volume(s)
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
Verifying IMAGE device /dev/sdg1 *IS* in the volume(s)
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
Verify the rimage/rmeta dm devices remain after the failures
DM:synced_primary_raid10_2legs_1_rimage_0
DM:synced_primary_raid10_2legs_1_rmeta_0
Verify the rimage/rmeta dm devices remain after the failures
Checking EXISTENCE and STATE of synced_primary_raid10_2legs_1_rimage_0 on: host-077 
Checking EXISTENCE and STATE of synced_primary_raid10_2legs_1_rmeta_0 on: host-077 

Verify the raid image order is what's expected based on raid fault policy
EXPECTED ORDER IS JUST THE ORIGINAL LEG LIST: /dev/sdd1 /dev/sdc1 /dev/sda1 /dev/sdg1
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
  /var/run/lvm/lvmetad.socket: connect failed: No such file or directory
  WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
ACTUAL LEG ORDER: /dev/sdd1 /dev/sdc1 /dev/sda1 /dev/sdg1
/dev/sdd1 ne /dev/sdd1
/dev/sdc1 ne /dev/sdc1
/dev/sda1 ne /dev/sda1
/dev/sdg1 ne /dev/sdg1

Verifying proper kernel table state of failed image(s)
Mounting mirrored ext filesystems on host-077...
Verifying files (checkit) on mirror(s) on...
        ---- host-077 ----
*** DATA COMPARISON ERROR [file:polaxbbrdlwgxdvqsytolrbnyaoqwakoqxqrysjfgil] ***
Corrupt regions follow - unprintable chars are represented as '.'
-----------------------------------------------------------------
corrupt bytes starting at file offset 274432
    1st 32 expected bytes:  OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
    1st 32 actual bytes:    ................................



2.6.32-688.el6.x86_64

lvm2-2.02.143-12.el6    BUILT: Wed Jan 11 09:35:04 CST 2017
lvm2-libs-2.02.143-12.el6    BUILT: Wed Jan 11 09:35:04 CST 2017
lvm2-cluster-2.02.143-12.el6    BUILT: Wed Jan 11 09:35:04 CST 2017
udev-147-2.73.el6_8.2    BUILT: Tue Aug 30 08:17:19 CDT 2016
device-mapper-1.02.117-12.el6    BUILT: Wed Jan 11 09:35:04 CST 2017
device-mapper-libs-1.02.117-12.el6    BUILT: Wed Jan 11 09:35:04 CST 2017
device-mapper-event-1.02.117-12.el6    BUILT: Wed Jan 11 09:35:04 CST 2017
device-mapper-event-libs-1.02.117-12.el6    BUILT: Wed Jan 11 09:35:04 CST 2017
device-mapper-persistent-data-0.6.2-0.1.rc7.el6    BUILT: Tue Mar 22 08:58:09 CDT 2016

Comment 4 Jan Kurik 2017-12-06 12:07:29 UTC
Red Hat Enterprise Linux 6 is in the Production 3 Phase. During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available.

The official life cycle policy can be reviewed here:

http://redhat.com/rhel/lifecycle

This issue does not meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX. If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Note that a strong business justification will be required for re-evaluation. Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL:

https://access.redhat.com/

