Bug 758552 - RFE: LVM RAID - Handle device failures
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Jonathan Earl Brassow
QA Contact: Cluster QE
Keywords: FutureFeature
Depends On:
Blocks: 593119 732458
Reported: 2011-11-29 21:38 EST by Jonathan Earl Brassow
Modified: 2012-08-27 10:54 EDT
CC List: 10 users

See Also:
Fixed In Version: lvm2-2.02.95-1.el6
Doc Type: Enhancement
Doc Text:
New Feature to 6.3. No documentation required. Bug 732458 is the bug that requires a release note for the RAID features. Other documentation is found in the LVM manual. Operational bugs need no documentation because they are being fixed before their initial release.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-06-20 11:00:28 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments: None
Description Jonathan Earl Brassow 2011-11-29 21:38:04 EST
LVM RAID should be able to handle device failures automatically, based on user-defined preferences.  RAID logical volumes should remain usable after a failure (not hang or require user intervention), provided enough devices remain for the particular RAID level to maintain usability.
Comment 1 Jonathan Earl Brassow 2011-12-06 15:36:38 EST
LVM's configuration file (lvm.conf) now contains a field called 'raid_fault_policy', which determines the automated action taken in the event of a device failure.  The options for this field are:
  "warn" - produce a warning
  "allocate" - Attempt to replace the failed device with a spare found in the VG

The release criterion is that these two options work as expected.  That is, if the policy is set to "warn", you should find notices in the system log that a device has failed (and, preferably, what to do about it).  If the policy is set to "allocate", the failed device should be replaced by a spare if one is available; if not, the failure to replace the device should be reported in the system log.
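
A minimal lvm.conf sketch (the setting lives in the "activation" section; treat the exact placement and default value as assumptions for your release):

    activation {
        # Automated response to a RAID device failure:
        #   "warn"     - log a warning only; repair is left to the administrator
        #   "allocate" - try to replace the failed device with a spare PV in the VG
        raid_fault_policy = "allocate"
    }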
Comment 2 Jonathan Earl Brassow 2011-12-06 15:37:39 EST
This feature has been added to LVM version 2.02.89.
Comment 7 Jonathan Earl Brassow 2012-01-17 17:07:26 EST
# Automated recovery in action (raid_fault_policy = "allocate")
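# For context, the LV below is a 3-way raid1 volume.  A hypothetical setup command
# that would produce an equivalent layout (size and names here are assumptions):
#
#     lvcreate --type raid1 -m 2 -L 1G -n lv vg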
[root@bp-01 ~]# devices vg
  LV            Copy%  Devices                                     
  lv            100.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0)
  [lv_rimage_0]        /dev/sde1(1)                                
  [lv_rimage_1]        /dev/sdf1(1)                                
  [lv_rimage_2]        /dev/sdg1(1)                                
  [lv_rmeta_0]         /dev/sde1(0)                                
  [lv_rmeta_1]         /dev/sdf1(0)                                
  [lv_rmeta_2]         /dev/sdg1(0)                                
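
# Aside: 'devices' is not a standard LVM command; it is presumably a local alias or
# wrapper around lvs.  A hypothetical equivalent (an assumption, not the actual script):
#
#     alias devices='lvs -a -o name,copy_percent,devices'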


[root@bp-01 ~]# grep raid_fau //etc/lvm/lvm.conf 
    raid_fault_policy = "allocate"


# kill device
[root@bp-01 ~]# off.sh sde
Turning off sde


# write to the LV to trigger I/O (and detection of the failed device)
[root@bp-01 ~]# dd if=/dev/zero of=/dev/vg/lv bs=4M count=1
1+0 records in
1+0 records out
4194304 bytes (4.2 MB) copied, 0.191669 s, 21.9 MB/s


# system log doesn't have details of fix - this is probably worth a new bug!
[root@bp-01 ~]# grep lvm /var/log/messages 
Jan 17 15:57:18 bp-01 lvm[8599]: Device #0 of raid1 array, vg-lv, has failed.
Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 250994294784: Input/output error
Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 250994376704: Input/output error
Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 0: Input/output error
Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 4096: Input/output error
Jan 17 15:57:19 bp-01 lvm[8599]: Couldn't find device with uuid 3lugiV-3eSP-AFAR-sdrP-H20O-wM2M-qdMANy.
Jan 17 15:57:27 bp-01 lvm[8599]: raid1 array, vg-lv, is not in-sync.
Jan 17 15:57:36 bp-01 lvm[8599]: raid1 array, vg-lv, is now in-sync.
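
# To follow the resynchronization progress directly, something like the following
# should work (hedged sketch; copy_percent is the Copy% column shown by 'devices'):
#
#     watch -n1 lvs -a -o name,copy_percent vg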


# Notice the device has been replaced with a new one
[root@bp-01 ~]# devices 
  Couldn't find device with uuid 3lugiV-3eSP-AFAR-sdrP-H20O-wM2M-qdMANy.
  LV            Copy%  Devices                                     
  lv            100.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0)
  [lv_rimage_0]        /dev/sdh1(1)                                
  [lv_rimage_1]        /dev/sdf1(1)                                
  [lv_rimage_2]        /dev/sdg1(1)                                
  [lv_rmeta_0]         /dev/sdh1(0)                                
  [lv_rmeta_1]         /dev/sdf1(0)                                
  [lv_rmeta_2]         /dev/sdg1(0)                                
  lv_home              /dev/sda2(12800)                            
  lv_root              /dev/sda2(0)                                
  lv_swap              /dev/sda2(35493)                            



# Manual handling of failures
[root@bp-01 ~]# devices vg
  LV            Copy%  Devices                                     
  lv            100.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0)
  [lv_rimage_0]        /dev/sdh1(1)                                
  [lv_rimage_1]        /dev/sdf1(1)                                
  [lv_rimage_2]        /dev/sdg1(1)                                
  [lv_rmeta_0]         /dev/sdh1(0)                                
  [lv_rmeta_1]         /dev/sdf1(0)                                
  [lv_rmeta_2]         /dev/sdg1(0)


# kill device
[root@bp-01 ~]# off.sh sdh
Turning off sdh


# run repair by hand
# Again, note the sparse output signifying success - this is probably worth a new bug
[root@bp-01 ~]# lvconvert --repair vg/lv
  /dev/sdh1: read failed after 0 of 2048 at 250994294784: Input/output error
  /dev/sdh1: read failed after 0 of 2048 at 250994376704: Input/output error
  /dev/sdh1: read failed after 0 of 2048 at 0: Input/output error
  /dev/sdh1: read failed after 0 of 2048 at 4096: Input/output error
  Couldn't find device with uuid fbI0YO-GX7x-firU-Vy5o-vzwx-vAKZ-feRxfF.
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
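
# For unattended repair the prompt can be avoided; a hedged sketch (check lvconvert(8)
# on your release before relying on these options):
#
#     lvconvert --repair -y vg/lv               # assume "yes" to all prompts
#     lvconvert --repair --use-policies vg/lv   # follow the lvm.conf fault policy instead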


# Note the replaced device
[root@bp-01 ~]# devices vg
  Couldn't find device with uuid fbI0YO-GX7x-firU-Vy5o-vzwx-vAKZ-feRxfF.
  LV            Copy%  Devices                                     
  lv             64.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0)
  [lv_rimage_0]        /dev/sde1(1)                                
  [lv_rimage_1]        /dev/sdf1(1)                                
  [lv_rimage_2]        /dev/sdg1(1)                                
  [lv_rmeta_0]         /dev/sde1(0)                                
  [lv_rmeta_1]         /dev/sdf1(0)                                
  [lv_rmeta_2]         /dev/sdg1(0)
Comment 10 Corey Marthaler 2012-04-20 12:55:45 EDT
The following raid device failure test cases now pass:

        kill_primary_synced_raid1_2legs
        kill_primary_synced_raid4_2legs
        kill_primary_synced_raid5_2legs
        kill_primary_synced_raid1_3legs
        kill_primary_synced_raid4_3legs
        kill_primary_synced_raid5_3legs
        kill_primary_synced_raid6_3legs
        kill_random_synced_raid1_2legs
        kill_random_synced_raid4_2legs
        kill_random_synced_raid5_2legs
        kill_random_synced_raid1_3legs
        kill_random_synced_raid4_3legs
        kill_random_synced_raid5_3legs
        kill_random_synced_raid6_3legs
        kill_primary_synced_raid6_4legs
        kill_random_synced_raid6_4legs
        kill_multiple_synced_raid1_3legs
        kill_multiple_synced_raid6_3legs
        kill_multiple_synced_raid1_4legs
        kill_primary_non_synced_raid1_2legs
        kill_primary_non_synced_raid1_3legs
        kill_random_non_synced_raid1_2legs
        kill_random_non_synced_raid1_3legs

Marking this feature verified in the latest rpms.
2.6.32-262.el6.x86_64
lvm2-2.02.95-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
lvm2-libs-2.02.95-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
lvm2-cluster-2.02.95-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
udev-147-2.41.el6    BUILT: Thu Mar  1 13:01:08 CST 2012
device-mapper-1.02.74-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
device-mapper-libs-1.02.74-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
device-mapper-event-1.02.74-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
device-mapper-event-libs-1.02.74-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
cmirror-2.02.95-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
Comment 11 Jonathan Earl Brassow 2012-04-23 14:27:37 EDT
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
New Feature to 6.3.  No documentation required.

Bug 732458 is the bug that requires a release note for the RAID features.  Other documentation is found in the LVM manual.

Operational bugs need no documentation because they are being fixed before their initial release.
Comment 13 errata-xmlrpc 2012-06-20 11:00:28 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-0962.html
