Bug 758552

Summary: RFE: LVM RAID - Handle device failures
Product: Red Hat Enterprise Linux 6 Reporter: Jonathan Earl Brassow <jbrassow>
Component: lvm2    Assignee: Jonathan Earl Brassow <jbrassow>
Status: CLOSED ERRATA QA Contact: Cluster QE <mspqa-list>
Severity: unspecified Docs Contact:
Priority: unspecified    
Version: 6.2    CC: agk, cmarthal, dwysocha, heinzm, jbrassow, mbroz, prajnoha, prockai, thornber, zkabelac
Target Milestone: rc    Keywords: FutureFeature
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: lvm2-2.02.95-1.el6 Doc Type: Enhancement
Doc Text:
New Feature to 6.3. No documentation required. Bug 732458 is the bug that requires a release note for the RAID features. Other documentation is found in the LVM manual. Operational bugs need no documentation because they are being fixed before their initial release.
Story Points: ---
Clone Of: Environment:
Last Closed: 2012-06-20 15:00:28 UTC Type: ---
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 593119, 732458    

Description Jonathan Earl Brassow 2011-11-30 02:38:04 UTC
LVM RAID should be able to handle device failures automatically, based on user-defined preferences.  RAID logical volumes should remain usable after a failure (not hang or require user intervention), provided enough devices remain for the particular RAID level to maintain usability.

Comment 1 Jonathan Earl Brassow 2011-12-06 20:36:38 UTC
LVM's configuration file (lvm.conf) now contains a field called 'raid_fault_policy', which determines the automated action taken in the event of a device failure.  The options for this field are:
  "warn" - produce a warning
  "allocate" - Attempt to replace the failed device with a spare found in the VG

The release criterion is that these two options work as expected.  That is, if the policy is set to "warn", notices should appear in the system log that a device has failed (and, preferably, what to do about it).  If the policy is set to "allocate", the failed device should be replaced by a spare if one is available; if not, the failure to replace the device should be reported in the system log.
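For reference, a minimal lvm.conf fragment carrying this setting would look something like the following (the option lives in the "activation" section; "warn" is the default if the line is absent):

    activation {
        # "warn"     - log the failure and leave the array degraded
        # "allocate" - try to rebuild onto free space elsewhere in the VG
        raid_fault_policy = "allocate"
    }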

Comment 2 Jonathan Earl Brassow 2011-12-06 20:37:39 UTC
This feature has been added to LVM version 2.02.89.

Comment 7 Jonathan Earl Brassow 2012-01-17 22:07:26 UTC
# Automated recovery in action (raid_fault_policy = "allocate")
[root@bp-01 ~]# devices vg
  LV            Copy%  Devices                                     
  lv            100.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0)
  [lv_rimage_0]        /dev/sde1(1)                                
  [lv_rimage_1]        /dev/sdf1(1)                                
  [lv_rimage_2]        /dev/sdg1(1)                                
  [lv_rmeta_0]         /dev/sde1(0)                                
  [lv_rmeta_1]         /dev/sdf1(0)                                
  [lv_rmeta_2]         /dev/sdg1(0)                                
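# ('devices' above is not a standard LVM command; it is presumably a local alias
#  for an lvs invocation along the lines of:
#      lvs -a -o name,copy_percent,devices vg
#  which produces the same columns.)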


[root@bp-01 ~]# grep raid_fau //etc/lvm/lvm.conf 
    raid_fault_policy = "allocate"


# kill device
[root@bp-01 ~]# off.sh sde
Turning off sde
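# (off.sh is a local test helper, not part of lvm2.  A minimal sketch of what it
#  presumably does - assuming a SCSI disk - is to offline the device via sysfs so
#  that further I/O to it fails:
#      echo offline > /sys/block/sde/device/state )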


# write to LV
[root@bp-01 ~]# dd if=/dev/zero of=/dev/vg/lv bs=4M count=1
1+0 records in
1+0 records out
4194304 bytes (4.2 MB) copied, 0.191669 s, 21.9 MB/s


# system log doesn't have details of the fix - this is probably worth a new bug!
[root@bp-01 ~]# grep lvm /var/log/messages 
Jan 17 15:57:18 bp-01 lvm[8599]: Device #0 of raid1 array, vg-lv, has failed.
Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 250994294784: Input/output error
Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 250994376704: Input/output error
Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 0: Input/output error
Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 4096: Input/output error
Jan 17 15:57:19 bp-01 lvm[8599]: Couldn't find device with uuid 3lugiV-3eSP-AFAR-sdrP-H20O-wM2M-qdMANy.
Jan 17 15:57:27 bp-01 lvm[8599]: raid1 array, vg-lv, is not in-sync.
Jan 17 15:57:36 bp-01 lvm[8599]: raid1 array, vg-lv, is now in-sync.
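# (These messages presumably come from the dmeventd monitoring daemon, which
#  watches the RAID LV and triggers the repair.  One way to confirm all images are
#  healthy again is the device-mapper raid status line, e.g.:
#      dmsetup status vg-lv )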


# Notice the device has been replaced with a new one
[root@bp-01 ~]# devices 
  Couldn't find device with uuid 3lugiV-3eSP-AFAR-sdrP-H20O-wM2M-qdMANy.
  LV            Copy%  Devices                                     
  lv            100.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0)
  [lv_rimage_0]        /dev/sdh1(1)                                
  [lv_rimage_1]        /dev/sdf1(1)                                
  [lv_rimage_2]        /dev/sdg1(1)                                
  [lv_rmeta_0]         /dev/sdh1(0)                                
  [lv_rmeta_1]         /dev/sdf1(0)                                
  [lv_rmeta_2]         /dev/sdg1(0)                                
  lv_home              /dev/sda2(12800)                            
  lv_root              /dev/sda2(0)                                
  lv_swap              /dev/sda2(35493)                            



# Manual handling of failures
[root@bp-01 ~]# devices vg
  LV            Copy%  Devices                                     
  lv            100.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0)
  [lv_rimage_0]        /dev/sdh1(1)                                
  [lv_rimage_1]        /dev/sdf1(1)                                
  [lv_rimage_2]        /dev/sdg1(1)                                
  [lv_rmeta_0]         /dev/sdh1(0)                                
  [lv_rmeta_1]         /dev/sdf1(0)                                
  [lv_rmeta_2]         /dev/sdg1(0)


# kill device
[root@bp-01 ~]# off.sh sdh
Turning off sdh


# run repair by hand
# Again note the sparse output signifying success - this is probably worth a new bug
[root@bp-01 ~]# lvconvert --repair vg/lv
  /dev/sdh1: read failed after 0 of 2048 at 250994294784: Input/output error
  /dev/sdh1: read failed after 0 of 2048 at 250994376704: Input/output error
  /dev/sdh1: read failed after 0 of 2048 at 0: Input/output error
  /dev/sdh1: read failed after 0 of 2048 at 4096: Input/output error
  Couldn't find device with uuid fbI0YO-GX7x-firU-Vy5o-vzwx-vAKZ-feRxfF.
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
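# (For unattended use the prompt can be suppressed; lvconvert accepts -y/--yes,
#  so the same repair could be scripted as:
#      lvconvert --repair -y vg/lv )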


# Note the replaced device
[root@bp-01 ~]# devices vg
  Couldn't find device with uuid fbI0YO-GX7x-firU-Vy5o-vzwx-vAKZ-feRxfF.
  LV            Copy%  Devices                                     
  lv             64.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0)
  [lv_rimage_0]        /dev/sde1(1)                                
  [lv_rimage_1]        /dev/sdf1(1)                                
  [lv_rimage_2]        /dev/sdg1(1)                                
  [lv_rmeta_0]         /dev/sde1(0)                                
  [lv_rmeta_1]         /dev/sdf1(0)                                
  [lv_rmeta_2]         /dev/sdg1(0)
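# (Not shown in this run: after the repair, the failed PV is still listed as missing
#  in the VG.  Once the bad disk has been physically replaced, the usual follow-up
#  would be along the lines of:
#      vgreduce --removemissing vg
#      pvcreate /dev/<new_disk>1 && vgextend vg /dev/<new_disk>1 )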

Comment 10 Corey Marthaler 2012-04-20 16:55:45 UTC
The following raid device failure test cases now pass:

        kill_primary_synced_raid1_2legs
        kill_primary_synced_raid4_2legs
        kill_primary_synced_raid5_2legs
        kill_primary_synced_raid1_3legs
        kill_primary_synced_raid4_3legs
        kill_primary_synced_raid5_3legs
        kill_primary_synced_raid6_3legs
        kill_random_synced_raid1_2legs
        kill_random_synced_raid4_2legs
        kill_random_synced_raid5_2legs
        kill_random_synced_raid1_3legs
        kill_random_synced_raid4_3legs
        kill_random_synced_raid5_3legs
        kill_random_synced_raid6_3legs
        kill_primary_synced_raid6_4legs
        kill_random_synced_raid6_4legs
        kill_multiple_synced_raid1_3legs
        kill_multiple_synced_raid6_3legs
        kill_multiple_synced_raid1_4legs
        kill_primary_non_synced_raid1_2legs
        kill_primary_non_synced_raid1_3legs
        kill_random_non_synced_raid1_2legs
        kill_random_non_synced_raid1_3legs

Marking this feature verified in the latest rpms.
2.6.32-262.el6.x86_64
lvm2-2.02.95-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
lvm2-libs-2.02.95-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
lvm2-cluster-2.02.95-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
udev-147-2.41.el6    BUILT: Thu Mar  1 13:01:08 CST 2012
device-mapper-1.02.74-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
device-mapper-libs-1.02.74-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
device-mapper-event-1.02.74-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
device-mapper-event-libs-1.02.74-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
cmirror-2.02.95-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012

Comment 11 Jonathan Earl Brassow 2012-04-23 18:27:37 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
New Feature to 6.3.  No documentation required.

Bug 732458 is the bug that requires a release note for the RAID features.  Other documentation is found in the LVM manual.

Operational bugs need no documentation because they are being fixed before their initial release.

Comment 13 errata-xmlrpc 2012-06-20 15:00:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-0962.html