Bug 758552 - RFE: LVM RAID - Handle device failures
Summary: RFE: LVM RAID - Handle device failures
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Jonathan Earl Brassow
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks: 593119 732458
 
Reported: 2011-11-30 02:38 UTC by Jonathan Earl Brassow
Modified: 2012-08-27 14:54 UTC
CC List: 10 users

Fixed In Version: lvm2-2.02.95-1.el6
Doc Type: Enhancement
Doc Text:
New feature in 6.3. No documentation required. Bug 732458 is the bug that requires a release note for the RAID features; other documentation is found in the LVM manual. Operational bugs need no documentation because they are being fixed before their initial release.
Clone Of:
Environment:
Last Closed: 2012-06-20 15:00:28 UTC
Target Upstream Version:
Embargoed:




Links
System ID:    Red Hat Product Errata RHBA-2012:0962
Private:      0
Priority:     normal
Status:       SHIPPED_LIVE
Summary:      lvm2 bug fix and enhancement update
Last Updated: 2012-06-19 21:12:11 UTC

Description Jonathan Earl Brassow 2011-11-30 02:38:04 UTC
LVM RAID should be able to handle device failures automatically, based on user-defined preferences.  RAID logical volumes should remain usable after a failure (they should not hang or require user intervention), provided the number of devices needed for the particular RAID level remains available.
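
For reference, a hedged sketch of creating a 3-way raid1 LV like the one exercised in the transcripts below (the VG name "vg", LV name "lv", and size are illustrative, not taken from this report):

  # create a RAID1 LV with three images (two mirrors) in VG "vg"
  lvcreate --type raid1 -m 2 -L 500M -n lv vg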

Comment 1 Jonathan Earl Brassow 2011-12-06 20:36:38 UTC
LVM's configuration file (lvm.conf) now contains a field called 'raid_fault_policy', which determines the automated action taken in the event of a device failure.  The options for this field are:
  "warn" - produce a warning
  "allocate" - Attempt to replace the failed device with a spare found in the VG

The release criteria is that these two options work as expected.  That is, if the policy is set to "warn", you should find notices in the log that a device has failed (and preferably, what to do about it).  If the policy is set to "allocate", then the failed device should be replaced by a spare if there is one and if not, report the failure to replace the device in the system log.
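
A minimal way to check or switch the policy, assuming the stock lvm.conf layout where raid_fault_policy lives in the activation section (the monitoring step below is stated as an assumption, not verified against this build):

  # show the current setting
  grep raid_fault_policy /etc/lvm/lvm.conf

  # for automatic replacement, set in the activation section of /etc/lvm/lvm.conf:
  #   raid_fault_policy = "allocate"

  # automated handling relies on dmeventd monitoring the LV
  lvchange --monitor y vg/lv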

Comment 2 Jonathan Earl Brassow 2011-12-06 20:37:39 UTC
This feature has been added to LVM version 2.02.89.

Comment 7 Jonathan Earl Brassow 2012-01-17 22:07:26 UTC
# Automated recovery in action (raid_fault_policy = "allocate")
[root@bp-01 ~]# devices vg
  LV            Copy%  Devices                                     
  lv            100.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0)
  [lv_rimage_0]        /dev/sde1(1)                                
  [lv_rimage_1]        /dev/sdf1(1)                                
  [lv_rimage_2]        /dev/sdg1(1)                                
  [lv_rmeta_0]         /dev/sde1(0)                                
  [lv_rmeta_1]         /dev/sdf1(0)                                
  [lv_rmeta_2]         /dev/sdg1(0)                                


[root@bp-01 ~]# grep raid_fau /etc/lvm/lvm.conf
    raid_fault_policy = "allocate"


# kill device
[root@bp-01 ~]# off.sh sde
Turning off sde


# write to LV
[root@bp-01 ~]# dd if=/dev/zero of=/dev/vg/lv bs=4M count=1
1+0 records in
1+0 records out
4194304 bytes (4.2 MB) copied, 0.191669 s, 21.9 MB/s


# system log doesn't have details of fix - this is probably worth a new bug!
[root@bp-01 ~]# grep lvm /var/log/messages 
Jan 17 15:57:18 bp-01 lvm[8599]: Device #0 of raid1 array, vg-lv, has failed.
Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 250994294784: Input/output error
Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 250994376704: Input/output error
Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 0: Input/output error
Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 4096: Input/output error
Jan 17 15:57:19 bp-01 lvm[8599]: Couldn't find device with uuid 3lugiV-3eSP-AFAR-sdrP-H20O-wM2M-qdMANy.
Jan 17 15:57:27 bp-01 lvm[8599]: raid1 array, vg-lv, is not in-sync.
Jan 17 15:57:36 bp-01 lvm[8599]: raid1 array, vg-lv, is now in-sync.


# Notice the device has been replaced with a new one
[root@bp-01 ~]# devices 
  Couldn't find device with uuid 3lugiV-3eSP-AFAR-sdrP-H20O-wM2M-qdMANy.
  LV            Copy%  Devices                                     
  lv            100.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0)
  [lv_rimage_0]        /dev/sdh1(1)                                
  [lv_rimage_1]        /dev/sdf1(1)                                
  [lv_rimage_2]        /dev/sdg1(1)                                
  [lv_rmeta_0]         /dev/sdh1(0)                                
  [lv_rmeta_1]         /dev/sdf1(0)                                
  [lv_rmeta_2]         /dev/sdg1(0)                                
  lv_home              /dev/sda2(12800)                            
  lv_root              /dev/sda2(0)                                
  lv_swap              /dev/sda2(35493)                            
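
The "devices" command used in this transcript looks like a local alias; an approximately equivalent stock invocation (an assumption, not part of the original log) would be:

  # list LVs with sync percentage and underlying devices
  lvs -a -o lv_name,copy_percent,devices vg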



# Manual handling of failures
[root@bp-01 ~]# devices vg
  LV            Copy%  Devices                                     
  lv            100.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0)
  [lv_rimage_0]        /dev/sdh1(1)                                
  [lv_rimage_1]        /dev/sdf1(1)                                
  [lv_rimage_2]        /dev/sdg1(1)                                
  [lv_rmeta_0]         /dev/sdh1(0)                                
  [lv_rmeta_1]         /dev/sdf1(0)                                
  [lv_rmeta_2]         /dev/sdg1(0)


# kill device
[root@bp-01 ~]# off.sh sdh
Turning off sdh


# run repair by hand
# Again note the scarce output signifying success - this is probably worth a new
# bug
[root@bp-01 ~]# lvconvert --repair vg/lv
  /dev/sdh1: read failed after 0 of 2048 at 250994294784: Input/output error
  /dev/sdh1: read failed after 0 of 2048 at 250994376704: Input/output error
  /dev/sdh1: read failed after 0 of 2048 at 0: Input/output error
  /dev/sdh1: read failed after 0 of 2048 at 4096: Input/output error
  Couldn't find device with uuid fbI0YO-GX7x-firU-Vy5o-vzwx-vAKZ-feRxfF.
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y


# Note the replaced device
[root@bp-01 ~]# devices vg
  Couldn't find device with uuid fbI0YO-GX7x-firU-Vy5o-vzwx-vAKZ-feRxfF.
  LV            Copy%  Devices                                     
  lv             64.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0)
  [lv_rimage_0]        /dev/sde1(1)                                
  [lv_rimage_1]        /dev/sdf1(1)                                
  [lv_rimage_2]        /dev/sdg1(1)                                
  [lv_rmeta_0]         /dev/sde1(0)                                
  [lv_rmeta_1]         /dev/sdf1(0)                                
  [lv_rmeta_2]         /dev/sdg1(0)
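
For completeness, a hedged sketch of the rest of the manual path: directing the repair at a specific spare and then dropping the failed device from the VG (the PV name /dev/sdi1 is illustrative):

  # restrict the replacement allocation to a chosen spare PV
  lvconvert --repair vg/lv /dev/sdi1

  # once no LV references the failed device, remove it from the VG
  vgreduce --removemissing vg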

Comment 10 Corey Marthaler 2012-04-20 16:55:45 UTC
The following raid device failure test cases now pass:

        kill_primary_synced_raid1_2legs
        kill_primary_synced_raid4_2legs
        kill_primary_synced_raid5_2legs
        kill_primary_synced_raid1_3legs
        kill_primary_synced_raid4_3legs
        kill_primary_synced_raid5_3legs
        kill_primary_synced_raid6_3legs
        kill_random_synced_raid1_2legs
        kill_random_synced_raid4_2legs
        kill_random_synced_raid5_2legs
        kill_random_synced_raid1_3legs
        kill_random_synced_raid4_3legs
        kill_random_synced_raid5_3legs
        kill_random_synced_raid6_3legs
        kill_primary_synced_raid6_4legs
        kill_random_synced_raid6_4legs
        kill_multiple_synced_raid1_3legs
        kill_multiple_synced_raid6_3legs
        kill_multiple_synced_raid1_4legs
        kill_primary_non_synced_raid1_2legs
        kill_primary_non_synced_raid1_3legs
        kill_random_non_synced_raid1_2legs
        kill_random_non_synced_raid1_3legs

Marking this feature verified in the latest rpms.
2.6.32-262.el6.x86_64
lvm2-2.02.95-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
lvm2-libs-2.02.95-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
lvm2-cluster-2.02.95-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
udev-147-2.41.el6    BUILT: Thu Mar  1 13:01:08 CST 2012
device-mapper-1.02.74-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
device-mapper-libs-1.02.74-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
device-mapper-event-1.02.74-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
device-mapper-event-libs-1.02.74-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012
cmirror-2.02.95-5.el6    BUILT: Thu Apr 19 10:29:01 CDT 2012

Comment 11 Jonathan Earl Brassow 2012-04-23 18:27:37 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
New feature in 6.3.  No documentation required.

Bug 732458 is the bug that requires a release note for the RAID features.  Other documentation is found in the LVM manual.

Operational bugs need no documentation because they are being fixed before their initial release.

Comment 13 errata-xmlrpc 2012-06-20 15:00:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-0962.html

