Bug 523000 - raid-check only checking first raid volume
Summary: raid-check only checking first raid volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: mdadm
Version: 5.4
Hardware: All
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Assignee: Doug Ledford
QA Contact: BaseOS QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-09-13 08:04 UTC by Dirk Gfroerer
Modified: 2013-04-12 20:27 UTC (History)
5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-01-06 08:53:15 UTC
Target Upstream Version:
Embargoed:


Attachments
Fixed script (1.85 KB, text/plain)
2009-11-04 19:17 UTC, Doug Ledford
RHEL5 compatible version of script (1.85 KB, text/plain)
2009-11-23 15:42 UTC, Doug Ledford
Updated RHEL5 compatible raid-check script (1.89 KB, text/plain)
2009-12-07 20:35 UTC, Doug Ledford


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2010:0006 0 normal SHIPPED_LIVE mdadm bug fix update 2010-01-06 08:53:12 UTC

Description Dirk Gfroerer 2009-09-13 08:04:39 UTC
Description of problem:
The weekly /etc/cron.weekly/99-raid-check run only initiates a consistency check on the first md array; additional arrays living on the same physical disks are never checked.

Version-Release number of selected component (if applicable):
mdadm-2.6.9-2.el5.x86_64

How reproducible:
always

Steps to Reproduce:

1. Create a software RAID like this:
# cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sdb1[1] sda1[0]
      264960 blocks [2/2] [UU]
      
md1 : active raid1 sdb2[1] sda2[0]
      155983040 blocks [2/2] [UU]
 
2. Let /etc/cron.weekly/99-raid-check execute
  
Actual results:
md0 is being checked for consistency, md1 is not being checked for consistency.

Expected results:
md0 and md1 are being checked for consistency.

Additional info:
As soon as "echo check > /sys/block/md0/md/sync_action" is issued, /sys/block/md1/md/array_state changes from clean to active. Therefore, when the array_state of md1 is checked in line 25 of /etc/cron.weekly/99-raid-check, the comparison of "active" against "clean" fails, and thus md1 will never be checked.

Comment 3 Colin.Simpson 2009-10-27 19:08:58 UTC
The reporter is correct. Basically, if you have two md devices living on different partitions of the same physical disks, the script fails to check the second one. This is not an uncommon occurrence,

e.g. 

/boot as a RAID 1 (md0) on /dev/sda1 and /dev/sdb1
LVM or / as a RAID 5 (md1) with components /dev/sda2, /dev/sdb2 and others.

You have to have /boot on a RAID 1 so grub/BIOS can boot it (they can't boot from a RAID 5); another reason is keeping /boot (md0) on ext3 to make it bootable by grub.

As soon as the "check" action is echoed to /sys/block/md0/md/sync_action, the other device (sharing the same physical disks) goes to active. As a result, a check is never initiated on the second md device living on those physical disks.

i.e.:
# echo "check" >/sys/block/md0/md/sync_action  ; cat /sys/block/md1/md/array_state
active

It works fine if the md devices are on different underlying physical devices.

How to fix it? One way: instead of having two loops (an initiator and a result checker), change to a single loop that initiates a check on one md device, with an inner loop that waits for the check to finish on that device before repeating on the next md device.
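
A rough sketch of that restructuring (illustrative only, and assuming array_state settles back to "clean" once the previous device's check has finished):

# Serialized alternative: a single loop handles one md device at a time.
for dev in /sys/block/md*; do
    [ "$(cat "$dev/md/array_state")" = "clean" ] || continue
    echo "check" > "$dev/md/sync_action"
    # Inner loop: wait for this device's check to complete
    # before moving on to the next md device.
    while [ "$(cat "$dev/md/sync_action")" = "check" ]; do
        sleep 3
    done
done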

Am I being overly CPU-sensitive, or does 3 seconds seem a bit short for a wait period? Polling every minute would seem a bit lighter; a 1 TB mirror takes a while to check.

I do like the script reporting mismatch_cnt, that's really good.

Comment 4 Doug Ledford 2009-11-04 19:17:06 UTC
Created attachment 367522 [details]
Fixed script

This updated script solves the problem.  It gets the state from all the arrays on all the drives before it ever initiates the first check operation.
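
Sketched loosely, the fixed ordering looks like this (illustrative names only; this sketch carries the device list in a plain variable where the attached script uses its own bookkeeping):

# Fixed pattern: snapshot the state of every array before the first
# check is initiated, so starting one check can no longer hide the
# other arrays on the same disks.
to_check=""
for dev in /sys/block/md*; do
    [ "$(cat "$dev/md/array_state")" = "clean" ] && to_check="$to_check $dev"
done

# Initiate every check up front (they run in parallel)...
for dev in $to_check; do
    echo "check" > "$dev/md/sync_action"
done

# ...then poll each device until it finishes and report mismatches.
for dev in $to_check; do
    while [ "$(cat "$dev/md/sync_action")" = "check" ]; do
        sleep 3
    done
    cnt=$(cat "$dev/md/mismatch_cnt")
    [ "$cnt" -ne 0 ] && echo "$dev: mismatch_cnt is $cnt"
done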

Comment 7 Colin.Simpson 2009-11-05 18:12:56 UTC
The script seems to work, once I commented out

declare -A check

which doesn't seem to be supported when I run it from the shell (maybe it's declared somewhere else in the cron scripts?).

Maybe I'm just complaining here, but would it not be better to serialize the checks rather than run them all at once?

Comment 8 Doug Ledford 2009-11-05 18:47:20 UTC
The fact that declare -A doesn't work means I'll have to modify the script slightly for the particular version of bash on RHEL5 (I wrote the script with bash-4.0 on Fedora, where declare -A is a supported built-in option).

As for serializing the checks: the script allows for either check or repair operations, and we wait on one operation but not the other, so there would be no point of serialization in the event of repair operations.

In addition, if we have multiple arrays on different drives/controllers, then it makes no sense to serialize those check operations, especially considering that large arrays may take a very significant amount of time to check. We most definitely want those operations happening in parallel. In fact, it's only in the specific case of multiple arrays on the same physical drives (common with /boot and / arrays, but almost never done for large data arrays) that we don't want things to run in parallel, and since the md raid layer already makes sure that doesn't happen for us, adding logic to detect that situation to what is arguably a very simple script would be redundant and simply another possible source of failure.

So, no, I don't think serializing this script is the right way to go.

Comment 11 Tuomo Soini 2009-11-23 13:20:40 UTC
Shouldn't that be "declare -a check"?

Comment 12 Doug Ledford 2009-11-23 15:42:03 UTC
Created attachment 373146 [details]
RHEL5 compatible version of script

No, for the way it was being used, declare -A is correct. However, it's not supported by the version of bash in RHEL5, so I modified the script to do things differently. Updated script attached.
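
For context, an illustrative contrast (not the attachment's actual code): bash 4.0 accepts declare -A for associative arrays, while the bash 3.2 shipped in RHEL5 only has indexed arrays, so per-device state has to be carried some other way, e.g.:

# bash 4.x (Fedora): associative array keyed by device name
declare -A check
check[md0]="check"

# bash 3.2 (RHEL5): no associative arrays; parallel indexed arrays
# (or a flat whitespace-separated list) can stand in instead.
devs[0]="md0"
ops[0]="check"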

Comment 13 Tuomo Soini 2009-12-07 20:06:51 UTC
Both attached scripts are identical and use declare -A.

Comment 14 Doug Ledford 2009-12-07 20:35:55 UTC
Created attachment 376776 [details]
Updated RHEL5 compatible raid-check script

Sorry, I failed to attach the updated script the first time around. Re-attaching.

Comment 15 Tuomo Soini 2009-12-09 11:59:50 UTC
Verified, updated script works as expected.

Comment 18 errata-xmlrpc 2010-01-06 08:53:15 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2010-0006.html

