Bug 217233 - problems with software RAID5 and 2.6.9-42.0.2.ELsmp
Status: CLOSED INSUFFICIENT_DATA
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: kernel
Version: 4.3
Hardware: i686 Linux
Priority: medium  Severity: high
Assigned To: Alasdair Kergon
QA Contact: Brian Brock
Reported: 2006-11-25 07:21 EST by Rajive Aggarwal
Modified: 2011-02-09 20:11 EST (History)
3 users (show)

Doc Type: Bug Fix
Last Closed: 2011-02-09 20:11:21 EST


Attachments: None
Description Rajive Aggarwal 2006-11-25 07:21:55 EST
Description of problem:
An upgrade of the kernel from 2.6.9-34.0.2.ELsmp to 2.6.9-42.0.2.ELsmp led to
this problem. The new kernel seems unable to recognise an existing RAID5 setup,
over which I have created an LVM2 group and volume.

The RAID5 setup spans 4 x 320 GB IDE drives using SIL0680 controllers. Each drive
has 7 equal partitions, and hence there are 7 RAID5 arrays. 6 of the RAID5 arrays
have been added to an LVM2 volume group, and the whole volume group is in a
single logical volume, formatted as ext3. These drives are not part of the Linux
system, which is on a separate RAID1 array.

md5  | md6  | md7  | md8  | md9  | md10  | md11
---- | ---- | ---- | ---- | ---- | ----- | -----
hde5 | hde6 | hde7 | hde8 | hde9 | hde10 | hde11
hdc5 | hdc6 | hdc7 | hdc8 | hdc9 | hdc10 | hdc11
hdg5 | hdg6 | hdg7 | hdg8 | hdg9 | hdg10 | hdg11
hdk5 | hdk6 | hdk7 | hdk8 | hdk9 | hdk10 | hdk11 
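One common way to make assembly of arrays like these independent of kernel-level autodetection (a hedged suggestion, not something taken from this report) is to declare them explicitly in /etc/mdadm.conf. A hypothetical sketch for the md5 array from the table above; in practice the identity lines would come from `mdadm --examine --scan`:

```
# /etc/mdadm.conf -- hypothetical sketch; repeat an ARRAY line for md6..md11
DEVICE /dev/hd[cegk]*
ARRAY /dev/md5 level=raid5 num-devices=4 devices=/dev/hde5,/dev/hdc5,/dev/hdg5,/dev/hdk5
```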

On boot, the RAID5 arrays are not identified correctly, so the logical volume
cannot be mounted.


Version-Release number of selected component (if applicable):
Kernel 2.6.9-42.0.2.ELsmp


How reproducible:
Fully reproducible, in that the new kernel fails to recognise the RAID5
arrays, while booting the previous kernel works fine. Not able to try
re-creating the issue on alternative hardware.


Steps to Reproduce:
1. Boot with updated kernel
2. RAID5 arrays report an error
Actual results:
#>cat /proc/mdstat
md5 : active raid5 hdk5[1] hdg5[2] hdc5[3]
      133957632 blocks level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
etc... for md6 to md11, except
md9 : active raid1 hde[3]
      488368 blocks [4/1] [____U] 
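Degraded arrays like the one above can be spotted mechanically in /proc/mdstat, since a missing member shows up as `_` in the status bitmap. A small sketch (not the reporter's tooling); the heredoc mirrors the reported output, and on a live system `cat /proc/mdstat` would be substituted for the sample function:

```shell
#!/bin/sh
# Sketch: flag md arrays whose status bitmap shows a missing member ('_').
# The heredoc mirrors the reported /proc/mdstat output; on a live system
# substitute `cat /proc/mdstat` for mdstat_sample.
mdstat_sample() {
cat <<'EOF'
md5 : active raid5 hdk5[1] hdg5[2] hdc5[3]
      133957632 blocks level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
md6 : active raid5 hdk6[1] hdg6[2] hde6[0] hdc6[3]
      133957632 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
EOF
}

degraded=$(mdstat_sample | awk '
  /^md/        { name = $1 }                                # remember array name
  /\[[U_]+\]$/ { if (index($NF, "_")) print name " degraded " $NF }
')
echo "$degraded"
```

Here only md5 is flagged; md6 with `[4/4] [UUUU]` passes silently.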


Expected results:
#>cat /proc/mdstat
md5 : active raid5 hdk5[1] hdg5[2] hde5[0] hdc5[3]
      133957632 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
etc... for md6 to md11 
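As a sanity check, the reported block count is self-consistent with the described layout: RAID5 usable capacity is (n - 1) x member size, so each member partition is 133957632 / 3 = 44652544 1K blocks, and seven such partitions (~298 GiB) fit on a 320 GB drive. A quick sketch of the arithmetic:

```shell
#!/bin/sh
# Sketch: RAID5 usable capacity = (num_disks - 1) * member size.
blocks=133957632                  # 1K blocks reported for each array md5..md11
disks=4
member=$((blocks / (disks - 1)))  # per-partition size in 1K blocks
per_drive=$((member * 7))         # 7 such partitions per drive
echo "member=${member} per_drive=${per_drive}"
```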

Additional info:
Comment 1 dirk 2007-01-12 03:45:28 EST
IBM xSeries 206 eServer
2 x 40 GB UW320 SCSI disks in a mirrored (RAID1) array
Linux software RAID
Adaptec SCSI controller with HostRAID (HostRAID not used, only software RAID)

Problem: crashes on this kernel with an LVM error.

Reverting to the old kernel makes everything work fine.
Comment 2 Rajive Aggarwal 2007-03-23 09:41:55 EDT
Tried kernel 2.6.9-42.0.10. This fails exactly as 2.6.9-42.0.2 does.
2.6.9-34.0.2 still works well.
Comment 3 Alasdair Kergon 2011-02-09 20:11:21 EST
If there's still something that needs looking into, please go through Red Hat support at https://www.redhat.com/support/ (referencing this bugzilla).
