Description of problem:
An upgrade of the kernel from 2.6.9-34.0.2.ELsmp to 2.6.9-42.0.2.ELsmp led to this problem. The new kernel seems unable to recognise an existing RAID5 setup, over which I have created an LVM2 group and volume. The RAID5 setup spans 4x 320 GB IDE drives on SIL0680 controllers. Each drive has 7 equal partitions, so there are 7 RAID5 arrays. Six of the RAID5 arrays have been added to an LVM2 volume group, and the whole volume group is in a single logical volume, formatted as ext3. These drives are not part of the Linux system, which is on a separate RAID1 array.

md5  | md6  | md7  | md8  | md9  | md10  | md11
---- | ---- | ---- | ---- | ---- | ----- | -----
hde5 | hde6 | hde7 | hde8 | hde9 | hde10 | hde11
hdc5 | hdc6 | hdc7 | hdc8 | hdc9 | hdc10 | hdc11
hdg5 | hdg6 | hdg7 | hdg8 | hdg9 | hdg10 | hdg11
hdk5 | hdk6 | hdk7 | hdk8 | hdk9 | hdk10 | hdk11

On boot, the RAID5 arrays are not identified correctly, so the volume cannot be mounted.

Version-Release number of selected component (if applicable):
Kernel 2.6.9-42.0.2.ELsmp

How reproducible:
Fully reproducible: with the new kernel the RAID5 arrays are not recognised, but booting the previous kernel works fine. Not able to try re-creating the issue on alternative hardware.

Steps to Reproduce:
1. Boot with the updated kernel.
2. The RAID5 arrays report an error.

Actual results:
#> cat /proc/mdstat
md5 : active raid5 hdk5[1] hdg5[2] hdc5[3]
      133957632 blocks level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
etc. for md6 to md11, except:
md9 : active raid1 hde[3]
      488368 blocks [4/1] [____U]

Expected results:
#> cat /proc/mdstat
md5 : active raid5 hdk5[1] hdg5[2] hde5[0] hdc5[3]
      133957632 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
etc. for md6 to md11

Additional info:
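For reference, the degraded state shown above can be spotted programmatically by parsing the "[total/active]" counters in /proc/mdstat. The sketch below is a minimal, hypothetical helper (not part of any tool mentioned in this report), using the "Actual results" output as sample input:

```python
import re

def degraded_arrays(mdstat_text):
    """Return {md_name: (total, active)} for arrays that are missing
    members, based on the "[total/active]" counter in /proc/mdstat."""
    result = {}
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+) : ", line)
        if m:
            # Start of an array entry, e.g. "md5 : active raid5 ..."
            current = m.group(1)
            continue
        # Status line, e.g. "... [4/3] [_UUU]"
        m = re.search(r"\[(\d+)/(\d+)\]\s+\[[U_]+\]", line)
        if m and current:
            total, active = int(m.group(1)), int(m.group(2))
            if active < total:
                result[current] = (total, active)
            current = None
    return result

# Sample taken from the "Actual results" section of this report
sample = """\
md5 : active raid5 hdk5[1] hdg5[2] hdc5[3]
      133957632 blocks level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
md9 : active raid1 hde[3]
      488368 blocks [4/1] [____U]
"""

print(degraded_arrays(sample))  # both arrays are missing members
```

Running this against the full boot-time /proc/mdstat would flag md5 through md11, matching the failure described.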
IBM xSeries 206 e-server, 2x 40 GB UW320 SCSI disks in a mirrored RAID using Linux softraid, Adaptec SCSI controller with HostRAID (HostRAID not used, only softraid). Problem: crashes on this kernel with an LVM error; reverting to the old kernel, everything works fine.
Tried kernel 2.6.9-42.0.10. This fails exactly as 2.6.9-42.0.2 does. 2.6.9-34.0.2 still works well.
If there's still something that needs looking into, please go through Red Hat support at https://www.redhat.com/support/ (referencing this bugzilla).