Red Hat Bugzilla – Bug 604570
LVM on RAID 5 not working
Last modified: 2010-06-18 14:33:07 EDT
Description of problem:
LVM over RAID not working
Version-Release number of selected component (if applicable):
RHEL 6 64bit (downloaded DVD within last 5-7 days)
During install, create a RAID 5 array and put LVM on it. The install proceeds normally, but the system stops booting after the "Press I for interactive startup" prompt.
Steps to Reproduce:
1. Boot RH6 and begin install
2. Create RAID5 with LVM on it
3. Finish install and reboot
Using an Asus KGPE-D16 motherboard and an AMD 6128 CPU. The SAME install using RAID, but putting / directly on the RAID 5 rather than on LVM, allows the system to boot.
/boot is on a RAID1 (md0) using sda1 and sdb1
the PV is on a RAID 5 (md1) using sda2, sdb2, and sdc2
sdc1 is a swap partition the same size as sda1 or sdb1
Motherboard is an ASUS KGPE-D16 with 4G RAM and one AMD 6128 CPU
Server has three 1 TB SATA HDDs total
ALL other partitions are on the PV.
I have verified that the system boots normally if an ext4 partition is on the RAID 5 array. I also discovered the issue still exists when sda2, sdb2, and sdc2 are only 25G each, for a 50G RAID 5 md1 (32M extents), so array/PV/extent size does not seem to be a factor.
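For reference, the reported partition layout could be sketched from a rescue shell roughly as follows. This is a hedged reconstruction, not what the installer actually ran: device names (sda1/sdb1, sda2/sdb2/sdc2, sdc1) come from the report, while the mdadm options and the volume group name "vg00" are assumptions.

```shell
# Sketch only -- destructive; assumes partitions already exist as described.
# /boot on RAID 1 across sda1 and sdb1:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# PV on RAID 5 across sda2, sdb2, and sdc2:
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
# Swap partition, same size as sda1/sdb1:
mkswap /dev/sdc1
# All other partitions live on the PV on md1 ("vg00" is a placeholder name):
pvcreate /dev/md1
vgcreate vg00 /dev/md1
```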
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release. Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release. This request is not yet committed for
inclusion.
Chances are this is a dracut issue, but I'll be triaging this shortly.
Mike: I can take a look at this if you wish. However, I haven't noticed anything even remotely similar to this in my testing so far.
Which versions of dracut and mdadm are you running?
I just tested RHEL6 Beta2-3.0 (newer than snapshot6) and it worked perfectly fine for me.
- cciss controller with 3 disks
- 200GB partitions on all cciss LUNs (for root)
- 100M partition on first cciss LUN (for /boot)
- created MD raid5 across all 200GB cciss partitions:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 cciss/c0d0p1 cciss/c0d2p1 cciss/c0d1p1
209712128 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
bitmap: 1/1 pages [4KB], 65536KB chunk
unused devices: <none>
- created PV/VG/LV on md device; assigned it to be /
# vgs -o +pv_name
VG #PV #LV #SN Attr VSize VFree PV
vg_storageqe01 1 1 0 wz--n- 200.00g 4.00m /dev/md0
# lvs
LV       VG             Attr   LSize   Origin Snap%  Move Log Copy%  Convert
LogVol00 vg_storageqe01 -wi-ao 199.99g
- installed system, rebooted, all worked as expected
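The verification layout above could be reproduced on the command line roughly as below. This is a sketch under stated assumptions: device names, the VG name vg_storageqe01, and the LV name LogVol00 come from the output above; the metadata version and chunk size match the /proc/mdstat output (super 1.1, 512k chunk), but the exact options the installer used are assumptions.

```shell
# Sketch only -- destructive; assumes the 200GB cciss partitions already exist.
mdadm --create /dev/md0 --metadata=1.1 --level=5 --chunk=512 \
      --raid-devices=3 /dev/cciss/c0d0p1 /dev/cciss/c0d1p1 /dev/cciss/c0d2p1
# Put the PV/VG/LV on the md device; the LV becomes /:
pvcreate /dev/md0
vgcreate vg_storageqe01 /dev/md0
lvcreate -n LogVol00 -l 100%FREE vg_storageqe01
# Verify PV placement, as in the output above:
vgs -o +pv_name
```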
Closing WORKSFORME; please reopen if you continue to have problems when testing against the latest RHEL6 snapshots (e.g. the beta2 release that will be coming RSN).