Bug 604570
| Field | Value |
|---|---|
| Summary | LVM on RAID 5 not working |
| Product | Red Hat Enterprise Linux 6 |
| Component | lvm2 |
| Version | 6.0 |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED WORKSFORME |
| Severity | high |
| Priority | low |
| Target Milestone | rc |
| Reporter | Steven Mercurio <rhce_v3> |
| Assignee | Mike Snitzer <msnitzer> |
| QA Contact | Corey Marthaler <cmarthal> |
| CC | agk, dledford, dwysocha, harald, heinzm, jbrassow, joe.thornber, mbroz, msnitzer, prockai, rhce_v3, syeghiay |
| Doc Type | Bug Fix |
| Last Closed | 2010-06-18 18:33:07 UTC |
Description
Steven Mercurio, 2010-06-16 09:05:58 UTC
/boot is on a RAID 1 array (md0) using sda1 and sdb1. The PV is on a RAID 5 array (md1) using sda2, sdb2, and sdc2; sdc1 is a swap partition the same size as sda1 or sdb1. The motherboard is an ASUS KGPE-16 with 4G RAM and one AMD 6128 CPU, and the server has three 1TB SATA HDDs in total. All other partitions (/, /opt, /usr, /var, /home, and an additional swap) are on the PV.

I have verified that the system will boot normally if an ext4 partition is on the RAID 5 array. I also discovered the issue still exists when sda2, sdb2, and sdc2 are only 25G each, giving a 50G RAID 5 md1 (32M extents), so array/PV/extent size does not seem to be a factor.

This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux major release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux major release. This request is not yet committed for inclusion.

Chances are this is a dracut issue, but I'll be triaging this shortly.

Mike: I can take a look at this if you wish. However, I haven't noticed anything even remotely similar to this in my testing so far.

Which version of dracut and mdadm?

I just tested RHEL6 Beta2-3.0 (newer than snapshot6) and it worked perfectly fine for me.
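The reported layout could be reproduced with mdadm and the LVM tools along these lines. This is an illustrative sketch only, not a command sequence from the report: the volume group and LV names are hypothetical, and these commands destroy data on the named devices.

```
# Illustrative sketch of the reported layout (DESTRUCTIVE; device names
# taken from the report, VG/LV names are hypothetical).

# RAID 1 for /boot from sda1 and sdb1:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# RAID 5 for the LVM PV from sda2, sdb2, and sdc2:
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2

# LVM on top of the RAID 5 array; one LV per filesystem
# (/, /opt, /usr, /var, /home, plus the additional swap):
pvcreate /dev/md1
vgcreate vg_root /dev/md1
lvcreate -L 20G -n root vg_root
mkfs.ext4 /dev/vg_root/root
```

Per the report, the failure is specific to booting with the root LVs on md1; a plain ext4 filesystem on the same RAID 5 array boots fine, which is why suspicion fell on the initramfs (dracut) rather than on MD or LVM themselves.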
Configuration:

- cciss controller with 3 disks
- 200GB partitions on all cciss LUNs (for root)
- 100M partition on the first cciss LUN (for /boot)
- created an MD RAID 5 array across all 200GB cciss partitions:

```
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 cciss/c0d0p1[0] cciss/c0d2p1[3] cciss/c0d1p1[1]
      209712128 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
```

- created a PV/VG/LV on the md device and assigned it to be /:

```
# vgs -o +pv_name
  VG             #PV #LV #SN Attr   VSize   VFree PV
  vg_storageqe01   1   1   0 wz--n- 200.00g 4.00m /dev/md0
# lvs
  LV       VG             Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  LogVol00 vg_storageqe01 -wi-ao 199.99g
```

- installed the system, rebooted, and everything worked as expected.

Closing WORKSFORME; please reopen if you continue to have problems when testing against the latest RHEL6 snapshots (e.g. the beta2 release that will be coming RSN).
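Since the triager suspected dracut and asked for the dracut and mdadm versions, a reporter hitting this could gather the relevant state with read-only checks along these lines. This is a hedged sketch: the initramfs path is the usual RHEL 6 location, but verify it on the affected system.

```
# Read-only diagnostic sketch (paths typical for RHEL 6; verify locally).

# Package versions the triager asked about:
rpm -q dracut mdadm lvm2

# Confirm the initramfs actually contains MD RAID and LVM support:
lsinitrd /boot/initramfs-$(uname -r).img | grep -E 'mdadm|mdraid|lvm'

# After boot, confirm the arrays assembled and the PV was found:
cat /proc/mdstat
pvs
```

If `lsinitrd` shows no mdraid or lvm content, the initramfs was built without the support needed to assemble the arrays before mounting root, which would match the "LVM on RAID 5 not working at boot" symptom.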