Bug 604570

Summary: LVM on RAID 5 not working
Product: Red Hat Enterprise Linux 6
Reporter: Steven Mercurio <rhce_v3>
Component: lvm2
Assignee: Mike Snitzer <msnitzer>
Status: CLOSED WORKSFORME
QA Contact: Corey Marthaler <cmarthal>
Severity: high
Priority: low
Version: 6.0
CC: agk, dledford, dwysocha, harald, heinzm, jbrassow, joe.thornber, mbroz, msnitzer, prockai, rhce_v3, syeghiay
Target Milestone: rc
Hardware: x86_64
OS: Linux
Doc Type: Bug Fix
Last Closed: 2010-06-18 18:33:07 UTC

Description Steven Mercurio 2010-06-16 09:05:58 UTC
Description of problem:

LVM over RAID not working

Version-Release number of selected component (if applicable):

RHEL 6 64-bit (DVD downloaded within the last 5-7 days)

How reproducible:

During install, create a RAID 5 array and put LVM on it.  The install will proceed normally, but the system will stop booting after the "press I for interactive startup" prompt.

Steps to Reproduce:
1. Boot the RHEL 6 installer and begin the install
2. Create a RAID 5 array with LVM on top of it (see the command sketch below)
3. Finish the install and reboot
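
Roughly, step 2 corresponds to the following stack (a sketch only; anaconda builds this itself during install, and the device names here are assumptions):

mdadm --create /dev/md1 --level=5 --raid-devices=3 \
      /dev/sda2 /dev/sdb2 /dev/sdc2   # assemble three partitions into a RAID 5 array
pvcreate /dev/md1                      # put an LVM physical volume directly on the array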
  
Additional info:

Using an ASUS KGPE-D16 motherboard and an AMD 6128 CPU.  The SAME install using RAID, but putting / directly on the RAID 5 array rather than on LVM, allows the system to boot.

Comment 2 Steven Mercurio 2010-06-16 18:03:49 UTC
/boot is on a RAID1 (md0) using sda1 and sdb1
the PV is on a RAID 5 (md1) using sda2, sdb2, and sdc2

sdc1 is a swap partition the same size as sda1 or sdb1

Motherboard is an ASUS KGPE-D16 with 4 GB RAM and one AMD 6128 CPU

The server has three 1 TB SATA HDDs in total.
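
For anyone checking the same layout from a rescue environment, these are standard md/LVM inspection commands (a sketch; the md0/md1 and sdX names follow the description above):

cat /proc/mdstat                        # md0 (RAID 1) and md1 (RAID 5) should both be listed
mdadm --detail /dev/md1                 # member devices, level, and array state
pvs -o pv_name,vg_name,pv_size          # the PV should sit on /dev/md1, not on a raw partition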

Comment 3 Steven Mercurio 2010-06-16 18:07:20 UTC
ALL other partitions:

/
/opt
/usr
/var
/home
swap (additional)

Are on the PV.

Have verified the system will boot normally if an ext4 partition is placed directly on the RAID 5 array.  Also discovered the issue still exists when sda2, sdb2, and sdc2 are only 25G each, for a 50G RAID 5 md1 (32M extents), so array/PV/extent size does not seem to be a factor.
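
For completeness, carving those filesystems out of the PV would look roughly like this (a sketch; the VG/LV names and sizes are assumptions, the installer picks its own):

pvcreate /dev/md1
vgcreate vg00 /dev/md1                  # VG on the RAID 5 PV
lvcreate -L 20G -n lv_root vg00         # /
lvcreate -L 10G -n lv_opt  vg00         # /opt
lvcreate -L 10G -n lv_usr  vg00         # /usr
lvcreate -L 10G -n lv_var  vg00         # /var
lvcreate -L 50G -n lv_home vg00         # /home
lvcreate -L 4G  -n lv_swap vg00         # additional swap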

Comment 5 RHEL Program Management 2010-06-16 19:53:01 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.

Comment 7 Mike Snitzer 2010-06-16 20:59:24 UTC
Chances are this is a dracut issue, but I'll be triaging this shortly.
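
If it does turn out to be dracut, one way to narrow it down when the boot hangs (a sketch; assumes a dracut debug shell or rescue boot can be reached) is to check whether the array and the VG ever come up:

cat /proc/mdstat            # was md1 (the RAID 5) assembled at all?
mdadm --assemble --scan     # if not, try assembling it by hand
lvm pvscan                  # does LVM see the PV on md1?
lvm vgchange -ay            # try activating the volume group
ls /dev/mapper              # the LVs should appear here once activated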

Comment 8 Doug Ledford 2010-06-16 21:44:50 UTC
Mike: I can take a look at this if you wish.  However, I haven't noticed anything even remotely similar to this in my testing so far.

Comment 9 Harald Hoyer 2010-06-17 08:08:56 UTC
which version of dracut and mdadm?

Comment 10 Mike Snitzer 2010-06-18 18:33:07 UTC
I just tested RHEL6 Beta2-3.0 (newer than snapshot6) and it worked perfectly fine for me.

Configuration:
- cciss controller with 3 disks
- 200GB partitions on all cciss LUNs (for root)
- 100M partition on first cciss LUN (for /boot)
- created MD raid5 across all 200GB cciss partitions:
# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid5 cciss/c0d0p1[0] cciss/c0d2p1[3] cciss/c0d1p1[1]
      209712128 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>

- created PV/VG/LV on md device; assigned it to be /
# vgs -o +pv_name
  VG             #PV #LV #SN Attr   VSize   VFree PV        
  vg_storageqe01   1   1   0 wz--n- 200.00g 4.00m /dev/md0  
# lvs
  LV       VG             Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  LogVol00 vg_storageqe01 -wi-ao 199.99g

- installed system, rebooted, all worked as expected
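
For reference, the configuration above corresponds roughly to the following commands (a sketch only; anaconda does this during install, and the names/sizes are taken from the output above):

mdadm --create /dev/md0 --metadata=1.1 --chunk=512 --level=5 --raid-devices=3 \
      /dev/cciss/c0d0p1 /dev/cciss/c0d1p1 /dev/cciss/c0d2p1   # matches the mdstat above
pvcreate /dev/md0
vgcreate vg_storageqe01 /dev/md0
lvcreate -l 100%FREE -n LogVol00 vg_storageqe01               # ~200G root LV
mkfs.ext4 /dev/vg_storageqe01/LogVol00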

Closing WORKSFORME; please reopen if you continue to have problems when testing against the latest RHEL 6 snapshots (e.g. the beta2 release that will be coming RSN).