Bug 604570 - LVM on RAID 5 not working
Summary: LVM on RAID 5 not working
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.0
Hardware: x86_64
OS: Linux
Priority: low
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Mike Snitzer
QA Contact: Corey Marthaler
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2010-06-16 09:05 UTC by Steven Mercurio
Modified: 2010-06-18 18:33 UTC
CC List: 12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-06-18 18:33:07 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Description Steven Mercurio 2010-06-16 09:05:58 UTC
Description of problem:

LVM over RAID is not working: the installed system fails to boot.

Version-Release number of selected component (if applicable):

RHEL 6 64-bit (DVD downloaded within the last 5-7 days)

How reproducible:

During install, create a RAID 5 array and put LVM on it.  The install proceeds normally, but the system stops booting after the "press I for interactive startup" prompt.

Steps to Reproduce:
1. Boot the RHEL 6 installer and begin the install
2. Create a RAID 5 array with LVM on it
3. Finish the install and reboot
  
Additional info:

Using an ASUS KGPE-D16 motherboard and an AMD Opteron 6128 CPU.  The SAME install using RAID, but putting / directly on the RAID 5 rather than on LVM, will allow the system to boot.

Comment 2 Steven Mercurio 2010-06-16 18:03:49 UTC
/boot is on a RAID 1 (md0) using sda1 and sdb1
the PV is on a RAID 5 (md1) using sda2, sdb2, and sdc2

sdc1 is a swap partition the same size as sda1 or sdb1

Motherboard is an ASUS KGPE-D16 with 4 GB RAM and one AMD Opteron 6128 CPU

Server has three 1 TB SATA HDDs total
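
For illustration, a minimal shell sketch of the array layout described above (device names are taken from this comment; the installer's exact mdadm defaults may differ):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1              # RAID 1 for /boot
# mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2    # RAID 5 for the LVM PV
# cat /proc/mdstat                                                                    # verify both arrays are assembled and syncing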

Comment 3 Steven Mercurio 2010-06-16 18:07:20 UTC
ALL other partitions:

/
/opt
/usr
/var
/home
swap (additional)

Are on the PV.

Have verified the system will boot normally if an ext4 partition is placed directly on the RAID 5 array.  Also discovered the issue still exists when sda2, sdb2, and sdc2 are only 25G each, for a 50G RAID 5 md1 (32M extents), so array/PV/extent size does not seem to be a factor.
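
For illustration, a sketch of the LVM side of that layout (the VG/LV names and sizes here are hypothetical placeholders, not the ones anaconda generated):

# pvcreate /dev/md1
# vgcreate vg00 /dev/md1
# lvcreate -n lv_root -L 20G vg00   # /
# lvcreate -n lv_opt -L 10G vg00    # /opt
# lvcreate -n lv_usr -L 10G vg00    # /usr
# lvcreate -n lv_var -L 10G vg00    # /var
# lvcreate -n lv_home -L 50G vg00   # /home
# lvcreate -n lv_swap -L 4G vg00    # additional swap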

Comment 5 RHEL Program Management 2010-06-16 19:53:01 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux major release.  This request is not yet committed for
inclusion.

Comment 7 Mike Snitzer 2010-06-16 20:59:24 UTC
Chances are this is a dracut issue, but I'll be triaging this shortly.
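
If it is a dracut problem, the initramfs stage can be inspected directly; a sketch, assuming the RHEL 6-era dracut debug parameters (rdshell/rdinitdebug; later dracut releases renamed these): append "rdshell rdinitdebug" to the kernel line at the GRUB prompt, and when boot fails dracut drops to an emergency shell where:

# cat /proc/mdstat   # were md0/md1 assembled?
# lvm vgscan         # does LVM see the PV on /dev/md1?
# lvm lvs            # are the logical volumes visible?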

Comment 8 Doug Ledford 2010-06-16 21:44:50 UTC
Mike: I can take a look at this if you wish.  However, I haven't noticed anything even remotely similar to this in my testing so far.

Comment 9 Harald Hoyer 2010-06-17 08:08:56 UTC
Which versions of dracut and mdadm?
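
For reference, the installed versions can be read with:

# rpm -q dracut mdadm lvm2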

Comment 10 Mike Snitzer 2010-06-18 18:33:07 UTC
I just tested RHEL6 Beta2-3.0 (newer than snapshot6) and it worked perfectly fine for me.

Configuration:
- cciss controller with 3 disks
- 200GB partitions on all cciss LUNs (for root)
- 100M partition on first cciss LUN (for /boot)
- created MD raid5 across all 200GB cciss partitions:
# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid5 cciss/c0d0p1[0] cciss/c0d2p1[3] cciss/c0d1p1[1]
      209712128 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>

- created PV/VG/LV on md device; assigned it to be /
# vgs -o +pv_name
  VG             #PV #LV #SN Attr   VSize   VFree PV        
  vg_storageqe01   1   1   0 wz--n- 200.00g 4.00m /dev/md0  
# lvs
  LV       VG             Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  LogVol00 vg_storageqe01 -wi-ao 199.99g

- installed system, rebooted, all worked as expected

Closing WORKSFORME; please reopen if you continue to have problems when testing against the latest RHEL6 snapshots (e.g. the beta2 release that will be coming RSN).
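
For reference, a sketch of an mdadm invocation that approximately matches the mdstat output above (metadata 1.1, 512k chunk, internal bitmap; anaconda's actual defaults may differ):

# mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    --metadata=1.1 --chunk=512 --bitmap=internal \
    /dev/cciss/c0d0p1 /dev/cciss/c0d1p1 /dev/cciss/c0d2p1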

