Description of problem:
When using kickstart to create a system with LVM on top of RAID 1, the installer crashes.
Version-Release number of selected component (if applicable):
anaconda 19.31.79-1
How reproducible:
Every time.
Steps to Reproduce:
Attempt to install from a kickstart with the following configuration:
# Format the partitions, clear mbr first
zerombr
clearpart --all --initlabel
part raid.00 --size 768 --asprimary --ondrive=vda
part raid.10 --size 768 --asprimary --ondrive=vdb
part raid.01 --size 4096 --asprimary --ondrive=vda
part raid.11 --size 4096 --asprimary --ondrive=vdb
part raid.02 --size 1024 --asprimary --grow --ondrive=vda
part raid.12 --size 1024 --asprimary --grow --ondrive=vdb
raid /boot --fstype ext3 --device raid1-boot --level=RAID1 raid.00 raid.10
raid swap --fstype swap --device raid1-swap --level=RAID1 raid.01 raid.11
raid pv.00 --device raid1-pv --level=RAID1 raid.02 raid.12
volgroup vg0 pv.00
# Create more logical partitions
# CCE-14161-4, CCE-14777-2, CCE-14011-1, CCE-14171-3, CCE-14559-9 (Rows 2 - 6)
logvol / --fstype ext4 --name=root --vgname=vg0 --grow --size=1024
logvol /tmp --fstype ext4 --name=temp --vgname=vg0 --size=32768 --fsoptions="nodev,noexec,nosuid"
logvol /home --fstype ext4 --name=home --vgname=vg0 --size=65536 --fsoptions="nodev"
logvol /var --fstype ext4 --name=var --vgname=vg0 --size=32768 --fsoptions="nodev"
logvol /var/log --fstype ext4 --name=varlog --vgname=vg0 --size=8192 --fsoptions="nodev,noexec,nosuid"
logvol /var/log/audit --fstype ext4 --name=audit --vgname=vg0 --size=4096 --fsoptions="nodev,noexec,nosuid"
Actual results:
Anaconda crashes; installation is halted.
Expected results:
The system should install with the configuration as detailed in the kickstart file.
Additional info:
Works in RHEL 6.5.
The information I just added came from a slightly different kickstart file -- it's the same as the one I already posted, but with every logical volume except "/" commented out.
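For reference, a sketch of that variant's logvol section (a reconstruction based on the description above -- the exact file was not attached here; the zerombr, clearpart, part, raid, and volgroup lines are assumed unchanged, and only the non-root logical volumes are commented out):
logvol / --fstype ext4 --name=root --vgname=vg0 --grow --size=1024
#logvol /tmp --fstype ext4 --name=temp --vgname=vg0 --size=32768 --fsoptions="nodev,noexec,nosuid"
#logvol /home --fstype ext4 --name=home --vgname=vg0 --size=65536 --fsoptions="nodev"
#logvol /var --fstype ext4 --name=var --vgname=vg0 --size=32768 --fsoptions="nodev"
#logvol /var/log --fstype ext4 --name=varlog --vgname=vg0 --size=8192 --fsoptions="nodev,noexec,nosuid"
#logvol /var/log/audit --fstype ext4 --name=audit --vgname=vg0 --size=4096 --fsoptions="nodev,noexec,nosuid"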
Comment 13 Vratislav Podzimek
2014-09-03 09:48:39 UTC
I think this could be a duplicate of bug 1093144, which already has a fix available. Let's see if the fix for bug 1093144 resolves this as well.