Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
Description
Orion Poplawski
2011-06-27 17:38:53 UTC
Created attachment 510127: anaconda tb xml
Description of problem:
If I install a machine with the following partitioning:
clearpart --drives=/dev/disk/by-path/pci-0000:00:1f.1-scsi-0:0:0:0,/dev/disk/by-path/pci-0000:00:1f.1-scsi-1:0:0:0
part raid.1 --size=200 --ondisk=/dev/disk/by-path/pci-0000:00:1f.1-scsi-0:0:0:0
part raid.2 --size=200 --ondisk=/dev/disk/by-path/pci-0000:00:1f.1-scsi-1:0:0:0
part raid.3 --size=17000 --grow --ondisk=/dev/disk/by-path/pci-0000:00:1f.1-scsi-0:0:0:0
part raid.4 --size=17000 --grow --ondisk=/dev/disk/by-path/pci-0000:00:1f.1-scsi-1:0:0:0
raid /boot --level=RAID1 --device=md0 raid.1 raid.2
raid pv.1 --level=RAID1 --device=md1 raid.3 raid.4
volgroup vg_root pv.1
logvol / --vgname=vg_root --size=2000 --name=root
logvol swap --fstype=swap --vgname=vg_root --size=2048 --name=swap
logvol /usr --vgname=vg_root --size=4000 --name=usr
logvol /var --vgname=vg_root --size=4000 --name=var
logvol /var/spool/mail --vgname=vg_root --size=3000 --name=mail
logvol /tftpboot --vgname=vg_root --size=1000 --name=tftpboot
When I attempt a re-install on this machine with the same partitioning, I get:
Traceback (most recent call first):
File "/usr/lib/anaconda/storage/devicetree.py", line 745, in _removeDevice
raise ValueError("Cannot remove non-leaf device '%s'" % dev.name)
File "/usr/lib/anaconda/storage/devicetree.py", line 795, in registerAction
self._removeDevice(d)
File "/usr/lib/anaconda/storage/__init__.py", line 785, in createDevice
self.devicetree.registerAction(ActionCreateDevice(device))
File "/usr/lib/anaconda/kickstart.py", line 935, in execute
storage.createDevice(request)
File "/usr/lib/anaconda/kickstart.py", line 1149, in execute
obj.execute(self.anaconda)
File "/usr/bin/anaconda", line 1102, in <module>
ksdata.execute()
Changing to:
clearpart --all --initlabel
allows the install to proceed, but I can't use this in production.
Version-Release number of selected component (if applicable):
13.21.82
How reproducible:
Every time
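The traceback above comes down to anaconda's device tree refusing to remove a device that other devices still depend on (the existing partitions are parents of the existing md arrays). A minimal sketch of that rule, with hypothetical stand-in classes rather than anaconda's real storage API:

```python
# Sketch of the non-leaf-device check (hypothetical classes, not anaconda's code).
# A device may only be removed once nothing else in the tree depends on it.

class Device:
    def __init__(self, name, parents=()):
        self.name = name
        self.parents = list(parents)

class DeviceTree:
    def __init__(self):
        self.devices = []

    def add(self, device):
        self.devices.append(device)
        return device

    def is_leaf(self, device):
        # A device is a leaf when no other device lists it as a parent.
        return not any(device in d.parents for d in self.devices)

    def remove(self, device):
        if not self.is_leaf(device):
            raise ValueError("Cannot remove non-leaf device '%s'" % device.name)
        self.devices.remove(device)

tree = DeviceTree()
sda1 = tree.add(Device("sda1"))
sdb1 = tree.add(Device("sdb1"))
md0 = tree.add(Device("md0", parents=[sda1, sdb1]))  # existing RAID1 array

# md0 is a leaf and can be removed; sda1 is not, because md0 depends on it,
# which is the same "Cannot remove non-leaf device" error as in the traceback.
```

With clearpart effectively a no-op, the old arrays were never scheduled for removal, so recreating md0/md1 forced a removal of busy devices and tripped this check.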
This one is slated for 6.3 planning. We're too late for 6.2 work at this point. Please try to reproduce the problem with 6.2 when it's available to you and update the bug with new findings or close it out if the problem has been resolved.
You are not clearing any partitions and your existing disk setup includes three md arrays, named md0, md1, and md2. You are specifying that your new md arrays be named md0 and md1, which is not possible since those names are already in use. You have several options, depending on what you are hoping to accomplish:
1. remove the --device= from your new arrays' "raid" commands in ks.cfg
2. specify a type in your clearpart command, e.g.: clearpart --linux
3. specify that your raid partitions and arrays are preexisting by
adding --onpart= for partitions and --useexisting for raid.
I don't see a bug in here anywhere. Please advise.
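As a sketch, option 3 might look like the following. The "-partN" suffixes on the by-path device names are illustrative guesses and not taken from the original ks.cfg:

```
part raid.1 --onpart=/dev/disk/by-path/pci-0000:00:1f.1-scsi-0:0:0:0-part1
part raid.2 --onpart=/dev/disk/by-path/pci-0000:00:1f.1-scsi-1:0:0:0-part1
raid /boot --level=RAID1 --device=md0 --useexisting raid.1 raid.2
```

This reuses the existing partitions and array in place instead of trying to recreate devices whose names are still in use.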
(In reply to comment #8)
> Orion, are you aware that clearpart without either the --all or --linux option
> is the same as clearpart --none?
>
> See https://fedoraproject.org/wiki/Anaconda/Kickstart#clearpart
Nope, or at least I missed that here. Sorry about that. Adding --linux did the trick. Thanks.
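Concretely, the working change is a single option added to the clearpart line from the description (drive paths unchanged):

```
clearpart --linux --drives=/dev/disk/by-path/pci-0000:00:1f.1-scsi-0:0:0:0,/dev/disk/by-path/pci-0000:00:1f.1-scsi-1:0:0:0
```

With --linux, the existing Linux partitions (and with them the old md0/md1 arrays) are actually cleared, so the new arrays can reuse those names.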