Bug 1463817 - Kickstart (re)install of host using 7.4-beta initrd fails on the xfs partition creation stage
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: anaconda
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Anaconda Maintenance Team
QA Contact: Release Test Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-06-21 19:42 UTC by Dan Yasny
Modified: 2021-01-15 07:38 UTC (History)
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-01-15 07:38:42 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Description Dan Yasny 2017-06-21 19:42:14 UTC
Description of problem:
I usually rebuild my hosts as follows:
- download a fresh set of initrd and vmlinuz to /boot
- create a "reinstall" grub entry pointing at a kickstart file on the network 
  grubby --add-kernel=/boot/vmlinuz --title=REINSTALL --args="text ks=http://mykickstart" --initrd=/boot/initrd.img --make-default
- restart the host
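The steps above can be sketched as a small script. The kernel/initrd paths and the kickstart URL are placeholders taken from the report; the script only prints the grubby command rather than running it, since grubby must be run as root on the actual host:

```shell
#!/bin/sh
# Sketch of the reinstall workflow described above.
# Paths and the kickstart URL are placeholders; adjust for your environment.
KERNEL=/boot/vmlinuz
INITRD=/boot/initrd.img
KS_URL="http://mykickstart"

# Build the grubby invocation that adds a default "REINSTALL" boot entry
# pointing at the kickstart file. Printed here instead of executed.
CMD="grubby --add-kernel=$KERNEL --title=REINSTALL --args=\"text ks=$KS_URL\" --initrd=$INITRD --make-default"
echo "$CMD"

# On the real host you would then run the command and reboot.
```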

I had to reinstall a host today, which was running 7.3 with ext4 partitions (instead of the default xfs, for some reason).

I downloaded the initrd from the internal 7.4-beta location

The install consistently failed at the stage where it was supposed to clear the disk and recreate the partitions with XFS.

When I changed the initrd and vmlinuz back to the older files from an internal 7.3 repo, everything worked as expected.

* See file locations in private comment
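For context, a minimal kickstart storage section of the kind that fails here (clearing the disks and recreating partitions with xfs) might look like the following; the kickstart file itself is not attached to this bug, so the device, sizes, and volume names below are illustrative only:

```
# Illustrative kickstart storage section; sizes and names are placeholders.
zerombr
clearpart --all --initlabel
part /boot --fstype=xfs --size=500
part pv.01 --size=1 --grow
volgroup vg_root pv.01
logvol / --vgname=vg_root --fstype=xfs --name=lv_root --size=1 --grow
```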

How reproducible:
always

Steps to Reproduce:
see above

Actual results:
see above

Expected results:
installation should work

Additional info:

Comment 3 Martin Kolman 2017-06-22 11:48:35 UTC
Please add logs from an affected installation run as separate plain-text attachments. The log files should be located in /tmp during the installation, and there is a root shell running on tty2.
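As a sketch of how those logs could be gathered from the tty2 root shell, the snippet below copies the usual Anaconda log files from /tmp into one directory for attachment; the exact set of log file names present can vary by release, and you would still need to transfer the directory off the host (e.g. with scp) afterwards:

```shell
#!/bin/sh
# Gather installer logs from /tmp into one directory for attachment.
# Run from the tty2 root shell during installation; LOGDIR is a placeholder.
LOGDIR=/tmp/install-logs
mkdir -p "$LOGDIR"
# Copy whichever of the common Anaconda logs exist; skip any that are absent.
for f in /tmp/anaconda.log /tmp/storage.log /tmp/program.log /tmp/packaging.log; do
    if [ -f "$f" ]; then
        cp "$f" "$LOGDIR/"
    fi
done
echo "collected logs in $LOGDIR"
```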

Thanks in advance!

Comment 4 Dan Yasny 2017-06-22 13:30:34 UTC
Since the host has been reprovisioned successfully using the 7.3 initramfs, it is now in production and cannot be set up as a reproducer.

It should be fairly easy to reproduce; I provided all the steps and versions in use. If that is something your team cannot do, feel free to close the BZ; all I wanted was to report an issue I came across.

Comment 6 RHEL Program Management 2021-01-15 07:38:42 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

