Bug 1983051 - zipl-switch-to-blscfg dies with "entry already exists" when having more than one "rescue" entry
Summary: zipl-switch-to-blscfg dies with "entry already exists" when having more than one "rescue" entry
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: s390utils
Version: 8.4
Hardware: s390x
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Dan Horák
QA Contact: Vilém Maršík
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-07-16 10:44 UTC by Renaud Métrich
Modified: 2024-10-01 19:00 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-02 14:34:06 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1783542 1 unspecified CLOSED In-place upgrade fails to convert the zipl configuration to BLS for RHEL-ALT qemu-kvm guest on s390x 2024-06-13 22:20:41 UTC
Red Hat Issue Tracker OAMG-6408 0 None None None 2022-01-21 12:44:32 UTC
Red Hat Issue Tracker   RHEL-2054 0 None Migrated None 2023-09-02 14:34:00 UTC
Red Hat Knowledge Base (Solution) 6191401 0 None None None 2021-07-16 12:08:12 UTC

Description Renaud Métrich 2021-07-16 10:44:37 UTC
Description of problem:

When /etc/zipl.conf contains more than one "rescue" entry, the utility dies after printing "BLS file /boot/loader/entries/<MACHINEID>-0-rescue.conf already exists".

This happens because the output file name "<MACHINEID>-0-rescue.conf" is hardcoded for every "rescue" entry, so the second rescue entry tries to create the same file as the first.

Typically the tool dies when a system has been cloned and a "rescue" entry for a different machine id already exists.
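The collision described above can be illustrated with a small shell sketch. This is hypothetical, simplified code, not the shipped zipl-switch-to-blscfg; the helper name name_for_image and the paths are illustrative:

```shell
#!/bin/sh
# Hypothetical sketch of how the converter derives BLS file names.
# Every rescue image collapses to the same "<MACHINE_ID>-0-rescue.conf".
MACHINE_ID=7fef08a17f6a400db03b693a0ef30ba0
BLS_DIR=$(mktemp -d)

name_for_image() {
    # Strip the path prefix up to and including "vmlinuz-".
    version="${1##*/vmlinuz-}"
    case "$version" in
        *rescue*) echo "${BLS_DIR}/${MACHINE_ID}-0-rescue.conf" ;;
        *)        echo "${BLS_DIR}/${MACHINE_ID}-${version}.conf" ;;
    esac
}

# Two rescue entries with different embedded machine ids ...
f1=$(name_for_image /boot/vmlinuz-0-rescue-7fef08a17f6a400db03b693a0ef30ba0)
f2=$(name_for_image /boot/vmlinuz-0-rescue-fbf2f10617024e97989bccd4d299ec21)
# ... map to the same output file, so the second write is refused:
[ "$f1" = "$f2" ] && echo "collision on $f1"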


Version-Release number of selected component (if applicable):

s390utils-base-2.15.1-5.el8.s390x


How reproducible:

Always


Steps to Reproduce:
1. Create a /etc/zipl.conf from a RHEL7 which contains 2 rescue entries

-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
[defaultboot]
defaultauto
prompt=1
timeout=5
default=Red_Hat_Enterprise_Linux_Server_7.9_Rescue_7fef08a17f6a400db03b693a0ef30ba0
target=/boot
[Red_Hat_Enterprise_Linux_Server_7.9_Rescue_7fef08a17f6a400db03b693a0ef30ba0]
    image=/boot/vmlinuz-0-rescue-7fef08a17f6a400db03b693a0ef30ba0
    parameters="root=/dev/mapper/rootvg-root vmalloc=4096G user_mode=home console=ttyS0 crashkernel=auto rd.lvm.lv=rootvg/root LANG=en_US.UTF-8 ipv6.disable=1 transparent_hugepage=never vmhalt=LOGOFF vmpoff=LOGOFF"
    ramdisk=/boot/initramfs-0-rescue-7fef08a17f6a400db03b693a0ef30ba0.img
[3.10.0-1160.25.1.el7.s390x]
    image=/boot/vmlinuz-3.10.0-1160.25.1.el7.s390x
    parameters="root=/dev/mapper/rootvg-root vmalloc=4096G user_mode=home console=ttyS0 crashkernel=auto rd.lvm.lv=rootvg/root LANG=en_US.UTF-8 ipv6.disable=1 transparent_hugepage=never vmhalt=LOGOFF vmpoff=LOGOFF"
    ramdisk=/boot/initramfs-3.10.0-1160.25.1.el7.s390x.img
[linux-0-rescue-fbf2f10617024e97989bccd4d299ec21]
    image=/boot/vmlinuz-0-rescue-fbf2f10617024e97989bccd4d299ec21
    ramdisk=/boot/initramfs-0-rescue-fbf2f10617024e97989bccd4d299ec21.img
    parameters="root=/dev/mapper/rootvg-root vmalloc=4096G user_mode=home console=ttyS0 crashkernel=auto rd.lvm.lv=rootvg/root ipv6.disable=1 transparent_hugepage=never vmhalt=LOGOFF vmpoff=LOGOFF"  
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

  Ideally, adjust the machine id "7fef08a17f6a400db03b693a0ef30ba0" to match the system's own machine id.

2. Execute zipl-switch-to-blscfg after deleting the /boot/loader/entries directory

  # rm -fr /boot/loader/entries
  # zipl-switch-to-blscfg


Actual results:

BLS file /boot/loader/entries/7fef08a17f6a400db03b693a0ef30ba0-0-rescue.conf already exists


Expected results:

A warning that the entry could not be created because the machine-id differs, rather than aborting the whole conversion.


Additional info:

The code responsible for creating the file is shown below:
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
168             if [ -n "${zipl_to_bls[$key]}" ]; then
169                 if [[ $key = "image" && $version_name == true ]]; then
170                     if [[ $val = *"vmlinuz-"* ]]; then
171                         version="${val##*/vmlinuz-}"
172                     else
173                         version="${val##*/}"
174                     fi
175                     echo "version $version" >> ${OUTPUT}
176                     if [[ $version = *"rescue"* ]]; then
177                         FILENAME=${BLS_DIR}/${MACHINE_ID}0-rescue.conf
 :
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

On line 177 we can see that FILENAME depends on the system's machine id, not on the machine id embedded in the rescue entry's kernel image name, here "/boot/vmlinuz-0-rescue-fbf2f10617024e97989bccd4d299ec21".
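A fix along those lines could look like the following. This is a hypothetical sketch, not the shipped script; the helper rescue_filename is made up, and it assumes the embedded id can be taken from everything after "-rescue-" in the image path:

```shell
#!/bin/sh
# Hypothetical fix sketch: derive the rescue BLS file name from the
# machine id embedded in the kernel image name, falling back to the
# system machine id when none is present.
MACHINE_ID=7fef08a17f6a400db03b693a0ef30ba0
BLS_DIR=$(mktemp -d)

rescue_filename() {
    img=$1
    # /boot/vmlinuz-0-rescue-<id>  ->  <id>
    id="${img##*-rescue-}"
    # If the pattern did not match, the expansion returns the input
    # unchanged; fall back to the system machine id in that case.
    [ "$id" = "$img" ] && id=$MACHINE_ID
    echo "${BLS_DIR}/${id}-0-rescue.conf"
}

# The two rescue entries from the reproducer now get distinct names:
f1=$(rescue_filename /boot/vmlinuz-0-rescue-7fef08a17f6a400db03b693a0ef30ba0)
f2=$(rescue_filename /boot/vmlinuz-0-rescue-fbf2f10617024e97989bccd4d299ec21)
echo "$f1"
echo "$f2"
```

With distinct names, the second entry no longer hits the "already exists" check.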

Comment 3 Dan Horák 2021-07-20 12:06:39 UTC
There is no structure defined for the section names in the classic zipl.conf file, so the conversion script cannot draw any conclusions about machine-ids or the like. I am going to open an upstream PR to document the single-"rescue"-entry limitation in the man page, and will close this as NOTABUG for s390utils.

Comment 6 Renaud Métrich 2022-01-20 11:13:06 UTC
The issue occurs not only with rescue entries but also with duplicated entries, e.g. one entry that boots the kernel "normally" and another that boots it with "debug options":

/etc/zipl.conf:
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
[defaultboot]
defaultauto
prompt=1
timeout=5
default=RHEL-Upgrade-Initramfs
target=/boot
[RHEL-Upgrade-Initramfs]
        image=/boot/vmlinuz-upgrade.s390x
        parameters="root=/dev/mapper/rootvg-root crashkernel=auto cio_ignore=all,!condev rd.lvm.lv=rootvg/root rd.dasd=0.0.0100 LANG=en_US.UTF-8 enforcing=0 rd.plymouth=0 plymouth.enable=0"
        ramdisk=/boot/initramfs-upgrade.s390x.img
[3.10.0-1160.36.2.el7.s390x]
        image=/boot/vmlinuz-3.10.0-1160.36.2.el7.s390x
        parameters="root=/dev/mapper/rootvg-root crashkernel=auto cio_ignore=all,!condev rd.lvm.lv=rootvg/root rd.dasd=0.0.0100 LANG=en_US.UTF-8"
        ramdisk=/boot/initramfs-3.10.0-1160.36.2.el7.s390x.img
[3.10.0-1160.36.2.el7.s390x_with_debugging]
        image=/boot/vmlinuz-3.10.0-1160.36.2.el7.s390x
        parameters="root=/dev/mapper/rootvg-root crashkernel=auto cio_ignore=all,!condev rd.lvm.lv=rootvg/root rd.dasd=0.0.0100 LANG=en_US.UTF-8 systemd.log_level=debug systemd.log_target=kmsg"
        ramdisk=/boot/initramfs-3.10.0-1160.36.2.el7.s390x.img
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

Execute the conversion script:
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
$ zipl-switch-to-blscfg --bls-directory=/tmp/bls
BLS file /tmp/bls/93f56661c34a47d4b5fd516d87aae9e0-3.10.0-1160.36.2.el7.s390x.conf already exists
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------


Increasing priority/severity to 2 because systems upgraded with Leapp end up rebooting with the RHEL 7 kernel but a RHEL 8 root filesystem.
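For the duplicated-entry case, the collision comes from deriving the file name from the kernel version alone, so two sections sharing one image request the same name. One conceivable mitigation, sketched here as hypothetical shell (the helper unique_bls_name is made up, not part of the real script), is to append a numeric suffix instead of dying:

```shell
#!/bin/sh
# Hypothetical sketch: when a BLS file name is already taken, append
# a numeric suffix (-1, -2, ...) rather than aborting.
MACHINE_ID=93f56661c34a47d4b5fd516d87aae9e0
BLS_DIR=$(mktemp -d)

unique_bls_name() {
    version=$1
    base="${BLS_DIR}/${MACHINE_ID}-${version}"
    file="${base}.conf"
    n=1
    while [ -e "$file" ]; do      # name taken: try -1, -2, ...
        file="${base}-${n}.conf"
        n=$((n + 1))
    done
    echo "$file"
}

# Both the normal section and the "_with_debugging" section use the
# same kernel image, so they request the same version string:
a=$(unique_bls_name 3.10.0-1160.36.2.el7.s390x); : > "$a"
b=$(unique_bls_name 3.10.0-1160.36.2.el7.s390x); : > "$b"
echo "$a"
echo "$b"
```

The second call sees the first file on disk and picks a "-1" suffixed name, so both sections survive the conversion.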

Comment 18 Doug Ledford 2023-08-31 13:44:08 UTC
This bug is scheduled for migration to Jira. When that happens, this Bugzilla issue will be permanently closed with status MIGRATED, and all future interaction on it will need to happen in the Jira issue. The new issue will be part of the RHEL project (a Jira-only project, which syncs once, closes the Bugzilla issue, and takes all future updates in Jira), not the RHELPLAN project (which is part of the automated Bugzilla-to-Jira mirroring and allows ongoing updates to the Bugzilla bug, syncing them over to Jira).

For making sure you have access to Jira in order to continue accessing this issue, follow one of the appropriate knowledge base articles:

KB0016394 - https://redhat.service-now.com/help?id=kb_article_view&sysparm_article=KB0016394
KB0016694 - https://redhat.service-now.com/help?id=kb_article_view&sysparm_article=KB0016694
KB0016774 - https://redhat.service-now.com/help?id=kb_article_view&sysparm_article=KB0016774

For general issues with Jira, open a ticket with rh-issues.

Comment 19 RHEL Program Management 2023-09-02 14:32:19 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 20 RHEL Program Management 2023-09-02 14:34:06 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues.

Comment 21 Doug Ledford 2023-09-08 17:30:47 UTC
Additional information on creating a Jira account to access the migrated issue can be found here:

https://access.redhat.com/articles/7032570

