Bug 2232739 - System fails to reboot on installed RHEL8 because of preexisting customized service units
Summary: System fails to reboot on installed RHEL8 because of preexisting customized service units
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: leapp-repository
Version: 7.9
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Leapp Notifications Bot
QA Contact: upgrades-and-conversions
Docs Contact: Miriam Portman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-08-18 11:01 UTC by Renaud Métrich
Modified: 2023-09-12 15:22 UTC
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-12 15:22:02 UTC
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments: none


Links
System | ID | Private | Priority | Status | Summary | Last Updated
Red Hat Issue Tracker | OAMG-9652 | 0 | None | None | None | 2023-08-18 11:05:17 UTC
Red Hat Issue Tracker | RHEL-3368 | 0 | None | Migrated | None | 2023-09-12 15:21:56 UTC
Red Hat Issue Tracker | RHELPLAN-166028 | 0 | None | None | None | 2023-08-19 07:29:41 UTC

Description Renaud Métrich 2023-08-18 11:01:38 UTC
Description of problem:

After the upgrade has completed (that is, after the reboot and the actual package upgrade), the newly upgraded system may fail to boot properly, in particular when critical system services were customized while still on the previous release.
Because of such a customization, the service unit from the previous release (e.g. RHEL7) keeps overriding the service unit shipped by the current release (e.g. RHEL8).
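
For context, systemd looks unit files up in /etc/systemd/system before /usr/lib/systemd/system, so a full copy created on RHEL7 keeps shadowing the RHEL8 vendor unit after the upgrade. A quick way to see which file is actually in effect (a generic systemd check, not something leapp does; output assumes the override from step 2 below exists):

  # systemctl cat lvm2-pvscan@.service | head -n 1
  # /etc/systemd/system/lvm2-pvscan@.service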

Solving this particular issue automatically is not possible, because there is no way to tell whether a given customization will create havoc or not.
Through this BZ, I'm recommending that a High-risk / "inhibitor until acknowledged" report be created whenever such service unit overrides are detected.
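
As a rough shell sketch of the detection part only (this is not the actual leapp actor; it assumes only full *.service copies placed directly in /etc/systemd/system matter, and ignores drop-ins under *.service.d/):

  # rough sketch: list local unit files that shadow a packaged unit of the same name
  for unit in /etc/systemd/system/*.service; do
      [ -f "$unit" ] && [ ! -L "$unit" ] || continue   # skip aliases/symlinks and non-files
      name=$(basename "$unit")
      if [ -e "/usr/lib/systemd/system/$name" ]; then
          echo "vendor unit overridden: $name"
      fi
  done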

Version-Release number of selected component (if applicable):

leapp-upgrade-el7toel8-0.18.0-3.el7_9.noarch

How reproducible:

Always

Steps to Reproduce:
1. Create a RHEL7 VM with custom partitioning

  # lsblk
  NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  sr0           11:0    1 1024M  0 rom  
  vda          252:0    0   20G  0 disk 
  ├─vda1       252:1    0    1G  0 part /boot
  ├─vda2       252:2    0   12G  0 part 
  │ ├─vg01-usr 253:0    0    6G  0 lvm  /usr
  │ └─vg01-var 253:1    0    6G  0 lvm  /var
  ├─vda3       252:3    0    6G  0 part /
  ├─vda4       252:4    0    1K  0 part 
  └─vda5       252:5    0 1015M  0 part [SWAP]

  Here it is important to have a separate /var; it is unclear whether having / on LVM as well would also reproduce the issue.

2. Create an override of lvm2-pvscan@.service unit

  # systemctl edit lvm2-pvscan@.service --full
  (editor opens; save and quit without making any change)

3. Upgrade using leapp
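
  For step 3, the usual in-place upgrade flow is roughly as follows (registration and repository setup omitted; the exact invocation may differ per environment):

  # leapp preupgrade
  # leapp upgrade
  # reboot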

Actual results:

The upgrade itself goes well up to the reboot phase, but RHEL8 then doesn't boot properly because /var does not get mounted:
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
[   ***] A start job is running for dev-mapp…g01\x2dvar.device (10s / 1min 30s)

...

[  OK  ] Mounted /boot.
[  OK  ] Started udev Kernel Device Manager.
[    2.391781] lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
[    2.396531] i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
[    2.399603] input: PC Speaker as /devices/platform/pcspkr/input/input5
[    2.423877] iTCO_vendor_support: vendor-support=0
[    2.425866] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11
[    2.427146] iTCO_wdt: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
[    2.428295] iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
[ TIME ] Timed out waiting for device dev-mapper-vg01\x2dvar.device.
[DEPEND] Dependency failed for /var.
[DEPEND] Dependency failed for Flush Journal to Persistent Storage.
[DEPEND] Dependency failed for Virtual Machine and Container Storage.
[DEPEND] Dependency failed for Load/Save Random Seed.
[DEPEND] Dependency failed for Update UTMP about System Runlevel Changes.
[DEPEND] Dependency failed for Postfix Mail Transport Agent.
[DEPEND] Dependency failed for Local File Systems.
[DEPEND] Dependency failed for Update UTMP about System Boot/Shutdown.
[  OK  ] Reached target Network (Pre).
[  OK  ] Reached target Timers.
[  OK  ] Reached target Login Prompts.
[  OK  ] Reached target Sockets.
         Starting Tell Plymouth To Write Out Runtime Data...
[  OK  ] Started Emergency Shell.
[  OK  ] Reached target Emergency Mode.
[  OK  ] Reached target Network.
[  OK  ] Reached target Network is Online.
         Starting Temporary Leapp service wh… resumes execution after reboot...
         Starting Create Volatile Files and Directories...
[  OK  ] Started Create Volatile Files and Directories.
         Starting Security Auditing Service...
[  OK  ] Started Tell Plymouth To Write Out Runtime Data.
[FAILED] Failed to start Security Auditing Service.
See 'systemctl status auditd.service' for details.
[   92.570244] leapp3[735]: Traceback (most recent call last):
[   92.570302] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/cli/commands/upgrade/breadcrumbs.py", line 147, in wrapper
[   92.570320] leapp3[735]:     return f(*args, breadcrumbs=breadcrumbs, **kwargs)
[   92.570335] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/cli/commands/upgrade/__init__.py", line 64, in upgrade
[   92.570354] leapp3[735]:     context, configuration = util.fetch_last_upgrade_context(resume_context)
[   92.570372] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/cli/commands/upgrade/util.py", line 98, in fetch_last_upgrade_context
[   92.570389] leapp3[735]:     with get_connection(None) as db:
[   92.570403] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/utils/audit/__init__.py", line 73, in get_connection
[   92.570416] leapp3[735]:     return create_connection(cfg.get('database', 'path'))
[   92.570430] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/utils/audit/__init__.py", line 60, in create_connection
[   92.570443] leapp3[735]:     return _initialize_database(sqlite3.connect(path))
[   92.570459] leapp3[735]: sqlite3.OperationalError: unable to open database file
[   92.570472] leapp3[735]: During handling of the above exception, another exception occurred:
[   92.570485] leapp3[735]: Traceback (most recent call last):
[   92.570499] leapp3[735]:   File "/root/tmp_leapp_py3/leapp3", line 6, in <module>
[   92.570521] leapp3[735]:     sys.exit(leapp.cli.main())
[   92.570535] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/cli/__init__.py", line 45, in main
[   92.570549] leapp3[735]:     cli.command.execute('leapp version {}'.format(VERSION))
[   92.570564] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/utils/clicmd.py", line 111, in execute
[   92.570579] leapp3[735]:     args.func(args)
[   92.570606] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/utils/clicmd.py", line 133, in called
[   92.570621] leapp3[735]:     self.target(args)
[   92.570637] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/cli/commands/upgrade/breadcrumbs.py", line 156, in wrapper
[   92.570651] leapp3[735]:     breadcrumbs.save()
[   92.570667] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/cli/commands/upgrade/breadcrumbs.py", line 89, in save
[   92.570690] leapp3[735]:     messages = get_messages(('IPUConfig',), self._crumbs['run_id'])
[   92.570724] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/utils/audit/__init__.py", line 404, in get_messages
[   92.570741] leapp3[735]:     with get_connection(db=connection) as conn:
[   92.570768] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/utils/audit/__init__.py", line 73, in get_connection
[   92.570783] leapp3[735]:     return create_connection(cfg.get('database', 'path'))
[   92.570806] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/utils/audit/__init__.py", line 60, in create_connection
[   92.570819] leapp3[735]:     return _initialize_database(sqlite3.connect(path))
[   92.570953] leapp3[735]: sqlite3.OperationalError: unable to open database file
[FAILED] Failed to start Security Auditing Service.
See 'systemctl status auditd.service' for details.
[FAILED] Failed to start Temporary Leapp ser…ch resumes execution after reboot.
See 'systemctl status leapp_resume.service' for details.
You are in emergency mode. After logging in, type "journalctl -xb" to view
system logs, "systemctl reboot" to reboot, "systemctl default" or "exit"
to boot into default mode.
Give root password for maintenance
(or press Control-D to continue): 
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
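
As a side note, the system can be brought back up from the emergency shell by removing the stale override and rebooting (assuming the override is the only thing blocking /var from being mounted):

  # rm /etc/systemd/system/lvm2-pvscan@.service
  # systemctl daemon-reload
  # systemctl reboot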

Expected results:

A High-risk / Inhibitor report preventing the upgrade from proceeding until the service unit overrides are acknowledged

Additional info:

Any service unit override could lead to issues; it is not possible to tell in advance.
It all depends on whether the RHEL7 service units and the RHEL8 ones are "compatible".
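
One possible way to review such overrides (before or after the upgrade) is systemd-delta, which lists local unit files that shadow packaged ones and prints the diffs, for example:

  # systemd-delta --type=overridden,extended

This is only a manual aid for assessing such "compatibility"; it is not something leapp does today.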

Comment 3 RHEL Program Management 2023-09-12 14:48:54 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 4 RHEL Program Management 2023-09-12 15:22:02 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated.  Be sure to add yourself to Jira issue's "Watchers" field to continue receiving updates and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues@redhat.com. You can also visit https://access.redhat.com/articles/7032570 for general account information.

