Bug 2232739
| Summary: | System fails to reboot on installed RHEL8 because of preexisting customized service units | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Renaud Métrich <rmetrich> |
| Component: | leapp-repository | Assignee: | Leapp Notifications Bot <leapp-notifications-bot> |
| Status: | CLOSED MIGRATED | QA Contact: | upgrades-and-conversions |
| Severity: | medium | Docs Contact: | Miriam Portman <mportman> |
| Priority: | medium | ||
| Version: | 7.9 | CC: | upgrades-and-conversions |
| Target Milestone: | rc | Keywords: | MigratedToJIRA |
| Target Release: | --- | Flags: | pm-rhel: mirror+ |
| Hardware: | All | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-09-12 15:22:02 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there. Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it and will begin with "RHEL-" followed by an integer. You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like: "Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.
Description of problem:

After the upgrade has completed (reboot done, packages upgraded), the newly upgraded system may fail to boot properly, typically because critical system services were customized while the system was still on the previous release. Because of the customization, the service unit from the previous release (e.g. RHEL7) overrides the service unit shipped with the current release (e.g. RHEL8).

Solving this particular issue automatically is not possible, because there is no way to tell whether a given customization will create havoc or not. Through this BZ, I'm recommending that a High risk / "inhibitor until acknowledged" report be created when service unit overrides are detected.

Version-Release number of selected component (if applicable):
leapp-upgrade-el7toel8-0.18.0-3.el7_9.noarch

How reproducible:
Always

Steps to Reproduce:

1. Create a RHEL7 VM with custom partitioning

# lsblk
NAME           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0             11:0    1 1024M  0 rom
vda            252:0    0   20G  0 disk
├─vda1         252:1    0    1G  0 part /boot
├─vda2         252:2    0   12G  0 part
│ ├─vg01-usr   253:0    0    6G  0 lvm  /usr
│ └─vg01-var   253:1    0    6G  0 lvm  /var
├─vda3         252:3    0    6G  0 part /
├─vda4         252:4    0    1K  0 part
└─vda5         252:5    0 1015M  0 part [SWAP]

In the layout above it is important to have a separate /var; it is unclear whether having / on LVM would also reproduce the issue.

2. Create an override of the lvm2-pvscan@.service unit (this is exactly the kind of override the sketch after these steps is meant to detect)

# systemctl edit lvm2-pvscan@.service --full
// editor opens, save and quit, no need to amend //

3. Upgrade using leapp
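The override created in step 2 lands in /etc/systemd/system and shadows the vendor unit of the same name in /usr/lib/systemd/system, which is what a pre-upgrade check would need to flag. Below is a minimal sketch of such a check using only the Python standard library; it is not part of leapp, and the function name and the list of unit suffixes are illustrative assumptions.

#!/usr/bin/python
# Minimal sketch (not part of leapp): list systemd unit files in
# /etc/systemd/system that shadow a vendor unit of the same name in
# /usr/lib/systemd/system, i.e. full overrides like the one created
# in step 2 with "systemctl edit --full".
import os

VENDOR_DIR = '/usr/lib/systemd/system'
LOCAL_DIR = '/etc/systemd/system'
# Illustrative subset of unit types to consider.
UNIT_SUFFIXES = ('.service', '.socket', '.target', '.mount', '.timer', '.path')


def find_overridden_units():
    overridden = []
    for name in sorted(os.listdir(LOCAL_DIR)):
        if not name.endswith(UNIT_SUFFIXES):
            continue
        local_path = os.path.join(LOCAL_DIR, name)
        vendor_path = os.path.join(VENDOR_DIR, name)
        # Symlinks in /etc/systemd/system are usually "systemctl enable"
        # artifacts, not customizations; only regular files shadow vendor units.
        if os.path.islink(local_path) or not os.path.isfile(local_path):
            continue
        if os.path.isfile(vendor_path):
            overridden.append(name)
    return overridden


if __name__ == '__main__':
    units = find_overridden_units()
    if units:
        print('Vendor units overridden in /etc/systemd/system:')
        for unit in units:
            print('  - {0}'.format(unit))
    else:
        print('No overridden vendor units found.')

On the reproducer VM this would report lvm2-pvscan@.service; "systemd-delta --type=overridden" gives a similar view interactively.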
Actual results:

Upgrade goes well in the reboot phase, but RHEL8 doesn't boot properly, because /var is not getting mounted:

-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
[ ***] A start job is running for dev-mapp…g01\x2dvar.device (10s / 1min 30s)
...
[ OK ] Mounted /boot.
[ OK ] Started udev Kernel Device Manager.
[ 2.391781] lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
[ 2.396531] i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
[ 2.399603] input: PC Speaker as /devices/platform/pcspkr/input/input5
[ 2.423877] iTCO_vendor_support: vendor-support=0
[ 2.425866] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11
[ 2.427146] iTCO_wdt: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
[ 2.428295] iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
[ TIME ] Timed out waiting for device dev-mapper-vg01\x2dvar.device.
[DEPEND] Dependency failed for /var.
[DEPEND] Dependency failed for Flush Journal to Persistent Storage.
[DEPEND] Dependency failed for Virtual Machine and Container Storage.
[DEPEND] Dependency failed for Load/Save Random Seed.
[DEPEND] Dependency failed for Update UTMP about System Runlevel Changes.
[DEPEND] Dependency failed for Postfix Mail Transport Agent.
[DEPEND] Dependency failed for Local File Systems.
[DEPEND] Dependency failed for Update UTMP about System Boot/Shutdown.
[ OK ] Reached target Network (Pre).
[ OK ] Reached target Timers.
[ OK ] Reached target Login Prompts.
[ OK ] Reached target Sockets.
         Starting Tell Plymouth To Write Out Runtime Data...
[ OK ] Started Emergency Shell.
[ OK ] Reached target Emergency Mode.
[ OK ] Reached target Network.
[ OK ] Reached target Network is Online.
         Starting Temporary Leapp service wh… resumes execution after reboot...
         Starting Create Volatile Files and Directories...
[ OK ] Started Create Volatile Files and Directories.
         Starting Security Auditing Service...
[ OK ] Started Tell Plymouth To Write Out Runtime Data.
[FAILED] Failed to start Security Auditing Service.
         See 'systemctl status auditd.service' for details.
[ 92.570244] leapp3[735]: Traceback (most recent call last):
[ 92.570302] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/cli/commands/upgrade/breadcrumbs.py", line 147, in wrapper
[ 92.570320] leapp3[735]:     return f(*args, breadcrumbs=breadcrumbs, **kwargs)
[ 92.570335] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/cli/commands/upgrade/__init__.py", line 64, in upgrade
[ 92.570354] leapp3[735]:     context, configuration = util.fetch_last_upgrade_context(resume_context)
[ 92.570372] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/cli/commands/upgrade/util.py", line 98, in fetch_last_upgrade_context
[ 92.570389] leapp3[735]:     with get_connection(None) as db:
[ 92.570403] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/utils/audit/__init__.py", line 73, in get_connection
[ 92.570416] leapp3[735]:     return create_connection(cfg.get('database', 'path'))
[ 92.570430] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/utils/audit/__init__.py", line 60, in create_connection
[ 92.570443] leapp3[735]:     return _initialize_database(sqlite3.connect(path))
[ 92.570459] leapp3[735]: sqlite3.OperationalError: unable to open database file
[ 92.570472] leapp3[735]: During handling of the above exception, another exception occurred:
[ 92.570485] leapp3[735]: Traceback (most recent call last):
[ 92.570499] leapp3[735]:   File "/root/tmp_leapp_py3/leapp3", line 6, in <module>
[ 92.570521] leapp3[735]:     sys.exit(leapp.cli.main())
[ 92.570535] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/cli/__init__.py", line 45, in main
[ 92.570549] leapp3[735]:     cli.command.execute('leapp version {}'.format(VERSION))
[ 92.570564] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/utils/clicmd.py", line 111, in execute
[ 92.570579] leapp3[735]:     args.func(args)
[ 92.570606] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/utils/clicmd.py", line 133, in called
[ 92.570621] leapp3[735]:     self.target(args)
[ 92.570637] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/cli/commands/upgrade/breadcrumbs.py", line 156, in wrapper
[ 92.570651] leapp3[735]:     breadcrumbs.save()
[ 92.570667] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/cli/commands/upgrade/breadcrumbs.py", line 89, in save
[ 92.570690] leapp3[735]:     messages = get_messages(('IPUConfig',), self._crumbs['run_id'])
[ 92.570724] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/utils/audit/__init__.py", line 404, in get_messages
[ 92.570741] leapp3[735]:     with get_connection(db=connection) as conn:
[ 92.570768] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/utils/audit/__init__.py", line 73, in get_connection
[ 92.570783] leapp3[735]:     return create_connection(cfg.get('database', 'path'))
[ 92.570806] leapp3[735]:   File "/root/tmp_leapp_py3/leapp/utils/audit/__init__.py", line 60, in create_connection
[ 92.570819] leapp3[735]:     return _initialize_database(sqlite3.connect(path))
[ 92.570953] leapp3[735]: sqlite3.OperationalError: unable to open database file
[FAILED] Failed to start Security Auditing Service.
         See 'systemctl status auditd.service' for details.
[FAILED] Failed to start Temporary Leapp ser…ch resumes execution after reboot.
         See 'systemctl status leapp_resume.service' for details.
You are in emergency mode. After logging in, type "journalctl -xb" to view system logs, "systemctl reboot" to reboot, "systemctl default" or "exit" to boot into default mode.
Give root password for maintenance (or press Control-D to continue):
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

Expected results:

A High risk / inhibitor report being generated, so that the upgrade does not proceed.

Additional info:

Any service unit override could lead to issues; it is not possible to tell in advance. It all depends on whether the RHEL7 and RHEL8 service units are "compatible" (see the diff sketch below for one way to surface the difference to the administrator).
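Since whether an override is actually "compatible" cannot be decided automatically, the most a pre-upgrade check can do is show the administrator how the local unit differs from the vendor one. A minimal, purely illustrative sketch follows (standard library only, not a leapp actor; the paths and function name are assumptions for the example).

#!/usr/bin/python
# Illustrative sketch only: show how a locally customized unit file in
# /etc/systemd/system differs from the vendor unit it overrides in
# /usr/lib/systemd/system, so an administrator can judge "compatibility"
# before acknowledging an upgrade inhibitor.
import difflib
import io
import os
import sys

VENDOR_DIR = '/usr/lib/systemd/system'
LOCAL_DIR = '/etc/systemd/system'


def diff_override(unit_name):
    vendor_path = os.path.join(VENDOR_DIR, unit_name)
    local_path = os.path.join(LOCAL_DIR, unit_name)
    with io.open(vendor_path, encoding='utf-8', errors='replace') as f:
        vendor_lines = f.readlines()
    with io.open(local_path, encoding='utf-8', errors='replace') as f:
        local_lines = f.readlines()
    return ''.join(difflib.unified_diff(
        vendor_lines, local_lines,
        fromfile=vendor_path, tofile=local_path))


if __name__ == '__main__':
    # Example usage: ./diff_override.py lvm2-pvscan@.service
    for unit in sys.argv[1:]:
        diff = diff_override(unit)
        print(diff if diff else '{0}: identical to the vendor unit'.format(unit))

Note that in the reproducer above the override was saved unchanged, so on the RHEL7 system the diff would be empty, yet after the upgrade the /etc copy still shadows the new RHEL8 unit. That is why the recommended inhibitor should key on the mere presence of an override, not on its content.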