Bug 2170579
| Summary: | Backport https://github.com/coreos/rpm-ostree/commit/8dd45f293afc1ca32b42bda86dde47c66e652dda | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Colin Walters <walters> |
| Component: | rpm-ostree | Assignee: | RHCOS SST <rhcos-sst> |
| Status: | CLOSED ERRATA | QA Contact: | RHCOS SST QE <rhcos-sst-qe> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 8.8 | CC: | hhei, mnguyen |
| Target Milestone: | rc | Keywords: | Triaged |
| Target Release: | --- | Flags: | pm-rhel: mirror+ |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | rpm-ostree-2022.10.112.g3d0ac35b-3.el8 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-05-16 08:24:00 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Colin Walters
2023-02-16 19:32:50 UTC
To verify this, I started from an older 4.12 qcow2 and then upgraded in-place via e.g.

```
$ rpm-ostree rebase --experimental ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5fcd773df6dbe04e25f42678b93223b304f5fa7854e2cc9b0032c690eb9c7a2e
$ systemctl reboot
```

But you can also start from a newer qcow2; the idea is to be running the build currently tagged into the 4.13 and 8.8 errata:

```
[root@cosa-devsh ~]# rpm -q rpm-ostree
rpm-ostree-2022.10.112.g3d0ac35b-2.el8.x86_64
```

Now, there's actually a somewhat involved setup to re-create this bug the way it would happen in production: you'd need to be upgrading from an older rpm-ostree release with layered packages, then upgrade again, I think. In any case, we can simulate the end result of the bug on earlier versions by doing:

```
ostree refs --delete ostree/container/image
systemctl restart rpm-ostreed
```

On versions of rpm-ostree without this patch, this should cause the daemon to crash. With this change, it should display an error but still allow you to perform further operations (see the consolidated sketch after the transcript below).

[core@cosa-devsh ~]$ rpm-ostree status
State: idle
Deployments:
* 82c04ab0c971a354dfe024325d27687593c64e6e2da6408d86aa1ec054aa27be
Version: 412.86.202302061718-0 (2023-02-06T17:21:12Z)
[core@cosa-devsh ~]$ rpm -q rpm-ostree
rpm-ostree-2022.10.99.g0049dbdd-3.el8.x86_64
[core@cosa-devsh ~]$ sudo vi /etc/ostree/auth.json
[core@cosa-devsh ~]$ sudo rpm-ostree rebase --experimental ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5fcd773df6dbe04e25f42678b93223b304f5fa7854e2cc9b0032c690eb9c7a2e
Pulling manifest: ostree-unverified-image:docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5fcd773df6dbe04e25f42678b93223b304f5fa7854e2cc9b0032c690eb9c7a2e
Importing: ostree-unverified-image:docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5fcd773df6dbe04e25f42678b93223b304f5fa7854e2cc9b0032c690eb9c7a2e (digest: sha256:5fcd773df6dbe04e25f42678b93223b304f5fa7854e2cc9b0032c690eb9c7a2e)
ostree chunk layers stored: 0 needed: 51 (1.1 GB)
Fetching ostree chunk sha256:e89a8408ae78 (276.9 MB)
...
Staging deployment... done
Upgraded:
git-core 2.31.1-2.el8 -> 2.31.1-3.el8_6
grub2-common 1:2.02-123.el8_6.12 -> 1:2.02-123.el8_6.14
grub2-efi-x64 1:2.02-123.el8_6.12 -> 1:2.02-123.el8_6.14
grub2-pc 1:2.02-123.el8_6.12 -> 1:2.02-123.el8_6.14
grub2-pc-modules 1:2.02-123.el8_6.12 -> 1:2.02-123.el8_6.14
grub2-tools 1:2.02-123.el8_6.12 -> 1:2.02-123.el8_6.14
grub2-tools-extra 1:2.02-123.el8_6.12 -> 1:2.02-123.el8_6.14
grub2-tools-minimal 1:2.02-123.el8_6.12 -> 1:2.02-123.el8_6.14
openvswitch2.17 2.17.0-67.el8fdp -> 2.17.0-71.el8fdp
rpm-ostree 2022.10.99.g0049dbdd-3.el8 -> 2022.10.112.g3d0ac35b-2.el8
rpm-ostree-libs 2022.10.99.g0049dbdd-3.el8 -> 2022.10.112.g3d0ac35b-2.el8
toolbox 0.1.0-2.rhaos4.12.el8 -> 0.1.1-3.rhaos4.12.el8
Changes queued for next boot. Run "systemctl reboot" to start a reboot
[core@cosa-devsh ~]$
[core@cosa-devsh ~]$ sudo systemctl reboot
Red Hat Enterprise Linux CoreOS 412.86.202302151644-0
Part of OpenShift 4.12, RHCOS is a Kubernetes native operating system
managed by the Machine Config Operator (`clusteroperator/machine-config`).
WARNING: Direct SSH access to machines is not recommended; instead,
make configuration changes via `machineconfig` objects:
https://docs.openshift.com/container-platform/4.12/architecture/architecture-rhcos.html
---
Last login: Thu Feb 16 22:47:46 2023
[core@cosa-devsh ~]$ rpm-ostree status
State: idle
Deployments:
* ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5fcd773df6dbe04e25f42678b93223b304f5fa7854e2cc9b0032c690eb9c7a2e
Digest: sha256:5fcd773df6dbe04e25f42678b93223b304f5fa7854e2cc9b0032c690eb9c7a2e
Version: 412.86.202302151644-0 (2023-02-16T22:45:56Z)
82c04ab0c971a354dfe024325d27687593c64e6e2da6408d86aa1ec054aa27be
Version: 412.86.202302061718-0 (2023-02-06T17:21:12Z)
[core@cosa-devsh ~]$ rpm -q rpm-ostree
rpm-ostree-2022.10.112.g3d0ac35b-2.el8.x86_64
[core@cosa-devsh ~]$ sudo ostree refs --delete ostree/container/image
[core@cosa-devsh ~]$ sudo systemctl restart rpm-ostreed
[core@cosa-devsh ~]$ journalctl -r -u rpm-ostreed
-- Logs begin at Thu 2023-02-16 22:43:30 UTC, end at Thu 2023-02-16 22:48:15 UTC. --
Feb 16 22:48:09 cosa-devsh systemd[1]: Started rpm-ostree System Management Daemon.
Feb 16 22:48:09 cosa-devsh rpm-ostree[1435]: In idle state; will auto-exit in 62 seconds
Feb 16 22:48:09 cosa-devsh rpm-ostree[1435]: Reading config file '/etc/rpm-ostreed.conf'
Feb 16 22:48:08 cosa-devsh systemd[1]: Starting rpm-ostree System Management Daemon...
Feb 16 22:48:08 cosa-devsh systemd[1]: Stopped rpm-ostree System Management Daemon.
Feb 16 22:48:08 cosa-devsh systemd[1]: rpm-ostreed.service: Succeeded.
Feb 16 22:48:08 cosa-devsh systemd[1]: Stopping rpm-ostree System Management Daemon...
Feb 16 22:47:52 cosa-devsh rpm-ostree[1420]: In idle state; will auto-exit in 63 seconds
Feb 16 22:47:52 cosa-devsh rpm-ostree[1420]: client(id:cli dbus:1.25 unit:session-4.scope uid:1000) vanished; remai>
Feb 16 22:47:52 cosa-devsh rpm-ostree[1420]: client(id:cli dbus:1.25 unit:session-4.scope uid:1000) added; new tota>
Feb 16 22:47:52 cosa-devsh rpm-ostree[1420]: Allowing active client :1.25 (uid 1000)
Feb 16 22:47:52 cosa-devsh systemd[1]: Started rpm-ostree System Management Daemon.
Feb 16 22:47:52 cosa-devsh rpm-ostree[1420]: In idle state; will auto-exit in 63 seconds
Feb 16 22:47:52 cosa-devsh rpm-ostree[1420]: Reading config file '/etc/rpm-ostreed.conf'
Feb 16 22:47:52 cosa-devsh systemd[1]: Starting rpm-ostree System Management Daemon...
-- Reboot --
Feb 16 22:47:02 cosa-devsh systemd[1]: rpm-ostreed.service: Succeeded.
Feb 16 22:45:57 cosa-devsh rpm-ostree[2040]: In idle state; will auto-exit in 64 seconds
Feb 16 22:45:57 cosa-devsh rpm-ostree[2040]: client(id:cli dbus:1.35 unit:session-4.scope uid:0) vanished; remainin>
Feb 16 22:45:57 cosa-devsh rpm-ostree[2040]: Process [pid: 2054 uid: 0 unit: session-4.scope] disconnected from tra>
Feb 16 22:45:57 cosa-devsh rpm-ostree[2040]: Unlocked sysroot
Feb 16 22:45:57 cosa-devsh rpm-ostree[2040]: Txn Rebase on /org/projectatomic/rpmostree1/rhcos successful
Feb 16 22:45:57 cosa-devsh rpm-ostree[2040]: sanitycheck(/usr/bin/true) successful
Feb 16 22:45:57 cosa-devsh rpm-ostree[2040]: Created new deployment /ostree/deploy/rhcos/deploy/c9b97d20f01f2abcd6b>
Feb 16 22:44:36 cosa-devsh rpm-ostree[2040]: Process [pid: 2054 uid: 0 unit: session-4.scope] connected to transact>
Feb 16 22:44:36 cosa-devsh rpm-ostree[2040]: Initiated txn Rebase for client(id:cli dbus:1.35 unit:session-4.scope >
Feb 16 22:44:36 cosa-devsh rpm-ostree[2040]: Locked sysroot
Feb 16 22:44:36 cosa-devsh rpm-ostree[2040]: client(id:cli dbus:1.35 unit:session-4.scope uid:0) added; new total=1
Feb 16 22:43:55 cosa-devsh rpm-ostree[2040]: In idle state; will auto-exit in 61 seconds
Feb 16 22:43:55 cosa-devsh rpm-ostree[2040]: client(id:cli dbus:1.29 unit:session-4.scope uid:1000) vanished; remai>
[core@cosa-devsh ~]$
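The simulation and check above can be consolidated into one short script. This is a sketch only, not part of rpm-ostree: it assumes a RHCOS node already rebased to a container image (as in the transcript) and is run as root; the PASS/FAIL messages are illustrative.

```
#!/bin/bash
# Sketch: simulate the broken state (missing ostree/container/image refs)
# and confirm rpm-ostreed keeps working instead of crashing.
set -euo pipefail

# Simulate the end result of the bug: delete the container image refs the
# daemon expects to find for the booted container-image deployment.
ostree refs --delete ostree/container/image

# Restart the daemon so it re-reads the now-inconsistent state.
systemctl restart rpm-ostreed

# With the backported fix, status is expected to keep working (possibly
# printing an error about the missing ref); on unpatched builds the daemon
# crashes and this call fails.
if rpm-ostree status; then
    echo "PASS: daemon tolerated the missing container image ref"
else
    echo "FAIL: rpm-ostree status failed; inspect 'journalctl -u rpm-ostreed'"
    exit 1
fi
```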
Verification passed with rpm-ostree-2022.10.112.g3d0ac35b-3.el8.x86_64:
$ cosa run --qemu-image rhcos-412.86.202302142053-0-qemu.x86_64.qcow2 -m 4096
Red Hat Enterprise Linux CoreOS 412.86.202302142053-0
---
Last login: Fri Mar 31 11:33:26 2023
[core@cosa-devsh ~]$ sudo -i
[root@cosa-devsh ~]# vi /etc/ostree/auth.json
[root@cosa-devsh ~]# rpm-ostree rebase --experimental ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8d50270bc29d27a1fcdac3eefaf95b46d3652251c7637cff09d184897e93bd8
[root@cosa-devsh ~]# reboot
[root@cosa-devsh ~]# rpm-ostree status
State: idle
Deployments:
* ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8d50270bc29d27a1fcdac3eefaf95b46d3652251c7637cff09d184897e93bd8
Digest: sha256:b8d50270bc29d27a1fcdac3eefaf95b46d3652251c7637cff09d184897e93bd8
Version: 412.86.202303161056-0 (2023-03-31T11:42:09Z)
e1dd762fae7633058c20dec92717ab3576d01765f8bcc98fc5c0df16c0bff79b
Version: 412.86.202302142053-0 (2023-02-14T20:56:52Z)
[root@cosa-devsh ~]# rpm -q rpm-ostree
rpm-ostree-2022.10.112.g3d0ac35b-3.el8.x86_64
[root@cosa-devsh ~]# ostree refs --list ostree/container/image
ostree/container/image/docker_3A__2F__2F_quay_2E_io/openshift-release-dev/ocp-v4_2E_0-art-dev_40_sha256_3A_b8d50270bc29d27a1fcdac3eefaf95b46d3652251c7637cff09d184897e93bd8
[root@cosa-devsh ~]# ostree refs --delete ostree/container/image
[root@cosa-devsh ~]# systemctl restart rpm-ostreed
[root@cosa-devsh ~]# journalctl -r -u rpm-ostreed
-- Logs begin at Fri 2023-03-31 11:40:48 UTC, end at Fri 2023-03-31 11:43:53 UTC. --
Mar 31 11:43:53 cosa-devsh systemd[1]: Started rpm-ostree System Management Daemon.
Mar 31 11:43:53 cosa-devsh rpm-ostree[1497]: In idle state; will auto-exit in 63 seconds
Mar 31 11:43:53 cosa-devsh rpm-ostree[1497]: Reading config file '/etc/rpm-ostreed.conf'
Mar 31 11:43:53 cosa-devsh systemd[1]: Starting rpm-ostree System Management Daemon...
Mar 31 11:43:52 cosa-devsh systemd[1]: rpm-ostreed.service: Succeeded.
Mar 31 11:43:52 cosa-devsh rpm-ostree[1446]: In idle state; will auto-exit in 61 seconds
Mar 31 11:42:51 cosa-devsh rpm-ostree[1446]: In idle state; will auto-exit in 60 seconds
Mar 31 11:42:51 cosa-devsh rpm-ostree[1446]: client(id:cli dbus:1.25 unit:session-4.scope uid:1000) vanished; remaining=0
Mar 31 11:42:51 cosa-devsh rpm-ostree[1446]: client(id:cli dbus:1.25 unit:session-4.scope uid:1000) added; new total=1
Mar 31 11:42:51 cosa-devsh rpm-ostree[1446]: Allowing active client :1.25 (uid 1000)
Mar 31 11:42:51 cosa-devsh systemd[1]: Started rpm-ostree System Management Daemon.
Mar 31 11:42:51 cosa-devsh rpm-ostree[1446]: In idle state; will auto-exit in 62 seconds
Mar 31 11:42:51 cosa-devsh rpm-ostree[1446]: Reading config file '/etc/rpm-ostreed.conf'
Mar 31 11:42:51 cosa-devsh systemd[1]: Starting rpm-ostree System Management Daemon...
-- Reboot --
Mar 31 11:42:21 cosa-devsh systemd[1]: Stopped rpm-ostree System Management Daemon.
Mar 31 11:42:21 cosa-devsh systemd[1]: rpm-ostreed.service: Succeeded.
Mar 31 11:42:21 cosa-devsh systemd[1]: Stopping rpm-ostree System Management Daemon...
Mar 31 11:42:11 cosa-devsh rpm-ostree[2093]: In idle state; will auto-exit in 63 seconds
Mar 31 11:42:11 cosa-devsh rpm-ostree[2093]: client(id:cli dbus:1.31 unit:session-4.scope uid:0) vanished; remaining=0
Mar 31 11:42:11 cosa-devsh rpm-ostree[2093]: Process [pid: 2089 uid: 0 unit: session-4.scope] disconnected from transaction progress
Mar 31 11:42:11 cosa-devsh rpm-ostree[2093]: Unlocked sysroot
Mar 31 11:42:11 cosa-devsh rpm-ostree[2093]: Txn Rebase on /org/projectatomic/rpmostree1/rhcos successful
Mar 31 11:42:10 cosa-devsh rpm-ostree[2093]: sanitycheck(/usr/bin/true) successful
Mar 31 11:42:10 cosa-devsh rpm-ostree[2093]: Created new deployment /ostree/deploy/rhcos/deploy/1ff85fa8faab8a5546ae64e7b8310d513df4019f54fc2bcf15279673a872a>
Mar 31 11:41:32 cosa-devsh rpm-ostree[2093]: Process [pid: 2089 uid: 0 unit: session-4.scope] connected to transaction progress
Mar 31 11:41:32 cosa-devsh rpm-ostree[2093]: Initiated txn Rebase for client(id:cli dbus:1.31 unit:session-4.scope uid:0): /org/projectatomic/rpmostree1/rhcos
Mar 31 11:41:32 cosa-devsh rpm-ostree[2093]: Locked sysroot
Mar 31 11:41:32 cosa-devsh rpm-ostree[2093]: client(id:cli dbus:1.31 unit:session-4.scope uid:0) added; new total=1
Mar 31 11:41:32 cosa-devsh systemd[1]: Started rpm-ostree System Management Daemon.
Mar 31 11:41:32 cosa-devsh rpm-ostree[2093]: In idle state; will auto-exit in 60 seconds
Mar 31 11:41:32 cosa-devsh rpm-ostree[2093]: Reading config file '/etc/rpm-ostreed.conf'
Mar 31 11:41:32 cosa-devsh systemd[1]: Starting rpm-ostree System Management Daemon...
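Incidentally, the ref listed by `ostree refs --list ostree/container/image` above encodes the container image reference by escaping characters that are not valid in ostree ref names as `_XX_` hex pairs (in that ref, `_3A_` is `:`, `_2F_` is `/`, `_2E_` is `.`, `_40_` is `@`). A small decoding sketch, purely illustrative — the `decode_container_ref` helper is not part of ostree or rpm-ostree:

```
#!/bin/bash
# Hypothetical helper: turn a ref under ostree/container/image/ back into
# the original image reference by expanding its _XX_ hex escapes.
decode_container_ref() {
    local ref="${1#ostree/container/image/}"
    # Rewrite each _XX_ escape as \xXX, then let printf %b expand it.
    printf '%b\n' "$(sed -E 's/_([0-9A-Fa-f]{2})_/\\x\1/g' <<<"$ref")"
}

# Example, using the ref shown in the transcript above; prints
# docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8d5...
decode_container_ref 'ostree/container/image/docker_3A__2F__2F_quay_2E_io/openshift-release-dev/ocp-v4_2E_0-art-dev_40_sha256_3A_b8d50270bc29d27a1fcdac3eefaf95b46d3652251c7637cff09d184897e93bd8'
```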
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (rpm-ostree bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:2759