Bug 1882922

Summary: Unable to launch instances due to SELinux issue executing qemu-kvm
Product: [Community] RDO
Component: openstack-selinux
Version: trunk
Reporter: Kevin Jones <kevindjones>
Assignee: Julie Pichon <jpichon>
QA Contact: Ofer Blaut <oblaut>
CC: cjeanner, lhh
Status: CLOSED NOTABUG
Severity: unspecified
Priority: unspecified
Target Milestone: ---
Target Release: trunk
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2020-10-06 08:22:31 UTC

Attachments:
Audit log from nova compute after setting SELinux in permissive mode and launching instance

Description Kevin Jones 2020-09-26 15:29:52 UTC
Created attachment 1716847 [details]
Audit log from nova compute after setting SELinux in permissive mode and launching instance

Description of problem:
In both Train and Ussuri deployments on CentOS 8, I am unable to launch instances on hypervisors unless SELinux is in permissive mode.

Version-Release number of selected component (if applicable):
Train
Ussuri

How reproducible:
100%


Steps to Reproduce:
1. Deploy RDO via TripleO on CentOS 8 using [1]
2. Setup initial items (networks, router, images, etc.)
3. Attempt to launch instance

Actual results:
Instance fails with following error:
Error: Failed to perform requested operation on instance "cirros-1", the instance has an error status: Please try again later [Error: Exceeded maximum number of retries. Exceeded max scheduling attempts 3 for instance 936dafd0-d7ef-4cc0-95ee-84111dcf84bb. Last exception: internal error: process exited while connecting to monitor: libvirt: error : cannot execute binary /usr/libexec/qemu].

Expected results:
Instance launches as expected.

Additional info:
Audit log from compute node attached after setting SELinux to permissive and zeroing out audit.log.

[1] https://github.com/kjw3/rdo-sbx

Comment 1 Julie Pichon 2020-09-28 08:30:14 UTC
Thank you for the report and including the permissive audit logs, really appreciated.

This looks a lot like bug 1841822... Could you confirm the podman rpm version?

There were a lot of moving parts with that bug but if I recall correctly, the trickiest bit is to have at least podman 1.6.4-15 running (because of bug 1846364). On RHEL8, that's achieved by enabling the container-tools:2.0 module stream (and NOT using the container-tools:rhel8 default one) but I need to check how this works with CentOS. For now, could we confirm the podman version?

Comment 2 Julie Pichon 2020-09-28 08:59:12 UTC
container-tools is an AppStream module, so it should be available on CentOS 8 too.

Comment 3 Kevin Jones 2020-09-28 12:42:31 UTC
(undercloud) [stack@tripleo ~]$ openstack server list
+--------------------------------------+-------------------------+--------+----------------------+----------------+------------+
| ID                                   | Name                    | Status | Networks             | Image          | Flavor     |
+--------------------------------------+-------------------------+--------+----------------------+----------------+------------+
| 9c29c973-c72f-43a3-b6e1-b062925910ea | overcloud-controller-1  | ACTIVE | ctlplane=10.100.4.81 | overcloud-full | control    |
| bea7c10f-f85e-4b21-bffc-a713c3467467 | overcloud-controller-2  | ACTIVE | ctlplane=10.100.4.87 | overcloud-full | control    |
| 3238e7b5-eec7-4ff4-951c-652a79b4dc9c | overcloud-controller-0  | ACTIVE | ctlplane=10.100.4.77 | overcloud-full | control    |
| 78022df8-4d6e-4d6c-a44d-dc7327b78474 | overcloud-computehci-2  | ACTIVE | ctlplane=10.100.4.76 | overcloud-full | computeHCI |
| d010afd4-39d3-4b60-89d3-79334549a317 | overcloud-computehci-0  | ACTIVE | ctlplane=10.100.4.78 | overcloud-full | computeHCI |
| a544464e-c2b2-4ba7-a4d7-b9c70ad1c8f3 | overcloud-computehci-1  | ACTIVE | ctlplane=10.100.4.80 | overcloud-full | computeHCI |
| 067589fc-3b29-46bd-a54a-d0f1febe0c5d | overcloud-novacompute-0 | ACTIVE | ctlplane=10.100.4.75 | overcloud-full | compute    |
| d79c5023-f883-465b-a8b6-aa9f174d38e7 | overcloud-novacompute-1 | ACTIVE | ctlplane=10.100.4.79 | overcloud-full | compute    |
+--------------------------------------+-------------------------+--------+----------------------+----------------+------------+
[stack@tripleo ~]$ source stackrc
(undercloud) [stack@tripleo ~]$ for i in {81,87,77,76,78,80,75,79}; do ssh heat-admin@10.100.4.$i sudo rpm -qa podman; done
podman-1.6.4-10.module_el8.2.0+305+5e198a41.x86_64
podman-1.6.4-10.module_el8.2.0+305+5e198a41.x86_64
podman-1.6.4-10.module_el8.2.0+305+5e198a41.x86_64
podman-1.6.4-10.module_el8.2.0+305+5e198a41.x86_64
podman-1.6.4-10.module_el8.2.0+305+5e198a41.x86_64
podman-1.6.4-10.module_el8.2.0+305+5e198a41.x86_64
podman-1.6.4-10.module_el8.2.0+305+5e198a41.x86_64
podman-1.6.4-10.module_el8.2.0+305+5e198a41.x86_64

Comment 4 Kevin Jones 2020-09-28 12:43:33 UTC
(undercloud) [stack@tripleo ~]$ for i in {81,87,77,76,78,80,75,79}; do ssh heat-admin@10.100.4.$i sudo dnf repolist; done
repo id                            repo name
AppStream                          CentOS-8 - AppStream
BaseOS                             CentOS-8 - Base
extras                             CentOS-8 - Extras
repo id                            repo name
AppStream                          CentOS-8 - AppStream
BaseOS                             CentOS-8 - Base
extras                             CentOS-8 - Extras
repo id                            repo name
AppStream                          CentOS-8 - AppStream
BaseOS                             CentOS-8 - Base
extras                             CentOS-8 - Extras
repo id                            repo name
AppStream                          CentOS-8 - AppStream
BaseOS                             CentOS-8 - Base
extras                             CentOS-8 - Extras
repo id                            repo name
AppStream                          CentOS-8 - AppStream
BaseOS                             CentOS-8 - Base
extras                             CentOS-8 - Extras
repo id                            repo name
AppStream                          CentOS-8 - AppStream
BaseOS                             CentOS-8 - Base
extras                             CentOS-8 - Extras
repo id                            repo name
AppStream                          CentOS-8 - AppStream
BaseOS                             CentOS-8 - Base
extras                             CentOS-8 - Extras
repo id                            repo name
AppStream                          CentOS-8 - AppStream
BaseOS                             CentOS-8 - Base
extras                             CentOS-8 - Extras

Comment 5 Julie Pichon 2020-09-28 12:57:39 UTC
Right, that podman version is too old... Can you get to the correct version after running `dnf -y module enable container-tools:2.0`?
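As an aside, whether an installed podman meets the 1.6.4-15 floor can be checked mechanically with a version sort. This is an illustrative sketch added for this write-up, not part of the original comment; the sample version string is the one reported in comment 3, and GNU `sort -V` is assumed to be available:

```shell
# Compare an installed podman version string against the 1.6.4-15 floor.
# sort -V orders version strings; if the installed string sorts first and
# differs from the floor, it is too old.
floor="1.6.4-15"
installed="1.6.4-10.module_el8.2.0+305+5e198a41"   # from `rpm -qa podman` in comment 3
lowest=$(printf '%s\n%s\n' "$floor" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$installed" ] && [ "$installed" != "$floor" ]; then
  echo "podman too old: $installed < $floor"
else
  echo "podman OK: $installed"
fi
```

With the version from comment 3 this reports the package as too old, matching the diagnosis above.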

Comment 6 Julie Pichon 2020-09-28 13:01:54 UTC
THT patch that's possibly of interest, though focused on updates/upgrades - https://review.opendev.org/#/c/745213/

Comment 7 Kevin Jones 2020-09-29 01:21:30 UTC
I did the following on one of my computes.

# dnf module reset container-tools
# dnf module enable container-tools:2.0
# dnf --allowerasing distro-sync
# podman --version

[root@overcloud-novacompute-0 ~]# podman --version
podman version 1.6.4
[root@overcloud-novacompute-0 ~]# rpm -qa podman
podman-1.6.4-15.module_el8.2.0+465+f9348e8f.x86_64

Then I modified /etc/selinux/config to set SELinux back to enforcing:
[root@overcloud-novacompute-0 ~]# cat /etc/selinux/config 
SELINUX=enforcing
SELINUXTYPE=targeted

# touch /.autorelabel
# reboot

(undercloud) [stack@tripleo ~]$ openstack server list
+--------------------------------------+-------------------------+--------+----------------------+----------------+------------+
| ID                                   | Name                    | Status | Networks             | Image          | Flavor     |
+--------------------------------------+-------------------------+--------+----------------------+----------------+------------+
| 9c29c973-c72f-43a3-b6e1-b062925910ea | overcloud-controller-1  | ACTIVE | ctlplane=10.100.4.81 | overcloud-full | control    |
| bea7c10f-f85e-4b21-bffc-a713c3467467 | overcloud-controller-2  | ACTIVE | ctlplane=10.100.4.87 | overcloud-full | control    |
| 3238e7b5-eec7-4ff4-951c-652a79b4dc9c | overcloud-controller-0  | ACTIVE | ctlplane=10.100.4.77 | overcloud-full | control    |
| 78022df8-4d6e-4d6c-a44d-dc7327b78474 | overcloud-computehci-2  | ACTIVE | ctlplane=10.100.4.76 | overcloud-full | computeHCI |
| d010afd4-39d3-4b60-89d3-79334549a317 | overcloud-computehci-0  | ACTIVE | ctlplane=10.100.4.78 | overcloud-full | computeHCI |
| a544464e-c2b2-4ba7-a4d7-b9c70ad1c8f3 | overcloud-computehci-1  | ACTIVE | ctlplane=10.100.4.80 | overcloud-full | computeHCI |
| 067589fc-3b29-46bd-a54a-d0f1febe0c5d | overcloud-novacompute-0 | ACTIVE | ctlplane=10.100.4.75 | overcloud-full | compute    |
| d79c5023-f883-465b-a8b6-aa9f174d38e7 | overcloud-novacompute-1 | ACTIVE | ctlplane=10.100.4.79 | overcloud-full | compute    |
+--------------------------------------+-------------------------+--------+----------------------+----------------+------------+
(undercloud) [stack@tripleo ~]$ ssh heat-admin@10.100.4.75
Activate the web console with: systemctl enable --now cockpit.socket

Last login: Mon Sep 28 20:41:26 2020 from 10.100.4.5
[heat-admin@overcloud-novacompute-0 ~]$ sudo getenforce
Enforcing


Still seeing a permission denied error when executing qemu-kvm.

/var/log/containers/nova/nova-compute.log
2020-09-28 21:17:28.936 7 ERROR oslo_messaging.rpc.server [req-ad0f648d-d6fe-45a6-b870-7b049adbf80f 56695eaf3356435fbb42a4104df1c22e 8a1e24f298d0424b8aa52c313323d289 - default default] Exception during message handling: libvirt.libvirtError: internal error: process exited while connecting to monitor: libvirt:  error : cannot execute binary /usr/libexec/qemu-kvm: Permission denied

/var/log/audit/audit.log
type=AVC msg=audit(1601342351.030:1174): avc:  denied  { entrypoint } for  pid=12576 comm="libvirtd" path="/usr/libexec/qemu-kvm" dev="overlay" ino=49914 scontext=system_u:system_r:svirt_t:s0:c91,c628 tcontext=system_u:object_r:container_file_t:s0:c96,c589 tclass=file permissive=0
type=ANOM_PROMISCUOUS msg=audit(1601342351.065:1175): dev=tapb1f159ce-16 prom=0 old_prom=256 auid=4294967295 uid=42427 gid=42427 ses=4294967295AUID="unset" UID="unknown(42427)" GID="unknown(42427)"
type=USER_ACCT msg=audit(1601342351.117:1176): pid=12615 uid=42435 auid=4294967295 ses=4294967295 subj=system_u:system_r:spc_t:s0 msg='op=PAM:accounting grantors=pam_unix acct="neutron" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'UID="unknown(42435)" AUID="unset"
type=USER_CMD msg=audit(1601342351.117:1177): pid=12615 uid=42435 auid=4294967295 ses=4294967295 subj=system_u:system_r:spc_t:s0 msg='cwd="/" cmd=6E657574726F6E2D726F6F7477726170202F6574632F6E657574726F6E2F726F6F74777261702E636F6E66206970206E65746E732065786563206F766E6D6574612D32656564626566342D316435642D346365322D386436392D6134333334396132313464652073797363746C202D77206E65742E697076342E636F6E662E616C6C2E70726F6D6F74655F7365636F6E6461726965733D31 exe="/usr/bin/sudo" terminal=? res=success'UID="unknown(42435)" AUID="unset"
type=CRED_REFR msg=audit(1601342351.118:1178): pid=12615 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:spc_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'UID="root" AUID="unset"
type=USER_START msg=audit(1601342351.118:1179): pid=12615 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:spc_t:s0 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'UID="root" AUID="unset"
type=USER_ACCT msg=audit(1601342351.210:1180): pid=12680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:spc_t:s0 msg='op=PAM:accounting grantors=pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'UID="root" AUID="unset"
type=USER_CMD msg=audit(1601342351.210:1181): pid=12680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:spc_t:s0 msg='cwd="/" cmd=7373202D6E74756170 exe="/usr/bin/sudo" terminal=? res=success'UID="root" AUID="unset"
type=CRED_REFR msg=audit(1601342351.210:1182): pid=12680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:spc_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="nova" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'UID="root" AUID="unset"
type=USER_START msg=audit(1601342351.210:1183): pid=12680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:spc_t:s0 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_unix acct="nova" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'UID="root" AUID="unset"
type=USER_END msg=audit(1601342351.214:1184): pid=12680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:spc_t:s0 msg='op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_unix acct="nova" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'UID="root" AUID="unset"
type=CRED_DISP msg=audit(1601342351.215:1185): pid=12680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:spc_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="nova" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'UID="root" AUID="unset"
type=SERVICE_START msg=audit(1601342351.223:1186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=tripleo_logrotate_crond_healthcheck comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'UID="root" AUID="unset"
type=SERVICE_STOP msg=audit(1601342351.223:1187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=tripleo_logrotate_crond_healthcheck comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'UID="root" AUID="unset"
type=SERVICE_START msg=audit(1601342351.273:1188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=tripleo_nova_compute_healthcheck comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'UID="root" AUID="unset"
type=SERVICE_STOP msg=audit(1601342351.273:1189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=tripleo_nova_compute_healthcheck comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'UID="root" AUID="unset"
type=USER_END msg=audit(1601342351.364:1190): pid=12615 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:spc_t:s0 msg='op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'UID="root" AUID="unset"
type=CRED_DISP msg=audit(1601342351.365:1191): pid=12615 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:spc_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'UID="root" AUID="unset"
type=VIRT_RESOURCE msg=audit(1601342351.377:1192): pid=3478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:spc_t:s0 msg='virt=kvm resrc=net reason=start vm="instance-0000000b" uuid=b8125abe-de3a-42fb-8d54-56c82aee646c old-net="?" new-net="fa:16:3e:5e:e8:25" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'UID="root" AUID="unset"
type=VIRT_RESOURCE msg=audit(1601342351.377:1193): pid=3478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:spc_t:s0 msg='virt=kvm resrc=mem reason=start vm="instance-0000000b" uuid=b8125abe-de3a-42fb-8d54-56c82aee646c old-mem=0 new-mem=16777216 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'UID="root" AUID="unset"
type=VIRT_RESOURCE msg=audit(1601342351.377:1194): pid=3478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:spc_t:s0 msg='virt=kvm resrc=vcpu reason=start vm="instance-0000000b" uuid=b8125abe-de3a-42fb-8d54-56c82aee646c old-vcpu=0 new-vcpu=4 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'UID="root" AUID="unset"
type=VIRT_CONTROL msg=audit(1601342351.377:1195): pid=3478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:spc_t:s0 msg='virt=kvm op=start reason=booted vm="instance-0000000b" uuid=b8125abe-de3a-42fb-8d54-56c82aee646c vm-pid=-1 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=failed'UID="root" AUID="unset"
type=ANOM_PROMISCUOUS msg=audit(1601342351.567:1196): dev=tap2eedbef4-10 prom=256 old_prom=0 auid=4294967295 uid=990 gid=1000 ses=4294967295AUID="unset" UID="openvswitch" GID="hugetlbfs"
type=AVC msg=audit(1601342351.577:1197): avc:  denied  { create } for  pid=1371 comm="ovs-vswitchd" scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:system_r:openvswitch_t:s0 tclass=netlink_netfilter_socket permissive=0
type=AVC msg=audit(1601342351.580:1198): avc:  denied  { create } for  pid=1371 comm="ovs-vswitchd" scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:system_r:openvswitch_t:s0 tclass=netlink_netfilter_socket permissive=0
type=AVC msg=audit(1601342351.580:1199): avc:  denied  { create } for  pid=1371 comm="ovs-vswitchd" scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:system_r:openvswitch_t:s0 tclass=netlink_netfilter_socket permissive=0
type=AVC msg=audit(1601342351.580:1200): avc:  denied  { create } for  pid=1371 comm="ovs-vswitchd" scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:system_r:openvswitch_t:s0 tclass=netlink_netfilter_socket permissive=0
type=AVC msg=audit(1601342351.580:1201): avc:  denied  { create } for  pid=1371 comm="ovs-vswitchd" scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:system_r:openvswitch_t:s0 tclass=netlink_netfilter_socket permissive=0
type=AVC msg=audit(1601342351.580:1202): avc:  denied  { create } for  pid=1371 comm="ovs-vswitchd" scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:system_r:openvswitch_t:s0 tclass=netlink_netfilter_socket permissive=0
type=AVC msg=audit(1601342351.580:1203): avc:  denied  { create } for  pid=1371 comm="ovs-vswitchd" scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:system_r:openvswitch_t:s0 tclass=netlink_netfilter_socket permissive=0
type=AVC msg=audit(1601342351.580:1204): avc:  denied  { create } for  pid=1371 comm="ovs-vswitchd" scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:system_r:openvswitch_t:s0 tclass=netlink_netfilter_socket permissive=0
type=AVC msg=audit(1601342351.580:1205): avc:  denied  { create } for  pid=1371 comm="ovs-vswitchd" scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:system_r:openvswitch_t:s0 tclass=netlink_netfilter_socket permissive=0
type=AVC msg=audit(1601342351.580:1206): avc:  denied  { create } for  pid=1371 comm="ovs-vswitchd" scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:system_r:openvswitch_t:s0 tclass=netlink_netfilter_socket permissive=0
type=AVC msg=audit(1601342351.580:1207): avc:  denied  { create } for  pid=1371 comm="ovs-vswitchd" scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:system_r:openvswitch_t:s0 tclass=netlink_netfilter_socket permissive=0
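The first AVC line above is the decisive one: libvirtd, confined as `svirt_t`, is denied `entrypoint` on `/usr/libexec/qemu-kvm`, which still carries the container image's `container_file_t` label. As a quick triage aid (a sketch added for illustration, not part of the original report), the denied permission and the source/target types can be pulled out of such a line with sed:

```shell
# Extract the denied permission and the source/target SELinux types from an
# AVC record, to make label mismatches like svirt_t -> container_file_t obvious.
avc='type=AVC msg=audit(1601342351.030:1174): avc:  denied  { entrypoint } for  pid=12576 comm="libvirtd" path="/usr/libexec/qemu-kvm" dev="overlay" ino=49914 scontext=system_u:system_r:svirt_t:s0:c91,c628 tcontext=system_u:object_r:container_file_t:s0:c96,c589 tclass=file permissive=0'
perm=$(echo "$avc"  | sed -n 's/.*denied  { \([^}]*\) }.*/\1/p')
stype=$(echo "$avc" | sed -n 's/.*scontext=[^:]*:[^:]*:\([^:]*\):.*/\1/p')
ttype=$(echo "$avc" | sed -n 's/.*tcontext=[^:]*:[^:]*:\([^:]*\):.*/\1/p')
echo "denied=$perm source=$stype target=$ttype"
# -> denied=entrypoint source=svirt_t target=container_file_t
```

On a live system the same records can be collected with `ausearch -m avc`, which is usually the first step before deciding whether a relabel or a policy change is needed.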


[root@overcloud-novacompute-0 log]# setenforce 0

And now the instance will start.

Comment 8 Julie Pichon 2020-09-29 10:19:57 UTC
Thank you for the update. It looks like updating podman alone isn't enough because the qemu labels are still wrong... The nova_libvirt container needs to be replaced; Cédric provided steps to resolve that part of the issue in bug 1866290, summarised in comment 30:

# dnf module disable -y container-tools:rhel8
# dnf module enable -y container-tools:2.0
# dnf upgrade -y podman
# systemctl disable --now tripleo_nova_libvirt
# podman rm nova_libvirt
# paunch apply --file /var/lib/tripleo-config/container-startup-config/step_3/nova_libvirt.json --config-id step_3
# systemctl enable tripleo_nova_libvirt

Does this help?

Comment 9 Julie Pichon 2020-10-02 10:53:54 UTC
Hi. Did replacing the container or redeploying, together with the updated podman version help to resolve the issue?

Comment 10 Kevin Jones 2020-10-05 13:56:50 UTC
Yes, this set of steps in comment #8 seems to allow an instance to launch.

Do you believe there is a patch coming to TripleO Heat Templates that addresses this issue by enabling the container-tools:2.0 stream at deployment time?

Comment 11 Julie Pichon 2020-10-05 14:36:32 UTC
Great to hear this resolved the issue!

Cedric, do you know the answer to the question in comment 10, is there any work ongoing upstream related to module streams enablement on CentOS 8? I'm only aware of https://review.opendev.org/#/c/745213/ which seems to focus on updates, not sure about first deployment? Thank you!

Comment 12 Kevin Jones 2020-10-05 15:28:31 UTC
I created a bash script [1] to execute against compute hosts for those that need to apply this workaround.

[1] https://raw.githubusercontent.com/kjw3/rdo-sbx/master/podman-update.sh
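For reference, a minimal per-host wrapper around the comment 8 steps might look like the following. This is a hedged sketch, not the contents of the linked script: the host list, the `DRY_RUN` knob, and the `run` helper are illustrative assumptions added here so the plan can be reviewed before anything is executed:

```shell
#!/bin/bash
# Apply the nova_libvirt workaround from comment 8 to each compute host.
# DRY_RUN=1 (the default) only prints the commands that would run.
set -euo pipefail

HOSTS="${HOSTS:-10.100.4.75 10.100.4.79}"   # compute nodes from the server list
DRY_RUN="${DRY_RUN:-1}"

run() {
  # In dry-run mode, echo the full command instead of executing it over ssh.
  if [ "$DRY_RUN" = "1" ]; then
    echo "ssh heat-admin@$1 sudo $2"
  else
    ssh "heat-admin@$1" sudo $2
  fi
}

for h in $HOSTS; do
  run "$h" "dnf module disable -y container-tools:rhel8"
  run "$h" "dnf module enable -y container-tools:2.0"
  run "$h" "dnf upgrade -y podman"
  run "$h" "systemctl disable --now tripleo_nova_libvirt"
  run "$h" "podman rm nova_libvirt"
  run "$h" "paunch apply --file /var/lib/tripleo-config/container-startup-config/step_3/nova_libvirt.json --config-id step_3"
  run "$h" "systemctl enable tripleo_nova_libvirt"
done
```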

Comment 13 Cédric Jeanneret 2020-10-06 05:07:55 UTC
Hello Julie, all,

I think Upgrades has something in the pipe for setting the correct stream at least at upgrade time (osp-16.0 -> osp-16.1 needs to switch the stream).
For the deploy itself, we're pushing lower (and, in some cases, upper) constraints on the podman version, for instance via these BZs:
- https://bugzilla.redhat.com/show_bug.cgi?id=1861777
- https://bugzilla.redhat.com/show_bug.cgi?id=1878189
- https://bugzilla.redhat.com/show_bug.cgi?id=1878187

While it won't activate the stream, it will at least prevent deploying with the wrong podman. The installation guide has notes about the stream change as well:
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/director_installation_and_usage/preparing-for-director-installation#enabling-repositories-for-the-undercloud
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/keeping_red_hat_openstack_platform_updated/preparing-for-a-minor-update#setting-the-container-tools-module-version

@Kevin: there's a playbook available in the Knowledge Base as well: https://access.redhat.com/solutions/5297991

Hope this helps.

Cheers,

C.

Comment 14 Julie Pichon 2020-10-06 08:22:31 UTC
Closing this based on comment 10.

Thanks for the links, Cédric. It sounds like the upstream docs should be updated as well to explain how/when to set up the modules on CentOS 8? All of these docs and the KB entry are downstream, and the bz related to the version checks also seems to be a downstream-only fix, since it'll use the rhosp-release package if I understand bug 1878187 correctly.