Bug 1893291

Summary: SELinux is blocking instances from being launched through Libvirt/QEMU-KVM in RHOSP 16.1 All-In-One default deployment

Product: Red Hat OpenStack
Component: documentation
Version: 16.1 (Train)
Hardware: x86_64
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: low
Priority: low
Reporter: Alexon Oliveira <alolivei>
Assignee: Julie Pichon <jpichon>
QA Contact: RHOS Documentation Team <rhos-docs>
CC: alolivei, amoralej, cjeanner, jjoyce, jpichon, lhh
Keywords: Reopened
Flags: alolivei: automate_bug+, alolivei: internal-review+, alolivei: needinfo+
Target Milestone: ---
Target Release: ---
Type: Bug
Last Closed: 2021-02-24 14:32:45 UTC

Description Alexon Oliveira 2020-10-30 16:37:41 UTC
Description of problem:

On a fresh RHOSP 16.1 All-In-One default deployment (tech preview), no instance can be launched. After trying to spawn a new instance, you'll get an error message like the one below after running "openstack server show myserver -c fault":

{'code': 500, 'created': '2020-10-30T15:07:40Z', 'message': 'Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance b046a5b9-9aac-4503-ad29-9b842c6159d9.', 'details': 'Traceback (most recent call last):\n  File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 651, in build_instances\n    raise exception.MaxRetriesExceeded(reason=msg)\nnova.exception.MaxRetriesExceeded: Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance b046a5b9-9aac-4503-ad29-9b842c6159d9.\n'}
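
To trace the underlying failure behind this generic "retries exceeded" fault, one option (a sketch; the path assumes the default containerized RHOSP log layout) is to search the nova-compute log on the host:

# grep -iE 'permission denied|libvirt' /var/log/containers/nova/nova-compute.log | tail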

Version-Release number of selected component (if applicable):
16.1

How reproducible:
Always

Steps to Reproduce:
1. Deploy the all-in-one solution with default options as per the documentation
2. Create the necessary resources before trying to create an instance
3. Try to spawn an instance

Actual results:
Fails

Expected results:
Succeeds

Additional info:

The instance log shows the following:

# tail /var/log/libvirt/qemu/instance-00000001.log 

-chardev pty,id=charserial0,logfile=/dev/fdset/3,logappend=on \
-device isa-serial,chardev=charserial0,id=serial0 \
-device usb-tablet,id=input0,bus=usb.0,port=1 \
-vnc 192.168.1.8:0 \
-device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
libvirt:  error : cannot execute binary /usr/libexec/qemu-kvm: Permission denied
2020-10-30 15:07:12.392+0000: shutting down, reason=failed

Audit reports the following:

# aureport -a

AVC Report
===============================================================
# date time comm subj syscall class permission obj result event
===============================================================
1. 28/10/2020 16:50:29 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 72
2. 28/10/2020 17:12:17 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 275
3. 28/10/2020 17:12:43 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 276
4. 28/10/2020 17:14:05 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 306
5. 28/10/2020 17:14:10 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 307
6. 28/10/2020 17:14:37 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 309
7. 29/10/2020 18:55:23 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 150
8. 29/10/2020 18:56:39 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 168
9. 29/10/2020 18:56:51 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 171
10. 29/10/2020 18:56:57 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 172
11. 29/10/2020 18:57:02 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 175
12. 29/10/2020 22:10:15 rhsmcertd-worke system_u:system_r:rhsmcertd_t:s0 0 file read system_u:object_r:root_t:s0 denied 8720
13. 29/10/2020 22:10:15 rhsmcertd-worke system_u:system_r:rhsmcertd_t:s0 0 file read system_u:object_r:root_t:s0 denied 8721
14. 30/10/2020 02:10:22 rhsmcertd-worke system_u:system_r:rhsmcertd_t:s0 0 file read system_u:object_r:root_t:s0 denied 25322
15. 30/10/2020 02:10:22 rhsmcertd-worke system_u:system_r:rhsmcertd_t:s0 0 file read system_u:object_r:root_t:s0 denied 25323
16. 30/10/2020 06:10:21 rhsmcertd-worke system_u:system_r:rhsmcertd_t:s0 0 file read system_u:object_r:root_t:s0 denied 41846
17. 30/10/2020 06:10:21 rhsmcertd-worke system_u:system_r:rhsmcertd_t:s0 0 file read system_u:object_r:root_t:s0 denied 41847
18. 30/10/2020 10:10:21 rhsmcertd-worke system_u:system_r:rhsmcertd_t:s0 0 file read system_u:object_r:root_t:s0 denied 58388
19. 30/10/2020 10:10:21 rhsmcertd-worke system_u:system_r:rhsmcertd_t:s0 0 file read system_u:object_r:root_t:s0 denied 58389
20. 30/10/2020 14:10:23 rhsmcertd-worke system_u:system_r:rhsmcertd_t:s0 0 file read system_u:object_r:root_t:s0 denied 74916
21. 30/10/2020 14:10:23 rhsmcertd-worke system_u:system_r:rhsmcertd_t:s0 0 file read system_u:object_r:root_t:s0 denied 74917
22. 30/10/2020 15:07:12 libvirtd system_u:system_r:svirt_t:s0:c133,c610 0 file entrypoint system_u:object_r:container_file_t:s0:c266,c964 denied 80182

SELinux reports the following:

# ausearch -a 80182
----
time->Fri Oct 30 15:07:12 2020
type=AVC msg=audit(1604070432.085:80182): avc:  denied  { entrypoint } for  pid=1027698 comm="libvirtd" path="/usr/libexec/qemu-kvm" dev="overlay" ino=1365133 scontext=system_u:system_r:svirt_t:s0:c133,c610 tcontext=system_u:object_r:container_file_t:s0:c266,c964 tclass=file permissive=0
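
For reference, a human-readable interpretation of this event can be obtained with audit2why from policycoreutils (a sketch, using the event ID reported above):

# ausearch -a 80182 | audit2why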

SELinux ends up enabled even after disabling it manually before deployment; the default deployment enables it again:

# getenforce 
Enforcing
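
For reference, the runtime mode and the persistent boot-time setting can be checked separately with standard SELinux tooling:

# getenforce
# grep ^SELINUX= /etc/selinux/config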

These are the current package versions:

# rpm -qa | grep -i openstack
openstack-tripleo-common-containers-11.3.3-0.20200611110657.f7715be.el8ost.noarch
openstack-tripleo-heat-templates-11.3.2-0.20200616081539.396affd.el8ost.noarch
python3-openstacksdk-0.36.3-0.20200424135113.c07350e.el8ost.noarch
python-openstackclient-lang-4.0.0-0.20200310193636.aa64eb6.el8ost.noarch
puppet-openstacklib-15.4.1-0.20200403203429.5fdf43c.el8ost.noarch
openstack-heat-agents-1.10.1-0.20200311091123.96b819c.el8ost.noarch
openstack-heat-monolith-13.0.2-0.20200529053437.33972cc.el8ost.noarch
openstack-selinux-0.8.20-0.20200428133425.3300746.el8ost.noarch
openstack-heat-common-13.0.2-0.20200529053437.33972cc.el8ost.noarch
puppet-openstack_extras-15.4.1-0.20200528113453.371931c.el8ost.noarch
ansible-role-openstack-operations-0.0.1-0.20200311080930.274739e.el8ost.noarch
openstack-ironic-python-agent-builder-2.0.1-0.20200608173428.cb415ef.el8ost.noarch
python3-openstackclient-4.0.0-0.20200310193636.aa64eb6.el8ost.noarch
openstack-tripleo-common-11.3.3-0.20200611110657.f7715be.el8ost.noarch
openstack-heat-engine-13.0.2-0.20200529053437.33972cc.el8ost.noarch
openstack-tripleo-puppet-elements-11.2.2-0.20200527003426.226ce95.el8ost.noarch
openstack-heat-api-13.0.2-0.20200529053437.33972cc.el8ost.noarch
openstack-tripleo-validations-11.3.2-0.20200611115253.08f469d.el8ost.noarch
openstack-tripleo-image-elements-10.6.2-0.20200528043425.7dc0fa1.el8ost.noarch

Comment 2 Lon Hohberger 2020-11-03 20:56:24 UTC
This looks like a duplicate of bug 1846364

Comment 3 Jason Joyce 2020-11-04 13:17:35 UTC

*** This bug has been marked as a duplicate of bug 1846364 ***

Comment 6 Julie Pichon 2020-11-10 10:34:41 UTC
Hi Alexon, that podman version doesn't look correct - it should be podman-1.6.4-15 at least. Is the container-tools:2.0 module stream enabled?

I think bug 1866290 may be a clearer duplicate for this. Comment 30 on that bug summarises the steps required to get to the correct podman, as well as the steps to follow after that to restart the libvirt containers. Note that just upgrading podman is not sufficient, as the labels would still be incorrect.

The knowledge base article at https://access.redhat.com/solutions/5297991 also has a playbook to run these commands across all compute nodes more conveniently. Does this help with resolving the issue?
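
For anyone checking their own node, a quick verification sketch (standard RHEL 8 tooling):

$ rpm -q podman
$ sudo dnf module list container-tools

The first should report at least podman-1.6.4-15, and the second should show the 2.0 stream marked [e]nabled.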

Comment 7 Alexon Oliveira 2020-11-10 14:52:14 UTC
Hi Julie,

Just to be clear, I'm working on the RHOSP 16.1 All-In-One installation, following the official documentation:

Creating an all-in-one OpenStack cloud for test and proof-of-concept environments:
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/quick_start_guide/index

The official documentation doesn't require enabling the container-tools:2.0 module stream. The only package-related requirements are:

===
[stack@all-in-one]$ sudo dnf install -y dnf-utils
[stack@all-in-one]$ sudo subscription-manager repos --disable=*
[stack@all-in-one]$ sudo subscription-manager repos \
--enable=rhel-8-for-x86_64-baseos-eus-rpms \
--enable=rhel-8-for-x86_64-appstream-eus-rpms \
--enable=rhel-8-for-x86_64-highavailability-eus-rpms \
--enable=ansible-2.9-for-rhel-8-x86_64-rpms \
--enable=openstack-16.1-for-rhel-8-x86_64-rpms \
--enable=fast-datapath-for-rhel-8-x86_64-rpms \
--enable=rhel-8-for-x86_64-highavailability-rpms

[stack@all-in-one]$ sudo dnf install -y python3-tripleoclient
===

As you can see below, I have the same repositories:

===
$ sudo dnf repolist

repo id                                        repo name
ansible-2.9-for-rhel-8-x86_64-rpms             Red Hat Ansible Engine 2.9 for RHEL 8 x86_64 (RPMs)
fast-datapath-for-rhel-8-x86_64-rpms           Fast Datapath for RHEL 8 x86_64 (RPMs)
openstack-16.1-for-rhel-8-x86_64-rpms          Red Hat OpenStack Platform 16.1 for RHEL 8 x86_64 (RPMs)
rhel-8-for-x86_64-appstream-eus-rpms           Red Hat Enterprise Linux 8 for x86_64 - AppStream - Extended Update Support (RPMs)
rhel-8-for-x86_64-baseos-eus-rpms              Red Hat Enterprise Linux 8 for x86_64 - BaseOS - Extended Update Support (RPMs)
rhel-8-for-x86_64-highavailability-eus-rpms    Red Hat Enterprise Linux 8 for x86_64 - High Availability - Extended Update Support (RPMs)
rhel-8-for-x86_64-highavailability-rpms        Red Hat Enterprise Linux 8 for x86_64 - High Availability (RPMs)
satellite-tools-6.7-for-rhel-8-x86_64-rpms     Red Hat Satellite Tools 6.7 for RHEL 8 x86_64 (RPMs)
===


Also, it specifies:

===
If you want to use the all-in-one Red Hat OpenStack Platform installation in a virtual environment, you must define the virtualization type with the StandaloneExtraConfig parameter:

StandaloneExtraConfig:
  NovaComputeLibvirtType: qemu
===

The problem is I can't just change everything the KCS recommends, because it's an "all-in-one" deployment. Even if I reboot it, it fails to come back up and run again, and I need to redeploy everything.

If I disable SELinux after deployment, it works fine, no issues. But the way I see it, if the official documentation doesn't mention that the container-tools:2.0 module stream should be enabled prior to the installation, that's a flaw. Also, the procedure described in that other BZ doesn't fully apply here because it's an "all-in-one" deployment, so we should have a different procedure for this environment.
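
For reference, the temporary workaround described above amounts to the following; it switches to permissive mode until the next reboot and is a triage step only, not a fix:

$ sudo setenforce 0
$ getenforce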

Comment 9 Julie Pichon 2020-11-10 15:26:15 UTC
I agree, it seems like the documentation needs to be updated with the information about the module streams. Someone left a comment on the knowledge base article about how to adjust the paunch command for a standalone deployment, which might help, though it may still need adjusting as it seems related to a different deployment method.

Cedric, I'm wondering if you would have an idea on how to adjust the libvirt/SELinux/podman workaround so that it can be applied in a standalone environment? (cf. comment 7 where the current one we documented in a KB doesn't work in this context.)
Also, to confirm before we update the docs: is it correct that enabling the correct streams after configuring the subscriptions will mean we get the right podman set up for the libvirt containers too?
Thank you!

I will move this to the documentation component.

Comment 10 Cédric Jeanneret 2020-11-10 16:00:49 UTC
Hello,

A couple of things/thoughts/details:
- iirc there actually IS a mention of "enable container-tools:2.0" somewhere in the official documentation - but apparently it's done only for upgrades, updates and LEAPP[1]

- there is a bunch of changes coming in z3 (or already in z2 - not sure anymore now) actually pinning the podman version in different fashion, being either within ansible directly, or with package dependencies

- though, with modules and streams, we can't make a dependency on a module stream (i.e. we can't make python3-tripleoclient depend on container-tools:2.0 being active)

- the version pinning as currently implemented will NOT crash "dnf install -y python3-tripleoclient" - though there's also work in progress on that part[2][3]

- ensuring we get the correct stream before deploying will, of course, make the whole thing work, since podman will be installed at a correct version. Therefore, adding a hint like "ensure you enable the following streams before installing osp-16.1" is a must (for instance, there's also the "virt" module, which apparently must go from virt:rhel to virt:8.2).

Does it answer the questions, Julie?

Cheers,

C.



[1] https://access.redhat.com/search/#/?q=container-tools:2.0&p=1&sort=relevant&scoped&documentKind=Documentation - for instance https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/framework_for_upgrades_13_to_16.1/configuring-the-overcloud-for-a-leapp-upgrade#creating-an-upgrades-environment-file-overcloud-leapp
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1878189
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1861777

Comment 11 Julie Pichon 2020-11-10 16:10:48 UTC
I think it does! Thank you. It sounds like the solution is to update the document Alexon linked to in comment 7 with the following step, to be run after enabling the repos/subscriptions (aka in section 3, between step 8 (subscription manager) and 9 (install python-tripleoclient)):


# dnf module disable -y container-tools:rhel8
# dnf module enable -y container-tools:2.0
# dnf module disable -y virt:rhel
# dnf module enable -y virt:8.2

This should prevent the problem from occurring.
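
A quick way to confirm the streams took effect before installing python3-tripleoclient (a verification sketch using the same dnf tooling):

# dnf module list --enabled | grep -E 'container-tools|virt'

This should show container-tools 2.0 and virt 8.2 marked [e]nabled.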

Comment 12 Alexon Oliveira 2020-12-09 15:03:09 UTC
Julie/Cédric,

As you can see from the official documentation now [1], I managed to get Chapter 3, Step 9 updated with your recommendation. But the problem remains: I redeployed the whole environment with this new configuration, and it didn't work.

Do you have any other recommendation so I can test it?

[1] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/quick_start_guide/installing-the-all-in-one-openstack-environment

Comment 14 Julie Pichon 2020-12-09 16:29:31 UTC
Hi Alexon,

Great to see the documentation updated! I will leave the needinfo on Cedric as he is more familiar with the details of the problem and might see something I'm missing, but I am wondering from your "redeploy" comment whether the same existing environment was reused, as opposed to creating a new deployment from scratch?

If this is a redeploy, the SELinux labels are likely still wrong. https://bugzilla.redhat.com/show_bug.cgi?id=1866290#c30 and related KB article have a couple of additional steps for how to restart the libvirt container so that it uses the right labels... Could this help?

If not, can you attach the permissive logs for SELinux again to see if it's a new error as well as confirm the podman version? Thank you.

Comment 15 Alexon Oliveira 2020-12-09 21:30:31 UTC
Julie/Cédric,

Just in case, I've just finished a fresh install, from scratch, on a brand new VM, following all your recommendations as well as the quoted BZ and KCS. Check it out:

===
[root@openstack16 ~]# hostnamectl 

   Static hostname: openstack16.example.local
         Icon name: computer-vm
           Chassis: vm
        Machine ID: f85aa0aaf6894ab792bd555197a13152
           Boot ID: 7c99b506f0c9434a95430de0ceebcd6a
    Virtualization: kvm
  Operating System: Red Hat Enterprise Linux 8.2 (Ootpa)
       CPE OS Name: cpe:/o:redhat:enterprise_linux:8.2:GA
            Kernel: Linux 4.18.0-193.29.1.el8_2.x86_64
      Architecture: x86-64


[root@openstack16 ~]# getenforce

Enforcing

[root@openstack16 ~]# rpm -qa | grep -i openstack

openstack-tripleo-common-containers-11.4.1-1.20200914165651.el8ost.noarch
python3-openstackclient-4.0.1-1.20200817092223.bff556c.el8ost.noarch
openstack-selinux-0.8.24-1.20200914163011.26243bf.el8ost.noarch
openstack-tripleo-validations-11.3.2-1.20200914170825.el8ost.noarch
python3-openstacksdk-0.36.4-0.20200715054250.76d3b29.el8ost.noarch
python-openstackclient-lang-4.0.1-1.20200817092223.bff556c.el8ost.noarch
puppet-openstacklib-15.4.1-0.20200403203429.5fdf43c.el8ost.noarch
openstack-heat-api-13.0.3-1.20200914171254.48b730a.el8ost.noarch
openstack-ironic-python-agent-builder-2.1.1-1.20200914175356.65d0f80.el8ost.noarch
openstack-tripleo-heat-templates-11.3.2-1.20200914170156.el8ost.noarch
openstack-heat-engine-13.0.3-1.20200914171254.48b730a.el8ost.noarch
openstack-tripleo-puppet-elements-11.2.2-0.20200701163410.432518a.el8ost.noarch
ansible-role-openstack-operations-0.0.1-0.20200311080930.274739e.el8ost.noarch
openstack-heat-monolith-13.0.3-1.20200914171254.48b730a.el8ost.noarch
puppet-openstack_extras-15.4.1-0.20200528113453.371931c.el8ost.noarch
openstack-heat-common-13.0.3-1.20200914171254.48b730a.el8ost.noarch
openstack-tripleo-image-elements-10.6.2-0.20200528043425.7dc0fa1.el8ost.noarch
openstack-heat-agents-1.10.1-0.20200311091123.96b819c.el8ost.noarch
openstack-tripleo-common-11.4.1-1.20200914165651.el8ost.noarch

[root@openstack16 ~]# rpm -qa | grep podman

podman-1.6.4-12.module+el8.2.0+6669+dde598ec.x86_64


(admin_overcloud) [stack@openstack16 ~]$ openstack server list

+--------------------------------------+-----------+--------+----------+---------+--------+
| ID                                   | Name      | Status | Networks | Image   | Flavor |
+--------------------------------------+-----------+--------+----------+---------+--------+
| 50635c9c-6a14-47f8-b47e-acc408277890 | myserver2 | ERROR  |          | cirros4 | tiny   |
| ba580948-51ee-4957-baf4-aae5cf924ac9 | myserver1 | ERROR  |          | cirros5 | tiny   |
+--------------------------------------+-----------+--------+----------+---------+--------+

[root@openstack16 ~]# aureport -a

AVC Report
===============================================================
# date time comm subj syscall class permission obj result event
===============================================================
1. 01/12/2020 18:27:01 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 248
2. 01/12/2020 18:46:14 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 326
3. 01/12/2020 18:46:43 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 329
4. 01/12/2020 18:48:16 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 361
5. 01/12/2020 18:48:22 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 362
6. 01/12/2020 18:48:53 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 364
7. 09/12/2020 18:47:18 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 330
8. 09/12/2020 18:47:27 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 333
9. 09/12/2020 18:47:34 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 334
10. 09/12/2020 18:47:38 ? system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 0 (null) (null) (null) unset 337
11. 09/12/2020 18:59:55 rhsmcertd-worke system_u:system_r:rhsmcertd_t:s0 0 file read system_u:object_r:root_t:s0 denied 380
12. 09/12/2020 18:59:55 rhsmcertd-worke system_u:system_r:rhsmcertd_t:s0 0 file read system_u:object_r:root_t:s0 denied 381


[root@openstack16 ~]# ausearch -a 380
----
time->Tue Dec  1 18:30:56 2020
type=SERVICE_START msg=audit(1606847456.995:380): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=user-runtime-dir@0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
----
time->Tue Dec  1 18:57:54 2020
type=SOFTWARE_UPDATE msg=audit(1606849074.991:380): pid=5310 uid=0 auid=1001 ses=3 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=install sw="python3-pbr-5.1.2-2.el8ost.noarch" sw_type=rpm key_enforce=0 gpg_res=1 root_dir="/" comm="dnf" exe="/usr/libexec/platform-python3.6" hostname=openstack16.example.local addr=? terminal=pts/0 res=success'
----
time->Wed Dec  9 18:59:55 2020
type=AVC msg=audit(1607540395.240:380): avc:  denied  { read } for  pid=20402 comm="rhsmcertd-worke" name="satellite-5-client.module" dev="dm-0" ino=135 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=file permissive=0

[root@openstack16 ~]# ausearch -a 381
----
time->Tue Dec  1 18:30:56 2020
type=SERVICE_STOP msg=audit(1606847456.995:381): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=user-runtime-dir@0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
----
time->Tue Dec  1 18:57:54 2020
type=SOFTWARE_UPDATE msg=audit(1606849074.991:381): pid=5310 uid=0 auid=1001 ses=3 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=install sw="python3-pyyaml-3.12-12.el8.x86_64" sw_type=rpm key_enforce=0 gpg_res=1 root_dir="/" comm="dnf" exe="/usr/libexec/platform-python3.6" hostname=openstack16.example.local addr=? terminal=pts/0 res=success'
----
time->Wed Dec  9 18:59:55 2020
type=AVC msg=audit(1607540395.240:381): avc:  denied  { read } for  pid=20402 comm="rhsmcertd-worke" name="virt.module" dev="dm-0" ino=136 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=file permissive=0
===

Also, after finishing the deployment, I ran the recommended playbook as follows (PS: I've commented out the modules and updates parts because I already did them prior to the deployment):

==
---
- hosts: localhost
  gather_facts: false
  become: true
  tasks:
#    - name: disable virt:rhel
#      command: dnf -y module disable virt:rhel

#    - name: enable virt:8.2
#      command: dnf -y module enable virt:8.2

#    - name: disable container-tools:rhel8
#      command: dnf -y module disable container-tools:rhel8

#    - name: enable container-tools:2.0
#      command: dnf -y module enable container-tools:2.0

#    - name: update remaining packages
#      dnf:
#        name: '*'
#        state: latest
#        exclude: ansible

    - name: stop nova libvirt
      service:
        name: tripleo_nova_libvirt
        state: stopped
        enabled: no

    - name: trash libvirt container
      command: podman rm nova_libvirt

    - name: recreate nova container with proper selinux settings
      command: paunch apply --file /var/lib/tripleo-config/container-startup-config/step_3/nova_libvirt.json --config-id tripleo_step3 --managed-by tripleo-Standalone

    - name: start nova libvirt
      service:
        name: tripleo_nova_libvirt
        state: started
        enabled: yes

    - name: make selinux great again
      selinux:
        policy: targeted
        state: enforcing
==

When I did it, the Keystone and Horizon containers stopped working and were deleted. I needed to bring them up manually like this:

===
paunch apply --file /var/lib/tripleo-config/container-startup-config/step_3/keystone.json --config-id tripleo_step3 --managed-by tripleo-Standalone
paunch apply --file /var/lib/tripleo-config/container-startup-config/step_3/horizon.json --config-id tripleo_step3 --managed-by tripleo-Standalone
===

The problem is, when I start the Keystone container, the Horizon container stops working. If I start the Horizon container, the Keystone container stops working (facepalm).

Also, if I reboot the VM, nothing works anymore and I need to reinstall everything again.

I don't know what else could be done. What do y'all have in mind?

Comment 16 Cédric Jeanneret 2020-12-10 06:45:11 UTC
Hello,

Some comments inline.

(In reply to Alexon Oliveira from comment #15)
> Julie/Cédric,
> 
> Just in case, I've just finished a fresh install, from scratch, on a brand new
> VM, following all your recommendations as well as the quoted BZ and KCS.
> Check it out:
> 
> [...]
> [root@openstack16 ~]# rpm -qa | grep podman
> 
> podman-1.6.4-12.module+el8.2.0+6669+dde598ec.x86_64

This is the wrong podman version....... No wonder it's not working then.

> 
> [...]
> 
> Also, after finishing the deployment, I ran the recommended playbook as
> follows (PS: I've commented out the modules and updates parts because I
> already did them prior to the deployment):


Did you REALLY switch the module streams? Since you have the -12, I'm pretty sure you didn't do it...
Though I thought there were some hard limitations/dependencies on the exact podman version that should have prevented the deploy... Adding a needinfo on Lon.

> 
> [...]
> 
> When I did it, the Keystone and Horizon containers stopped working and were
> deleted. I needed to bring them up manually like this:

There is no reason Horizon or Keystone should stop working when you trash the nova_libvirt container. At least with the right podman version... But since you have an unsupported one, there might be some tricks there.

> 
> [...]
> 
> The problem is, when I start the Keystone container, the Horizon
> container stops working. If I start the Horizon container, the Keystone
> container stops working (facepalm).

This might also be due to a configuration you've passed, with services trying to hit the same port/device at some point. It seems unrelated to the whole SELinux story, unless you can point to denials. Still: the wrong podman version can't do anything good for overall stability.

> [...]

Comment 17 Julie Pichon 2020-12-10 13:16:50 UTC
The podman version is still wrong indeed (minimum is podman-1.6.4-15).

Additionally the SELinux denials look new and are related to rhsmcertd_t this time? Could there be a problem even earlier in the process when setting up the modules and subscription manager? This may explain why the virt module stream didn't show up either:

type=AVC msg=audit(1607540395.240:381): avc:  denied  { read } for  pid=20402 comm="rhsmcertd-worke" name="virt.module" dev="dm-0" ino=136 scontext=system_u:system_r:rhsmcertd_t:s0 tcontext=system_u:object_r:root_t:s0 tclass=file permissive=0

This looks like the virt module configuration file couldn't be read? I haven't seen this before. Was there any error when setting up the subscriptions and modules otherwise?

Comment 18 Julie Pichon 2020-12-10 13:43:24 UTC
The *.module read errors look a lot like bug 1775975, which indicates that the label for those files should be etc_t, not root_t as in the error message.

# restorecon -Rv /etc/dnf/modules.d

should resolve this. This can be confirmed after with `ls -alZ /etc/dnf/modules.d`.

I think the advanced-virt-for-rhel-8-x86_64-rpms repo is missing from the list of repos though; that's where virt:8.2 comes from. But I don't think this should impact the podman version issue, as that comes from the container-tools:2.0 stream.

Comment 19 Alexon Oliveira 2020-12-15 18:58:50 UTC
(In reply to Cédric Jeanneret from comment #16)
> Hello,
> 
> Some comments inline.
> 
> (In reply to Alexon Oliveira from comment #15)
> > [...]
> > 
> > [root@openstack16 ~]# rpm -qa | grep podman
> > 
> > podman-1.6.4-12.module+el8.2.0+6669+dde598ec.x86_64
> 
> This is the wrong podman version....... No wonder it's not working then.
> 

You're right. The playbook I ran, for some reason, didn't work and didn't change the container-tools version.

> > 
> > [...]
> > 
> > Also, after finishing the deployment, I ran the recommended playbook as
> > follows (PS: I've commented out the modules and updates parts because I
> > already did them prior to the deployment):
> 
> 
> Did you REALLY switch the module streams? Since you have the -12, I'm pretty
> sure you didn't do it...
> Though I thought there were some hard limitations/dependencies on the exact
> podman version that should have prevented the deploy... Adding a needinfo
> on Lon.

I thought I did, but I didn't. I fixed it later.

> 
> > [...]
> > 
> > When I did it, the Keystone and Horizon containers stopped working and were
> > deleted. I needed to bring them up manually like this:
> 
> There is no reason Horizon or Keystone should stop working when you trash
> the nova_libvirt container. At least with the right podman version... But
> since you have an unsupported one, there might be some tricks there.
> 

I agree with that, but for some reason that was happening then. Dunno why.

> > [...]
> > 
> > The problem is, when I start the Keystone container, the Horizon
> > container stops working. If I start the Horizon container, the Keystone
> > container stops working (facepalm).
> 
> This might also be due to a configuration you've passed, with services trying
> to hit the same port/device at some point. It seems unrelated to the whole
> SELinux story, unless you can point to denials. Still: the wrong podman
> version can't do anything good for overall stability.
> 
> > 
> > Also, if I reboot the VM, nothing works anymore and I need to reinstall
> > everything again.
> > 
> > I don't know what else could be done. What do y'all have in mind?

I managed to correct all of this later.

Comment 20 Alexon Oliveira 2020-12-15 19:00:55 UTC
(In reply to Julie Pichon from comment #17)
> The podman version is still wrong indeed (minimum is podman-1.6.4-15).
> 
> Additionally the SELinux denials look new and are related to rhsmcertd_t
> this time? Could there be a problem even earlier in the process when setting
> up the modules and subscription manager? This may explain why the virt
> module stream didn't show up either:
> 
> type=AVC msg=audit(1607540395.240:381): avc:  denied  { read } for 
> pid=20402 comm="rhsmcertd-worke" name="virt.module" dev="dm-0" ino=136
> scontext=system_u:system_r:rhsmcertd_t:s0
> tcontext=system_u:object_r:root_t:s0 tclass=file permissive=0

I don't know why this happened, but I'm sure I've set the right repositories. The problem is the playbook I ran didn't change the modules to the right version. That could be the cause (?).
> 
> This looks like the virt module configuration file couldn't be read? I
> haven't seen this before. Was there any error when setting up the
> subscriptions and modules otherwise?

See above comment.

Comment 21 Alexon Oliveira 2020-12-15 19:03:33 UTC
(In reply to Julie Pichon from comment #18)
> The *.module read errors look a lot like bug 1775975 which indicate that the
> label for those files should be etc_t, not root_t like in the error message. 
> 
> # restorecon -Rv /etc/dnf/modules.d
> 
> should resolve this. This can be confirmed after with `ls -alZ
> /etc/dnf/modules.d`.

Julie, this one is the golden trick: I used it later to fix the SELinux context permissions, and that solved the problem.

> 
> I think the advanced-virt-for-rhel-8-x86_64-rpms is missing from the list of
> repos though, that's where virt:8.2 comes from... But I don't think this
> should impact the podman version issue, as this comes from the
> container-tools:2.0 stream.

You're right, but the official documentation doesn't mention it. I'll ask them to update the documentation with this information, otherwise it won't be possible to use the correct module version.

Comment 22 Alexon Oliveira 2020-12-15 19:16:32 UTC
Julie/Cédric,

Good news! After all your tips and recommendations, I finally managed to deploy the All-In-One correctly (despite the official documentation being wrong about these steps), and the SELinux issue was solved. Check it out:

+++

$ openstack server list

+--------------------------------------+-----------+--------+-------------------------------------+---------+--------+
| ID                                   | Name      | Status | Networks                            | Image   | Flavor |
+--------------------------------------+-----------+--------+-------------------------------------+---------+--------+
| c2533014-7f3a-45f8-a97e-d09b67fa65da | myserver2 | ACTIVE | private=192.168.4.77, 192.168.1.113 | cirros4 | tiny   |
| 333a8678-7369-4e1e-956b-c1b281814e0d | myserver1 | ACTIVE | private=192.168.4.56, 192.168.1.112 | cirros5 | tiny   |
+--------------------------------------+-----------+--------+-------------------------------------+---------+--------+

$ getenforce 

Enforcing

$ hostnamectl 

   Static hostname: openstack16.example.local
         Icon name: computer-vm
           Chassis: vm
        Machine ID: f85aa0aaf6894ab792bd555197a13152
           Boot ID: f28e812e50714cbea29a85e64b1662a5
    Virtualization: kvm
  Operating System: Red Hat Enterprise Linux 8.2 (Ootpa)
       CPE OS Name: cpe:/o:redhat:enterprise_linux:8.2:GA
            Kernel: Linux 4.18.0-193.29.1.el8_2.x86_64
      Architecture: x86-64

$ sudo dnf repolist

repo id                                             repo name
advanced-virt-for-rhel-8-x86_64-rpms                Advanced Virtualization for RHEL 8 x86_64 (RPMs)
ansible-2.9-for-rhel-8-x86_64-rpms                  Red Hat Ansible Engine 2.9 for RHEL 8 x86_64 (RPMs)
fast-datapath-for-rhel-8-x86_64-rpms                Fast Datapath for RHEL 8 x86_64 (RPMs)
openstack-16.1-for-rhel-8-x86_64-rpms               Red Hat OpenStack Platform 16.1 for RHEL 8 x86_64 (RPMs)
rhel-8-for-x86_64-appstream-eus-rpms                Red Hat Enterprise Linux 8 for x86_64 - AppStream - Extended Update Support (RPMs)
rhel-8-for-x86_64-baseos-eus-rpms                   Red Hat Enterprise Linux 8 for x86_64 - BaseOS - Extended Update Support (RPMs)
rhel-8-for-x86_64-highavailability-eus-rpms         Red Hat Enterprise Linux 8 for x86_64 - High Availability - Extended Update Support (RPMs)
rhel-8-for-x86_64-highavailability-rpms             Red Hat Enterprise Linux 8 for x86_64 - High Availability (RPMs)

$ sudo dnf module list --enabled

Updating Subscription Management repositories.

Fast Datapath for RHEL 8 x86_64 (RPMs)                                                                                 27 kB/s | 2.4 kB     00:00    
Red Hat Enterprise Linux 8 for x86_64 - BaseOS - Extended Update Support (RPMs)                                        22 kB/s | 2.4 kB     00:00    
Red Hat OpenStack Platform 16.1 for RHEL 8 x86_64 (RPMs)                                                               36 kB/s | 2.4 kB     00:00    
Advanced Virtualization for RHEL 8 x86_64 (RPMs)                                                                       26 kB/s | 2.8 kB     00:00    
Red Hat Enterprise Linux 8 for x86_64 - High Availability (RPMs)                                                       27 kB/s | 2.4 kB     00:00    
Red Hat Enterprise Linux 8 for x86_64 - High Availability - Extended Update Support (RPMs)                             33 kB/s | 2.4 kB     00:00    
Red Hat Ansible Engine 2.9 for RHEL 8 x86_64 (RPMs)                                                                    36 kB/s | 2.4 kB     00:00    
Red Hat Enterprise Linux 8 for x86_64 - AppStream - Extended Update Support (RPMs)                                     39 kB/s | 2.8 kB     00:00    

Advanced Virtualization for RHEL 8 x86_64 (RPMs)
Name                          Stream               Profiles                             Summary                                                       
virt                          8.2 [e]              common                               Virtualization module                                         

Red Hat Enterprise Linux 8 for x86_64 - AppStream - Extended Update Support (RPMs)
Name                          Stream               Profiles                             Summary                                                       
container-tools               2.0 [e]              common [d]                           Common tools and dependencies for container runtimes          
httpd                         2.4 [d][e]           common [d], devel, minimal           Apache HTTP Server                                            
python36                      3.6 [d][e]           build, common [d]                    Python programming language, version 3.6                      
ruby                          2.5 [d][e]           common [d]                           An interpreter of object-oriented scripting language          
satellite-5-client            1.0 [d][e]           common [d], gui                      Red Hat Satellite 5 client packages                           

Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled

$ sudo ls -alZ /etc/dnf/modules.d

total 24
drwxr-xr-x. 2 root root system_u:object_r:etc_t:s0     150 dez 14 20:34 .
drwxr-xr-x. 8 root root system_u:object_r:etc_t:s0     128 mai  7  2020 ..
-rw-r--r--. 1 root root unconfined_u:object_r:etc_t:s0  74 dez 11 19:10 container-tools.module
-rw-r--r--. 1 root root unconfined_u:object_r:etc_t:s0  54 dez 14 20:34 httpd.module
-rw-r--r--. 1 root root unconfined_u:object_r:etc_t:s0  60 dez  1 18:13 python36.module
-rw-r--r--. 1 root root unconfined_u:object_r:etc_t:s0  52 dez  1 18:42 ruby.module
-rw-r--r--. 1 root root system_u:object_r:etc_t:s0      80 set 24 18:58 satellite-5-client.module
-rw-r--r--. 1 root root system_u:object_r:etc_t:s0      52 dez 11 19:22 virt.module

$ sudo rpm -qa | grep openstack

openstack-tripleo-common-containers-11.4.1-1.20200914165651.el8ost.noarch
python3-openstackclient-4.0.1-1.20200817092223.bff556c.el8ost.noarch
openstack-selinux-0.8.24-1.20200914163011.26243bf.el8ost.noarch
openstack-tripleo-validations-11.3.2-1.20200914170825.el8ost.noarch
python3-openstacksdk-0.36.4-0.20200715054250.76d3b29.el8ost.noarch
python-openstackclient-lang-4.0.1-1.20200817092223.bff556c.el8ost.noarch
puppet-openstacklib-15.4.1-0.20200403203429.5fdf43c.el8ost.noarch
openstack-heat-api-13.0.3-1.20200914171254.48b730a.el8ost.noarch
openstack-ironic-python-agent-builder-2.1.1-1.20200914175356.65d0f80.el8ost.noarch
openstack-tripleo-heat-templates-11.3.2-1.20200914170156.el8ost.noarch
openstack-heat-engine-13.0.3-1.20200914171254.48b730a.el8ost.noarch
openstack-tripleo-puppet-elements-11.2.2-0.20200701163410.432518a.el8ost.noarch
ansible-role-openstack-operations-0.0.1-0.20200311080930.274739e.el8ost.noarch
openstack-heat-monolith-13.0.3-1.20200914171254.48b730a.el8ost.noarch
puppet-openstack_extras-15.4.1-0.20200528113453.371931c.el8ost.noarch
openstack-heat-common-13.0.3-1.20200914171254.48b730a.el8ost.noarch
openstack-tripleo-image-elements-10.6.2-0.20200528043425.7dc0fa1.el8ost.noarch
openstack-heat-agents-1.10.1-0.20200311091123.96b819c.el8ost.noarch
openstack-tripleo-common-11.4.1-1.20200914165651.el8ost.noarch

$ sudo rpm -qa | grep podman

podman-1.6.4-16.module+el8.2.0+7659+b700d80e.x86_64

+++

So here are the correct steps to deploy it without SELinux issues:

===

Chapter 3:

1 - For Step 8, the right repositories to enable are:

[stack@all-in-one]$ sudo subscription-manager repos \
--enable=rhel-8-for-x86_64-baseos-eus-rpms \
--enable=rhel-8-for-x86_64-appstream-eus-rpms \
--enable=rhel-8-for-x86_64-highavailability-eus-rpms \
--enable=ansible-2.9-for-rhel-8-x86_64-rpms \
--enable=openstack-16.1-for-rhel-8-x86_64-rpms \
--enable=fast-datapath-for-rhel-8-x86_64-rpms \
--enable=rhel-8-for-x86_64-highavailability-rpms \
--enable=advanced-virt-for-rhel-8-x86_64-rpms

Chapter 5:

2 - After Step 2 and before Step 3, run restorecon and make sure /etc/dnf/modules.d gets the right SELinux context:

$ sudo restorecon -Rv /etc/dnf/modules.d
$ sudo ls -alZ /etc/dnf/modules.d

total 24
drwxr-xr-x. 2 root root system_u:object_r:etc_t:s0     150 dez 14 20:34 .
drwxr-xr-x. 8 root root system_u:object_r:etc_t:s0     128 mai  7  2020 ..
-rw-r--r--. 1 root root unconfined_u:object_r:etc_t:s0  74 dez 11 19:10 container-tools.module
-rw-r--r--. 1 root root unconfined_u:object_r:etc_t:s0  54 dez 14 20:34 httpd.module
-rw-r--r--. 1 root root unconfined_u:object_r:etc_t:s0  60 dez  1 18:13 python36.module
-rw-r--r--. 1 root root unconfined_u:object_r:etc_t:s0  52 dez  1 18:42 ruby.module
-rw-r--r--. 1 root root system_u:object_r:etc_t:s0      80 set 24 18:58 satellite-5-client.module
-rw-r--r--. 1 root root system_u:object_r:etc_t:s0      52 dez 11 19:22 virt.module

===

Only this way will the deployment set and use the right SELinux contexts, and instance spawning will run flawlessly.
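
For convenience, here is the whole pre-deployment sequence as a single sketch assembled from the steps above (run it before installing python3-tripleoclient, and adjust repository names to your environment):

===
#!/bin/bash
set -euo pipefail

# Enable the documented repositories, including Advanced Virtualization
sudo subscription-manager repos --disable='*'
sudo subscription-manager repos \
--enable=rhel-8-for-x86_64-baseos-eus-rpms \
--enable=rhel-8-for-x86_64-appstream-eus-rpms \
--enable=rhel-8-for-x86_64-highavailability-eus-rpms \
--enable=ansible-2.9-for-rhel-8-x86_64-rpms \
--enable=openstack-16.1-for-rhel-8-x86_64-rpms \
--enable=fast-datapath-for-rhel-8-x86_64-rpms \
--enable=rhel-8-for-x86_64-highavailability-rpms \
--enable=advanced-virt-for-rhel-8-x86_64-rpms

# Switch to the module streams required by RHOSP 16.1
sudo dnf module disable -y container-tools:rhel8 virt:rhel
sudo dnf module enable -y container-tools:2.0 virt:8.2

# Workaround for the mislabeled module files (see bug 1775975)
sudo restorecon -Rv /etc/dnf/modules.d
===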

teamwork++

Comment 24 Julie Pichon 2020-12-16 09:44:32 UTC
That's great news, thank you for the update!

Note that the restorecon step is only a workaround for another bug (I think bug 1775975); hopefully it won't be necessary forever. It may be useful to add a link to that bug in the documentation for that step as well, for context.

I'm clearing everyone's needinfo because I don't think any further information is necessary on our part, but feel free to add it back if I missed a question! I agree on leaving this bug open until the documentation is up to date.

Comment 25 Cédric Jeanneret 2020-12-16 12:03:57 UTC
Hello there,

Good to see everything's fine now.

Alexon: I'm a bit surprised a restorecon is mandatory once you enable the repositories. It would be interesting to investigate a bit further; there might be an issue somewhere if this is actually needed.

Julie: there's still the question mark about the dependency on the right podman version - not 100% sure about the state here... Maybe it will hit another zstream? Since Lon took the relevant BZs, I'll let him clear things up in that area.

Cheers,

C.

Comment 26 Julie Pichon 2020-12-16 15:22:53 UTC
(I see I actually didn't check the box to clear everybody's needinfo earlier, sorry about that!)

I ran a few additional tests about module enablement with subscription manager. I was able to confirm a couple of things, but I couldn't reproduce the bug.

1. Looking at a fresh 8.2 system and another one I keep up-to-date (but not with subscription manager), both are displaying the *wrong* file context for module files (root_t vs etc_t):

$ ls -lZ /etc/dnf/modules.d
total 12
-rw-r--r--. 1 root root system_u:object_r:root_t:s0 74 Dec 16 14:48 container-tools.module
-rw-r--r--. 1 root root system_u:object_r:root_t:s0 80 Dec 16 14:33 satellite-5-client.module
-rw-r--r--. 1 root root system_u:object_r:root_t:s0 53 Dec 16 14:33 virt.module

And it looks like bug 1775975 will only fix the issue starting with RHEL 8.3.

2. However, this does not prevent me from setting up modules:

$ sudo dnf module disable container-tools:rhel8
[...]
$ sudo dnf module enable container-tools:2.0
[...]
$ sudo dnf module list --enabled
Updating Subscription Management repositories.
Last metadata expiration check: 0:17:52 ago on Wed 16 Dec 2020 14:52:48 GMT.
Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
Name                 Stream        Profiles         Summary                                              
container-tools      2.0 [e]       common [d]       Common tools and dependencies for container runtimes 
satellite-5-client   1.0 [d][e]    common [d], gui  Red Hat Satellite 5 client packages                  
virt                 rhel [d][e]   common [d]       Virtualization module                                

Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled

$ sudo grep denied /var/log/audit/audit.log
# Returns nothing

The expected podman comes up too. I'm using the Employee SKU in a fresh RHEL 8.2 VM, and couldn't reproduce the denial mentioned in comment 17. Not sure what could have caused the issue.

Comment 27 Julie Pichon 2021-02-24 14:32:45 UTC
The document in comment 7 now shows the information about enabling modules and the right list of repositories. From comment 26, it doesn't look like the restorecon step is mandatory or related to the module enablement issues, so I'm closing this as resolved. If the bug where the module silently failed the 'enable' step can be reproduced, please open a new bug. Thank you!