Bug 1846364 - podman 1.6.4 is not honouring --security-opt when --privileged is passed
Summary: podman 1.6.4 is not honouring --security-opt when --privileged is passed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: podman
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: 8.0
Assignee: Jindrich Novy
QA Contact: atomic-bugs@redhat.com
URL:
Whiteboard:
Duplicates: 1869503 (view as bug list)
Depends On:
Blocks: 1793607 1841822 1851986
 
Reported: 2020-06-11 12:55 UTC by Daniel Berrangé
Modified: 2021-05-06 11:17 UTC
CC List: 22 users

Fixed In Version: podman-1.6.4-15.el8_2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-21 14:48:49 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2020:3036 (last updated 2020-07-21 14:48:58 UTC)

Description Daniel Berrangé 2020-06-11 12:55:45 UTC
Description of problem:

By default every container gets a unique MLS category.

We see that category in the process context:

  $ podman run -i -t --net host  --detach --rm fedora  sleep 30                                                               
  a0112cb9c1513e371705a44764c444eb186d39b4ede558dbe825c66d05953efb
  $ ps -axuwfZ | grep sleep                                                                                                   
  system_u:system_r:container_t:s0:c74,c260 root 109406 7.0  0.0 2348 524 pts/0    Ss+  12:44   0:00  \_ sleep 30          

And in the filesystem mount context

  $ podman run -i -t --net host fedora cat /proc/mounts | grep container_file_t | head -1
  overlay / overlay rw,context="system_u:object_r:container_file_t:s0:c47,c57",relatime,lowerdir=/var/lib/containers/storage/overlay/l/V44BJVA25GTELCKUVJSNJ5Q7P3,upperdir=/var/lib/containers/storage/overlay/86475a2ef0a94bb18ef05956a4236b7226f9cfa0854c785f3a31dd156b64cab5/diff,workdir=/var/lib/containers/storage/overlay/86475a2ef0a94bb18ef05956a4236b7226f9cfa0854c785f3a31dd156b64cab5/work 0 0



OpenStack Nova needs one of its containers to be able to disable the MLS category, and so it passes "--security-opt label=level:s0".

This works as expected initially, with both the process context and filesystem context showing no category:

  $ podman run -i -t --net host --security-opt label=level:s0 --detach fedora  sleep 30                                       
  34390f3203d622b033070d09b1cab3db8a57bb6fbffe5ac8c673a65bbec2250e
  $ ps -axuwfZ | grep sleep
  system_u:system_r:container_t:s0 root     109061  2.2  0.0   2348   524 pts/0    Ss+  12:43   0:00  \_ sleep 30                    


  $ podman run -i -t --net host --security-opt label=level:s0 fedora cat /proc/mounts | grep container_file_t | head -1
overlay / overlay rw,context=system_u:object_r:container_file_t:s0,relatime,lowerdir=/var/lib/containers/storage/overlay/l/V44BJVA25GTELCKUVJSNJ5Q7P3,upperdir=/var/lib/containers/storage/overlay/96f04ac169078a803bc10f235d838e89a07b702ca3f401995a86cdf29f59dda0/diff,workdir=/var/lib/containers/storage/overlay/96f04ac169078a803bc10f235d838e89a07b702ca3f401995a86cdf29f59dda0/work 0 0


OpenStack Nova also needs the same container to be privileged, so passes --privileged in addition to the --security-opt.

Now things break.  The process context correctly shows no MLS category:

  $ podman run -i -t --net host --privileged  --security-opt label=level:s0 --detach fedora  sleep 30                         
  d29747fd7b24bdb3d1e4f7a44d3a74376e8025a2d05299a44eec7245dc9200fe
  $ ps -axuwfZ | grep sleep                                                                                                   
  unconfined_u:system_r:spc_t:s0  root      109762  1.0  0.0   2348   584 pts/0    Ss+  12:44   0:00  \_ sleep 30                                   

The type changed from container_t to spc_t. I guess that's ok.

The critical issue though is that the filesystem still has an MLS category present:

   $ podman run -i -t --net host --privileged --security-opt label=level:s0 fedora cat /proc/mounts | grep container_file_t | head -1
overlay / overlay rw,context="system_u:object_r:container_file_t:s0:c128,c844",relatime,lowerdir=/var/lib/containers/storage/overlay/l/V44BJVA25GTELCKUVJSNJ5Q7P3,upperdir=/var/lib/containers/storage/overlay/9f0e86213e92845eaa258d94c1e5e1f44db45cf5c7293050257fbe455f74f9e3/diff,workdir=/var/lib/containers/storage/overlay/9f0e86213e92845eaa258d94c1e5e1f44db45cf5c7293050257fbe455f74f9e3/work 0 0

IOW, the "--security-opt label=level:s0" is not honoured for the filesystem label when using --privileged.

This was demonstrated on podman 1.6.4.

If I upgrade to the podman 1.9.3 RPMs, things work correctly: the filesystem label honours the --security-opt setting.
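
A minimal way to confirm this after the upgrade is simply to re-run the reproducer (a sketch; the exact 1.9.3 package NVR depends on which repo provides it):

  $ rpm -q podman
  $ podman run -i -t --net host --privileged --security-opt label=level:s0 fedora cat /proc/mounts | grep container_file_t | head -1

On 1.9.3 the mount context comes back as context="system_u:object_r:container_file_t:s0" with no category pair.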


I ran a git bisect against the upstream libpod repo, and discovered that the issue was fixed in


  commit 58cbbbc56e9f1cee4992ae4f4d3971c0e336ecd2
  Author: Valentin Rothberg <rothberg>
  Date:   Tue Feb 18 15:01:18 2020 +0100

    set process labels in pkg/spec
    
    Set the (default) process labels in `pkg/spec`. This way, we can also
    query libpod.conf and disable labeling if needed.
    
    Fixes: #5087
    Signed-off-by: Valentin Rothberg <rothberg>


The issue #5087 doesn't mention the behaviour we're seeing, so it looks like it was just a happy side-effect.
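
For reference, a bisect along those lines can be scripted roughly as follows; this is only a sketch, and the endpoint tags plus the pass/fail predicate are illustrative assumptions rather than the exact ones used:

  $ git clone https://github.com/containers/libpod.git && cd libpod
  $ git bisect start --term-old=broken --term-new=fixed
  $ git bisect broken v1.6.4    # mount context still carries an MLS category
  $ git bisect fixed v1.9.3     # mount context honours --security-opt
  # then, at each step: build podman, run the --privileged reproducer above,
  # and mark the revision "fixed" or "broken" until git names the first fixed commit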

In any case, this podman 1.6.4 bug of not honouring --security-opt for filesystem labels when --privileged is set is a blocker for fixing OSP bug 1841822.


Version-Release number of selected component (if applicable):
podman-1.6.4-4.module+el8.1.1+5885+44006e55.x86_64

How reproducible:
Always

Steps to Reproduce:
1. $ podman run -i -t --net host --privileged --security-opt label=level:s0 fedora cat /proc/mounts | grep container_file_t

Actual results:
The filesystem mounts have an MLS category present: "system_u:object_r:container_file_t:s0:c128,c844"

Expected results:
The filesystem mounts do NOT have an MLS category present: "system_u:object_r:container_file_t:s0"

Comment 1 Matthew Heon 2020-06-11 13:16:29 UTC
For reference, I think I would be more inclined to consider the 1.9 behaviour a bug - `--privileged` is supposed to completely remove all security restrictions from the container. We likely should not even allow `--security-opt` to be passed when `--privileged` is also specified (I believe Podman 2.0 has this behaviour).

What are you using `--privileged` for that also requires SELinux labelling? It should be possible to duplicate without the full `--privileged` flag.
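
For illustration, something along these lines might reproduce the setup without --privileged; a sketch only, since the exact capability and device set Nova actually needs is an assumption here:

  $ podman run -i -t --net host --cap-add=ALL --security-opt label=level:s0 --detach fedora sleep 30
  $ ps -axuwfZ | grep sleep
  # both the process label and the mount context should then honour level:s0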

Comment 2 Colin Walters 2020-06-11 13:24:57 UTC
You may really want https://bugzilla.redhat.com/show_bug.cgi?id=1839065

Comment 3 Daniel Berrangé 2020-06-11 13:25:15 UTC
Nova is launching libvirtd inside a container. It is *NOT* using containers for security isolation, just for simplification of deployment.

Inside the container, libvirtd in turn spawns multiple QEMU processes, and needs to assign unique MLS categories to each QEMU.

The problem with --privileged is that while the process does not have any MLS category, the filesystem mounts still have an MLS category assigned. This causes MLS constraint violations with the MLS category assigned to QEMU processes and their files.

IOW, the filesystem mounts need to have "system_u:object_r:container_file_t:s0" as the context, with NO MLS category assigned, when --privileged is used.

Perhaps this should in fact be the default for mounts when using --privileged, and --security-opt is a red herring?
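
To spell out the end state being asked for, both of the checks below should come back with a bare "s0" level and no category pair (a sketch, reusing the fedora image from the reproducer above):

  $ podman run -i -t --net host --privileged --security-opt label=level:s0 --detach fedora sleep 30
  $ ps -axuwfZ | grep sleep     # process label should end in :s0
  $ podman run -i -t --net host --privileged --security-opt label=level:s0 fedora cat /proc/mounts | grep container_file_t | head -1
  # mount context should be context="system_u:object_r:container_file_t:s0"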

Comment 4 Kashyap Chamarthy 2020-06-11 13:35:00 UTC
(In reply to Matthew Heon from comment #1)

[...]

> What are you using `--privileged` for that also requires SELinux labelling?
> It should be possible to duplicate without the full `--privileged` flag.

Context: It is in a RHOS (Red Hat OpenStack) environment, which launches 'nova_libvirt' container (one of the several), which is 
"inherently a highly privileged container because Nova [the Compute project] requires it to be able to perform many highly privileged actions." (quoting from #22 in the below bug):


https://bugzilla.redhat.com/show_bug.cgi?id=1841822#c22 — SELinux blocks 'qemu-kvm' running in a container (running in a VM)

Comment 5 Matthew Heon 2020-06-11 13:49:45 UTC
It sounds like our assumption that `--security-opt` and `--privileged` were incompatible is not matching up with expectations... Fortunately, we haven't released Podman 2.0, so there's time to change upstream before we make a release.

I'll add a needinfo on Dan to make sure he agrees.

Meanwhile, I'm assuming this is a backport request for 58cbbbc56e9f1cee4992ae4f4d3971c0e336ecd2 to the 1.6.4 (2.0 stream) used by Openstack?

Comment 6 Daniel Walsh 2020-06-11 13:49:58 UTC
Labeling the content as container_file_t:s0 allows all containers to write to the container content, from an SELinux point of view. I think that is a bad idea.

This works, and we can further improve the SELinux policy

# podman run --security-opt label=filetype:svirt_image_t --security-opt label=type:spc_t --security-opt label=level:s0 -ti ubi8-init ls -lZ /
total 0
lrwxrwxrwx.   1 root root system_u:object_r:svirt_image_t:s0   7 Aug 12  2018 bin -> usr/bin
dr-xr-xr-x.   2 root root system_u:object_r:svirt_image_t:s0   6 Aug 12  2018 boot
drwxr-xr-x.   5 root root system_u:object_r:svirt_image_t:s0 360 Jun 11 13:48 dev
drwxr-xr-x.   1 root root system_u:object_r:svirt_image_t:s0  54 Apr 23 10:16 etc
...

This would remove the SELinux issues. Not sure if any of the other security problems exist.

Comment 7 Tom Sweeney 2020-06-11 14:10:22 UTC
Assigning to Matt. I've also added Giuseppe to the cc in case he has an opinion on this.

Comment 8 Daniel Berrangé 2020-06-11 14:21:30 UTC
(In reply to Daniel Walsh from comment #6)
> Labeling the content as container_file_t:s0 allows all containers to write
> to the container content, form an SELinux point of view. I think that is a
> bad idea.

The question of which is the right type to use is tangential to the problem reported in this bug.

> This works, and we can further improve the SELinux policy
> 
> # podman run --security-opt label=filetype:svirt_image_t --security-opt
> label=type:spc_t --security-opt label=level:s0 -ti ubi8-init ls -lZ /
> total 0
> lrwxrwxrwx.   1 root root system_u:object_r:svirt_image_t:s0   7 Aug 12 
> 2018 bin -> usr/bin
> dr-xr-xr-x.   2 root root system_u:object_r:svirt_image_t:s0   6 Aug 12 
> 2018 boot
> drwxr-xr-x.   5 root root system_u:object_r:svirt_image_t:s0 360 Jun 11
> 13:48 dev
> drwxr-xr-x.   1 root root system_u:object_r:svirt_image_t:s0  54 Apr 23
> 10:16 etc
> ...
> 
> This would remove the SELinux issues.  Not sure if any of the other security
> problems exists.

Afraid not. This nicely illustrates the exact problem I'm reporting. As soon as you add the --privileged flag to your example command line above, it breaks.

Compare:

# podman run  --security-opt label=filetype:svirt_image_t --security-opt label=type:spc_t --security-opt label=level:s0 -ti ubi8-init ls -lZd /usr
drwxr-xr-x. 1 root root system_u:object_r:svirt_image_t:s0 4096 Jun  3 14:33 /usr

With:

# podman run --privileged  --security-opt label=filetype:svirt_image_t --security-opt label=type:spc_t --security-opt label=level:s0 -ti ubi8-init ls -lZd /usr
drwxr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c110,c536 4096 Jun  3 14:33 /usr

The container files get container_file_t:s0:c110,c536 instead of the svirt_image_t:s0 that was asked for.

Comment 9 Daniel Berrangé 2020-06-11 15:27:33 UTC
(In reply to Matthew Heon from comment #1)
> For reference, I think I would be more inclined to consider the 1.9
> behaviour a bug - `--privileged` is supposed to completely remove all
> security restrictions from the container. We likely should not even allow
> `--security-opt` to be passed when `--privileged` is also specified (I
> believe Podman 2.0 has this behaviour).

FYI I've just tested   podman-2.0.0-0.111.dev.gitd6e70c6 that is in Fedora rawhide, and I didn't see any change in behaviour compared to 1.9.3.  My test scenario from comment #8 works correctly with that 2.0.0 build:

i.e. --security-opt *is* still honoured, even when --privileged is set:

# podman run  --security-opt label=filetype:svirt_image_t --security-opt label=type:spc_t --security-opt label=level:s0 -ti ubi8-init ls -lZd /usr
drwxr-xr-x. 1 root root system_u:object_r:svirt_image_t:s0 4096 Jun  3 14:33 /usr

# podman run --privileged  --security-opt label=filetype:svirt_image_t --security-opt label=type:spc_t --security-opt label=level:s0 -ti ubi8-init ls -lZd /usr
drwxr-xr-x. 1 root root system_u:object_r:svirt_image_t:s0 4096 Jun  3 14:33 /usr

Comment 10 Scott McCarty 2020-06-17 18:25:23 UTC
Daniel & OpenStack Team,
     We are happy to look at this in a future version of podman, but this is expected behavior that has been in place for a long time, so a backport to podman 1.6.4 is not really an option. I'm moving this to RHEL 8.3. We might be able to take a look at this for 8.3 or the 12-week release that comes out after 8.3, but as Matt said, this might simply be a bug.

Comment 11 Daniel Berrangé 2020-06-18 08:57:23 UTC
(In reply to Scott McCarty from comment #10)
> Daniel & OpenStack Team,
>      We are happy to look at this in a future version of podman, but this is
> expected behavior that has been in place for a long time, so a back port to
> podman 1.6.4 is not really an option. I'm moving this to RHEL 8.3. We might
> be able to take a look at this for 8.3 or the 12 week release that comes out
> after 8.3, but as Matt said, this might just simply be a bug.

This is clearly a bug in podman in 8.1. Currently shipping versions of podman > 1.8.1 already have the correct behaviour, so no fix is required in 8.3.

This is a blocker for fixing a significant bug in OpenStack in 8.1 stream.

Comment 12 Kashyap Chamarthy 2020-06-18 09:48:42 UTC
(In reply to Daniel Berrangé from comment #11)

[...]

> This is clearly a bug in podman in 8.1. Currently shipping versions of
> podman > 1.8.1 already have the correct behaviour, so no fix is required in
> 8.3
> 
> This is a blocker for fixing a significant bug in OpenStack in 8.1 stream.

Yes, I vehemently agree with Dan, above.

This bug _must_ be addressed in 8.1.  

Providing backports like these — which affect critical features in higher-layer tools — is one of Red Hat's key value propositions to customers.

Comment 22 Cédric Jeanneret 2020-06-30 08:21:05 UTC
Hello,

I'm seeing a weird issue with that version in OSP-16.1:

when I deploy with the "stock" 1.6.4 provided by rhos-appstream, a container (ironic_conductor) set to run as "privileged" (--privileged is passed, NO security-opt) has its main process running as:
system_u:system_r:spc_t:s0        58360 ?        S      0:14      \_ /usr/bin/python3 /usr/bin/ironic-conductor
which is the intended context.

But with this patched podman, with the *same* configuration, same options and all, the main process is running with the following:
system_u:system_r:container_t:s0:c160,c697 128895 ? S   0:02      \_ /usr/bin/python3 /usr/bin/ironic-conductor

The latter seems to show that "--privileged" isn't correctly applied. We can also see in /proc/mounts: rw,context="system_u:object_r:container_file_t:s0:c160,c697"

When we run "podman inspect ironic_conductor", we can see, in the Annotations: "io.podman.annotations.privileged": "TRUE"

This seems to indicate the options are actually properly passed.
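
(For reference, the quick checks look roughly like this; the --format template path is an assumption based on the usual container-inspect JSON layout:)

  $ podman inspect --format '{{ .Config.Annotations }}' ironic_conductor
  $ podman exec ironic_conductor cat /proc/self/attr/current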
In addition, the used command is:
podman run --name ironic_conductor-42nqiigp [...] --detach=true [...] --net=host --privileged=true --volume=/etc/hosts:/etc/hosts:ro --volume=/etc/localtime:/etc/localtime:ro --volume=/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume=/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume=/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume=/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume=/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume=/dev/log:/dev/log --volume=/etc/puppet:/etc/puppet:ro --volume=/var/lib/kolla/config_files/ironic_conductor.json:/var/lib/kolla/config_files/config.json:ro --volume=/var/lib/config-data/puppet-generated/ironic:/var/lib/kolla/config_files/src:ro --volume=/lib/modules:/lib/modules:ro --volume=/sys:/sys --volume=/dev:/dev --volume=/run:/run --volume=/var/lib/ironic:/var/lib/ironic:z --volume=/var/log/containers/ironic:/var/log/ironic:z undercloud.ctlplane.localdomain:8787/rh-osbs/rhosp16-openstack-ironic-conductor:16.1_20200625.1

Meaning... well... it DOES have the "--privileged" - but maybe the "=true" is causing an issue? It wasn't the case with stock 1.6.4 though.

Any hint?

Cheers,

C.

Comment 23 Kashyap Chamarthy 2020-06-30 13:04:55 UTC
So here's a reproducer for the problem with "--pid=host":


Test-1 (without "--pid=host")
-----------------------------

$> podman run --rm --privileged --net=host --security-opt label=level:s0  --security-opt label=filetype:container_ro_file_t undercloud.ctlplane:8787/rh-osbs/rhosp16-openstack-nova-libvirt:16.1_20200625.1  ls -lZ /usr/libexec/qemu-kvm
-rwxr-xr-x. 1 root root system_u:object_r:container_ro_file_t:s0 13892976 Apr 14 23:24 /usr/libexec/qemu-kvm

Notice:  the label of 'qemu-kvm' is CORRECT: 'container_ro_file_t'

Test-2 (with "--pid=host")
--------------------------

$> podman run --rm --privileged --net=host --pid=host --security-opt label=level:s0  --security-opt label=filetype:container_ro_file_t undercloud.ctlplane:8787/rh-osbs/rhosp16-openstack-nova-libvirt:16.1_20200625.1  ls -lZ /usr/libexec/qemu-kvm
-rwxr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c114,c257 13892976 Apr 14 23:24 /usr/libexec/qemu-kvm 


Notice: the label of 'qemu-kvm' is INCORRECT; it is back to the undesirable 'container_file_t'.


                            * * *

Any insights on this?

Comment 24 Cédric Jeanneret 2020-06-30 14:37:21 UTC
Hello there,

After some more testing, it appears there's a secondary issue with that version and the "--privileged" mode:

What we get with the patched podman:
podman run --net=host --rm -ti --privileged --user root undercloud.ctlplane.localdomain:8787/rh-osbs/rhosp16-openstack-nova-conductor:16.1_20200625.1 bash
()[root@undercloud /]# ps eZ
LABEL                               PID TTY      STAT   TIME COMMAND
system_u:system_r:container_t:s0:c247,c530 1 pts/0 Ss   0:00 dumb-init --single-child -- bash PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm HOSTNAME=undercloud.localdomain container=oci LANG=en_US.UTF-8 KOLLA
system_u:system_r:container_t:s0:c247,c530 7 pts/0 S    0:00 bash PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm HOSTNAME=undercloud.localdomain container=oci LANG=en_US.UTF-8 KOLLA_INSTALL_TYPE=binary KOLLA_B
system_u:system_r:container_t:s0:c247,c530 24 pts/0 R+   0:00 ps eZ KOLLA_DISTRO_PYTHON_VERSION=3.6 LANG=en_US.UTF-8 HOSTNAME=undercloud.localdomain KOLLA_BASE_ARCH=x86_64 container=oci PWD=/ HOME=/root KOLLA_INSTALL_METATYPE=rhos KOLLA_IN
()[root@undercloud /]# 


What we're expecting:
[root@tengu ~]# podman run --net=host --rm -ti --privileged --user root centos:8 bash
[root@tengu /]# ps eZ
LABEL                               PID TTY      STAT   TIME COMMAND
unconfined_u:system_r:spc_t:s0        1 pts/0    Ss     0:00 bash PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm HOSTNAME=tengu.lain.internux.ch container=podman HOME=/root
unconfined_u:system_r:spc_t:s0       10 pts/0    R+ [...]

This is the cause of another issue: https://bugzilla.redhat.com/show_bug.cgi?id=1851986

This issue can be worked around by passing "--security-opt label=type:spc_t --security-opt label=level:s0" (provided we aren't passing "--pid=host", of course), as sketched below.
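
For illustration, the workaround invocation looks roughly like this (a sketch; the centos:8 image is just an example, the options are the ones named above):

  $ podman run --net=host --rm -ti --privileged --security-opt label=type:spc_t --security-opt label=level:s0 centos:8 cat /proc/self/attr/current
  # expected: an spc_t label ending in :s0, with no category pair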

Cheers,

C.

Comment 25 Jindrich Novy 2020-06-30 18:08:51 UTC
Matt, any thoughts on the last two comments?

Comment 26 Matthew Heon 2020-06-30 18:12:12 UTC
Proposed fix: https://github.com/containers/libpod/pull/6827

Comment 27 Cédric Jeanneret 2020-07-01 05:55:37 UTC
Thank you Matt, Dan!

I'm putting the provided scratch build under test on my lab right now, and will come back with (hopefully) good news :)

Comment 28 Cédric Jeanneret 2020-07-01 07:52:33 UTC
Hello there,

Soooo... bad news: we're back to the initial state.

Namely:
"--privileged" does apply the spc_t context for the main process:
`podman exec nova_libvirt cat /proc/self/attr/current` returns "unconfined_u:system_r:spc_t:s0" as expected

"--privileged --security-opt label=filetype:container_share_t --security-opt label=level:s0" doesn't apply the security options:
grep 5b53fc47cd8a /proc/mounts
shm /var/lib/containers/storage/overlay-containers/5b53fc47cd8ad604ac2558efae0b4394da293a653a2d753c086cd5f5c3ac90bc/userdata/shm tmpfs rw,context="system_u:object_r:container_file_t:s0:c770,c785",nosuid,nodev,noexec,relatime,size=64000k 0 0

We can see "container_file_t" instead of "container_share_t" (or its newer name, "container_ro_file_t", or something similar).

Sounds like the --privileged overrides the security-opt instead of merging them.

Cheers,

C.

Comment 29 Cédric Jeanneret 2020-07-01 08:30:07 UTC
Me again,

some more testing: the "--pid=host" is the culprit again in my tests. If we drop that option, everything is working just fine.

So, in short:

--privileged IS working thanks to Dan's latest patches
--privileged --security-opt IS working thanks to the initial backport
--privileged --security-opt --pid=host is NOT working

We're getting closer. We can't drop that --pid=host option for now (would need extensive testing, and I'm not the right one to make this decision anyway) - plus there are other containers with such an option (but, afaik, no --security-opt).

There's probably some kind of override due to the "--pid=host" since, according to the "podman-run manpage", the container using this option has "[..] full access to local PID and is therefore considered insecure."

Cheers,

C.

Comment 30 Kashyap Chamarthy 2020-07-01 12:59:28 UTC
Afraid there's some more bad news: with the podman-1.6.4-14 scratch build, Podman dumps core:


$ coredumpctl info podman

[...]

           PID: 11761 (podman)
           UID: 1001 (kashyapc)
           GID: 1001 (kashyapc)
        Signal: 6 (ABRT)
     Timestamp: Wed 2020-07-01 11:07:09 CEST (3h 50min ago)
  Command Line: podman run --rm --privileged --net=host --security-opt label=level:s0 --security-opt label=filetype:container_ro_file_t -i -t fedora:rawhide ls -lZ /usr/libexec/qemu-kvm
    Executable: /usr/bin/podman
 Control Group: /user.slice/user-1001.slice/user/gnome-terminal-server.service
          Unit: user
     User Unit: gnome-terminal-server.service
         Slice: user-1001.slice
     Owner UID: 1001 (kashyapc)
       Boot ID: 5edbfab34b24490ab32116e8b17a354e
    Machine ID: e617a19677b54e52a761ed8a4e6bd8b0
      Hostname: paraplu
       Storage: /var/lib/systemd/coredump/core.podman.1001.5edbfab34b24490ab32116e8b17a354e.11761.1593594429000000.lz4
       Message: Process 11761 (podman) of user 1001 dumped core.
                
                Stack trace of thread 11807:
                #0  0x0000563add3dba21 runtime.raise (podman)
                #1  0x0000563add3bfcce runtime.sigfwdgo (podman)
                #2  0x0000563add3be604 runtime.sigtrampgo (podman)
                #3  0x0000563add3dbd93 runtime.sigtramp (podman)
                #4  0x00007f0314bcec70 __restore_rt (libpthread.so.0)
                #5  0x0000563add3dba21 runtime.raise (podman)
                #6  0x0000563add3bf89a runtime.crash (podman)
                #7  0x0000563add3a8a56 runtime.fatalpanic (podman)
                #8  0x0000563add3a83f1 runtime.gopanic (podman)
                #9  0x0000563add37a1c3 runtime.closechan (podman)
                #10 0x0000563ade3da5d4 github.com/containers/libpod/pkg/adapter.ProxySignals.func1 (podman)
                #11 0x0000563add3d9e41 runtime.goexit (podman)
[...]


                            - - -

PS: Shouldn't this bug go back to the ASSIGNED state? :-)

Comment 31 Matthew Heon 2020-07-01 13:15:56 UTC
The chances that our changes here broke sig-proxy are very close to 0 - nothing touched anything remotely close to that part of the code. If you have a reproducer, please file a fresh BZ for it.
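
When filing that BZ, the full trace can be pulled out of systemd-coredump roughly like this (a sketch; the interactive variant assumes gdb is installed):

  $ coredumpctl info podman
  $ coredumpctl gdb podman                   # interactive backtrace
  $ coredumpctl dump podman -o podman.core   # or save the core to attach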

I'll look into the pid=host issues now.

Comment 32 Joy Pu 2020-07-08 16:20:45 UTC
Tested with podman-1.6.4-15.module+el8.2.0+7290+954fb593.x86_64 and it seems --security-opt works together with --privileged inside the container, so setting this to VERIFIED. Details:
# podman run --privileged  --security-opt label=filetype:svirt_image_t --security-opt label=type:spc_t --security-opt label=level:s0 -ti ubi8-init ls -lZd /usr
drwxr-xr-x. 1 root root system_u:object_r:svirt_image_t:s0 66 Jun  3 14:33 /usr
# podman run -i -t --net host --privileged --security-opt label=level:s0 fedora cat /proc/mounts | grep container_file_t
overlay / overlay rw,context=system_u:object_r:container_file_t:s0,relatime,lowerdir=/var/lib/containers/storage/overlay/l/QOHUANZ5DSPEAFCQK5MXNGDRCR,upperdir=/var/lib/containers/storage/overlay/5f47d0e04bfab155875f8fe95977547197b69017c23d966d8c7706f5cc3fc537/diff,workdir=/var/lib/containers/storage/overlay/5f47d0e04bfab155875f8fe95977547197b69017c23d966d8c7706f5cc3fc537/work 0 0
tmpfs /dev tmpfs rw,context=system_u:object_r:container_file_t:s0,nosuid,size=65536k,mode=755 0 0
devpts /dev/pts devpts rw,context=system_u:object_r:container_file_t:s0,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666 0 0
shm /dev/shm tmpfs rw,context=system_u:object_r:container_file_t:s0,nosuid,nodev,noexec,relatime,size=64000k 0 0
tmpfs /sys/fs/cgroup tmpfs rw,context=system_u:object_r:container_file_t:s0,nosuid,nodev,noexec,relatime,mode=755 0 0
devpts /dev/console devpts rw,context=system_u:object_r:container_file_t:s0,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666 0 0

Comment 34 errata-xmlrpc 2020-07-21 14:48:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:3036

Comment 35 Jason Joyce 2020-11-04 13:17:35 UTC
*** Bug 1893291 has been marked as a duplicate of this bug. ***

Comment 36 Martin Schuppert 2021-05-06 11:17:00 UTC
*** Bug 1869503 has been marked as a duplicate of this bug. ***

