Bug 1882068 - cannot access /dev during image builds with buildah running using --isolation chroot unless /dev is already present in base container image
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Build
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.6.0
Assignee: Adam Kaplan
QA Contact: wewang
URL:
Whiteboard:
Depends On:
Blocks: 1186913
 
Reported: 2020-09-23 18:20 UTC by Robb Manes
Modified: 2020-10-27 16:45 UTC
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: buildah used an incorrect set of permissions (0700) when creating directories for default mount targets that were not present in the base image. Consequence: if the base image did not contain a /dev directory, an image produced by OpenShift from that base had a /dev directory readable and writable only by root. Fix: buildah now creates mount target directories with permission 0755 (readable and traversable by all users). Result: if a base image does not have a /dev directory, non-root users in an image built from that base can access the /dev directory and the device files mounted there.
Clone Of:
Environment:
Last Closed: 2020-10-27 16:44:57 UTC
Target Upstream Version:


Attachments
buildConfig for reproducer (520 bytes, text/plain)
2020-10-14 22:21 UTC, Adam Kaplan
pod definition for reproducer (372 bytes, text/plain)
2020-10-14 22:21 UTC, Adam Kaplan
[4.5] pod logs (1.92 KB, text/plain)
2020-10-14 22:22 UTC, Adam Kaplan
[4.6] pod logs (1.92 KB, text/plain)
2020-10-14 22:23 UTC, Adam Kaplan


Links
Red Hat Product Errata RHBA-2020:4196 (last updated 2020-10-27 16:45:23 UTC)

Description Robb Manes 2020-09-23 18:20:35 UTC
Description of problem:
When OpenShift runs a build, or when a container image is built locally in a container with `--isolation chroot` set, buildah is unable to expose /dev unless an empty /dev directory is already present in the base image.  This causes problems when attempting to access files such as /dev/null or /dev/zero during build processes.

Our Red Hat images appear to define an empty /dev directory, which is why we do not see this very often when the base image is a Red Hat image:

$ skopeo copy docker://registry.redhat.io/ubi8 dir:///home/rmanes/Downloads/ubi8
$ cd ~/Downloads/ubi8
$ cp ec1681b6a383e4ecedbeddd5abc596f3de835aed6db39a735f62395c8edbff30 layer.tar.gz
$ tar xf layer.tar.gz
$ ls -lZd dev
drwxr-xr-x. 2 rmanes rmanes unconfined_u:object_r:user_home_t:s0 6 Sep  1 15:38 dev
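
A quicker way to check, without extracting the layer by hand, is to list it with tar, which reads the compressed blob directly (output abbreviated):

$ tar tvf layer.tar.gz | grep ' dev/$'
drwxr-xr-x root/root         0 2020-09-01 15:38 dev/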

But other images, such as CentOS or SUSE images, do not have /dev predefined:

$ skopeo copy docker://registry.suse.com/suse/sles12sp4:latest dir:///home/rmanes/Downloads/suse
$ for i in $(ls *); do cp $i $i.tar.gz; tar xf $i.tar.gz; done
$ ls dev
ls: cannot access 'dev': No such file or directory

Since we are in a chrooted environment, I am not sure whether it is possible to apply the needed permissions so that this works properly when we create /dev for a newly mounted container.
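
If it is possible, the change would presumably amount to something like the following at the point where buildah creates the missing mount target (a sketch only; $container_root is a placeholder for wherever the container rootfs lives):

# mkdir -m 0755 "$container_root/dev"    # world-readable/traversable, instead of the root-only default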

Version-Release number of selected component (if applicable):
buildah version 1.14.9 (image-spec 1.0.1-dev, runtime-spec 1.0.1-dev)

How reproducible:
Every time with an image whose base has no empty /dev already defined

Steps to Reproduce:
- Pull the buildah image which has $BUILDAH_ISOLATION baked in and launch a container with a shell in it:
# podman run -it --rm --name buildah registry.redhat.io/rhel8/buildah bash

- Inside of the buildah container, pull an image that does not have /dev defined in the base image, such as SLES:
# buildah --storage-driver vfs pull registry.suse.com/suse/sles12sp4:latest

- Create a working container from the image (with vfs, this materializes the rootfs on disk):
# buildah --storage-driver vfs from registry.suse.com/suse/sles12sp4:latest

- Inspect the container's rootfs and note the restrictive permissions on dev, which was created at mount time (a driver-agnostic way to locate this path is shown after these steps):
# ls -lZd /var/lib/containers/storage/vfs/dir/4a3ecd6db35ee481345ba5b907d39266fb4494fc229c2efa0fc9d59c7f882604/dev
drwx------. 2 root root system_u:object_r:container_file_t:s0:c584,c707 6 Sep 23 13:48 /var/lib/containers/storage/vfs/dir/4a3ecd6db35ee481345ba5b907d39266fb4494fc229c2efa0fc9d59c7f882604/dev

- Now, in the same container, pull a Red Hat image, which ships /dev as an empty directory, and observe the more open permissions:
# buildah --storage-driver vfs pull registry.redhat.io/ubi8

# ls -lZd /var/lib/containers/storage/vfs/dir/c48d55ebf58fab22f62cd75b2dfac6b96ef981d02a9929f3c8cdfbc1408ee53b/dev/
drwxr-xr-x. 2 root root system_u:object_r:container_file_t:s0:c584,c707 6 Sep  1 19:38 /var/lib/containers/storage/vfs/dir/c48d55ebf58fab22f62cd75b2dfac6b96ef981d02a9929f3c8cdfbc1408ee53b/dev/
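
Side note: the long storage paths above do not have to be hard-coded; buildah mount prints the working container's rootfs path for any storage driver (the container name is the one printed by the buildah from step above):

# mnt=$(buildah --storage-driver vfs mount sles12sp4-working-container)
# ls -lZd "$mnt/dev"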

Actual results:
Unable to access anything under /dev within the container that buildah builds when buildah itself runs in a container.

Expected results:
If possible, images without a pre-defined /dev directory should still be able to access /dev/null, /dev/zero, and other device files without permission errors.

Additional info:
I use VFS just because it was easier in a container, but the end result is the same with Overlay so long as chroot isolation is in use.
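
For example, the overlay equivalent of the reproducer is just a flag swap (assuming the environment supports nesting overlay inside a container, e.g. via fuse-overlayfs):

# buildah --storage-driver overlay from registry.suse.com/suse/sles12sp4:latest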

Comment 1 Robb Manes 2020-09-23 19:31:34 UTC
Just pointing out that this claimed side effect is actually untrue:

> This causes problems when attempting to access files such as /dev/null or /dev/zero during build processes.

It actually will work, as it's been described to me that we bind-mount the host's /dev (in this case, the container's) into the buildah container.  Therefore, the below command works:

# buildah --storage-driver vfs run sles12sp4-working-container echo "TEST" >> /dev/null

Comment 2 Robb Manes 2020-09-24 16:53:53 UTC
(In reply to Robb Manes from comment #1)

> # buildah --storage-driver vfs run sles12sp4-working-container echo "TEST" >> /dev/null

This should read:

# buildah run sles12sp4-working-container sh -c 'echo "TEST" >> /dev/null'

So as to not use the host's /dev/null for redirection.

Nonetheless, /dev is still mounted properly from the container's environment into the buildah container here, and I can still redirect. As Nalin suggested to me in a chat, this is probably an indication that /dev is not being mounted properly from the runtime environment (in this case, an OpenShift build) into the buildah container.  Investigating; I will update once I find more.

Comment 4 Derrick Ornelas 2020-10-02 19:34:31 UTC
What build strategy is being used in OpenShift where this issue is reproducible?  If it's docker strategy where OCP's builder is being used, then we might want to move this BZ to OCP Build.  If it's a custom builder image, then we likely need to know more about their image.

Comment 5 Robb Manes 2020-10-09 16:13:34 UTC
(In reply to Derrick Ornelas from comment #4)
> What build strategy is being used in OpenShift where this issue is
> reproducible?  If it's docker strategy where OCP's builder is being used,
> then we might want to move this BZ to OCP Build.  If it's a custom builder
> image, then we likely need to know more about their image.

It's using dockerStrategy; I will move to OCP Build in the interim while I try to reproduce this.  We have received the problematic buildconfig as well.

Comment 7 Robb Manes 2020-10-09 18:23:10 UTC
Just re-outlining the issue.  If I create a pod that pulls the SLES image and run it directly, /dev is set to drwxr-xr-x.

$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: sles-orig-test
spec:
  containers:
    - name: sles-orig-test
      image: registry.suse.com/suse/sles12sp4:latest
      command: ["/bin/bash"]
      args: ["-c", "sleep infinity"]
	  
$ oc create -f pod.yaml
pod/sles-orig-test created

$ oc exec sles-orig-test -- ls -al /
total 12
drwxr-xr-x.   1 root root   62 Oct  9 18:13 .
drwxr-xr-x.   1 root root   62 Oct  9 18:13 ..
drwxr-xr-x.   2 root root 4096 Sep 30 13:27 bin
drwxr-xr-x.   5 root root  360 Oct  9 18:13 dev
drwxr-xr-x.   1 root root   25 Oct  9 18:13 etc
drwxr-xr-x.   2 root root    6 Jun 27  2017 home
drwxr-xr-x.   7 root root   76 Sep 30 13:27 lib
drwxr-xr-x.   5 root root 4096 Sep 30 13:27 lib64
drwxr-xr-x.   2 root root    6 Jun 27  2017 mnt
drwxr-xr-x.   2 root root    6 Jun 27  2017 opt
dr-xr-xr-x. 289 root root    0 Oct  9 18:13 proc
drwx------.   4 root root   52 Sep 30 13:27 root
drwxr-xr-x.   1 root root   21 Oct  9 18:13 run
drwxr-xr-x.   2 root root 4096 Sep 30 13:27 sbin
drwxr-xr-x.   2 root root    6 Jun 27  2017 selinux
drwxr-xr-x.   4 root root   28 Sep 30 13:27 srv
dr-xr-xr-x.  13 root root    0 Oct  9 17:35 sys
drwxrwxrwt.   2 root root    6 Sep 30 13:27 tmp
drwxr-xr-x.  13 root root  167 Sep 30 13:27 usr
drwxr-xr-x.  11 root root  148 Sep 30 13:27 var

If I create a buildConfig and rebuild the image myself, however, /dev shows up with drwxr-xr-x permissions during the build process:

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: sles-bc
spec:
  resources:
    limits:
      cpu: "1"
      memory: 1Gi
  runPolicy: Serial
  source:
    dockerfile: |-
      FROM registry.suse.com/suse/sles12sp4:latest

      RUN ls -al /

      CMD [ "/bin/bash", "-c", "sleep infinity" ]
    type: Dockerfile
  output:
    pushSecret:
      name: quay
    to:
      kind: DockerImage
      name: quay.io/robbmanes/sles12sp3:1.0.0
  strategy:
    dockerStrategy:
      env:
      - name: BUILD_LOGLEVEL
        value: "5"
      forcePull: true
    type: Docker
	
$ oc create -f buildConfig.yaml

$ oc get bc
NAME      TYPE      FROM         LATEST
sles-bc   Docker    Dockerfile   1

$ oc start-build sles-bc
build.build.openshift.io/sles-bc-1 started

$ oc get pods
NAME                            READY     STATUS    RESTARTS   AGE
sles-bc-1-build                 1/1       Running   0          20s

$ oc logs sles-bc-1-build
I1009 18:05:36.474588       1 builder.go:329] openshift-builder 4.3.29-202007061006.p0-fcef31c
I1009 18:05:36.479359       1 builder.go:330] redacted build: {"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"sles-bc-1","namespace":"default","selfLink":"/apis/build.openshift.io/v1/namespaces/default/builds/sles-bc-1","uid":"fdd287dc-3e77-43d4-83d1-14b952da2ed0","resourceVersion":"17916","creationTimestamp":"2020-10-09T18:05:30Z","labels":{"buildconfig":"sles-bc","openshift.io/build-config.name":"sles-bc","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"sles-bc","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"sles-bc","uid":"fc776cee-b398-4043-b9a0-10371956cc7f","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"FROM registry.suse.com/suse/sles12sp4:latest\n\nRUN ls -al /\n\nCMD [ \"/bin/bash\", \"-c\", \"sleep infinity\" ]"},"strategy":{"type":"Docker","dockerStrategy":{"pullSecret":{"name":"builder-dockercfg-xt6zf"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}],"forcePull":true}},"output":{"to":{"kind":"DockerImage","name":"quay.io/robbmanes/sles12sp3:1.0.0"},"pushSecret":{"name":"quay"}},"resources":{"limits":{"cpu":"1","memory":"1Gi"}},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Manually triggered"}]},"status":{"phase":"New","outputDockerImageReference":"quay.io/robbmanes/sles12sp3:1.0.0","config":{"kind":"BuildConfig","namespace":"default","name":"sles-bc"},"output":{}}}
Caching blobs under "/var/cache/blobs".
I1009 18:05:36.768762       1 util_linux.go:56] found cgroup parent kubepods-pod8effd360_da84_4e9d_b4db_d925fff71716.slice
I1009 18:05:36.768824       1 builder.go:337] Running build with cgroup limits: api.CGroupLimits{MemoryLimitBytes:1073741824, CPUShares:0, CPUPeriod:0, CPUQuota:0, MemorySwap:1073741824, Parent:"kubepods-pod8effd360_da84_4e9d_b4db_d925fff71716.slice"}
I1009 18:05:36.768916       1 builder.go:318] Starting Docker build from build config sles-bc-1 ...
Local copy of "registry.suse.com/suse/sles12sp4:latest" is not present.
I1009 18:05:36.770056       1 docker.go:104] Locating docker config paths for type PULL_DOCKERCFG_PATH
I1009 18:05:36.770075       1 docker.go:104] Getting docker config in paths : [/var/run/secrets/openshift.io/pull]

Pulling image registry.suse.com/suse/sles12sp4:latest ...
Asked to pull fresh copy of "registry.suse.com/suse/sles12sp4:latest".
I1009 18:05:36.770176       1 daemonless.go:61] looking for config.json at /var/run/secrets/openshift.io/pull/config.json
I1009 18:05:36.770266       1 cfg.go:163] error reading file: open /var/run/secrets/openshift.io/pull/config.json: no such file or directory
I1009 18:05:36.770287       1 daemonless.go:61] looking for .dockerconfigjson at /var/run/secrets/openshift.io/pull/.dockerconfigjson
I1009 18:05:36.770308       1 cfg.go:163] error reading file: open /var/run/secrets/openshift.io/pull/.dockerconfigjson: no such file or directory
I1009 18:05:36.770324       1 daemonless.go:61] looking for .dockercfg at /var/run/secrets/openshift.io/pull/.dockercfg
I1009 18:05:36.770571       1 daemonless.go:61] found valid .dockercfg at /var/run/secrets/openshift.io/pull/.dockercfg
Getting image source signatures
Copying blob sha256:bb49d719eaee0cdbf07ada03af9f30a57fba73db7f7d56b8b668a4da4d8b70c2
Copying config sha256:4ea3dbf9c5846d1ad061513dc502e229de28c6e790f27c5c230f8fd1a0ccbfc2
Writing manifest to image destination
Storing signatures
Checking for Docker config file for PULL_DOCKERCFG_PATH in path /var/run/secrets/openshift.io/pull
I1009 18:05:48.547227       1 dockerutil.go:158] Using Docker authentication configuration in '/var/run/secrets/openshift.io/pull/.dockercfg'
Using Docker config file /var/run/secrets/openshift.io/pull/.dockercfg
Building...
Forcing fresh pull of base image.
I1009 18:05:48.547598       1 daemonless.go:458] Setting authentication for registry "172.30.150.60:5000" at "172.30.150.60:5000".
I1009 18:05:48.547975       1 daemonless.go:458] Setting authentication for registry "image-registry.openshift-image-registry.svc.cluster.local:5000" at "image-registry.openshift-image-registry.svc.cluster.local:5000".
I1009 18:05:48.548622       1 daemonless.go:458] Setting authentication for registry "image-registry.openshift-image-registry.svc:5000" at "image-registry.openshift-image-registry.svc:5000".
STEP 1: FROM registry.suse.com/suse/sles12sp4:latest
Getting image source signatures
Copying blob sha256:bb49d719eaee0cdbf07ada03af9f30a57fba73db7f7d56b8b668a4da4d8b70c2
Copying config sha256:4ea3dbf9c5846d1ad061513dc502e229de28c6e790f27c5c230f8fd1a0ccbfc2
Writing manifest to image destination
Storing signatures
STEP 2: ENV "BUILD_LOGLEVEL"="5"
ebd821064f9bd76db642761f526d6a3661fd5a9eae032ed58a4586490365ac38
STEP 3: RUN ls -al /
total 12
drwxr-xr-x.   1 root root   62 Oct  9 18:05 .
drwxr-xr-x.   1 root root   62 Oct  9 18:05 ..
drwxr-xr-x.   2 root root 4096 Sep 30 13:27 bin
drwxr-xr-x.  16 root root 2960 Oct  9 18:05 dev
drwxr-xr-x.   1 root root   25 Oct  9 18:05 etc
drwxr-xr-x.   2 root root    6 Jun 27  2017 home
drwxr-xr-x.   7 root root   76 Sep 30 13:27 lib
drwxr-xr-x.   5 root root 4096 Sep 30 13:27 lib64
drwxr-xr-x.   2 root root    6 Jun 27  2017 mnt
drwxr-xr-x.   2 root root    6 Jun 27  2017 opt
dr-xr-xr-x. 524 root root    0 Oct  9 18:05 proc
drwx------.   4 root root   52 Sep 30 13:27 root
drwxr-xr-x.   1 root root   42 Oct  9 18:05 run
drwxr-xr-x.   2 root root 4096 Sep 30 13:27 sbin
drwxr-xr-x.   2 root root    6 Jun 27  2017 selinux
drwxr-xr-x.   4 root root   28 Sep 30 13:27 srv
dr-xr-xr-x.  13 root root    0 Oct  9 17:35 sys
drwxrwxrwt.   2 root root    6 Sep 30 13:27 tmp
drwxr-xr-x.  13 root root  167 Sep 30 13:27 usr
drwxr-xr-x.  11 root root  148 Sep 30 13:27 var
2a16a11b8af7cb61bb0ca05ba1519ac5c77408e47d32cc2a2df762bd9e64aec6
STEP 4: CMD ["/bin/bash","-c","sleep infinity"]
56e4a7c6b340e6f1352f93bc032e49a3c4da4f2c1a44469b21653ed143e44796
STEP 5: ENV "OPENSHIFT_BUILD_NAME"="sles-bc-1" "OPENSHIFT_BUILD_NAMESPACE"="default"
9ae792c8ef81feac32c608f8d11d3acd39faa0d091a673d84150a5ed284e635c
STEP 6: LABEL "io.openshift.build.name"="sles-bc-1" "io.openshift.build.namespace"="default"
STEP 7: COMMIT temp.builder.openshift.io/default/sles-bc-1:084c0285
ab9329f2a45a29eeebbdc7e1075ba1c4c3f6c088443dcb8e9a56960bf48eed41
ab9329f2a45a29eeebbdc7e1075ba1c4c3f6c088443dcb8e9a56960bf48eed41
Tagging local image "temp.builder.openshift.io/default/sles-bc-1:084c0285" with name "quay.io/robbmanes/sles12sp3:1.0.0".
Added name "quay.io/robbmanes/sles12sp3:1.0.0" to local image.
Removing name "temp.builder.openshift.io/default/sles-bc-1:084c0285" from local image.
I1009 18:05:53.782094       1 docker.go:147] Locating docker auth for image quay.io/robbmanes/sles12sp3:1.0.0 and type PUSH_DOCKERCFG_PATH
I1009 18:05:53.782150       1 cfg.go:80] Locating docker config paths for type PUSH_DOCKERCFG_PATH
I1009 18:05:53.782169       1 cfg.go:80] Getting docker config in paths : [/var/run/secrets/openshift.io/push]
I1009 18:05:53.782182       1 config.go:137] looking for config.json at /var/run/secrets/openshift.io/push/config.json
I1009 18:05:53.782423       1 docker.go:147] Using robbmanes user for Docker authentication for image quay.io/robbmanes/sles12sp3:1.0.0
I1009 18:05:53.782444       1 builder.go:318] Authenticating Docker push with user "robbmanes"

Pushing image quay.io/robbmanes/sles12sp3:1.0.0 ...
Pushing image "quay.io/robbmanes/sles12sp3:1.0.0" from local storage.
Setting authentication secret for "".
Getting image source signatures
Copying blob sha256:bb49d719eaee0cdbf07ada03af9f30a57fba73db7f7d56b8b668a4da4d8b70c2
Copying blob sha256:3710c4862a5425e82244ff28dd5cfa0f3bd69626d5f1b9ce3c43c5bfcd4e2c20
Successfully pushed quay.io/robbmanes/sles12sp3:1.0.0
Warning: Push failed, retrying in 5s ...
Pushing image "quay.io/robbmanes/sles12sp3:1.0.0" from local storage.
Setting authentication secret for "".
Getting image source signatures
Copying blob sha256:bb49d719eaee0cdbf07ada03af9f30a57fba73db7f7d56b8b668a4da4d8b70c2
Copying blob sha256:3710c4862a5425e82244ff28dd5cfa0f3bd69626d5f1b9ce3c43c5bfcd4e2c20
Copying config sha256:ab9329f2a45a29eeebbdc7e1075ba1c4c3f6c088443dcb8e9a56960bf48eed41
Writing manifest to image destination
Writing manifest to image destination
Storing signatures
Successfully pushed quay.io/robbmanes/sles12sp3@sha256:525192e1ecdf9cce4706cfeacb7e538609de524171c201e61d392488faec585e
Push successful

However, if I run my newly built image in the same cluster, /dev is now set to drwx------, unlike when running the registry.suse.com/suse/sles12sp4:latest image directly.

$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: sles-test
spec:
  containers:
    - name: sles-test
      image: quay.io/robbmanes/sles12sp3:1.0.0

$ oc create -f pod.yaml
pod/sles-test created

$ oc get pods
NAME              READY     STATUS      RESTARTS   AGE
sles-bc-1-build   0/1       Completed   0          3m47s
sles-test         1/1       Running     0          8s

$ oc exec sles-test -- ls -al /
total 12
drwxr-xr-x.   1 root root    6 Oct  9 18:09 .
drwxr-xr-x.   1 root root    6 Oct  9 18:09 ..
drwxr-xr-x.   2 root root 4096 Sep 30 13:27 bin
drwx------.   5 root root  360 Oct  9 18:09 dev
drwxr-xr-x.   1 root root   25 Oct  9 18:05 etc
drwxr-xr-x.   2 root root    6 Jun 27  2017 home
drwxr-xr-x.   7 root root   76 Sep 30 13:27 lib
drwxr-xr-x.   5 root root 4096 Sep 30 13:27 lib64
drwxr-xr-x.   2 root root    6 Jun 27  2017 mnt
drwxr-xr-x.   2 root root    6 Jun 27  2017 opt
dr-xr-xr-x. 287 root root    0 Oct  9 18:09 proc
drwx------.   4 root root   52 Sep 30 13:27 root
drwxr-xr-x.   1 root root   42 Oct  9 18:05 run
drwxr-xr-x.   2 root root 4096 Sep 30 13:27 sbin
drwxr-xr-x.   2 root root    6 Jun 27  2017 selinux
drwxr-xr-x.   4 root root   28 Sep 30 13:27 srv
dr-xr-xr-x.  13 root root    0 Oct  9 17:35 sys
drwxrwxrwt.   2 root root    6 Sep 30 13:27 tmp
drwxr-xr-x.  13 root root  167 Sep 30 13:27 usr
drwxr-xr-x.  11 root root  148 Sep 30 13:27 var

I have tested this myself on OCP 4.3.29 and OCP 4.5.14, in brand-new cluster environments.

This does not happen with UBI8 images.  Specifically, I have found that if the original container image does not contain a /dev directory (even an empty one), this appears to be the result, but I have not been able to test all conditions.
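
One way to confirm that the restrictive /dev is baked into the committed layers themselves, and is not a runtime artifact, is to copy the rebuilt image back down and list its layer blobs (blob names and output here are illustrative):

$ skopeo copy docker://quay.io/robbmanes/sles12sp3:1.0.0 dir:///tmp/rebuilt
$ cd /tmp/rebuilt
$ for blob in [0-9a-f]*; do tar tvf "$blob" 2>/dev/null | grep ' dev/$'; done
drwx------ root/root         0 2020-10-09 18:05 dev/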

Comment 8 Robb Manes 2020-10-09 18:38:07 UTC
I should also note that plain podman and buildah do not have this issue with the rebuilt image:

$ podman run --rm quay.io/robbmanes/sles12sp3:1.0.0 ls -al /
total 12
drwxr-xr-x.  20 root   root      28 Oct  9 18:24 .
drwxr-xr-x.   2 root   root    4096 Sep 30 13:27 bin
drwxr-xr-x.   5 root   root     340 Oct  9 18:24 dev
drwxr-xr-x.  43 root   root      54 Oct  9 18:24 etc
drwxr-xr-x.   2 root   root       6 Jun 27  2017 home
drwxr-xr-x.   7 root   root      76 Sep 30 13:27 lib
drwxr-xr-x.   5 root   root    4096 Sep 30 13:27 lib64
drwxr-xr-x.   2 root   root       6 Jun 27  2017 mnt
drwxr-xr-x.   2 root   root       6 Jun 27  2017 opt
dr-xr-xr-x. 404 nobody nogroup    0 Oct  9 18:24 proc
drwx------.   4 root   root      52 Sep 30 13:27 root
drwxr-xr-x.   5 root   root      27 Oct  9 18:24 run
drwxr-xr-x.   2 root   root    4096 Sep 30 13:27 sbin
drwxr-xr-x.   2 root   root       6 Jun 27  2017 selinux
drwxr-xr-x.   4 root   root      28 Sep 30 13:27 srv
dr-xr-xr-x.  13 nobody nogroup    0 Oct  8 16:18 sys
drwxrwxrwt.   2 root   root       6 Sep 30 13:27 tmp
drwxr-xr-x.  13 root   root     167 Sep 30 13:27 usr
drwxr-xr-x.  11 root   root     148 Sep 30 13:27 var

And if I build the image by hand with buildah, I similarly have no such issues:

$ buildah bud -t quay.io/robbmanes/sles12sp3:1.0.0 .
STEP 1: FROM registry.suse.com/suse/sles12sp4:latest
Getting image source signatures
Copying blob bb49d719eaee [--------------------------------------] 0.0b / 0.0b
Copying config 4ea3dbf9c5 done
Writing manifest to image destination
Storing signatures
STEP 2: RUN ls -al /
total 12
drwxr-xr-x.  20 root   root      62 Oct  9 18:26 .
drwxr-xr-x.   2 root   root    4096 Sep 30 13:27 bin
drwxr-xr-x.   5 root   root     360 Oct  9 18:26 dev
drwxr-xr-x.  43 root   root      38 Oct  9 18:26 etc
drwxr-xr-x.   2 root   root       6 Jun 27  2017 home
drwxr-xr-x.   7 root   root      76 Sep 30 13:27 lib
drwxr-xr-x.   5 root   root    4096 Sep 30 13:27 lib64
drwxr-xr-x.   2 root   root       6 Jun 27  2017 mnt
drwxr-xr-x.   2 root   root       6 Jun 27  2017 opt
dr-xr-xr-x. 406 nobody nogroup    0 Oct  9 18:26 proc
drwx------.   4 root   root      52 Sep 30 13:27 root
drwxr-xr-x.   5 root   root      42 Oct  9 18:26 run
drwxr-xr-x.   2 root   root    4096 Sep 30 13:27 sbin
drwxr-xr-x.   2 root   root       6 Jun 27  2017 selinux
drwxr-xr-x.   4 root   root      28 Sep 30 13:27 srv
dr-xr-xr-x.  13 nobody nogroup    0 Oct  8 16:18 sys
drwxrwxrwt.   2 root   root       6 Sep 30 13:27 tmp
drwxr-xr-x.  13 root   root     167 Sep 30 13:27 usr
drwxr-xr-x.  11 root   root     148 Sep 30 13:27 var
STEP 3: CMD [ "/bin/bash", "-c", "sleep infinity" ]
STEP 4: COMMIT quay.io/robbmanes/sles12sp3:1.0.0
Getting image source signatures
Copying blob c682d6f644e6 skipped: already exists
Copying blob 2404a2b42e44 done
Copying config cff48723cd done
Writing manifest to image destination
Storing signatures
--> cff48723cd1
cff48723cd1d876492ebaa4c99c1474349b3e1c3602da4efa7ae3beb827bc6d8

$ buildah push quay.io/robbmanes/sles12sp3:1.0.0
Getting image source signatures
Copying blob 2404a2b42e44 done
Copying blob c682d6f644e6 done
Copying config cff48723cd done
Writing manifest to image destination
Copying config cff48723cd [--------------------------------------] 0.0b / 2.2KiB
Writing manifest to image destination
Storing signatures

$ podman rmi quay.io/robbmanes/sles12sp3:1.0.0
Untagged: quay.io/robbmanes/sles12sp3:1.0.0
Deleted: cff48723cd1d876492ebaa4c99c1474349b3e1c3602da4efa7ae3beb827bc6d8

$ podman run quay.io/robbmanes/sles12sp3:1.0.0 ls -al /
- - - - 8< - - - -
total 12
drwxr-xr-x.  20 root   root      28 Oct  9 18:30 .
drwxr-xr-x.   2 root   root    4096 Sep 30 13:27 bin
drwxr-xr-x.   5 root   root     340 Oct  9 18:30 dev
drwxr-xr-x.  43 root   root      54 Oct  9 18:30 etc
drwxr-xr-x.   2 root   root       6 Jun 27  2017 home
drwxr-xr-x.   7 root   root      76 Sep 30 13:27 lib
drwxr-xr-x.   5 root   root    4096 Sep 30 13:27 lib64
drwxr-xr-x.   2 root   root       6 Jun 27  2017 mnt
drwxr-xr-x.   2 root   root       6 Jun 27  2017 opt
dr-xr-xr-x. 402 nobody nogroup    0 Oct  9 18:30 proc
drwx------.   4 root   root      52 Sep 30 13:27 root
drwxr-xr-x.   5 root   root      27 Oct  9 18:30 run
drwxr-xr-x.   2 root   root    4096 Sep 30 13:27 sbin
drwxr-xr-x.   2 root   root       6 Jun 27  2017 selinux
drwxr-xr-x.   4 root   root      28 Sep 30 13:27 srv
dr-xr-xr-x.  13 nobody nogroup    0 Oct  8 16:18 sys
drwxrwxrwt.   2 root   root       6 Sep 30 13:27 tmp
drwxr-xr-x.  13 root   root     167 Sep 30 13:27 usr
drwxr-xr-x.  11 root   root     148 Sep 30 13:27 var

As expected, if I run this build in the buildah container, I similarly see no issues:

$ podman run -it --rm --name buildah registry.redhat.io/rhel8/buildah bash

# buildah bud -t quay.io/robbmanes/sles12sp3:1.0.0 .
WARN[0000] Can not find crun package on the host, containers might fail to run on cgroup V2 systems without crun: "exec: \"crun\": executable file not found in $PATH"
STEP 1: FROM registry.suse.com/suse/sles12sp4:latest
STEP 2: CMD [ "/bin/bash", "-c", "sleep infinity" ]
STEP 3: COMMIT quay.io/robbmanes/sles12sp3:1.0.0
Getting image source signatures
Copying blob c682d6f644e6 skipped: already exists
Copying blob 5f70bf18a086 done
Copying config 2055e51fd3 done
Writing manifest to image destination
Storing signatures
--> 2055e51fd3c
2055e51fd3c5fdc3ae92bfde6b1c7f807ee51578cc50a7d4bd5da81f29556871

# buildah login quay.io

# buildah push quay.io/robbmanes/sles12sp3:1.0.0

# exit

$ podman run quay.io/robbmanes/sles12sp3:1.0.0 ls -al /
total 12
drwxr-xr-x.  20 root   root      28 Oct  9 18:37 .
drwxr-xr-x.   2 root   root    4096 Sep 30 13:27 bin
drwxr-xr-x.   5 root   root     340 Oct  9 18:37 dev
drwxr-xr-x.  43 root   root      54 Oct  9 18:37 etc
drwxr-xr-x.   2 root   root       6 Jun 27  2017 home
drwxr-xr-x.   7 root   root      76 Sep 30 13:27 lib
drwxr-xr-x.   5 root   root    4096 Sep 30 13:27 lib64
drwxr-xr-x.   2 root   root       6 Jun 27  2017 mnt
drwxr-xr-x.   2 root   root       6 Jun 27  2017 opt
dr-xr-xr-x. 403 nobody nogroup    0 Oct  9 18:37 proc
drwx------.   4 root   root      52 Sep 30 13:27 root
drwxr-xr-x.   5 root   root      27 Oct  9 18:37 run
drwxr-xr-x.   2 root   root    4096 Sep 30 13:27 sbin
drwxr-xr-x.   2 root   root       6 Jun 27  2017 selinux
drwxr-xr-x.   4 root   root      28 Sep 30 13:27 srv
dr-xr-xr-x.  13 nobody nogroup    0 Oct  8 16:18 sys
drwxrwxrwt.   2 root   root       6 Sep 30 13:27 tmp
drwxr-xr-x.  13 root   root     167 Sep 30 13:27 usr
drwxr-xr-x.  11 root   root     148 Sep 30 13:27 var

So this appears to only relate to executing a build within OpenShift.

Comment 9 Robb Manes 2020-10-09 18:39:19 UTC
Reassigning to OpenShift/Build as per #7 and #8

Comment 11 Adam Kaplan 2020-10-14 22:21:11 UTC
Created attachment 1721627 [details]
buildConfig for reproducer

Reproducing BuildConfig

Comment 12 Adam Kaplan 2020-10-14 22:21:54 UTC
Created attachment 1721628 [details]
pod definition for reproducer

Pod spec for reproducer (based on built image)

Comment 13 Adam Kaplan 2020-10-14 22:22:52 UTC
Created attachment 1721629 [details]
[4.5] pod logs

Logs running the resulting pod on 4.5

Comment 14 Adam Kaplan 2020-10-14 22:23:34 UTC
Created attachment 1721630 [details]
[4.6] pod logs

Comment 15 Adam Kaplan 2020-10-14 22:28:57 UTC
There was a bug in buildah where it created directories with permission 0700 for bind mount targets if those directories were not present in the base image [1]. This was fixed in buildah v1.16.4, which will be used to drive builds in OpenShift 4.6.0.
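
In other words, the mode used when the missing mount target is created is the mode recorded in the committed layer. A minimal sketch of the before/after behavior (not buildah's actual code):

$ rootfs=$(mktemp -d)
$ mkdir -m 0700 "$rootfs/dev"   # buildah < v1.16.4: drwx------, unusable by non-root users
$ chmod 0755 "$rootfs/dev"      # buildah >= v1.16.4: drwxr-xr-x, readable and traversable by all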

It appears that when cri-o runs the container in the pod, it preserves the directory permission settings when it mounts in /dev. You can see this by comparing the pod logs from the built image on 4.5 [2] vs. 4.6 [3].

[1] https://github.com/containers/buildah/pull/2651
[2] https://bugzilla.redhat.com/attachment.cgi?id=1721629
[3] https://bugzilla.redhat.com/attachment.cgi?id=1721630

Comment 17 wewang 2020-10-15 09:14:38 UTC
Verified in version: 4.6.0-rc.4

Steps:
1. Use the following BuildConfig to create a build:
```
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: sles-build
spec:
  resources:
    limits:
      cpu: "1"
      memory: 1Gi
  runPolicy: Serial
  source:
    dockerfile: |-
      FROM registry.suse.com/suse/sles12sp4:latest

      RUN ls -al /

      CMD [ "/bin/bash", "-c", "sleep infinity" ]
    type: Dockerfile
  output:
    to:
      kind: ImageStreamTag
      name: sles-build:latest
  strategy:
    dockerStrategy:
      env:
      - name: BUILD_LOGLEVEL
        value: "5"
    type: Docker
```
2. Check the build log; the permission of /dev is now drwxr-xr-x.

[wewang@wangwen work]$ oc get builds
NAME           TYPE     FROM         STATUS    STARTED         DURATION
sles-build-1   Docker   Dockerfile   Running   5 seconds ago   
[wewang@wangwen work]$ oc logs -f build/sles-build-1
9.112:5000" at "172.30.19.112:5000".
STEP 1: FROM registry.suse.com/suse/sles12sp4:latest
STEP 2: ENV "BUILD_LOGLEVEL"="5"
--> 689984a15a8
STEP 3: RUN ls -al /
total 12
drwxr-xr-x.   1 root root   62 Oct 15 09:10 .
drwxr-xr-x.   1 root root   62 Oct 15 09:10 ..
drwxr-xr-x.   2 root root 4096 Oct 14 20:39 bin
drwxr-xr-x.  15 root root 2900 Oct 15 09:10 dev
drwxr-xr-x.   1 root root   36 Oct 15 09:10 etc
drwxr-xr-x.   2 root root    6 Jun 27  2017 home
drwxr-xr-x.   7 root root   76 Oct 14 20:39 lib
drwxr-xr-x.   5 root root 4096 Oct 14 20:39 lib64
drwxr-xr-x.   2 root root    6 Jun 27  2017 mnt
drwxr-xr-x.   2 root root    6 Jun 27  2017 opt
dr-xr-xr-x. 503 root root    0 Oct 15 09:10 proc
drwx------.   4 root root   52 Oct 14 20:39 root
drwxr-xr-x.   1 root root   42 Oct 15 09:10 run
drwxr-xr-x.   2 root root 4096 Oct 14 20:39 sbin
drwxr-xr-x.   2 root root    6 Jun 27  2017 selinux
drwxr-xr-x.   4 root root   28 Oct 14 20:39 srv
dr-xr-xr-x.  13 root root    0 Oct 14 08:28 sys
drwxrwxrwt.   2 root root    6 Oct 14 20:39 tmp
drwxr-xr-x.  13 root root  167 Oct 14 20:39 usr
drwxr-xr-x.  11 root root  148 Oct 14 20:39 var
--> 959a9aed9d1
STEP 4: CMD ["/bin/bash","-c","sleep infinity"]
--> b32943474f8
STEP 5: ENV "OPENSHIFT_BUILD_NAME"="sles-build-1" "OPENSHIFT_BUILD_NAMESPACE"="wewang"
--> 4b08a45d569
STEP 6: LABEL "io.openshift.build.name"="sles-build-1" "io.openshift.build.namespace"="wewang"
STEP 7: COMMIT temp.builder.openshift.io/wewang/sles-build-1:4244f112
--> 00f798810d6
00f798810d67ae59fdea1be9e4ab0dc78c1b4b4493a4cf6c48c855e3861d9866
Tagging local image "temp.builder.openshift.io/wewang/sles-build-1:4244f112" with name "image-registry.openshift-image-registry.svc:5000/wewang/sles-build:latest".
Added name "image-registry.openshift-image-registry.svc:5000/wewang/sles-build:latest" to local image.
Removing name "temp.builder.openshift.io/wewang/sles-build-1:4244f112" from local image.
I1015 09:10:37.596406       1 docker.go:149] Locating docker auth for image image-registry.openshift-image-registry.svc:5000/wewang/sles-build:latest and type PUSH_DOCKERCFG_PATH
I1015 09:10:37.596460       1 cfg.go:68] Locating docker config paths for type PUSH_DOCKERCFG_PATH
I1015 09:10:37.596481       1 cfg.go:68] Getting docker config in paths : [/var/run/secrets/openshift.io/push]
I1015 09:10:37.596492       1 config.go:140] looking for config.json at /var/run/secrets/openshift.io/push/config.json
I1015 09:10:37.596539       1 config.go:110] looking for .dockercfg at /var/run/secrets/openshift.io/push/.dockercfg
I1015 09:10:37.596785       1 config.go:121] found .dockercfg at /var/run/secrets/openshift.io/push/.dockercfg
I1015 09:10:37.596856       1 docker.go:149] Using serviceaccount user for Docker authentication for image image-registry.openshift-image-registry.svc:5000/wewang/sles-build:latest
I1015 09:10:37.596866       1 builder.go:346] Authenticating Docker push with user "serviceaccount"

Pushing image image-registry.openshift-image-registry.svc:5000/wewang/sles-build:latest ...
Pushing image "image-registry.openshift-image-registry.svc:5000/wewang/sles-build:latest" from local storage.
Setting authentication secret for "".
Getting image source signatures
Copying blob sha256:ccc34666d0b6bf74442dbf0dc9424d94822623005b721e92a2df03bcf61c0f92
Copying blob sha256:8e7647b75465b11bf688f84911cd4ea441f28300289097a41430debb37ba30e3
Copying config sha256:00f798810d67ae59fdea1be9e4ab0dc78c1b4b4493a4cf6c48c855e3861d9866
Writing manifest to image destination
Storing signatures
Successfully pushed image-registry.openshift-image-registry.svc:5000/wewang/sles-build@sha256:a04087d08afb713ef50a7035860dbf619d2bc9a09eebcfb6f729637eb5214194
Push successful

Comment 18 wewang 2020-10-15 11:01:21 UTC
[wewang@wangwen work]$ oc create -f pod.yaml 
pod/sles-test created
[wewang@wangwen work]$ oc get pods
NAME                 READY   STATUS      RESTARTS   AGE
sles-build-1-build   0/1     Completed   0          2m48s
sles-test            1/1     Running     0          15s
[wewang@wangwen work]$ oc exec sles-test -- ls -al /

total 12
drwxr-xr-x.   1 root root   18 Oct 15 10:53 .
drwxr-xr-x.   1 root root   18 Oct 15 10:53 ..
drwxr-xr-x.   2 root root 4096 Sep 30 13:27 bin
drwxr-xr-t.   5 root root  360 Oct 15 10:53 dev   #permission is drwxr-xr-t. 
drwxr-xr-x.   1 root root   25 Oct  9 18:26 etc
drwxr-xr-x.   2 root root    6 Jun 27  2017 home
drwxr-xr-x.   7 root root   76 Sep 30 13:27 lib
drwxr-xr-x.   5 root root 4096 Sep 30 13:27 lib64
drwxr-xr-x.   2 root root    6 Jun 27  2017 mnt
drwxr-xr-x.   2 root root    6 Jun 27  2017 opt
dr-xr-xr-x. 189 root root    0 Oct 15 10:53 proc
drwx------.   1 root root   27 Sep 30 13:27 root
drwxr-xr-x.   1 root root   42 Oct  9 18:26 run
drwxr-xr-x.   2 root root 4096 Sep 30 13:27 sbin
drwxr-xr-x.   2 root root    6 Jun 27  2017 selinux
drwxr-xr-x.   4 root root   28 Sep 30 13:27 srv
dr-xr-xr-x.  13 root root    0 Oct 15 10:40 sys
drwxrwxrwt.   2 root root    6 Sep 30 13:27 tmp
drwxr-xr-x.  13 root root  167 Sep 30 13:27 usr
drwxr-xr-x.  11 root root  148 Sep 30 13:27 var

Comment 21 errata-xmlrpc 2020-10-27 16:44:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

