Created attachment 1768732 [details]
Build log

Description of problem:
If I run a custom 389-ds container based on a Fedora 33 image with podman in rootful mode, it works, but dscontainer fails if I reuse the data volume with another instance of the container. In addition, I tried to run containers from a custom image based on quay.io/centos/centos:stream8 and from the image docker.io/389ds/dirsrv:latest provided by one of the 389ds developers, and I got the same result both times.

I believe this is a python3-lib389 issue, because /usr/libexec/dirsrv/dscontainer relabels files inside the container (see bug #1945968). I also reported this bug at 389ds GitHub Issues:
https://github.com/389ds/389-ds-base/issues/4717

Version-Release number of selected component (if applicable):
389-ds-base-1.4.4.14-1.fc33.x86_64
python3-lib389-1.4.4.14-1.fc33.x86_64

How reproducible:
always

Steps to Reproduce:
1. Create a 389-ds container image with buildah from quay.io/fedora/fedora:33-x86_64
2. Run the first instance of the 389-ds container with podman
3. Check that the container is running, then stop and delete it
4. Run the second instance of the 389-ds container with podman

Actual results:
# podman run -dt --volume=389ds:/data:z --publish=3389:3389 --publish=3636:3636 --env='DS_DM_PASSWORD=pass' --env='LDAPBASE=dc=test,dc=internal' --env='LDAPBINDDN=cn="Directory Manager"' --health-cmd="/usr/libexec/dirsrv/dscontainer -H" --health-interval=5s --health-retries=2 --health-start-period=5m --health-timeout=5s localhost/fedora33-389-ds:1.4
33c792907e63096779b7d4138b9cdec2a291bbc0d6a014f4eac5857e043a428f

# podman ps -a
CONTAINER ID  IMAGE                          COMMAND               CREATED         STATUS            PORTS                                           NAMES
33c792907e63  localhost/fedora33-389-ds:1.4  /usr/libexec/dirs...  11 seconds ago  Up 9 seconds ago  0.0.0.0:3389->3389/tcp, 0.0.0.0:3636->3636/tcp  jovial_burnell

# podman stop jovial_burnell && podman rm jovial_burnell
33c792907e63096779b7d4138b9cdec2a291bbc0d6a014f4eac5857e043a428f
33c792907e63096779b7d4138b9cdec2a291bbc0d6a014f4eac5857e043a428f

# podman run -dt --volume=389ds:/data:z --publish=3389:3389 --publish=3636:3636 --env='DS_DM_PASSWORD=pass' --env='LDAPBASE=dc=test,dc=internal' --env='LDAPBINDDN=cn="Directory Manager"' --health-cmd="/usr/libexec/dirsrv/dscontainer -H" --health-interval=5s --health-retries=2 --health-start-period=5m --health-timeout=5s localhost/fedora33-389-ds:1.4
38d9b1b2c31a759ebc45d64ed1c57dd0f86529630f43ffe225f8a7649f82ef52

# podman ps -a
CONTAINER ID  IMAGE                          COMMAND               CREATED         STATUS                    PORTS                                           NAMES
38d9b1b2c31a  localhost/fedora33-389-ds:1.4  /usr/libexec/dirs...  10 seconds ago  Exited (1) 5 seconds ago  0.0.0.0:3389->3389/tcp, 0.0.0.0:3636->3636/tcp  condescending_curran

# podman logs condescending_curran
/usr/libexec/dirsrv/dscontainer:435: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if begin_healthcheck(None) is (False, True):
INFO: The 389 Directory Server Container Bootstrap
INFO: Inspired by works of: ITS, The University of Adelaide
INFO: 389 Directory Server Version: 1.4.4.14
INFO: Checking for PEM TLS files ...
INFO: Have /data/tls/server.key -> False
INFO: Have /data/tls/server.crt -> False
INFO: Have /data/tls/ca -> False
INFO: Have /data/config/pwdfile.txt -> True
INFO: Unable to configure TLS from PEM, missing a required file.
INFO: Starting 389-ds-container ...
DEBUG: Allocate local instance <class 'lib389.DirSrv'> with None
DEBUG: open(): Connecting to uri ldapi://%2Fdata%2Frun%2Fslapd-localhost.socket
DEBUG: Using dirsrv ca certificate /etc/dirsrv/slapd-localhost
DEBUG: Using external ca certificate /etc/dirsrv/slapd-localhost
DEBUG: Using external ca certificate /etc/dirsrv/slapd-localhost
DEBUG: Using /etc/openldap/ldap.conf certificate policy
DEBUG: ldap.OPT_X_TLS_REQUIRE_CERT = 2
DEBUG: open(): Using root autobind ...
DEBUG: Instance LDAPI not functional (yet?)
WARNING: ns-slapd pid has completed, you should check the error log ...
ERROR: 389-ds-container failed to start
INFO: STOPPING: Sent SIGTERM to ns-slapd ...
INFO: STOPPING: Shutting down 389-ds-container ...
INFO: STOPPED: Shut down 389-ds-container

# ausearch -m avc -c ns-slapd
----
time->Sat Apr 3 07:52:07 2021
type=AVC msg=audit(1617425527.770:2950): avc: denied { read } for pid=153670 comm="ns-slapd" name="slapd-collations.conf" dev="dm-1" ino=34999606 scontext=system_u:system_r:container_t:s0:c70,c662 tcontext=system_u:object_r:container_file_t:s0:c10,c529 tclass=file permissive=0
----
time->Sat Apr 3 07:52:07 2021
type=AVC msg=audit(1617425527.782:2951): avc: denied { write } for pid=153670 comm="ns-slapd" name="99user.ldif" dev="dm-1" ino=18571377 scontext=system_u:system_r:container_t:s0:c70,c662 tcontext=system_u:object_r:container_file_t:s0:c10,c529 tclass=file permissive=0
----
time->Sat Apr 3 07:52:07 2021
type=AVC msg=audit(1617425527.851:2952): avc: denied { read } for pid=153670 comm="ns-slapd" name="99user.ldif" dev="dm-1" ino=18571377 scontext=system_u:system_r:container_t:s0:c70,c662 tcontext=system_u:object_r:container_file_t:s0:c10,c529 tclass=file permissive=0

# ls -lZ /var/lib/containers/storage/volumes/389ds/_data/config/
total 324
lrwxrwxrwx. 1 root root system_u:object_r:container_file_t:s0              6 Apr  3 07:49 18a51399.0 -> ca.crt
lrwxrwxrwx. 1 root root system_u:object_r:container_file_t:s0             15 Apr  3 07:49 5f0cb753.0 -> Server-Cert.crt
-rw-rw----. 1 root root system_u:object_r:container_file_t:s0           2170 Apr  3 07:49 Self-Signed-CA.pem
-rw-rw----. 1 root root system_u:object_r:container_file_t:s0           3388 Apr  3 07:49 Server-Cert-Key.pem
-rw-rw----. 1 root root system_u:object_r:container_file_t:s0           2095 Apr  3 07:49 Server-Cert.crt
-rw-rw----. 1 root root system_u:object_r:container_file_t:s0           2099 Apr  3 07:49 Server-Cert.csr
-rw-rw----. 1 root root system_u:object_r:container_file_t:s0           2343 Apr  3 07:49 Server-Cert.pem
-rw-rw----. 1 root root system_u:object_r:container_file_t:s0           1959 Apr  3 07:49 ca.crt
-rw-------. 1 root root system_u:object_r:container_file_t:s0          36864 Apr  3 07:49 cert9.db
-r--r-----. 1 root root system_u:object_r:container_file_t:s0:c10,c529  1676 Mar 19 16:43 certmap.conf
-rwxr-xr-x. 1 root root system_u:object_r:container_file_t:s0            269 Apr  3 07:49 container.inf
-rw-------. 1 root root system_u:object_r:container_file_t:s0          58547 Apr  3 07:51 dse.ldif
-rw-------. 1 root root system_u:object_r:container_file_t:s0          58547 Apr  3 07:49 dse.ldif.bak
-rw-------. 1 root root system_u:object_r:container_file_t:s0          58547 Apr  3 07:49 dse.ldif.startOK
-rw-------. 1 root root system_u:object_r:container_file_t:s0          45056 Apr  3 07:49 key4.db
-rw-------. 1 root root system_u:object_r:container_file_t:s0            257 Apr  3 07:49 noise.txt
-rw-------. 1 root root system_u:object_r:container_file_t:s0             91 Apr  3 07:49 pin.txt
-rw-------. 1 root root system_u:object_r:container_file_t:s0            560 Apr  3 07:49 pkcs11.txt
-rw-------. 1 root root system_u:object_r:container_file_t:s0             65 Apr  3 07:49 pwdfile.txt
drwxrwx---. 2 root root system_u:object_r:container_file_t:s0             25 Apr  3 07:49 schema
-r--r-----. 1 root root system_u:object_r:container_file_t:s0:c10,c529 15142 Mar 19 16:43 slapd-collations.conf

# ls -lZ /var/lib/containers/storage/volumes/389ds/_data/config/schema/
total 4
-rw-r--r--. 1 root root system_u:object_r:container_file_t:s0:c10,c529 291 Mar 19 16:43 99user.ldif

Expected results:
Successful relabeling of all files on the reused volume, including the ones created by dscontainer.

Additional info:
$ cat fedora33-389-ds.sh
#!/usr/bin/env bash
# See also https://build.opensuse.org/package/view_file/home:firstyear/389-ds-container/Dockerfile
set -x
image=$(buildah from quay.io/fedora/fedora:33-x86_64)
buildah run "$image" -- dnf -y install --setopt=install_weak_deps=False \
    389-ds-base python3-lib389
buildah run "$image" -- dnf -y clean all
buildah run "$image" -- mkdir -p /data/{config,ssca,run} /var/run/dirsrv
buildah run "$image" -- ln -s /data/config /etc/dirsrv/slapd-localhost
buildah run "$image" -- ln -s /data/ssca /etc/dirsrv/ssca
buildah run "$image" -- ln -s /data/run /var/run/dirsrv
buildah config --volume /data --port 3389 --port 3636 \
    --cmd "/usr/libexec/dirsrv/dscontainer -r" "$image"
buildah commit "$image" "fedora33-389-ds:1.4"
buildah rm "$image"
set +x

# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      33

# podman --version
podman version 3.0.1

# uname -r
5.11.10-200.fc33.x86_64
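To spell out why the second container hits the AVC denials above: podman's ":z" volume relabel produces the shared label "...:s0", which any container's MCS category pair dominates, but the files that keep the first container's private pair (s0:c10,c529) are unreadable to ns-slapd running at s0:c70,c662. The following is my own rough Python model of the MCS dominance check (a simplification for illustration only, not lib389 or kernel code; the function and variable names are mine):

```python
# Sketch: why files labeled with a *previous* container's MCS categories
# are denied to the next container, while ":z"-relabeled files are not.

def categories(context):
    """Extract the MCS category set from a full SELinux context string."""
    level = context.split(":", 3)[3]          # e.g. "s0:c10,c529" or "s0"
    parts = level.split(":", 1)
    return set(parts[1].split(",")) if len(parts) > 1 else set()

def mcs_allows(process_ctx, file_ctx):
    """Simplified MCS check: the process's category set must dominate
    (be a superset of) the file's category set."""
    return categories(file_ctx) <= categories(process_ctx)

second_container = "system_u:system_r:container_t:s0:c70,c662"       # 2nd run
shared_label     = "system_u:object_r:container_file_t:s0"           # ":z" relabel
stale_label      = "system_u:object_r:container_file_t:s0:c10,c529"  # left by 1st run

print(mcs_allows(second_container, shared_label))  # True  - shared files are fine
print(mcs_allows(second_container, stale_label))   # False - 99user.ldif etc. denied
```

Under this model, relabeling the stale files back to the shared context before the second run (e.g. with chcon) would work around the failure, but the real fix is for dscontainer not to leave container-private labels on files in the data volume.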