While trying to upgrade a systemd-nspawn container from Fedora 40 to Fedora 41, I received the following error:

```
  Upgrading        : sssd-common-2.10.0-2.fc41.x86_64    495/1223
error: unpacking of archive failed on file /var/lib/sss/mc: cpio: chown failed - No data available
```

The container's /var/lib/sss/mc directory is ro-bind-mounted from the host system, similar to what is described in the last post (dated 11/05/16 14:29) here:

https://lists.fedorahosted.org/archives/list/sssd-users@lists.fedorahosted.org/message/X4WCBVUYS6H65V3Z3DC44NURL4VGGU3H/

Reproducible: Always

Steps to Reproduce:
1. dnf upgrade --releasever=41

Actual Results:

```
Failed:
  sssd-common-2.9.5-1.fc40.x86_64
  sssd-common-2.10.0-2.fc41.x86_64

Error: Transaction failed
```

Expected Results:

Transaction succeeded
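For context, the setup described above can be expressed as a `.nspawn` drop-in; this is only a sketch of the configuration implied by the report (the container name `fc41` is illustrative, the paths are from the report):

```
# /etc/systemd/nspawn/fc41.nspawn  -- container name is illustrative
[Files]
# Share the host's SSSD client sockets and fast memory cache read-only
BindReadOnly=/var/lib/sss/pipes
BindReadOnly=/var/lib/sss/mc
```

With a read-only bind mount, any chown attempted inside the container (as rpm does here via cpio) will fail, which matches the reported error.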
> The container's /var/lib/sss/mc directory is ro-bind-mounted from the host system

How is it expected to work, then, if SSSD needs to write to files in this directory?
Ah, do you use only SSSD client libs from within a container?
Was the host system upgraded first?

There are no explicit chown-s of '%{mcpath}' in the spec file. There is:
```
%attr(775,%{sssd_user},%{sssd_user}) %dir %{mcpath}
```
I think that if the host system had already installed sssd-2.10 and the ownership had already been updated, then the upgrade within the container probably would not fail.
> I think if host system already installed sssd-2.10 and ownership was already updated, then upgrade withing container probably will not fail.

That would be my normal upgrade path, but I was unable to update the host system to Fedora 41 this time around (due to BZ#2333179).

I guess it would be reasonable to say that the container cannot run a newer version of the SSSD client than the host when they are linked like this?
> I guess it would be reasonable to say that the container cannot run a newer version of the SSSD client than the host when they are linked like this?

In general, mixing different versions of SSSD and the sss_client libs is unsupported.
I consider that answer satisfactory. Thanks. 🙂
> I think if host system already installed sssd-2.10 and ownership was already updated, then upgrade withing container probably will not fail.

I was able to work around the other bug that was preventing me from updating the host server. However, the situation is not resolved when updating the container. It turns out that the sssd user has different UIDs on the host system versus in the container. Consequently, SSSD is not working in the container when /var/lib/sss/mc and /var/lib/sss/pipes are bind-mounted from the host.

```
[root@container ~]# mount | grep sss
root/0 on /var/lib/sss/mc type zfs (ro,relatime,seclabel,xattr,noacl,casesensitive)
root/0 on /var/lib/sss/pipes type zfs (ro,relatime,seclabel,xattr,noacl,casesensitive)
[root@container ~]# ls -ld /var/lib/sss/{mc,pipes}
drwxrwxr-x. 2 unbound radvd 6 Dec 24 12:20 /var/lib/sss/mc
drwxrwxr-x. 3 unbound radvd 7 Dec 24 12:20 /var/lib/sss/pipes
[root@container ~]# getent passwd sssd
sssd:x:942:395:User for sssd:/run/sssd:/sbin/nologin

[root@host ~]# getent passwd sssd
sssd:x:986:982:User for sssd:/run/sssd/:/sbin/nologin
```

I think it is supposed to be possible to bind-mount the /var/lib/sss/pipes and /var/lib/sss/mc directories from the host to the container and run just the SSSD client in the container.
> Consequently, SSSD is not working in the container when /var/lib/sss/mc and /var/lib/sss/pipes are bind-mounted from the host.

What exactly is broken? IIRC, '/var/lib/sss/pipes' ownership is checked only for PAM requests.

Maybe a custom '--uidmap' while running the container can help.
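For an nspawn container specifically, the closest equivalent to a '--uidmap' would be user namespacing plus ID-mapped bind mounts. A rough, untested sketch, assuming a systemd recent enough to support the `idmap` bind-mount option (the container name `fc41` is illustrative):

```
# /etc/systemd/nspawn/fc41.nspawn  -- container name is illustrative
[Exec]
# Run the container in its own user namespace with an auto-picked range
PrivateUsers=pick

[Files]
# "idmap" remaps file ownership on the mount into the container's UID range,
# so the host's sssd-owned files appear with container-side IDs
Bind=/var/lib/sss/pipes:/var/lib/sss/pipes:idmap
Bind=/var/lib/sss/mc:/var/lib/sss/mc:idmap
```

Note this remaps the whole mount into the container's range; it does not by itself make the host's `sssd` UID coincide with the container's `sssd` UID.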
> What exactly is broken?

With "account required pam_sss.so" in /etc/pam.d/password-auth, I would get the following when trying to ssh into the container:

```
sshd[167993]: fatal: Access denied for user slartibartfast by PAM account configuration [preauth]
```

Changing that line in the PAM stack to "account required pam_sss.so ignore_authinfo_unavail" would resolve the error (as would manually changing the UID and GID mappings around so that the sssd user has the same IDs in the container as on the host).

IMHO, SSSD should get one of the reserved UIDs in the 0-200 range so that it is the same across different installations, and bind mounts between them would then have correct ownership. 42 would be a good UID/GID for SSSD. :-)
(In reply to Gregory Lee Bartholomew from comment #9)

> IMHO, SSSD should get one of the reserved UIDs in the 0-200 range so it will
> be the same across different installations and bind mounts between them
> would have correct ownership.

This is a long-standing idea: https://pagure.io/packaging-committee/issue/570

But note that this wouldn't help if the host and the container run different OSes.
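As a local stopgap, a fixed UID/GID can be pinned on a given machine with a systemd-sysusers drop-in, provided it is in place before the sssd user is first created on both host and container. A hypothetical fragment (the file name and UID 242 are arbitrary examples, not an allocated Fedora soft-static ID):

```
# /etc/sysusers.d/sssd-fixed-uid.conf  -- hypothetical local override
# u  <name>  <uid>  "<GECOS>"        <home>
u    sssd    242    "User for sssd"  /run/sssd
```

Applying the same drop-in on host and container keeps the IDs aligned on those two systems, but, as noted above, it is not a general fix across different OSes.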