Bug 1676946 - sssd.service fails to start with status=3/NOTIMPLEMENTED
Summary: sssd.service fails to start with status=3/NOTIMPLEMENTED
Keywords:
Status: CLOSED RAWHIDE
Alias: None
Product: Fedora
Classification: Fedora
Component: sssd
Version: rawhide
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Jakub Hrozek
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard: openqa
Duplicates: 1677095 1677163
Depends On:
Blocks:
 
Reported: 2019-02-13 16:17 UTC by Jan Pazdziora (Red Hat)
Modified: 2024-05-27 00:21 UTC (History)
CC List: 12 users

Fixed In Version: sssd-2.0.0-8.fc30
Clone Of:
Environment:
Last Closed: 2019-02-14 09:32:41 UTC
Type: Bug
Embargoed:



Description Jan Pazdziora (Red Hat) 2019-02-13 16:17:36 UTC
Description of problem:

When systemd is started in a container with sssd installed, sssd.service is shown as failed.

Version-Release number of selected component (if applicable):

sssd-2.0.0-7.fc30.x86_64

How reproducible:

Deterministic.

Steps to Reproduce:
1. Have Dockerfile:

# Clone from the Fedora rawhide image
FROM registry.fedoraproject.org/fedora:rawhide

MAINTAINER FreeIPA Developers <freeipa-devel.org>

# Workaround 1615948
RUN ln -s /bin/false /usr/sbin/systemd-machine-id-setup
RUN dnf upgrade -y --setopt=install_weak_deps=False \
	&& dnf install -y --setopt=install_weak_deps=False sssd \
	&& dnf clean all

# Workaround 1668836
RUN systemctl mask nfs-convert.service
# var-lib-nfs-rpc_pipefs.mount would run (and fail) nondeterministically
RUN systemctl mask rpc-gssd.service

# Container image which runs systemd
RUN test -f /etc/machine-id && ! test -s /etc/machine-id
RUN test -z "$container"
ENV container oci
ENTRYPOINT [ "/usr/sbin/init" ]
STOPSIGNAL RTMIN+3
VOLUME [ "/var/log/journal" ]

2. Build container image:
   docker build -t sssd .
3. Run container:
   docker run --name sssd -d sssd
4. Wait a while (a scripted wait is sketched after these steps)
5. Run
   docker exec sssd systemctl | grep sssd
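
A scripted variant of steps 2-5 (a sketch: it keeps the image/container names from the steps above, assumes the Dockerfile is in the current directory, and waits for systemd to settle instead of sleeping):

   docker build -t sssd .
   docker run --name sssd -d sssd
   # block until systemd reports a final state; a non-zero exit (degraded) is expected here
   docker exec sssd systemctl is-system-running --wait || true
   docker exec sssd systemctl --no-pager | grep sssd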

Actual results:

● sssd.service                           loaded failed failed    System Security Services Daemon                      

Expected results:

● sssd.service                           loaded active running    System Security Services Daemon                      

Additional info:

Running

docker exec sssd systemctl status sssd

shows:

● sssd.service - System Security Services Daemon
   Loaded: loaded (/usr/lib/systemd/system/sssd.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2019-02-13 16:15:48 UTC; 1min 29s ago
  Process: 27 ExecStart=/usr/sbin/sssd -i ${DEBUG_LOGGER} (code=exited, status=3)
 Main PID: 27 (code=exited, status=3)

Feb 13 16:15:48 869f1e0a9013 systemd[1]: Starting System Security Services Daemon...
Feb 13 16:15:48 869f1e0a9013 sssd[27]: Could not create private keyring session. If you store password there they may be easily accessible to the root user. (1, Operation not permitted)
Feb 13 16:15:48 869f1e0a9013 sssd[27]: Could not set permissions on private keyring. If you store password there they may be easily accessible to the root user. (1, Operation not permitted)
Feb 13 16:15:48 869f1e0a9013 sssd[27]: Starting up
Feb 13 16:15:48 869f1e0a9013 systemd[1]: sssd.service: Main process exited, code=exited, status=3/NOTIMPLEMENTED
Feb 13 16:15:48 869f1e0a9013 systemd[1]: sssd.service: Failed with result 'exit-code'.
Feb 13 16:15:48 869f1e0a9013 systemd[1]: Failed to start System Security Services Daemon.
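
One way to dig further into the status=3 exit (a sketch; -i and -d are standard sssd options and --logger is what ${DEBUG_LOGGER} usually expands to, though the exact value may differ between builds) is to run sssd in the foreground with verbose debug output:

   # run sssd interactively at the highest debug level, logging to stderr
   docker exec -it sssd /usr/sbin/sssd -i -d 9 --logger=stderr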

Comment 1 Sumit Bose 2019-02-13 16:24:17 UTC
This is most probably caused by https://pagure.io/SSSD/sssd/issue/3924.

Comment 2 Lukas Slebodnik 2019-02-13 16:25:31 UTC
It is not related to containers:

sh# rpm -q sssd-common
sssd-common-2.0.0-7.fc30.x86_64
sh# systemctl restart sssd
Job for sssd.service failed because the control process exited with error code.
See "systemctl status sssd.service" and "journalctl -xe" for details.

Comment 3 Lukas Slebodnik 2019-02-13 16:26:16 UTC
(In reply to Sumit Bose from comment #1)
> This is most probably caused by https://pagure.io/SSSD/sssd/issue/3924.

Exactly

Comment 4 Adam Williamson 2019-02-14 01:48:30 UTC
openQA is hitting this too, but I didn't get around to reporting it until now :) (it is also preventing me from logging in as myself on my desktop, so I'm gonna try and backport the patches to fix it).

Comment 5 Paul Whalen 2019-02-14 03:01:53 UTC
*** Bug 1677095 has been marked as a duplicate of this bug. ***

Comment 6 Adam Williamson 2019-02-14 07:22:30 UTC
Fix worked on my machine at least...

Comment 7 Sumit Bose 2019-02-14 08:03:55 UTC
*** Bug 1677163 has been marked as a duplicate of this bug. ***

Comment 8 Jan Pazdziora (Red Hat) 2019-02-14 08:09:34 UTC
Fedora rawhide still only has 2.0.0-7.fc30, so not fixed.

Comment 9 Jan Pazdziora (Red Hat) 2019-02-14 08:12:44 UTC
Sumit, I confirm that https://copr-be.cloud.fedoraproject.org/results/sbose/sssd-rawhide/ with 2.0.0-7.fc30sb addresses the problem. Thanks for the quick turnaround on the test build.
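
For reference, pulling in such a test build would look roughly like this (a sketch; it assumes the dnf copr plugin is installed and takes the project name sbose/sssd-rawhide from the URL above):

   # enable the test repository, update to its build, and check the installed version
   dnf copr enable sbose/sssd-rawhide
   dnf upgrade -y sssd
   rpm -q sssd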

Comment 10 Sumit Bose 2019-02-14 08:42:00 UTC
Hi Jan,

thank you for testing. As for the new build, https://apps.fedoraproject.org/packages/sssd already says that the latest version is 2.0.0-8.fc30, so I guess the build has not been mirrored to all repos yet. Could you close the ticket again once 2.0.0-8.fc30 is available in the repo you use?

bye,
Sumit
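
One way to check whether 2.0.0-8.fc30 has reached the repo in use (a sketch; repo metadata can lag behind the compose):

   # force a metadata refresh and show the version the enabled repos offer
   dnf --refresh info sssd
   # version currently installed
   rpm -q sssd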

Comment 11 Lukas Slebodnik 2019-02-14 09:32:41 UTC
https://fedoraproject.org/wiki/BugZappers/BugStatusWorkFlow says:

"""
Once a bug has been fixed and included in a new package in rawhide or the updates repo it should be closed. For a stable or Branched release, the resolution ERRATA should be used. For Rawhide, the resolution RAWHIDE should be used. 
"""

and

"""
For Rawhide, maintainers can choose to move to CLOSED RAWHIDE as soon as they commit a fix to CVS, if outside a freeze period; the MODIFIED process is optional. 
"""

And because the new package is already in rawhide (the koji build has the f30 tag),
https://koji.fedoraproject.org/koji/buildinfo?buildID=1209592

this BZ can be closed. This is a disadvantage of the missing Bodhi process in rawhide; I am sorry for that.
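
The tagging can also be verified from the command line (a sketch, assuming the koji client is installed):

   # show the tags of the fixed build
   koji buildinfo sssd-2.0.0-8.fc30
   # show the latest sssd build in the f30 tag
   koji latest-build f30 sssd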

Comment 12 Jan Pazdziora (Red Hat) 2019-02-14 09:55:24 UTC
Sure. Comment 6 did not mention that there was a new build around, nor was Fixed In Version updated, and I had no idea which fix it referred to.

Comment 13 Adam Williamson 2019-02-14 15:55:02 UTC
@Jan sorry, I figured it'd be clear from comment #4 that I was doing a new build, but obviously not.

The package will appear in the repos with the 20190214.n.0 compose, which completed a couple of hours ago, so it'll be making its way to mirrors around now (faster ones probably have it already).

