Bug 1275399 - docker spam about loginuid and pwuid...
Summary: docker spam about loginuid and pwuid...
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: docker
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Assignee: Lokesh Mandvekar
QA Contact: atomic-bugs@redhat.com
URL: https://github.com/projectatomic/dock...
Depends On:
Reported: 2015-10-26 19:36 UTC by Eric Paris
Modified: 2019-03-06 01:23 UTC
6 users in CC

Fixed In Version: docker-1.9.1-7.el7_2
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2016-03-31 23:22:42 UTC
Target Upstream Version:


System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1335635 0 unspecified CLOSED "Failed to get pwuid struct: user: unknown userid " log spam 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHBA-2016:0536 0 normal SHIPPED_LIVE docker bug fix and enhancement update 2016-04-01 03:19:56 UTC

Internal Links: 1335635

Description Eric Paris 2015-10-26 19:36:52 UTC
Oct 26 19:33:24 eparis-submit-queue-registry.osop.rhcloud.com Docker[1307]: {Action=json, LoginUID=4294967295, PID=25266}
Oct 26 19:33:24 eparis-submit-queue-registry.osop.rhcloud.com docker[1307]: time="2015-10-26T19:33:24.134250776Z" level=error msg="Failed to get pwuid struct: user: unknown userid 4294967295"

When driving docker-1.9.0-3.el7.x86_64 with kubernetes I get constant log spam like above.

PID=25266 is the kubelet and 4294967295 is the unset loginuid, which is correct: things run from systemd (like this daemon) do not have a loginuid...

Comment 1 Daniel Walsh 2015-10-26 19:39:20 UTC
Probably should not report this as a failure.

Comment 3 Nalin Dahyabhai 2015-10-26 20:32:32 UTC
For values other than -1, it's probably still indicative of a misconfiguration on the host, but suppressing the error message when the login UID is 0xffffffff (in practice, uid_t is unsigned) sounds reasonable.
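For background, a process's loginuid can be read from /proc/&lt;pid&gt;/loginuid, and the unset value 4294967295 is 0xffffffff, i.e. (uid_t)-1. A minimal Go sketch of the suppression described above (the helper name is hypothetical, not docker's actual code):

```go
package main

import (
	"fmt"
	"os"
	"os/user"
	"strconv"
	"strings"
)

const unsetLoginUID = 0xffffffff // (uid_t)-1: no audit login session

// loginUIDName resolves a PID's loginuid to a user name, returning
// ("", false) instead of an error when the loginuid is unset or
// cannot be read. Hypothetical helper for illustration only.
func loginUIDName(pid int) (string, bool) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/loginuid", pid))
	if err != nil {
		return "", false
	}
	uid, err := strconv.ParseUint(strings.TrimSpace(string(data)), 10, 32)
	if err != nil || uint32(uid) == unsetLoginUID {
		// Processes started by systemd never get a loginuid set,
		// so this is not worth an error-level log message.
		return "", false
	}
	u, err := user.LookupId(strconv.FormatUint(uid, 10))
	if err != nil {
		return "", false
	}
	return u.Username, true
}

func main() {
	if name, ok := loginUIDName(os.Getpid()); ok {
		fmt.Println("loginuid user:", name)
	} else {
		fmt.Println("loginuid unset or unknown; skipping pwuid lookup")
	}
}
```

Returning a boolean instead of an error keeps the unset case from ever reaching the error log.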

Comment 4 Eric Paris 2015-10-26 21:54:50 UTC
agreed. thanks Nalin.

Comment 5 Daniel Walsh 2015-10-28 14:39:00 UTC
Fixed in docker-1.9

Comment 6 Eric Paris 2015-11-24 20:41:02 UTC

Nov 24 20:38:21 eparis-submit-queue-registry.osop.rhcloud.com Docker[20861]: {Action=json, ID=18e21090551b6807ad1a0f5073e5b673dd608611befee643789d904ef3516f1d, LoginUID=4294967295, PID=20902}

Nov 24 20:38:21 eparis-submit-queue-registry.osop.rhcloud.com Docker[20861]: {Action=json, LoginUID=4294967295, PID=20902}

Nov 24 20:38:21 eparis-submit-queue-registry.osop.rhcloud.com docker[20861]: time="2015-11-24T20:38:21.051626373Z" level=error msg="Failed to get pwuid struct: user: unknown userid 4294967295"

Comment 7 Eric Paris 2015-11-24 21:58:52 UTC
Getting better.  The error message is gone in docker-1.9.1-3.el7.x86_64

I'm still getting tons and tons of log spam in the steady state using kubernetes.


{Action=json, ID=b243494a4a8941266b45656723b0fe17488fd22d7f88bf1cb1a55c1681497a1b, LoginUID=4294967295, PID=21041}

{Action=json, LoginUID=4294967295, PID=21041}

{Action=version, LoginUID=4294967295, PID=21041}

I have `OPTIONS=--selinux-enabled --log-level=warn` in my /etc/sysconfig/docker

Comment 8 Daniel Walsh 2015-11-25 13:13:45 UTC
This is the syslog messages on managers of docker.  We have added syslog and audit log.  How often is k8s contacting docker?  Perhaps we need a way to throttle messages.

Comment 9 Eric Paris 2015-11-25 14:16:53 UTC
Those messages seem devoid of useful information. What are they telling me? Also note that I specifically requested a docker log level of warn, because docker was too spammy about things...

I'm getting about 75,000 messages (roughly 10 MB) per hour.

Comment 10 Eric Paris 2015-11-25 14:23:21 UTC
With the level=info messages included, I get a lot of duplication:

{Action=json, LoginUID=4294967295, PID=10661}
time="2015-11-25T14:21:40.595754961Z" level=info msg="GET /containers/json"

These things should be merged, and let me drop all this 'info', no?

Comment 11 Colin Walters 2015-11-25 15:07:56 UTC
My 2c: Docker should support basically disabling auth{z} logging for kubelet.  Or more generally for any non-human users.

Comment 12 Daniel Walsh 2015-11-25 15:26:56 UTC
The problem with that then is that an admin could easily get around logging/auditing by setting up a system service that triggers docker to do stuff on their behalf.

I have changed the admin logging patch to follow docker's logging and to log at "info" level. Perhaps we should ship docker with a default to send messages at the "warn" level. I also fixed up the audit patch to just send on "start", "stop", "create", "remove".

Comment 13 Colin Walters 2015-11-25 15:39:38 UTC
An admin who can create arbitrary system services could do `ExecStart=/bin/rm -rf /`, and there already wouldn't be anything auditing that "they" deleted the files, right?  Basically if the user has CAP_SYS_ADMIN we've already lost if they are trying to avoid audit.

Comment 14 Eric Paris 2015-11-25 15:49:39 UTC
I say we push GET calls on /json and /version to debug. These aren't changing the system. These aren't events we need to record/audit...

Comment 15 Daniel Walsh 2015-11-25 15:55:01 UTC
Colin, if the machine is sending its logs off machine, they will at least know that the admin did something to block the messages from going off machine. I know it is of limited value, but this is the way the Common Criteria (CC) people think. I also know that they would see that the evil admin configured a systemd service and started it up.

I changed the code that does status type calls to go to Debug and the Modify types to go to Info.

	switch action {
	case "info", "images", "version", "json", "search",
		"stats", "events", "history":
		// read-only status queries: logged at Debug
	default:
		// state-modifying actions: logged at Info
	}
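That mapping can be sketched as a small runnable function (illustrative only; the level names follow docker's logrus-style levels):

```go
package main

import "fmt"

// logLevelFor maps a docker API action to a log level, per the scheme
// described above: read-only status queries go to "debug", everything
// that modifies state goes to "info".
func logLevelFor(action string) string {
	switch action {
	case "info", "images", "version", "json", "search",
		"stats", "events", "history":
		return "debug"
	default:
		return "info"
	}
}

func main() {
	for _, a := range []string{"version", "json", "start", "stop"} {
		fmt.Printf("%s -> %s\n", a, logLevelFor(a))
	}
}
```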

Comment 16 Eric Paris 2015-11-25 17:33:31 UTC
We are now regression-free using docker-1.9.1-4.el7.x86_64.

But there is still tons and tons of useless log spam which I think belongs in debug.

time="2015-11-25T17:29:42.292843622Z" level=info msg="GET /version"
time="2015-11-25T17:29:37.153226450Z" level=info msg="GET /containers/json"
time="2015-11-25T17:29:35.494220670Z" level=info msg="GET /containers/669652cdd176af134c4ba37301d4391540a9a6e8518dfd5b17b0b56fa8e1550a/json"

All of these messages are seen VERY frequently (about 40k per hour) when running kubernetes with 16 containers.

Comment 17 Daniel Walsh 2015-11-27 12:38:06 UTC
Opened a pull request with docker to fix this issue.


I have also updated the 1.9 and 1.10 docker branches on ProjectAtomic with this fix.

Comment 18 Daniel Walsh 2015-12-01 22:01:10 UTC
Now moved to 

Comment 20 Daniel Walsh 2015-12-01 22:45:19 UTC
Merged.  This will be fixed in docker-1.10.  Do we need to back port this into docker-1.9?

Comment 21 Eric Paris 2015-12-01 22:48:08 UTC
The last 1.9 I got from Lokesh is no worse than 1.8. If it's easy I'd like to see it in 1.9. If it's more than 30 minutes work we can certainly live until 1.10.

Comment 22 Daniel Walsh 2015-12-01 22:51:37 UTC
I can just apply the pull request I originally suggested and be done with it for docker-1.9.

Comment 23 Daniel Walsh 2015-12-02 15:00:58 UTC
Actually, I already did. Lokesh, could you build an updated version of docker-1.9.1 for RHEL and Fedora?

Comment 26 Luwen Su 2016-03-18 08:49:27 UTC
Hi Eric,

Could you have a look at the latest version (-23) to see whether it quiets the log?

I only tested a simple kubectl example which communicates with docker; it looks good to me, but it is hard to build a complicated setup that simulates a real production environment.

Comment 27 Eric Paris 2016-03-19 14:37:20 UTC
This LGTM. I did get some spam on startup that I've not seen before, but I'll file another bug on that. This is VERIFIED.

Comment 28 Luwen Su 2016-03-20 13:13:27 UTC
(In reply to Eric Paris from comment #27)
> This LGTM. I did get some spam on startup that I've not seen before. But
> I'll file another bug on that. this is VERIFIED

Thanks Eric. With docker-1.9.1-23.el7.x86_64, moving to VERIFIED.

Comment 30 errata-xmlrpc 2016-03-31 23:22:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

