Bug 1298171 - Starting kubernetes early produces audit messages on RHEL Atomic Host
Summary: Starting kubernetes early produces audit messages on RHEL Atomic Host
Keywords:
Status: CLOSED DUPLICATE of bug 1376343
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: selinux-policy
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Miroslav Grepl
QA Contact: BaseOS QE Security Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-01-13 12:02 UTC by Marius Vollmer
Modified: 2016-09-22 14:44 UTC
CC: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-09-22 14:44:57 UTC
Target Upstream Version:



Description Marius Vollmer 2016-01-13 12:02:09 UTC
Description of problem:

Seen with rhel-atomic-cloud-7.2-10.x86_64.qcow2 plus "atomic host upgrade".

When kubernetes is started early during/after boot, these messages appear in the journal:

    type=1400 audit(1452685817.385:5): avc:  denied  { read } for  pid=1988 comm="iptables" name="xtables.lock" dev="tmpfs" ino=19138 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file

When kubernetes is started later, the messages don't appear. In our case, delaying the start by 3 seconds is enough, but 2 seconds is not.
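
For what it's worth, the denials and the label on the lock file can be compared right after boot with the standard SELinux tooling; this is only a sketch (ausearch comes from audit, matchpathcon from libselinux-utils):

    # ausearch -m avc -ts boot | grep xtables.lock    # AVC denials logged since boot
    # ls -Z /var/run/xtables.lock                     # label the file actually carries
    # matchpathcon /var/run/xtables.lock              # label the policy expects for that path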

This happens in the Cockpit integration tests. We boot a VM and start the test as soon as SSH to the machine starts working. Pretty much the first thing the tests do is to start kubernetes.

Version-Release number of selected component (if applicable):
selinux-policy-targeted-3.13.1-60.el7.noarch

How reproducible:
Always, when one is quick enough

Steps to Reproduce:
1. Boot a VM 
2. Within two or three seconds after SSH starts working:
   # systemctl start etcd kube-apiserver kube-controller-manager kube-scheduler docker kube-proxy kubelet
   Use a script; fingers are too slow. Well, mine are. A rough sketch of such a script is below.
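
A minimal sketch of the kind of script we use; the host address and the ssh options here are placeholders, not literally what the Cockpit tests run:

    #!/bin/bash
    # Poll until sshd in the freshly booted VM accepts connections,
    # then start the kubernetes services immediately.
    HOST=root@atomic-test-vm   # placeholder for the test VM's address
    until ssh -o ConnectTimeout=1 -o StrictHostKeyChecking=no "$HOST" true 2>/dev/null; do
        sleep 0.5
    done
    ssh "$HOST" systemctl start etcd kube-apiserver kube-controller-manager kube-scheduler docker kube-proxy kubelet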

Comment 2 Marius Vollmer 2016-01-13 12:23:11 UTC
Looking at this closer, I think this might just be a regular locking conflict on "xtables.lock", and we just happen to find the lock locked. If true, then locking seems to be broken in general, and it's not just a race during boot.
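
A quick way to check that theory would be something along these lines (just a sketch):

    # fuser -v /var/run/xtables.lock   # show which processes have the lock file open
    # iptables -w -L -n >/dev/null     # -w makes iptables wait for the xtables lock instead of giving up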

Comment 3 Stef Walter 2016-01-13 12:49:50 UTC
This issue was found by running the Cockpit integration tests.

Comment 4 Miroslav Grepl 2016-01-18 09:13:23 UTC
Ok, the problem is how/when xtables.lock is created. Just to be sure, what is the path to xtables.lock?

Comment 5 Marius Vollmer 2016-01-18 09:56:05 UTC
> Ok, the problem is how/when xtables.lock is created. Just to be sure, what
> is the path to xtables.lock?

I think it is this:

# ls -lZ /var/run/xtables.lock 
-rw-------. root root system_u:object_r:iptables_var_run_t:s0 /var/run/xtables.lock
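
If somebody needs to quiet this locally until the policy is fixed, something like the following ought to work; this is only a workaround sketch, not the proper fix:

    # restorecon -v /var/run/xtables.lock   # put the expected label back on the lock file
    # ausearch -m avc -ts boot | grep xtables.lock | audit2allow -M local_xtables
    # semodule -i local_xtables.pp          # load a local module generated from the denial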

Comment 7 Lukas Vrabec 2016-09-22 14:44:57 UTC

*** This bug has been marked as a duplicate of bug 1376343 ***

