Bug 1298171

Summary: Starting kubernetes early produces audit messages on RHEL Atomic Host
Product: Red Hat Enterprise Linux 7
Reporter: Marius Vollmer <mvollmer>
Component: selinux-policy
Assignee: Miroslav Grepl <mgrepl>
Status: CLOSED DUPLICATE
QA Contact: BaseOS QE Security Team <qe-baseos-security>
Severity: medium
Docs Contact:
Priority: medium
Version: 7.2
CC: lvrabec, mgrepl, mmalik, mvollmer, plautrba, pvrabec, ssekidde, stefw
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-09-22 14:44:57 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Marius Vollmer 2016-01-13 12:02:09 UTC
Description of problem:

With rhel-atomic-cloud-7.2-10.x86_64.qcow2 plus "atomic host upgrade".

When kubernetes is started early during/after boot, these messages appear in the journal:

    type=1400 audit(1452685817.385:5): avc:  denied  { read } for  pid=1988 comm="iptables" name="xtables.lock" dev="tmpfs" ino=19138 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file
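As an aside, the fields of an AVC record like the one above can be pulled apart mechanically. A minimal Python sketch (the function name and regex are mine, not part of this report; real records may carry additional fields):

```python
import re

# key=value pairs as they appear in an audit AVC record; values may be
# bare tokens or double-quoted strings (e.g. comm="iptables").
_FIELD_RE = re.compile(r'(\w+)=("[^"]*"|\S+)')

def parse_avc(record):
    """Return the key=value fields of an AVC denial line as a dict,
    with surrounding quotes stripped.  Sketch only."""
    return {key: value.strip('"') for key, value in _FIELD_RE.findall(record)}
```

For the denial above this yields scontext=system_u:system_r:iptables_t:s0 and tcontext=system_u:object_r:var_run_t:s0, i.e. the iptables_t domain being refused read access to a file labeled var_run_t.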

When kubernetes is started later, the messages don't appear.  In our case, delaying the start by 3 seconds is enough, but 2 seconds is not.

This happens in the Cockpit integration tests.  We boot a VM and start the test as soon as SSH to the machine starts working.  Pretty much the first thing the tests do is start kubernetes.

Version-Release number of selected component (if applicable):
selinux-policy-targeted-3.13.1-60.el7.noarch

How reproducible:
Always, as long as one is quick enough

Steps to Reproduce:
1. Boot a VM 
2. Within two or three seconds after SSH starts working:
   # systemctl start etcd kube-apiserver kube-controller-manager kube-scheduler docker kube-proxy kubelet
   Use a script; fingers are too slow.  Well, mine are.

Comment 2 Marius Vollmer 2016-01-13 12:23:11 UTC
Looking at this more closely, I think this might just be a regular locking conflict on "xtables.lock", and we just happen to find the lock held.  If true, then locking seems to be broken in general, and it's not just a race during boot.
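For context: iptables serializes concurrent invocations by taking an exclusive flock on xtables.lock, and the denial above is on the read open of that file, before any flock contention could occur. A minimal sketch of the locking pattern in Python (the helper name is mine; this is an illustration of the mechanism, not iptables' actual code):

```python
import fcntl
import os

def try_xtables_lock(path):
    """Attempt the kind of lock iptables takes on xtables.lock: open
    the file (creating it if needed) and flock it exclusively without
    blocking.  Returns the held fd, or None if another holder already
    has the lock.  Sketch only; the path is supplied by the caller."""
    fd = os.open(path, os.O_CREAT | os.O_RDONLY, 0o600)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd
    except OSError:
        os.close(fd)
        return None
```

An SELinux denial of the open makes the whole sequence fail up front, which is consistent with the AVC record showing { read } on the file rather than any lock-related failure.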

Comment 3 Stef Walter 2016-01-13 12:49:50 UTC
This issue was found by running the Cockpit integration tests.

Comment 4 Miroslav Grepl 2016-01-18 09:13:23 UTC
Ok, the problem is how/when xtables.lock is created. Just to be sure - what is a path to xtables.lock?

Comment 5 Marius Vollmer 2016-01-18 09:56:05 UTC
> Ok, the problem is how/when xtables.lock is created. Just to be sure - what
> is a path to xtables.lock?

I think it is this:

# ls -lZ /var/run/xtables.lock 
-rw-------. root root system_u:object_r:iptables_var_run_t:s0 /var/run/xtables.lock

Comment 7 Lukas Vrabec 2016-09-22 14:44:57 UTC

*** This bug has been marked as a duplicate of bug 1376343 ***