Bug 1298171 - Starting kubernetes early produces audit messages on RHEL Atomic Host
Status: CLOSED DUPLICATE of bug 1376343
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: selinux-policy
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Miroslav Grepl
QA Contact: BaseOS QE Security Team
Depends On:
Blocks:
Reported: 2016-01-13 07:02 EST by Marius Vollmer
Modified: 2016-09-22 10:44 EDT
CC List: 8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-09-22 10:44:57 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Marius Vollmer 2016-01-13 07:02:09 EST
Description of problem:

With rhel-atomic-cloud-7.2-10.x86_64.qcow2 plus "atomic host upgrade".

When kubernetes is started early during/after boot, these messages appear in the journal:

    type=1400 audit(1452685817.385:5): avc:  denied  { read } for  pid=1988 comm="iptables" name="xtables.lock" dev="tmpfs" ino=19138 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file

When kubernetes is started later, the messages don't appear.  In our case, delaying the start by 3 seconds is enough, but 2 seconds is not.
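
For the record, the denials can be dug out of the audit log after the fact; a minimal sketch, assuming auditd is running with the default log location:

    # ausearch -m avc -c iptables -ts recent
    # ausearch -m avc -c iptables -ts recent | audit2allow

The first command lists the raw AVC records for the iptables comm; piping them through audit2allow only summarizes what policy change the denial would imply, for diagnosis rather than as a suggested fix.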

This happens in the Cockpit integration tests.  We boot a VM and start the test as soon as SSH to the machine starts working.  Pretty much the first thing the tests do is start kubernetes.

Version-Release number of selected component (if applicable):
selinux-policy-targeted-3.13.1-60.el7.noarch

How reproducible:
Always when one is quick enough

Steps to Reproduce:
1. Boot a VM 
2. Within two or three seconds after SSH starts working:
   # systemctl start etcd kube-apiserver kube-controller-manager kube-scheduler docker kube-proxy kubelet
   Use a script; fingers are too slow.  Well, mine are.  A sketch follows below.
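
A sketch of what our tests effectively do (HOST is a placeholder; assumes key-based root SSH into the VM):

    #!/bin/sh
    # Poll until sshd answers, then start the kubernetes units
    # immediately; the denial only shows up in this early window.
    HOST=atomic-test-vm    # placeholder
    until ssh -o ConnectTimeout=1 root@"$HOST" true 2>/dev/null; do
        sleep 0.2
    done
    ssh root@"$HOST" systemctl start etcd kube-apiserver kube-controller-manager kube-scheduler docker kube-proxy kubelet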
Comment 2 Marius Vollmer 2016-01-13 07:23:11 EST
Looking at this more closely, I think this might just be a regular locking conflict on "xtables.lock", and we just happen to find the lock held.  If true, then locking seems to be broken in general, and it's not just a race during boot.
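
For context: iptables serializes concurrent invocations by taking a lock on /run/xtables.lock, and without the -w option a caller that finds the lock held gives up instead of waiting.  A quick way to poke at the contention path (a sketch; whether the race is actually lost depends on timing):

    # iptables -L >/dev/null & iptables -L >/dev/null
    # iptables -w -L >/dev/null & iptables -w -L >/dev/null

The denial above is about reading the lock file at all, though, which points at its SELinux label rather than at the locking itself.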
Comment 3 Stef Walter 2016-01-13 07:49:50 EST
This issue was found by running the Cockpit integration tests.
Comment 4 Miroslav Grepl 2016-01-18 04:13:23 EST
Ok, the problem is how/when xtables.lock is created. Just to be sure - what is the path to xtables.lock?
Comment 5 Marius Vollmer 2016-01-18 04:56:05 EST
> Ok, the problem is how/when xtables.lock is created. Just to be sure - what
> is the path to xtables.lock?

I think it is this:

# ls -lZ /var/run/xtables.lock 
-rw-------. root root system_u:object_r:iptables_var_run_t:s0 /var/run/xtables.lock
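
Note the file here carries iptables_var_run_t, while the denial above was against var_run_t; apparently the lock keeps the generic run-directory label when it is created too early.  As a diagnostic (a workaround at best, not a fix), one can compare the actual label with what the policy expects and relabel by hand:

    # matchpathcon /run/xtables.lock
    # restorecon -v /run/xtables.lock

matchpathcon prints the context the policy assigns to the path, and restorecon -v resets the file to it, reporting any change.  If the iptables_var_run_t label only comes from a create-time transition rather than a file-context entry, restorecon may not help.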
Comment 7 Lukas Vrabec 2016-09-22 10:44:57 EDT

*** This bug has been marked as a duplicate of bug 1376343 ***
