Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
Description of problem:
With rhel-atomic-cloud-7.2-10.x86_64.qcow2 plus "atomic host upgrade".
When kubernetes is started early during/after boot, these messages appear in the journal:
type=1400 audit(1452685817.385:5): avc: denied { read } for pid=1988 comm="iptables" name="xtables.lock" dev="tmpfs" ino=19138 scontext=system_u:system_r:iptables_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file
When kubernetes is started later, the messages don't appear. In our case, delaying the start by 3 seconds is enough, but 2 seconds is not.
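For what it's worth, the interesting fields of such an AVC record are the source and target contexts: iptables runs as iptables_t but is refused "read" on a file labeled var_run_t. A small illustrative helper (not part of any tool, just a sketch) that pulls those fields out of a record like the one above:

```python
import re

# Pull the denied permissions, command, and SELinux contexts out of an
# AVC record. Field names follow the audit record format; this parser
# is illustrative only, not a replacement for ausearch/audit2allow.
AVC_RE = re.compile(
    r"avc:\s+denied\s+\{ (?P<perms>[^}]+)\}.*?"
    r"comm=\"(?P<comm>[^\"]+)\".*?"
    r"scontext=(?P<scontext>\S+).*?"
    r"tcontext=(?P<tcontext>\S+).*?"
    r"tclass=(?P<tclass>\S+)"
)

def parse_avc(record):
    m = AVC_RE.search(record)
    if not m:
        return None
    fields = m.groupdict()
    fields["perms"] = fields["perms"].split()
    return fields

line = ('type=1400 audit(1452685817.385:5): avc: denied { read } for pid=1988 '
        'comm="iptables" name="xtables.lock" dev="tmpfs" ino=19138 '
        'scontext=system_u:system_r:iptables_t:s0 '
        'tcontext=system_u:object_r:var_run_t:s0 tclass=file')

info = parse_avc(line)
print(info["scontext"])  # system_u:system_r:iptables_t:s0
print(info["tcontext"])  # system_u:object_r:var_run_t:s0
```

The mismatch is visible right there: the target is var_run_t rather than the iptables_var_run_t one would expect for the lock file.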
This happens in the Cockpit integration tests. We boot a VM and start the test as soon as SSH to the machine starts working. Pretty much the first thing the tests do is start kubernetes.
Version-Release number of selected component (if applicable):
selinux-policy-targeted-3.13.1-60.el7.noarch
How reproducible:
Always, when one is quick enough
Steps to Reproduce:
1. Boot a VM
2. Within two or three seconds after SSH starts working:
# systemctl start etcd kube-apiserver kube-controller-manager kube-scheduler docker kube-proxy kubelet
Use a script; fingers are too slow. Well, mine are.
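The script I use is roughly the following sketch (the VM address is a placeholder, and root SSH access to the VM is assumed; only the port-polling helper is self-contained):

```python
import socket
import subprocess
import time

SERVICES = ("etcd kube-apiserver kube-controller-manager "
            "kube-scheduler docker kube-proxy kubelet")

def wait_for_port(host, port, timeout=60.0):
    """Poll until a TCP connect to host:port succeeds, or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.1)
    return False

def reproduce(host):
    # Start kubernetes the instant sshd answers; any extra delay loses the race.
    if not wait_for_port(host, 22):
        raise RuntimeError("SSH never came up")
    subprocess.run(["ssh", "root@" + host,
                    "systemctl start " + SERVICES], check=True)

# Usage (hypothetical VM address): reproduce("vm.example.com")
```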
Looking at this closer, I think this might just be a regular locking conflict on "xtables.lock", and we just happen to find the lock held. If true, then locking seems to be broken in general, and it's not just a race during boot.
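If that hypothesis is right, what a second iptables invocation sees is ordinary flock() contention on the lock file. A sketch of that behavior (a temporary file stands in for /run/xtables.lock, and two separate opens in one process stand in for two iptables processes):

```python
import errno
import fcntl
import tempfile

# A temp file stands in for /run/xtables.lock.
path = tempfile.NamedTemporaryFile(delete=False).name

holder = open(path)
fcntl.flock(holder, fcntl.LOCK_EX)          # "first iptables": lock acquired

contender = open(path)
try:
    # "second iptables": a non-blocking attempt fails while the lock is held
    fcntl.flock(contender, fcntl.LOCK_EX | fcntl.LOCK_NB)
    busy = False
except OSError as e:
    busy = (e.errno == errno.EWOULDBLOCK)

print(busy)  # True: the lock is simply held, nothing is broken

fcntl.flock(holder, fcntl.LOCK_UN)
fcntl.flock(contender, fcntl.LOCK_EX | fcntl.LOCK_NB)  # now succeeds
```

Note, though, that the denial above is an SELinux "read" refusal, not EWOULDBLOCK, so plain contention alone doesn't explain the AVC message.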
> Ok, the problem is how/when xtables.lock is created. Just to be sure - what
> is a path to xtables.lock?
I think it is this:
# ls -lZ /var/run/xtables.lock
-rw-------. root root system_u:object_r:iptables_var_run_t:s0 /var/run/xtables.lock