Bug 1125075
| Summary: | Failure in iptables-save: Table does not exist | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | John Eckersberg <jeckersb> |
| Component: | rhel-osp-installer | Assignee: | John Eckersberg <jeckersb> |
| Status: | CLOSED ERRATA | QA Contact: | Omri Hochman <ohochman> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | Foreman (RHEL 6) | CC: | acathrow, jeckersb, mburns, mhulan, morazi, rhos-maint, sclewis, yeylon |
| Target Milestone: | ga | | |
| Target Release: | Installer | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | rhel-osp-installer-0.1.6-6.el6ost | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-08-21 18:07:12 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description (John Eckersberg, 2014-07-31 03:09:02 UTC)
In my tests with Packstack I haven't found this problem; could you be more specific about how to recreate it? Checking bug 1125136, it seems like this might be a Staypuft problem.

I need to run, so I don't have time for a lengthy explanation, but:

```
17:55:59 eck | i am 99% positive the iptables service is buggy
17:56:13 eck | it is a oneshot service under systemd
17:56:36 eck | so i suspect it forks off to do its start up and then systemd reports it to be successfully started
17:56:53 eck | then puppet runs iptables-save before it's really initialized all the way
17:56:54 eck | and that fails
```

This appears to be an iptables problem. It would be better fixed in the Staypuft puppet manifest instead of the puppet module.

I've figured out what is happening here. I think this is really a systemd bug (although there are any number of ways we can work around it). Here's the scenario: the host has firewalld installed and running. The firewalld service unit contains:

```
Conflicts=iptables.service ip6tables.service ebtables.service
```

However, the ordering of the conflict operation seems to be wrong. Here's the journal while executing 'systemctl start iptables.service':

```
Aug 01 15:05:03 mac5254008803ab.example.org systemd[1]: Stopping firewalld - dynamic firewall daemon...
Aug 01 15:05:03 mac5254008803ab.example.org systemd[1]: Starting IPv4 firewall with iptables...
Aug 01 15:05:03 mac5254008803ab.example.org systemd[1]: Started IPv4 firewall with iptables.
Aug 01 15:05:04 mac5254008803ab.example.org kernel: Ebtables v2.0 unregistered
Aug 01 15:05:04 mac5254008803ab.example.org systemd[1]: Stopped firewalld - dynamic firewall daemon.
```

(Note that although systemd initiates the process of stopping firewalld, it does not wait for it to complete before it starts iptables.)

So what happens with Puppet involved is:

1. Puppet runs 'systemctl start iptables.service'
2. Systemd begins stopping firewalld
3. Systemd begins starting iptables
4. Systemd determines iptables is started
5. Puppet returns from (1)
6. Puppet continues the catalog and begins inserting iptables rules
7. The Puppet firewall provider runs iptables-save to persist the new rules
8. iptables-save is in the process of executing, and has handles into the kernel netfilter bits
9. Firewalld is still stopping; at this point it is unregistering its netfilter hooks in the kernel
10. Some resource that iptables-save has open (in my testing it was always the 'nat' table) now becomes invalid
11. iptables-save aborts
12. Puppet fails

The easiest way to make this stop being a problem in RHEL-OSP is probably to remove the firewalld package in the kickstart template. As such, I'm going to reassign this over to rhel-osp-installer for an expedited fix.

Verified with:

- ruby193-rubygem-staypuft-0.1.22-1.el6ost.noarch
- openstack-puppet-modules-2014.1-19.9.el6ost.noarch
- rhel-osp-installer-0.1.6-5.el6ost.noarch

Unable to reproduce with Foreman puddle 2014-07-31.1.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1090.html
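One possible workaround for the ordering problem described above, sketched here as a hypothetical systemd drop-in rather than the fix that actually shipped: Conflicts= by itself carries no ordering guarantee, but combining it with After= makes systemd finish stopping firewalld before it starts iptables.

```ini
# Hypothetical drop-in, e.g. /etc/systemd/system/iptables.service.d/order.conf
# (not part of the shipped fix; shown only to illustrate the systemd semantics)
[Unit]
# Conflicts= requests that firewalld be stopped when iptables starts;
# After= additionally orders the firewalld stop job before the iptables
# start job, closing the race seen in the journal above.
Conflicts=firewalld.service
After=firewalld.service
```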
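The expedited fix described above, dropping firewalld from provisioned hosts, would look roughly like this in a kickstart %packages section (illustrative only; the actual rhel-osp-installer template is not shown in this report):

```
# Illustrative kickstart fragment; a leading '-' excludes a package
%packages
-firewalld
%end
```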
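Independent of the packaging fix, the transient failure at step 11 could also be papered over on the Puppet side by retrying iptables-save until firewalld has finished tearing down its netfilter hooks. A minimal shell sketch of such a retry helper (hypothetical, not part of the shipped fix):

```shell
# retry ATTEMPTS DELAY CMD [ARGS...]
# Run CMD up to ATTEMPTS times, sleeping DELAY seconds between tries.
# Returns 0 on the first success, 1 if every attempt fails.
retry() {
  attempts=$1
  delay=$2
  shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}

# Example (illustrative): tolerate a transient iptables-save failure
# retry 5 1 sh -c 'iptables-save > /etc/sysconfig/iptables'
```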