Description of problem:

Puppet failed with a neutron/gre non-HA install with the following error:

Execution of '/usr/sbin/iptables-save' returned 1: iptables-save v1.4.21: Cannot initialize: Table does not exist (do you need to insmod?)

Jul 31 01:17:40 mac5254008803ab.example.org puppet-agent[3861]: (/Stage[main]/Quickstack::Neutron::Firewall::Gre/Firewall[002 gre]/ensure) change from absent to present failed: Execution of '/usr/sbin/iptables-save' returned 1: iptables-save v1.4.21: Cannot initialize: Table does not exist (do you need to insmod?)

This caused the deployment to pause. Running puppet a second time manually caused the resource to apply correctly, and the deployment continued after being manually resumed in the Foreman UI.

Version-Release number of selected component (if applicable):
iptables-1.4.21-13.el7.x86_64 (on the networker)
rhel-osp-installer-0.1.6-4.el6ost.noarch
openstack-puppet-modules-2014.1-19.9.el6ost.noarch
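For context, the manual recovery described above amounts to roughly the following (a sketch; the exact agent invocation used is not recorded in the report):

    # Re-run the puppet agent on the failed host; the firewall resource
    # applies cleanly on the second run.
    puppet agent --test
    # ...then resume the paused deployment in the Foreman UI.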
In my tests with packstack I haven't hit this problem. Could you be more specific about how to recreate it? Looking at bug 1125136, it seems like this might be a staypuft problem.
I need to run, so I don't have time for a lengthy explanation, but:

17:55:59 eck | i am 99% positive the iptables service is buggy
17:56:13 eck | it is a oneshot service under systemd
17:56:36 eck | so i suspect it forks off to do its start up and then systemd reports it to be successfully started
17:56:53 eck | then puppet runs iptables-save before it's really initialized all the way
17:56:54 eck | and that fails
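For anyone wanting to check this hypothesis from a shell on an affected networker node, a rough sketch follows. This is not from the report: the unit name and the loop are assumptions based on the IRC quote above, and whether the window is actually hittable this way is not guaranteed.

    # Check the claim that the unit is Type=oneshot.
    systemctl show -p Type iptables.service

    # Mimic what puppet does: start the service, then immediately run
    # iptables-save. On a racy host this may occasionally fail with
    # "Cannot initialize: Table does not exist".
    for i in $(seq 1 20); do
        systemctl stop iptables.service
        systemctl start iptables.service && /usr/sbin/iptables-save >/dev/null \
            || echo "run $i: failed"
    done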
This appears to be an iptables problem; it would be better fixed in the staypuft puppet manifest than in the puppet module.
I've figured out what is happening here. I think this is really a systemd bug (although there are any number of ways we can work around it). Here's the scenario:

The host has firewalld installed and running. The service unit contains:

Conflicts=iptables.service ip6tables.service ebtables.service

However, the ordering of the conflict operation seems to be wrong. Here's the journal while executing 'systemctl start iptables.service':

Aug 01 15:05:03 mac5254008803ab.example.org systemd[1]: Stopping firewalld - dynamic firewall daemon...
Aug 01 15:05:03 mac5254008803ab.example.org systemd[1]: Starting IPv4 firewall with iptables...
Aug 01 15:05:03 mac5254008803ab.example.org systemd[1]: Started IPv4 firewall with iptables.
Aug 01 15:05:04 mac5254008803ab.example.org kernel: Ebtables v2.0 unregistered
Aug 01 15:05:04 mac5254008803ab.example.org systemd[1]: Stopped firewalld - dynamic firewall daemon.

(Note that although systemd initiates the process of stopping firewalld, it does not wait for it to complete before it then starts iptables.)

So what happens with puppet involved is:

1. Puppet runs 'systemctl start iptables.service'
2. Systemd begins stopping firewalld
3. Systemd begins starting iptables
4. Systemd determines iptables is started
5. Puppet returns from (1)
6. Puppet continues the catalog and begins inserting iptables rules
7. The puppet firewall provider runs iptables-save to persist new rules
8. iptables-save is in the process of executing, and has handles into the kernel netfilter bits
9. Firewalld is still stopping. At this point it's unregistering its netfilter hooks in the kernel.
10. Some resource that iptables-save has open (in my testing it was always the 'nat' table) now becomes invalid.
11. iptables-save aborts
12. Puppet fails
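The sequence can be approximated from a shell without Puppet involved. This is a sketch under assumptions (a host with both firewalld and iptables-services installed, firewalld running); whether the save actually fails depends on winning the race with firewalld's shutdown:

    # Starting state: firewalld up, iptables.service down.
    systemctl start firewalld.service

    # Triggers the Conflicts= stop of firewalld; per the journal above,
    # systemd reports iptables started without waiting for firewalld to
    # finish tearing down.
    systemctl start iptables.service

    # Races with firewalld unregistering its netfilter hooks; per the
    # analysis above, the 'nat' table is the one that goes invalid.
    /usr/sbin/iptables-save -t nat

    # One possible workaround among the "any number of ways": stop
    # firewalld synchronously before starting iptables, so there is no
    # conflict left to race with.
    systemctl stop firewalld.service
    systemctl start iptables.service
    /usr/sbin/iptables-save -t nat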
The easiest way to make this stop being a problem in rhel-osp is probably to remove the firewalld package in the kickstart template. As such, I'm going to reassign this over to rhel-osp-installer for an expedited fix.
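For illustration, what that kickstart change amounts to at the package level, sketched as shell commands (the actual template change is in the pull request linked in the next comment and may differ):

    # Ensure firewalld can never race with iptables.service on these hosts:
    # remove the package and mask the unit so nothing re-enables it.
    yum -y remove firewalld
    systemctl mask firewalld.service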
https://github.com/theforeman/foreman-installer-staypuft/pull/61
Verified with:
ruby193-rubygem-staypuft-0.1.22-1.el6ost.noarch
openstack-puppet-modules-2014.1-19.9.el6ost.noarch
rhel-osp-installer-0.1.6-5.el6ost.noarch

Unable to reproduce with foreman puddle 2014-07-31.1.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2014-1090.html