openstack-packstack: Installation failed on iptables command error 'Resource temporarily unavailable'.

Environment:
************
openstack-packstack-2012.2.2-0.8.dev346.el6ost.noarch
iptables-1.4.7-9.el6.x86_64
selinux-policy-3.7.19-195.el6.noarch
kernel-2.6.32-358.el6.x86_64

Description:
*************
I attempted to install 1 controller and 2 compute nodes, all on physical machines. The installation failed with the following error:

'/sbin/iptables -I INPUT 1 -t filter -p tcp -m multiport --dports 5900:5999 -m comment --comment 001 nove compute incoming -j ACCEPT' returned 4: iptables: Resource temporarily unavailable.

Note:
*****
When I executed the failed command manually after the installation, it worked fine (on both the controller and the compute nodes).

Log excerpt:

Testing if puppet apply is finished : 10.35.160.11_horizon.pp
Testing if puppet apply is finished : 10.35.160.11_nova.pp OK
Testing if puppet apply is finished : 10.35.160.15_nova.pp
Testing if puppet apply is finished : 10.35.160.11_osclient.pp
Testing if puppet apply is finished : 10.35.160.11_horizon.pp
Testing if puppet apply is finished : 10.35.160.13_nova.pp
Testing if puppet apply is finished : 10.35.160.15_nova.pp
Testing if puppet apply is finished : 10.35.160.11_osclient.pp
Testing if puppet apply is finished : 10.35.160.11_horizon.pp
ERROR:root:Error during remote puppet apply of /var/tmp/packstack/20130207-0915/manifests/10.35.160.13_nova.pp
ERROR:root:warning: You cannot collect exported resources without storeconfigs being set; the collection will be ignored on line 142 in file /var/tmp/packstack/20130207-0915/modules/nova/manifests/init.pp
warning: You cannot collect exported resources without storeconfigs being set; the collection will be ignored on line 152 in file /var/tmp/packstack/20130207-0915/modules/nova/manifests/init.pp
warning: You cannot collect exported resources without storeconfigs being set; the collection will be ignored on line 161 in file /var/tmp/packstack/20130207-0915/modules/nova/manifests/init.pp
warning: You cannot collect exported resources without storeconfigs being set; the collection will be ignored on line 21 in file /var/tmp/packstack/20130207-0915/modules/nova/manifests/compute.pp
notice: /Stage[main]/Nova::Compute/Nova::Generic_service[compute]/Package[nova-compute]/ensure: created
notice: /Stage[main]/Nova::Compute::Libvirt/Service[libvirt]/ensure: ensure changed 'stopped' to 'running'
err: /Firewall[001 nove compute incoming]/ensure: change from absent to present failed: Execution of '/sbin/iptables -I INPUT 1 -t filter -p tcp -m multiport --dports 5900:5999 -m comment --comment 001 nove compute incoming -j ACCEPT' returned 4: iptables: Resource temporarily unavailable.
notice: /Stage[main]//Package[python-cinderclient]/ensure: created
notice: /Stage[main]/Nova/Nova_config[rabbit_port]/ensure: created
notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vncserver_listen]/ensure: created
notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[compute_driver]/ensure: created
notice: /File[/var/log/nova]/group: group changed 'root' to 'nova'
notice: /File[/var/log/nova]/mode: mode changed '755' to '751'
notice: /Stage[main]/Nova/Nova_config[logdir]/ensure: created
notice: /Stage[main]/Nova/Nova_config[sql_connection]/ensure: created
notice: /Stage[main]/Nova/Nova_config[rabbit_virtual_host]/ensure: created
notice: /Stage[main]/Nova/Nova_config[glance_api_servers]/ensure: created
notice: /Stage[main]//Nova_config[flat_interface]/ensure: created
notice: /Stage[main]/Nova/Nova_config[service_down_time]/ensure: created
notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[connection_type]/ensure: created
notice: /Stage[main]/Nova/Nova_config[state_path]/ensure: created
notice: /Stage[main]/Nova/Nova_config[rabbit_host]/ensure: created
notice: /Stage[main]/Nova::Compute/Nova_config[novncproxy_base_url]/ensure: created
notice: /Stage[main]/Nova/Nova_config[rabbit_userid]/ensure: created
notice: /Stage[main]/Nova::Compute/Nova_config[vncserver_proxyclient_address]/ensure: created
notice: /Stage[main]/Nova/Nova_config[lock_path]/ensure: created
notice: /Stage[main]/Nova/Nova_config[rabbit_password]/ensure: created
notice: /Stage[main]//Nova_config[libvirt_inject_partition]/ensure: created
notice: /Stage[main]/Nova/Nova_config[rootwrap_config]/ensure: created
notice: /Stage[main]/Nova/Nova_config[image_service]/ensure: created
notice: /Stage[main]//Nova_config[metadata_host]/ensure: created
notice: /Stage[main]/Nova/Nova_config[verbose]/ensure: created
notice: /Stage[main]//Nova_config[rpc_backend]/ensure: created
notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt_type]/ensure: created
notice: /Stage[main]//Nova_config[qpid_hostname]/ensure: created
notice: /Stage[main]/Nova::Compute/Nova_config[vnc_enabled]/ensure: created
notice: /Stage[main]/Nova/Nova_config[auth_strategy]/ensure: created
notice: /Stage[main]//Nova_config[volume_api_class]/ensure: created
notice: /Stage[main]//Nova_config[network_host]/ensure: created
notice: /File[/etc/nova/nova.conf]/owner: owner changed 'root' to 'nova'
notice: /Stage[main]/Nova/Exec[post-nova_config]: Triggered 'refresh' from 29 events
notice: /Stage[main]/Nova::Compute/Nova::Generic_service[compute]/Service[nova-compute]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Nova::Compute/Nova::Generic_service[compute]/Service[nova-compute]: Triggered 'refresh' from 2 events
notice: /Stage[main]//Exec[load_kvm]/returns: executed successfully
notice: Finished catalog run in 101.32 seconds
Testing if puppet apply is finished : 10.35.160.13_nova.pp [ ERROR ]
ERROR:root:Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 806, in main
    _main(confFile)
  File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 593, in _main
    runSequences()
  File "/usr/lib/python2.6/site-packages/packstack/installer/run_setup.py", line 569, in runSequences
    controller.runAllSequences()
  File "/usr/lib/python2.6/site-packages/packstack/installer/setup_controller.py", line 57, in runAllSequences
    sequence.run()
  File "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", line 154, in run
    step.run()
  File "/usr/lib/python2.6/site-packages/packstack/installer/setup_sequences.py", line 60, in run
    function()
  File "/usr/lib/python2.6/site-packages/packstack/plugins/puppet_950.py", line 123, in applyPuppetManifest
    waitforpuppet(currently_running)
  File "/usr/lib/python2.6/site-packages/packstack/plugins/puppet_950.py", line 111, in waitforpuppet
    validate_puppet_logfile(log)
  File "/usr/lib/python2.6/site-packages/packstack/modules/ospluginutils.py", line 137, in validate_puppet_logfile
    raise PackStackError(message)
PackStackError: Error during puppet run : err: /Firewall[001 nove compute incoming]/ensure: change from absent to present failed: Execution of '/sbin/iptables -I INPUT 1 -t filter -p tcp -m multiport --dports 5900:5999 -m comment --comment 001 nove compute incoming -j ACCEPT' returned 4: iptables: Resource temporarily unavailable.
Error during puppet run : err: /Firewall[001 nove compute incoming]/ensure: change from absent to present failed: Execution of '/sbin/iptables -I INPUT 1 -t filter -p tcp -m multiport --dports 5900:5999 -m comment --comment 001 nove compute incoming -j ACCEPT' returned 4: iptables: Resource temporarily unavailable.
Please check log file /var/tmp/packstack/20130207-0915/openstack-setup.log for more information
Created attachment 694400 [details] openstack-setup.log
Created attachment 694401 [details] iptables -L view
I cleaned the environment and re-installed with the exact same settings; this time: **** Installation completed successfully ****. So the failure is not 100% reproducible.
I reproduced it. I think this happens because iptables is a shared resource and there is a possible race condition when several processes modify the rules concurrently. Some locking around the iptables calls would certainly help here.
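A minimal sketch of the kind of serialization I mean, assuming a wrapper is put around every iptables invocation (the lock file path, retry count, and function name are illustrative; this is not anything packstack actually ships):

```shell
# Hypothetical sketch only: run iptables under an exclusive flock(1) lock and
# retry when it exits with status 4, the "Resource temporarily unavailable"
# status seen in this report. Lock path and retry policy are assumptions.
run_iptables() {
    lockfile=/tmp/iptables.lock   # assumed shared lock file for this host
    rc=1
    for attempt in 1 2 3 4 5; do
        # flock runs "$@" under the lock and passes its exit status through
        flock "$lockfile" "$@"
        rc=$?
        [ "$rc" -ne 4 ] && return "$rc"   # 4 = tables busy; retry; else final
        sleep 1
    done
    return "$rc"
}

# e.g.: run_iptables /sbin/iptables -I INPUT 1 -t filter -p tcp \
#           -m multiport --dports 5900:5999 -j ACCEPT
```

With something like this in place, concurrent callers on the same host would queue on the lock (and back off on a busy status) instead of racing into the kernel tables at the same time.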
Reproduced again. Raising severity, since I am starting to hate this bug: it keeps breaking my automation.
Unable to reproduce using: openstack-packstack-2012.2.3-0.1.dev454.el6ost.noarch
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHSA-2013-0671.html