Rubygem-Staypuft: HA deployment hangs at 60% on a keystone rsync error ("rsync -q -aIX --delete rsync://192.168.0.95/keystone/ /etc/keystone/ssl" failed)

Environment:
-------------
puppet-3.6.2-1.el6.noarch
puppet-server-3.6.2-1.el6.noarch
openstack-puppet-modules-2014.1-18.1.el6ost.noarch
foreman-1.6.0.21-1.el6sat.noarch
foreman-proxy-1.6.0.8-1.el6sat.noarch
ruby193-rubygem-foreman_discovery-1.3.0-0.1.rc2.el6sat.noarch
foreman-selinux-1.6.0-2.el6sat.noarch
foreman-installer-1.5.0-0.4.RC2.el6ost.noarch
ruby193-rubygem-staypuft-0.1.11-1.el6ost.noarch

Steps:
-------
(1) Install Staypuft from poodle http://ayanami.boston.devel.redhat.com/poodles/rhos-devel-ci/foreman.el6/2014-07-10.4/Foreman-RHEL-6.repo
(2) Create a Nova-Network HA deployment
(3) Assign 3 hosts as controllers and 1 host as compute, then start the deploy

Results:
---------
- rsync fails on one of the nodes; puppet on the other hosts spins waiting for all nodes to have keystone up, so the deployment hangs.
- The deployment hangs for 1 hour.
- After 1 hour, puppet skips that part and continues.

The command that failed appears to be:
"rsync -q -aIX --delete rsync://192.168.0.95/keystone/ /etc/keystone/ssl"

journalctl -u puppet:
---------------------
Jul 10 20:58:43 525400868093.lab.eng.rdu2.redhat.com puppet-agent[4184]: (/Stage[main]/Quickstack::Pacemaker::Glance/Exec[i-am-glance-vip-OR-glance-is-up-on-vip]) Dependency Exec[rsync /etc/keystone/ssl] has fai
Jul 10 20:58:43 525400868093.lab.eng.rdu2.redhat.com puppet-agent[4184]: (/Stage[main]/Quickstack::Pacemaker::Glance/Exec[i-am-glance-vip-OR-glance-is-up-on-vip]) Skipping because of failed dependencies
Jul 10 20:58:43 525400868093.lab.eng.rdu2.redhat.com puppet-agent[4184]: (/Stage[main]/Glance::Registry/Exec[glance-manage db_sync]) Dependency Exec[rsync /etc/keystone/ssl] has failures: true
Jul 10 20:58:43 525400868093.lab.eng.rdu2.redhat.com puppet-agent[4184]: (/Stage[main]/Glance::Registry/Exec[glance-manage db_sync]) Skipping because of failed dependencies
Jul 10 20:58:43 525400868093.lab.eng.rdu2.redhat.com puppet-agent[4184]: (/Stage[main]/Glance::Registry/Exec[glance-manage db_sync]) Failed to call refresh: glance-manage db_sync returned 1 instead of one of [0]
Jul 10 20:58:43 525400868093.lab.eng.rdu2.redhat.com puppet-agent[4184]: (/Stage[main]/Glance::Registry/Exec[glance-manage db_sync]) glance-manage db_sync returned 1 instead of one of [0]
Jul 10 20:58:43 525400868093.lab.eng.rdu2.redhat.com puppet-agent[4184]: (/Stage[main]/Glance::Registry/Service[glance-registry]) Dependency Exec[rsync /etc/keystone/ssl] has failures: true
Jul 10 20:58:43 525400868093.lab.eng.rdu2.redhat.com puppet-agent[4184]: (/Stage[main]/Glance::Registry/Service[glance-registry]) Skipping because of failed dependencies

strace -p:
-----------
[pid 22690] execve("/tmp/ha-all-in-one-util.bash", ["/tmp/ha-all-in-one-util.bash", "all_members_include", "keystone"], [/* 5 vars */]Process 22692 attached

strace -p:
----------
[pid 25195] execve("/bin/grep", ["/bin/grep", "-q", "keystone"], [/* 8 vars */] <unfinished ...>
[pid 25194] exit_group(0) = ?
[pid 25194] +++ exited with 0 +++
[pid 25158] <... wait4 resumed> [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 25194
[pid 25158] wait4(-1, <unfinished ...>
[pid 25195] <... execve resumed> ) = 0
[pid 25195] arch_prctl(ARCH_SET_FS, 0x7fcdcd7e1740) = 0
[pid 25195] exit_group(1) = ?
[pid 25195] +++ exited with 1 +++
[pid 25158] <... wait4 resumed> [{WIFEXITED(s) && WEXITSTATUS(s) == 1}], 0, NULL) = 25195
[pid 25158] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=25194, si_status=0, si_utime=0, si_stime=0} ---
[pid 25158] wait4(-1, 0x7fffd96bcfd0, WNOHANG, NULL) = -1 ECHILD (No child processes)
[pid 25158] exit_group(1) = ?
[pid 25158] +++ exited with 1 +++
[pid 3989] <... wait4 resumed> [{WIFEXITED(s) && WEXITSTATUS(s) == 1}], 0, NULL) = 25158
[pid 3989] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=25158, si_status=1, si_utime=0, si_stime=1} ---
[pid 25157] _exit(0) = ?
[pid 25157] +++ exited with 0 +++
^CProcess 3989 detached
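The strace above shows puppet looping on "/tmp/ha-all-in-one-util.bash all_members_include keystone", which ends by running "grep -q keystone" against some cluster-state output; grep exits 1 (the exit_group(1) in the trace) while keystone is not yet up everywhere, so the check keeps failing. A minimal sketch of that kind of membership check (the function name is taken from the trace; CLUSTER_STATUS and the internals are assumptions, the real script may differ):

```shell
#!/bin/sh
# Hypothetical sketch of the check seen in the strace: succeed only if the
# cluster status output mentions the given service. CLUSTER_STATUS stands in
# for whatever the real script queries (an assumption; the actual data
# source is not visible in the trace).
all_members_include() {
    service="$1"
    # The trace ends with /bin/grep -q <service>; grep's exit status
    # (1 when the service is absent) becomes the helper's exit status.
    printf '%s\n' "$CLUSTER_STATUS" | grep -q "$service"
}

CLUSTER_STATUS="galera
rabbitmq"   # keystone missing, as on the failing node
if all_members_include keystone; then
    echo "keystone up on all members"
else
    echo "keystone not yet up"   # puppet keeps retrying this check
fi
```

In the failed run this is why the dependent glance resources are skipped: the Exec guarding them never sees the check succeed.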
https://github.com/redhat-openstack/astapor/pull/307
This is merged and will be in the next build.
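The reported behavior (hang for about an hour, then puppet gives up and moves on) matches a retry-until-timeout pattern around the failing sync. A rough sketch of that pattern, with a stubbed sync command and illustrative (assumed) timeout and interval values; this shows the general shape only, not necessarily what the linked pull request changes:

```shell
#!/bin/sh
# try_sync stands in for the failing step from the report:
#   rsync -q -aIX --delete rsync://192.168.0.95/keystone/ /etc/keystone/ssl
# Here it always fails so the timeout path is exercised.
try_sync() {
    false
}

timeout=3     # the report suggests roughly 3600s in the real run; shortened here
interval=1
elapsed=0
until try_sync; do
    if [ "$elapsed" -ge "$timeout" ]; then
        echo "timed out; skipping keystone ssl sync"
        break
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
done
```

Puppet's Exec resource expresses this natively via its tries, try_sleep, and timeout parameters, which is the usual way to tolerate a transient rsync failure like this one.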
HA+Nova successfully deployed.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1090.html