Bug 1118513 - Rubygem-Staypuft: HA deployment hangs at 60% on a keystone rsync error (rsync -q -aIX --delete rsync://192.168.0.95/keystone/ /etc/keystone/ssl failed)
Summary: Rubygem-Staypuft: HA deployment hangs at 60% on a keystone rsync error (rsyn...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rubygem-staypuft
Version: 5.0 (RHEL 6)
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ga
Sub Component: Installer
Assignee: Jason Guiditta
QA Contact: Leonid Natapov
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-07-10 22:37 UTC by Omri Hochman
Modified: 2014-08-21 18:05 UTC
CC: 5 users

Fixed In Version: openstack-foreman-installer-2.0.14-1.el6ost
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-08-21 18:05:20 UTC
Target Upstream Version:
Embargoed:


Attachments


Links:
  System ID:     Red Hat Product Errata RHBA-2014:1090
  Private:       0
  Priority:      normal
  Status:        SHIPPED_LIVE
  Summary:       Red Hat Enterprise Linux OpenStack Platform Enhancement Advisory
  Last Updated:  2014-08-22 15:28:08 UTC

Description Omri Hochman 2014-07-10 22:37:06 UTC
Rubygem-Staypuft: HA deployment hangs at 60% on a keystone rsync error (rsync -q -aIX --delete rsync://192.168.0.95/keystone/ /etc/keystone/ssl failed)

Environment:
-------------
puppet-3.6.2-1.el6.noarch
puppet-server-3.6.2-1.el6.noarch
openstack-puppet-modules-2014.1-18.1.el6ost.noarch
foreman-1.6.0.21-1.el6sat.noarch
foreman-proxy-1.6.0.8-1.el6sat.noarch
ruby193-rubygem-foreman_discovery-1.3.0-0.1.rc2.el6sat.noarch
foreman-selinux-1.6.0-2.el6sat.noarch
foreman-installer-1.5.0-0.4.RC2.el6ost.noarch
ruby193-rubygem-staypuft-0.1.11-1.el6ost.noarch


Steps:
-------
(1) Install Staypuft from the poodle repo http://ayanami.boston.devel.redhat.com/poodles/rhos-devel-ci/foreman.el6/2014-07-10.4/Foreman-RHEL-6.repo (a rough install sketch follows these steps).
(2) Create a Nova-Network HA deployment.
(3) Assign 3 hosts to the controller role and 1 host to the compute role, then start the deployment.
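
For reference, a rough sketch of step (1) on a RHEL 6 host; the exact install procedure for this poodle may differ, and the package names are simply taken from the Environment list above:

# Hedged sketch of step (1) only; the exact procedure may differ.
curl -o /etc/yum.repos.d/Foreman-RHEL-6.repo \
    "http://ayanami.boston.devel.redhat.com/poodles/rhos-devel-ci/foreman.el6/2014-07-10.4/Foreman-RHEL-6.repo"

# Package names taken from the Environment list above.
yum install -y foreman-installer ruby193-rubygem-staypuft

# Run the installer; any Staypuft-specific answers or flags are omitted here.
foreman-installer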


Results: 
---------
- The rsync fails on one of the controller nodes; the other hosts' puppet agents spin waiting for all nodes to have keystone up --> the deployment hangs.
- The deployment hangs for about 1 hour.
- After about 1 hour puppet skips that part and continues.

It appears that the command "rsync -q -aIX --delete rsync://192.168.0.95/keystone/ /etc/keystone/ssl" failed.
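
To narrow this down, the same pull can be re-run by hand on the affected controller. A minimal sketch, reusing the source address, flags and rsync module from the error above:

# Re-run the failing sync by hand on the affected controller (sketch only).
# 192.168.0.95 and the 'keystone' module name are taken from the error above.
rsync -q -aIX --delete rsync://192.168.0.95/keystone/ /etc/keystone/ssl
echo "rsync exit status: $?"   # any non-zero status is what fails the puppet Exec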

journalctl -u puppet:
---------------------
Jul 10 20:58:43 525400868093.lab.eng.rdu2.redhat.com puppet-agent[4184]: (/Stage[main]/Quickstack::Pacemaker::Glance/Exec[i-am-glance-vip-OR-glance-is-up-on-vip]) Dependency Exec[rsync /etc/keystone/ssl] has failures: true
Jul 10 20:58:43 525400868093.lab.eng.rdu2.redhat.com puppet-agent[4184]: (/Stage[main]/Quickstack::Pacemaker::Glance/Exec[i-am-glance-vip-OR-glance-is-up-on-vip]) Skipping because of failed dependencies
Jul 10 20:58:43 525400868093.lab.eng.rdu2.redhat.com puppet-agent[4184]: (/Stage[main]/Glance::Registry/Exec[glance-manage db_sync]) Dependency Exec[rsync /etc/keystone/ssl] has failures: true
Jul 10 20:58:43 525400868093.lab.eng.rdu2.redhat.com puppet-agent[4184]: (/Stage[main]/Glance::Registry/Exec[glance-manage db_sync]) Skipping because of failed dependencies
Jul 10 20:58:43 525400868093.lab.eng.rdu2.redhat.com puppet-agent[4184]: (/Stage[main]/Glance::Registry/Exec[glance-manage db_sync]) Failed to call refresh: glance-manage db_sync returned 1 instead of one of [0]
Jul 10 20:58:43 525400868093.lab.eng.rdu2.redhat.com puppet-agent[4184]: (/Stage[main]/Glance::Registry/Exec[glance-manage db_sync]) glance-manage db_sync returned 1 instead of one of [0]
Jul 10 20:58:43 525400868093.lab.eng.rdu2.redhat.com puppet-agent[4184]: (/Stage[main]/Glance::Registry/Service[glance-registry]) Dependency Exec[rsync /etc/keystone/ssl] has failures: true
Jul 10 20:58:43 525400868093.lab.eng.rdu2.redhat.com puppet-agent[4184]: (/Stage[main]/Glance::Registry/Service[glance-registry]) Skipping because of failed dependencies
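
The log shows the usual puppet cascade: once Exec[rsync /etc/keystone/ssl] fails, everything that depends on it (the glance VIP check, glance-manage db_sync, the glance-registry service) is skipped. One quick way to tell whether the source side is at fault is to ask the rsync daemon on the VIP which modules it exports; a sketch, assuming rsyncd on 192.168.0.95 is reachable on the default port:

# List the modules exported by the rsync daemon on the VIP (sketch only).
# If 'keystone' is not listed here, the failure is on the source node,
# not on the controller running the pull.
rsync rsync://192.168.0.95/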



strace -p 
-----------
 [pid 22690] execve("/tmp/ha-all-in-one-util.bash", ["/tmp/ha-all-in-one-util.bash", "all_members_include", "keystone"], [/* 5 vars */]Process 22692 attached

strace -p
----------
[pid 25195] execve("/bin/grep", ["/bin/grep", "-q", "keystone"], [/* 8 vars */] <unfinished ...>
[pid 25194] exit_group(0)               = ?
[pid 25194] +++ exited with 0 +++
[pid 25158] <... wait4 resumed> [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 25194
[pid 25158] wait4(-1,  <unfinished ...>
[pid 25195] <... execve resumed> )      = 0
[pid 25195] arch_prctl(ARCH_SET_FS, 0x7fcdcd7e1740) = 0
[pid 25195] exit_group(1)               = ?
[pid 25195] +++ exited with 1 +++
[pid 25158] <... wait4 resumed> [{WIFEXITED(s) && WEXITSTATUS(s) == 1}], 0, NULL) = 25195
[pid 25158] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=25194, si_status=0, si_utime=0, si_stime=0} ---
[pid 25158] wait4(-1, 0x7fffd96bcfd0, WNOHANG, NULL) = -1 ECHILD (No child processes)
[pid 25158] exit_group(1)               = ?
[pid 25158] +++ exited with 1 +++
[pid  3989] <... wait4 resumed> [{WIFEXITED(s) && WEXITSTATUS(s) == 1}], 0, NULL) = 25158
[pid  3989] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=25158, si_status=1, si_utime=0, si_stime=1} ---
[pid 25157] _exit(0)                    = ?
[pid 25157] +++ exited with 0 +++
q^CProcess 3989 detached
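
The strace shows puppet spinning on "/tmp/ha-all-in-one-util.bash all_members_include keystone": the final "grep -q keystone" exits 1, that status is propagated as the script's exit status, and the agent keeps waiting. A rough reconstruction of the kind of gate this implements is below; the real script ships with openstack-foreman-installer and may differ, and using 'pcs status' as the grep input is an assumption:

# Rough reconstruction only -- not the shipped ha-all-in-one-util.bash.
all_members_include() {
    local service="$1"
    # grep -q exits 1 when the pattern is absent, which is the exit status 1
    # seen in the strace above; puppet reads that as "keystone not up yet".
    pcs status | grep -q "$service"    # 'pcs status' as the input is an assumption
}

all_members_include keystone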

Comment 2 Jason Guiditta 2014-07-11 14:51:34 UTC
https://github.com/redhat-openstack/astapor/pull/307

Comment 3 Jason Guiditta 2014-07-14 14:12:33 UTC
This is merged and will be in the next build.

Comment 11 Leonid Natapov 2014-08-12 15:39:58 UTC
HA+Nova successfully deployed.

Comment 12 errata-xmlrpc 2014-08-21 18:05:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1090.html

