Description of problem:

I have a setup with a) RHUA/RHUI-Mgr b) CDS1 c) CDS2. The repos on the RHUA total around 34GB. For the RHUA to CDS syncs I ran the CDS sync manually; the CDS2 sync finished with SUCCESS, but the CDS1 sync has been showing InProgress for a very long time.

2011-07-27 04:26:46,317 [INFO][Thread-6] markStatus() @ ParallelFetch.py:165 - 10 threads are active. 1233 items left to be fetched
2011-07-27 04:26:46,364 [INFO][Thread-3] __logchild() @ activeobject.py:141 - Create a link in repo directory for the package at /var/lib/pulp-cds/repos/content/dist/rhel/rhui/server-6/updates/6Server/x86_64/os//Packages/ppl-0.10.2-11.el6.i686.rpm to ../../../../../../../../../../../packages/ppl/0.10.2/11.el6/i686/388/ppl-0.10.2-11.el6.i686.rpm
2011-07-27 04:26:46,366 [INFO][Thread-3] markStatus() @ ParallelFetch.py:165 - 10 threads are active. 1232 items left to be fetched
2011-07-27 04:26:46,374 [INFO][Thread-4] __logchild() @ activeobject.py:141 - Create a link in repo directory for the package at /var/lib/pulp-cds/repos/content/dist/rhel/rhui/server-6/updates/6Server/x86_64/os//Packages/procmail-3.22-25.1.el6.x86_64.rpm to ../../../../../../../../../../../packages/procmail/3.22/25.1.el6/x86_64/42d/procmail-3.22-25.1.el6.x86_64.rpm
2011-07-27 04:26:46,376 [INFO][Thread-4] markStatus() @ ParallelFetch.py:165 - 10 threads are active. 1231 items left to be fetched
2011-07-27 05:12:39,522 [INFO][MainThread] __init__() @ config.py:372 - reading: /etc/gofer/agent.conf
2011-07-27 05:12:39,529 [INFO][MainThread] __init__() @ config.py:372 - reading: /etc/gofer/plugins/builtin.conf
2011-07-27 05:12:39,531 [INFO][MainThread] __init__() @ config.py:372 - reading: /etc/gofer/plugins/cdsplugin.conf
2011-07-27 05:12:39,531 [INFO][MainThread] __import() @ config.py:420 - processing: @import:/etc/pulp/cds.conf:server:host(host)
2011-07-27 05:12:39,532 [INFO][MainThread] __init__() @ config.py:372 - reading: /etc/pulp/cds.conf

Version-Release number of selected component (if applicable):

pulp: 0.0.214; rh-rhui-tools: 2.0.41

rpm -qav | grep -ie pulp -ie gofer -ie grinder -ie rh-rhui-tools
grinder-0.0.108-1.el6.noarch
gofer-0.43-1.el6.noarch
pulp-0.0.214-1.el6.noarch
pulp-common-0.0.214-1.el6.noarch
python-gofer-0.43-1.el6.noarch
pulp-client-0.0.214-1.el6.noarch
rh-rhui-tools-2.0.41-1.el6.noarch

How reproducible:
During RHUA to CDS sync.

Steps to Reproduce:
1.
2.
3.

Actual results:
The RHUA to CDS1 sync hung.

Expected results:
The RHUA to CDS1 sync should complete successfully.

Additional info:
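One way to tell whether the CDS1 sync is genuinely hung or merely slow is to watch the "items left to be fetched" counter that grinder writes to the CDS log (visible in the excerpt above). Below is a minimal sketch of such a check, not part of the original report; the log path is an assumption and should be pointed at wherever the gofer/grinder log lives on the CDS node.

#!/usr/bin/env python
# Sketch: report the last fetch-progress line and how old it is.
import re
from datetime import datetime

LOG_PATH = "/var/log/gofer/agent.log"   # assumed location, verify on the CDS

pattern = re.compile(
    r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) .* (\d+) items left to be fetched")

samples = []
with open(LOG_PATH) as log:
    for line in log:
        m = pattern.search(line)
        if m:
            ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S,%f")
            samples.append((ts, int(m.group(2))))

if not samples:
    raise SystemExit("no fetch-progress lines found in %s" % LOG_PATH)

last_ts, last_left = samples[-1]
age = datetime.now() - last_ts
print("last progress line: %d items left, logged %s ago" % (last_left, age))
# If the last progress line is hours old and the counter stopped falling,
# the sync is most likely hung rather than just slow.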
Are we supposed to sync from the RHUA to the CDS nodes serially or simultaneously? Is syncing such a large amount of data (34GB) to 2 or 3 CDS nodes simultaneously expected to work? As I recall, in RHUI 1.2 the syncs were done serially.
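If serial turns out to be the safer mode, the nodes could be driven one at a time. The following is only a rough sketch under assumptions: start_cds_sync() and cds_sync_done() are hypothetical placeholders, not real RHUI or pulp APIs, standing in for however the RHUA triggers a CDS sync and polls its status.

import time

CDS_NODES = ["cds1.example.com", "cds2.example.com"]   # example hostnames

def start_cds_sync(hostname):
    """Kick off the RHUA -> CDS sync for one node (placeholder)."""
    raise NotImplementedError("wire this to the actual sync trigger")

def cds_sync_done(hostname):
    """Return True once the node's sync has finished (placeholder)."""
    raise NotImplementedError("wire this to the actual status check")

for node in CDS_NODES:
    start_cds_sync(node)
    # Wait for this node to finish before starting the next one,
    # so only one large transfer is in flight at a time.
    while not cds_sync_done(node):
        time.sleep(60)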
Observed similar behaviour on Amazon during a CDS sync. I started the sync for two CDS nodes; one node synchronized successfully, while the other has been showing the sync as "in progress" for a long time.
This appears to be an Amazon-specific issue, possibly a network or SAN limitation. I am attempting to sync again to check whether serial CDS syncs are successful.
I've not seen this issue in 2.0. Can we revalidate this to see if it's still present?
Moving this to NOTABUG, as this issue is seen only on AMIs.