Bug 726004 - CDS sync hangs after a while
Summary: CDS sync hangs after a while
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Update Infrastructure for Cloud Providers
Classification: Red Hat
Component: Tools
Version: 2.0.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Jay Dobies
QA Contact: wes hayutin
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-07-27 09:53 UTC by Kedar Bidarkar
Modified: 2011-10-04 12:41 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-10-04 12:41:04 UTC
Target Upstream Version:



Description Kedar Bidarkar 2011-07-27 09:53:40 UTC
Description of problem:

I have a setup with:

a) RHUA/RHUI-Mgr
b) CDS1
c) CDS2

There are around 34GB of repos on the RHUA.

For the RHUA to CDS syncs, I started the CDS syncs manually. The CDS2 sync was a SUCCESS,
but the CDS1 sync has been showing InProgress for a very long time.

2011-07-27 04:26:46,317 [INFO][Thread-6] markStatus() @ ParallelFetch.py:165 - 10 threads are active. 1233 items left to be fetched
2011-07-27 04:26:46,364 [INFO][Thread-3] __logchild() @ activeobject.py:141 - Create a link in repo directory for the package at /var/lib/pulp-cds/repos/content/dist/rhel/rhui/server-6/updates/6Server/x86_64/os//Packages/ppl-0.10.2-11.el6.i686.rpm to ../../../../../../../../../../../packages/ppl/0.10.2/11.el6/i686/388/ppl-0.10.2-11.el6.i686.rpm
2011-07-27 04:26:46,366 [INFO][Thread-3] markStatus() @ ParallelFetch.py:165 - 10 threads are active. 1232 items left to be fetched
2011-07-27 04:26:46,374 [INFO][Thread-4] __logchild() @ activeobject.py:141 - Create a link in repo directory for the package at /var/lib/pulp-cds/repos/content/dist/rhel/rhui/server-6/updates/6Server/x86_64/os//Packages/procmail-3.22-25.1.el6.x86_64.rpm to ../../../../../../../../../../../packages/procmail/3.22/25.1.el6/x86_64/42d/procmail-3.22-25.1.el6.x86_64.rpm
2011-07-27 04:26:46,376 [INFO][Thread-4] markStatus() @ ParallelFetch.py:165 - 10 threads are active. 1231 items left to be fetched
2011-07-27 05:12:39,522 [INFO][MainThread] __init__() @ config.py:372 - reading: /etc/gofer/agent.conf
2011-07-27 05:12:39,529 [INFO][MainThread] __init__() @ config.py:372 - reading: /etc/gofer/plugins/builtin.conf
2011-07-27 05:12:39,531 [INFO][MainThread] __init__() @ config.py:372 - reading: /etc/gofer/plugins/cdsplugin.conf
2011-07-27 05:12:39,531 [INFO][MainThread] __import() @ config.py:420 - processing: @import:/etc/pulp/cds.conf:server:host(host)
2011-07-27 05:12:39,532 [INFO][MainThread] __init__() @ config.py:372 - reading: /etc/pulp/cds.conf
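
For context, here is a minimal sketch of the worker-pool pattern the ParallelFetch log lines above suggest. This is not grinder's actual code; fetch_package() and the item names are made up. The point it illustrates: if a single fetch blocks indefinitely (e.g. a stalled network read with no socket timeout), that thread never returns, the "items left to be fetched" count stops decreasing, and the CDS sync status stays "in progress" even though nothing is happening.

import logging
import queue
import threading
import time

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s [%(levelname)s][%(threadName)s] %(message)s")
log = logging.getLogger(__name__)

NUM_THREADS = 10              # matches the "10 threads are active" lines in the log
work = queue.Queue()

def fetch_package(item):
    # Hypothetical downloader. A real one needs a read/socket timeout;
    # without one, a dead connection blocks this worker forever and the
    # overall sync never finishes.
    time.sleep(0.01)

def worker():
    while True:
        try:
            item = work.get_nowait()
        except queue.Empty:
            return
        fetch_package(item)
        log.info("%d items left to be fetched", work.qsize())
        work.task_done()

for pkg in range(1233):       # roughly the backlog seen in the log
    work.put("package-%d.rpm" % pkg)

threads = [threading.Thread(target=worker, name="Thread-%d" % i)
           for i in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
log.info("fetch complete")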


Version-Release number of selected component (if applicable):
pulp: 0.0.214; rh-rhui-tools: 2.0.41

rpm -qav | grep -ie pulp -ie gofer -ie grinder -ie rh-rhui-tools
grinder-0.0.108-1.el6.noarch
gofer-0.43-1.el6.noarch
pulp-0.0.214-1.el6.noarch
pulp-common-0.0.214-1.el6.noarch
python-gofer-0.43-1.el6.noarch
pulp-client-0.0.214-1.el6.noarch
rh-rhui-tools-2.0.41-1.el6.noarch


How reproducible:

During RHUA to CDS sync.

Steps to Reproduce:
1. Set up a RHUA with two CDS nodes (CDS1 and CDS2) and roughly 34GB of repos on the RHUA.
2. Start the CDS syncs for both nodes manually, at the same time.
3. Watch the sync status of each node.
  
Actual results:
The RHUA to CDS1 sync hung.

Expected results:
The RHUA to CDS1 sync should complete successfully.

Additional info:

Comment 1 Kedar Bidarkar 2011-07-27 10:00:37 UTC
Are we supposed to sync from the RHUA to the CDS nodes serially, or simultaneously?

Is syncing such a large amount of data (34GB) to 2 or 3 CDS nodes simultaneously expected to work?

As I remember, in RHUI 1.2 the syncs were done serially.

Comment 2 Sachin Ghai 2011-07-27 10:29:20 UTC
Observed similar behaviour on Amazon during a CDS sync. I started the sync for two CDS nodes. One of the nodes synchronized successfully, and the other node has been showing the sync as "in progress" for a long time.

Comment 3 Kedar Bidarkar 2011-07-27 12:28:29 UTC
It appears to be an Amazon-specific issue.

It may be a network or SAN limitation.

I am attempting to sync again to check whether serial CDS syncs are successful.
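
For reference, a rough sketch of that serial check, assuming a hypothetical trigger_cds_sync() helper (this is not the rhui-manager CLI or the pulp API): start the RHUA-to-CDS syncs one node at a time, so the two transfers do not compete for network/SAN bandwidth.

import time

CDS_NODES = ["cds1.example.com", "cds2.example.com"]   # hypothetical hostnames

def trigger_cds_sync(hostname):
    # Placeholder for whatever actually starts the sync (the rhui-manager
    # UI in this setup); returns only once the sync has finished.
    print("syncing %s ..." % hostname)
    time.sleep(1)
    print("%s sync finished" % hostname)

for node in CDS_NODES:
    # Serial: start the next sync only after the previous one completes.
    trigger_cds_sync(node)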

Comment 4 James Slagle 2011-10-03 20:28:05 UTC
I've not seen this issue in 2.0.  Can we revalidate this to see if it's still present?

Comment 5 Kedar Bidarkar 2011-10-04 12:41:04 UTC
Moving this to NOTABUG as this issue is seen only on AMIs.

