| Summary: | Sync already in progress when associating a repo to a CDS | ||
|---|---|---|---|
| Product: | [Retired] Pulp | Reporter: | Jay Dobies <jason.dobies> |
| Component: | nodes | Assignee: | Jay Dobies <jason.dobies> |
| Status: | CLOSED WORKSFORME | QA Contact: | Preethi Thomas <pthomas> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | unspecified | CC: | skarmark |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | Bug Fix | |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2011-07-15 22:12:20 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
I'm not 100% sure how to reproduce this. I was using the Summit demo environment about 5 days after it was set up. I had:

- A repo on a CDS
- Unregistered the CDS
- Re-registered the CDS
- Associated the repo back to the CDS

That bombed with the following error:

```
Connecting to RHUA [pulp.example.com]...
Successfully connected to [pulp.example.com]

Unexpected error caught at the shell level
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/rhui/tools/shell.py", line 73, in safe_listen
    self.listen(clear=first_run)
  File "/usr/lib/python2.7/site-packages/rhui/tools/shell.py", line 92, in listen
    Shell.listen(self)
  File "/usr/lib/python2.7/site-packages/rhui/common/shell.py", line 191, in listen
    item.func(*args, **item.kwargs)
  File "/usr/lib/python2.7/site-packages/rhui/tools/screens/cds_repos.py", line 105, in associate
    create_performed = mode_handlers[mode](unassociated_repos)
  File "/usr/lib/python2.7/site-packages/rhui/tools/screens/cds_repos.py", line 156, in _associate_by_repo
    self.pulp.associate_repo(self.cds['hostname'], repo['id'])
  File "/usr/lib/python2.7/site-packages/rhui/tools/pulp_api.py", line 538, in associate_repo
    self.cds_api.associate(hostname, repo_id)
  File "/usr/lib/python2.7/site-packages/pulp/client/api/cds.py", line 68, in associate
    return self.server.POST(path, data)[1]
  File "/usr/lib/python2.7/site-packages/pulp/client/server.py", line 294, in POST
    return self._request('POST', path, body=body)
  File "/usr/lib/python2.7/site-packages/pulp/client/server.py", line 254, in _request
    raise ServerRequestError(response.status, response_body, None)
ServerRequestError: (409, 'Sync already in process for repo [summit-demo]', None)
```

The failure looks like it's in the associate code that attempts to trigger the redistribute. For some reason, the task is coming back as None.
```python
def associate(self, id):
    data = self.params()
    repo_id = data.get('repo_id')

    cds_api.associate_repo(id, repo_id)

    # Kick off the async task
    task = self.start_task(cds_api.redistribute, [repo_id], unique=True)

    # If no task was returned, the uniqueness check was tripped, which means
    # there's already a redistribute running for the given repo
    if task is None:
        return self.conflict('Sync already in process for repo [%s]' % repo_id)
```

One approach would be to ignore 409s in RHUI Manager, since we know the redistribute isn't being used.
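A minimal sketch of that workaround on the RHUI Manager side, under stated assumptions: the `ServerRequestError` stand-in here only mirrors the `(status, body, headers)` shape seen in the traceback, and the wrapper name `associate_ignoring_conflict` plus the callable passed into it are hypothetical, not actual RHUI code.

```python
class ServerRequestError(Exception):
    """Stand-in mirroring the (status, body, headers) tuple raised by
    pulp.client.server.ServerRequestError in the traceback above."""
    def __init__(self, status, body, headers=None):
        super(ServerRequestError, self).__init__(status, body, headers)
        self.status = status
        self.body = body


def associate_ignoring_conflict(associate_fn, hostname, repo_id):
    """Call the CDS associate API, but treat a 409 'sync already in
    process' conflict as success, since RHUI Manager does not rely on
    the redistribute task that the 409 is guarding."""
    try:
        return associate_fn(hostname, repo_id)
    except ServerRequestError as e:
        if e.status == 409:
            # A redistribute is already running for this repo; the
            # association itself succeeded, so swallow the conflict.
            return None
        raise
```

Any non-409 error is deliberately re-raised, so real failures still surface to the shell as before.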