Description of problem:
Here is the traceback:
error:
traceback: |2
File "/usr/lib/python3.9/site-packages/pulpcore/tasking/pulpcore_worker.py", line 452, in _perform_task
result = func(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/pulp_ansible/app/tasks/collections.py", line 180, in sync
repo_version = d_version.create()
File "/usr/lib/python3.9/site-packages/pulpcore/plugin/stages/declarative_version.py", line 161, in create
loop.run_until_complete(pipeline)
File "/usr/lib64/python3.9/asyncio/base_events.py", line 647, in run_until_complete
return future.result()
File "/usr/lib/python3.9/site-packages/pulpcore/plugin/stages/api.py", line 225, in create_pipeline
await asyncio.gather(*futures)
File "/usr/lib/python3.9/site-packages/pulpcore/plugin/stages/api.py", line 43, in __call__
await self.run()
File "/usr/lib/python3.9/site-packages/pulpcore/plugin/stages/content_stages.py", line 198, in run
await sync_to_async(process_batch)()
File "/usr/lib/python3.9/site-packages/asgiref/sync.py", line 435, in __call__
ret = await asyncio.wait_for(future, timeout=None)
File "/usr/lib64/python3.9/asyncio/tasks.py", line 442, in wait_for
return await fut
File "/usr/lib64/python3.9/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/lib/python3.9/site-packages/asgiref/sync.py", line 476, in thread_handler
return func(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/pulpcore/plugin/stages/content_stages.py", line 106, in process_batch
self._pre_save(batch)
File "/usr/lib/python3.9/site-packages/pulp_ansible/app/tasks/collections.py", line 1042, in _pre_save
collection, created = Collection.objects.get_or_create(
File "/usr/lib/python3.9/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/django/db/models/query.py", line 581, in get_or_create
return self.get(**kwargs), False
File "/usr/lib/python3.9/site-packages/django/db/models/query.py", line 439, in get
raise self.model.MultipleObjectsReturned(
description: get() returned more than one Collection -- it returned 2!
I'm not exactly sure how the duplicates happened; it could be a race condition during the sync. Nevertheless, it would be good to have a foreman-rake task to clean up the duplicates.
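For what it's worth, the offending rows can probably be spotted from the Pulp side in the Django shell. A minimal sketch, assuming the duplicate key is (namespace, name) as suggested by the get_or_create() call in the traceback (run via pulpcore-manager shell):
```
# A sketch only: list Collection rows that share the same (namespace, name).
# The (namespace, name) key is an assumption based on the traceback above.
from django.db.models import Count
from pulp_ansible.app.models import Collection

dupes = (
    Collection.objects.values("namespace", "name")
    .annotate(n=Count("pk"))
    .filter(n__gt=1)
)
for d in dupes:
    rows = Collection.objects.filter(namespace=d["namespace"], name=d["name"])
    print(d["namespace"], d["name"], [str(c.pk) for c in rows])
```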
Version-Release number of selected component (if applicable):
How reproducible:
Hard
Steps to Reproduce:
Sync all the collections from Galaxy.
Actual results:
Failed
Expected results:
Success
Additional info:
Daniel, I just had a look at the linked bugzilla and it appears to be the same issue:
```
[root@katello ~]# su -ls /usr/bin/bash -c 'reindexdb -a' postgres
load average: 0.57 1.38 0.69
load average: 0.57 1.38 0.69
/etc/profile: line 88: TMOUT: readonly variable
reindexdb: reindexing database "candlepin"
reindexdb: reindexing database "foreman"
reindexdb: error: reindexing of database "foreman" failed: ERROR: could not create unique index "index_fact_names_on_name_and_type"
DETAIL: Key (name, type)=(ssh::rsa::key, PuppetFactName) is duplicated.
```
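For reference, the duplicated fact rows behind that reindexdb error can be listed directly from the foreman database. A minimal sketch, assuming local peer authentication as the postgres system user (run it as postgres):
```
# A sketch only: list (name, type) pairs in fact_names that occur more than once.
# Assumes the script is run as the postgres system user with peer authentication.
import psycopg2

conn = psycopg2.connect(dbname="foreman", user="postgres", host="/var/run/postgresql")
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT name, type, count(*) FROM fact_names "
        "GROUP BY name, type HAVING count(*) > 1"
    )
    for name, fact_type, count in cur.fetchall():
        print(name, fact_type, count)
conn.close()
```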
I just wonder why it fails now, since this has never been an issue with Foreman 3.7 and Katello 4.9.
I did the in-place upgrade on Foreman 3.3 and Katello 4.5 (roughly 14 months ago); I'd have thought the entries in the database would have been accessed earlier, as the collections have been synced regularly.
Anyway, is there a proposed way forward?
I just checked back on it and, sure, the facts can easily be deleted, but sadly this is not where it stops. The next issues arise in 'katello_erratum_packages', which has multiple references in 'katello_module_stream_erratum_packages'. I'm really not sure that 'just deleting the duplicates' is a feasible way forward. Any thoughts?
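To make concrete what something other than "just deleting" would look like: the usual pattern is to re-point the referencing rows at the surviving record before removing the duplicate. This is only a sketch, not a supported procedure; the erratum_package_id column name and the ids are assumptions that would have to be checked against the real schema:
```
# A sketch only, not a supported cleanup: re-point references, then delete the duplicate.
# erratum_package_id, KEEP_ID and DUP_ID are hypothetical and must be verified first.
import psycopg2

KEEP_ID = 111  # hypothetical id of the katello_erratum_packages row to keep
DUP_ID = 222   # hypothetical id of its duplicate

conn = psycopg2.connect(dbname="foreman", user="postgres", host="/var/run/postgresql")
with conn, conn.cursor() as cur:
    # 1. Point the join-table rows at the surviving record (this may itself create
    #    duplicates in the join table, which would need the same treatment).
    cur.execute(
        "UPDATE katello_module_stream_erratum_packages "
        "SET erratum_package_id = %s WHERE erratum_package_id = %s",
        (KEEP_ID, DUP_ID),
    )
    # 2. Only then remove the duplicate parent row.
    cur.execute("DELETE FROM katello_erratum_packages WHERE id = %s", (DUP_ID,))
conn.close()
```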