RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there. Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September on pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED". If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry. The email creates a ServiceNow ticket with Red Hat. Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and will have "MigratedToJIRA" added to "Keywords". The link to the successor Jira issue will be found under "Links", will have a little "two-footprint" icon next to it, and will direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will also be available in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 2142270 - Locale change caused by RHEL upgrade results in database index corruption "get() returned more than one Modulemd -- it returned 2!"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: leapp-repository
Version: 7.9
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Evgeni Golov
QA Contact: Lukas Pramuk
URL:
Whiteboard:
Duplicates: 2154289
Depends On:
Blocks: 2161929 2167984
 
Reported: 2022-11-12 13:33 UTC by Anand Jambhulkar
Modified: 2023-12-13 04:25 UTC
CC List: 24 users

Fixed In Version: leapp-repository-0.17.0-8.el7_9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2161413 2161929
Environment:
Last Closed: 2023-03-14 13:56:57 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Github oamg leapp-repository pull 1007 0 None Merged BZ#2142270 - run reindexdb to fix issues due to new locales in RHEL8 2023-01-20 14:04:45 UTC
Github oamg leapp-repository pull 1018 0 None open BZ#2142270 - reindex the database *before* the installer runs 2023-01-27 14:00:24 UTC
Red Hat Issue Tracker OAMG-8310 0 None None None 2023-01-20 14:02:43 UTC
Red Hat Issue Tracker RHELPLAN-145907 0 None None None 2023-01-20 14:02:35 UTC
Red Hat Issue Tracker SAT-13956 0 None None None 2022-11-18 09:41:47 UTC
Red Hat Knowledge Base (Solution) 6988715 0 None None None 2023-01-05 14:59:27 UTC
Red Hat Product Errata RHBA-2023:1228 0 None None None 2023-03-14 13:57:07 UTC

Description Anand Jambhulkar 2022-11-12 13:33:29 UTC
Description of problem:
----------------------

Repository synchronization on Red Hat Satellite 6.11.3 on RHEL-8 fails with the following error:

Katello::Errors::Pulp3Error "get() returned more than one Modulemd -- it returned 2!"



Version-Release number of selected component (if applicable):
------------------------------------------------------------

Red Hat Satellite 6.11.3 on RHEL-8



Steps to Reproduce:
------------------

1. Sync repositories such as RHEL 8 or Oracle Linux 8 AppStream; the sync fails with the above error.



Actual results:
--------------

In the task-export -
------------------

Id: 2e0ce7f7-50f7-408e-8fba-7cac4820d8ac
Label: Actions::Katello::Repository::Sync
Status: stopped
Result: warning
Started at: 2022-11-11 14:25:42 UTC
Ended at: 2022-11-11 14:59:57 UTC 


---
pulp_tasks:
- pulp_href: "/pulp/api/v3/tasks/849094e6-6061-46f3-84fb-d37300621153/"
  pulp_created: '2022-11-11T14:25:45.994+00:00'
  state: failed
  name: pulp_rpm.app.tasks.synchronizing.synchronize
  logging_cid: 07bc9425-2d23-4a66-be44-540e3465bd0c
  started_at: '2022-11-11T14:25:46.194+00:00'
  finished_at: '2022-11-11T14:59:38.721+00:00'
  error:
    traceback: !ruby/string:Sequel::SQL::Blob |2
        File "/usr/lib/python3.8/site-packages/pulpcore/tasking/pulpcore_worker.py", line 380, in _perform_task
          result = func(*args, **kwargs)
        File "/usr/lib/python3.8/site-packages/pulp_rpm/app/tasks/synchronizing.py", line 556, in synchronize
          repo_version = dv.create() or repo.latest_version()
        File "/usr/lib/python3.8/site-packages/pulpcore/plugin/stages/declarative_version.py", line 161, in create
          loop.run_until_complete(pipeline)
.....
        File "/usr/lib/python3.8/site-packages/django/db/models/query.py", line 439, in get
          raise self.model.MultipleObjectsReturned(
    description: get() returned more than one Modulemd -- it returned 2!
  worker: "/pulp/api/v3/workers/29399c10-ede9-40b1-94d8-4f790145cc28/"
  child_tasks: []
  progress_reports:
  - message: Downloading Metadata Files
    code: sync.downloading.metadata
    state: completed
    done: 11
  - message: Downloading Artifacts
    code: sync.downloading.artifacts
    state: completed
    done: 0
  - message: Associating Content
    code: associating.content
    state: canceled
    done: 31500
  - message: Parsed Modulemd
    code: sync.parsing.modulemds
    state: completed
    total: 619
    done: 619
  - message: Parsed Modulemd-defaults
    code: sync.parsing.modulemd_defaults
    state: completed
    total: 47
    done: 47
  - message: Skipping Packages
    code: sync.skipped.packages
    state: completed
    total: 0
    done: 0
  - message: Parsed Packages
    code: sync.parsing.packages
    state: completed
    total: 28329
    done: 28329
  - message: Parsed Comps
    code: sync.parsing.comps
    state: completed
    total: 68
    done: 68
  - message: Parsed Advisories
    code: sync.parsing.advisories
    state: completed
    total: 2689
    done: 2689
  created_resources: []
  reserved_resources_record:
  - "/pulp/api/v3/repositories/rpm/rpm/bee4531a-42af-4dba-8ae4-da297cab938e/"
  - shared:/pulp/api/v3/remotes/rpm/rpm/708f730b-6ca3-4681-a67b-42e6c73c7245/
create_version: true
task_groups: []
poll_attempts:
  total: 145
  failed: 1


Error:
Katello::Errors::Pulp3Error
get() returned more than one Modulemd -- it returned 2!
---
- "/usr/share/gems/gems/katello-4.3.0.49/app/lib/actions/pulp3/abstract_async_task.rb:108:in
  `block in check_for_errors'"
- "/usr/share/gems/gems/katello-4.3.0.49/app/lib/actions/pulp3/abstract_async_task.rb:106:in
  `each'"
- "/usr/share/gems/gems/katello-4.3.0.49/app/lib/actions/pulp3/abstract_async_task.rb:106:in
  `check_for_errors'"
.....
- "/usr/share/gems/gems/sidekiq-5.2.10/lib/sidekiq/processor.rb:71:in `run'"
- "/usr/share/gems/gems/sidekiq-5.2.10/lib/sidekiq/util.rb:16:in `watchdog'"
- "/usr/share/gems/gems/sidekiq-5.2.10/lib/sidekiq/util.rb:25:in `block in safe_thread'"
- "/usr/share/gems/gems/logging-2.3.0/lib/logging/diagnostic_context.rb:474:in `block
  in create_with_logging_context'"




In /var/log/messages File:  
-------------------------


Nov 11 06:21:01 m-lsat01 pulpcore-worker-6[84096]: pulp [9fc2be7f-9479-40ad-9980-6a56c21fcd28]: pulpcore.tasking.pulpcore_worker:INFO: Starting task 87cd2759-a183-4419-81af-3464a92f34a1
Nov 11 06:21:01 m-lsat01 pulpcore-worker-6[84096]: pulp [9fc2be7f-9479-40ad-9980-6a56c21fcd28]: pulp_rpm.app.tasks.synchronizing:INFO: Synchronizing: repository=Red_Hat_Enterprise_Linux_8_for_x86_64_-_AppStream_RPMs_8-136526562 remote=Red_Hat_Enterprise_Linux_8_for_x86_64_-_AppStream_RPMs_8-136521087
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]: pulp [9fc2be7f-9479-40ad-9980-6a56c21fcd28]: pulpcore.tasking.pulpcore_worker:INFO: Task 87cd2759-a183-4419-81af-3464a92f34a1 failed (get() returned more than one Modulemd -- it returned 2!)
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]: pulp [9fc2be7f-9479-40ad-9980-6a56c21fcd28]: pulpcore.tasking.pulpcore_worker:INFO:   File "/usr/lib/python3.8/site-packages/pulpcore/tasking/pulpcore_worker.py", line 380, in _perform_task
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:    result = func(*args, **kwargs)
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:  File "/usr/lib/python3.8/site-packages/pulp_rpm/app/tasks/synchronizing.py", line 556, in synchronize
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:    repo_version = dv.create() or repo.latest_version()
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:  File "/usr/lib/python3.8/site-packages/pulpcore/plugin/stages/declarative_version.py", line 161, in create
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:    loop.run_until_complete(pipeline)
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:  File "/usr/lib64/python3.8/asyncio/base_events.py", line 616, in run_until_complete
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:    return future.result()
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:  File "/usr/lib/python3.8/site-packages/pulpcore/plugin/stages/api.py", line 225, in create_pipeline
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:    await asyncio.gather(*futures)
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:  File "/usr/lib/python3.8/site-packages/pulpcore/plugin/stages/api.py", line 43, in __call__
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:    await self.run()
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:  File "/usr/lib/python3.8/site-packages/pulpcore/plugin/stages/content_stages.py", line 200, in run
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:    await sync_to_async(process_batch)()
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:  File "/usr/lib/python3.8/site-packages/asgiref/sync.py", line 444, in __call__
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:    ret = await asyncio.wait_for(future, timeout=None)
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:  File "/usr/lib64/python3.8/asyncio/tasks.py", line 455, in wait_for
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:    return await fut
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:  File "/usr/lib64/python3.8/concurrent/futures/thread.py", line 57, in run
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:    result = self.fn(*self.args, **self.kwargs)
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:  File "/usr/lib/python3.8/site-packages/asgiref/sync.py", line 486, in thread_handler
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:    return func(*args, **kwargs)
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:  File "/usr/lib/python3.8/site-packages/pulpcore/plugin/stages/content_stages.py", line 120, in process_batch
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:    d_content.content = d_content.content.__class__.objects.get(
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:  File "/usr/lib/python3.8/site-packages/django/db/models/manager.py", line 85, in manager_method
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:    return getattr(self.get_queryset(), name)(*args, **kwargs)
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:  File "/usr/lib/python3.8/site-packages/django/db/models/query.py", line 439, in get
Nov 11 06:56:55 m-lsat01 pulpcore-worker-6[84096]:    raise self.model.MultipleObjectsReturned(
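
For context on the final frame of the tracebacks above: Pulp looks existing content up by its unique natural key, and Django's QuerySet.get() raises <Model>.MultipleObjectsReturned whenever the lookup matches more than one row. For Modulemd that key is (name, stream, version, context, arch), so getting two matches means the unique index that should enforce the key is no longer doing its job. A minimal, hypothetical illustration (invented field values, not Satellite code):

    # Hypothetical illustration -- field values are invented.
    from pulp_rpm.app.models import Modulemd

    try:
        Modulemd.objects.get(
            name="perl", stream="5.26", version="820181219174508",
            context="9edba152", arch="x86_64",
        )
    except Modulemd.MultipleObjectsReturned:
        # Two rows share a key that the unique index should have rejected.
        raise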




In /var/log/foreman/production.log File:
---------------------------------------


2022-11-11T06:20:57 [I|app|9fc2be7f] Started POST "/katello/api/v2/repositories/13652/sync?organization_id=1" for 10.230.225.11 at 2022-11-11 06:20:57 +0100
2022-11-11T06:20:57 [I|app|9fc2be7f] Processing by Katello::Api::V2::RepositoriesController#sync as HTML
2022-11-11T06:20:57 [I|app|9fc2be7f]   Parameters: {"id"=>"13652", "skip_metadata_check"=>true, "organization_id"=>"1", "api_version"=>"v2", "repository"=>{"id"=>"13652"}}
2022-11-11T06:20:57 [I|bac|9fc2be7f] Task {label: , id: e88495af-3cc3-4d83-9719-6e6e4b64555e, execution_plan_id: f4259d60-b36f-4360-9f1c-d1d154606fa3} state changed: pending 
2022-11-11T06:20:57 [I|bac|9fc2be7f] Task {label: Actions::Katello::Repository::Sync, id: e88495af-3cc3-4d83-9719-6e6e4b64555e, execution_plan_id: f4259d60-b36f-4360-9f1c-d1d154606fa3} state changed: planning 
2022-11-11T06:20:58 [I|bac|9fc2be7f] Task {label: Actions::Katello::Repository::Sync, id: e88495af-3cc3-4d83-9719-6e6e4b64555e, execution_plan_id: f4259d60-b36f-4360-9f1c-d1d154606fa3} state changed: planned 
2022-11-11T06:20:58 [I|app|9fc2be7f]   Rendering /usr/share/gems/gems/katello-4.3.0.49/app/views/katello/api/v2/repositories/sync.json.rabl within katello/api/v2/layouts/resource
2022-11-11T06:20:58 [I|app|9fc2be7f]   Rendered /usr/share/gems/gems/katello-4.3.0.49/app/views/katello/api/v2/repositories/sync.json.rabl within katello/api/v2/layouts/resource (Duration: 80.9ms | Allocations: 22502)
2022-11-11T06:20:58 [I|app|9fc2be7f] Completed 202 Accepted in 1105ms (Views: 216.4ms | ActiveRecord: 65.5ms | Allocations: 130509)
2022-11-11T06:20:58 [I|bac|9fc2be7f] Task {label: Actions::Katello::Repository::Sync, id: e88495af-3cc3-4d83-9719-6e6e4b64555e, execution_plan_id: f4259d60-b36f-4360-9f1c-d1d154606fa3} state changed: running 
2022-11-11T06:57:08 [E|bac|9fc2be7f] get() returned more than one Modulemd -- it returned 2! (Katello::Errors::Pulp3Error)
 9fc2be7f | /usr/share/gems/gems/katello-4.3.0.49/app/lib/actions/pulp3/abstract_async_task.rb:108:in `block in check_for_errors'
 9fc2be7f | /usr/share/gems/gems/katello-4.3.0.49/app/lib/actions/pulp3/abstract_async_task.rb:106:in `each'
 9fc2be7f | /usr/share/gems/gems/katello-4.3.0.49/app/lib/actions/pulp3/abstract_async_task.rb:106:in `check_for_errors'
 9fc2be7f | /usr/share/gems/gems/katello-4.3.0.49/app/lib/actions/pulp3/abstract_async_task.rb:160:in `poll_external_task'
 9fc2be7f | /usr/share/gems/gems/dynflow-1.6.4/lib/dynflow/action/polling.rb:100:in `poll_external_task_with_rescue'
 9fc2be7f | /usr/share/gems/gems/dynflow-1.6.4/lib/dynflow/action/polling.rb:22:in `run'
 9fc2be7f | /usr/share/gems/gems/dynflow-1.6.4/lib/dynflow/action/cancellable.rb:14:in `run'
.....
 9fc2be7f | /usr/share/gems/gems/dynflow-1.6.4/lib/dynflow/executors/sidekiq/worker_jobs.rb:11:in `block (2 levels) in perform'
 9fc2be7f | /usr/share/gems/gems/dynflow-1.6.4/lib/dynflow/executors.rb:18:in `run_user_code'
 9fc2be7f | /usr/share/gems/gems/dynflow-1.6.4/lib/dynflow/executors/sidekiq/worker_jobs.rb:9:in `block in perform'
 9fc2be7f | /usr/share/gems/gems/dynflow-1.6.4/lib/dynflow/executors/sidekiq/worker_jobs.rb:25:in `with_telemetry'
 9fc2be7f | /usr/share/gems/gems/dynflow-1.6.4/lib/dynflow/executors/sidekiq/worker_jobs.rb:8:in `perform'
 9fc2be7f | /usr/share/gems/gems/dynflow-1.6.4/lib/dynflow/executors/sidekiq/serialization.rb:27:in `perform'
 9fc2be7f | [ sidekiq ]
 9fc2be7f | [ concurrent-ruby ]
2022-11-11T06:57:13 [I|app|9fc2be7f] CONTENT_INDEX type=Katello::Rpm success=true duration=423.93
2022-11-11T06:57:14 [I|app|9fc2be7f] CONTENT_INDEX type=Katello::ModuleStream success=true duration=439.83
2022-11-11T06:57:14 [I|app|9fc2be7f] CONTENT_INDEX type=Katello::Erratum success=true duration=381.13
2022-11-11T06:57:15 [I|app|9fc2be7f] CONTENT_INDEX type=Katello::PackageGroup success=true duration=431.98
2022-11-11T06:57:15 [I|app|9fc2be7f] CONTENT_INDEX type=Katello::Srpm success=true duration=380.19
2022-11-11T06:57:15 [I|aud|9fc2be7f] Katello::Repository (13652) update event on last_indexed 1970-01-01 01:00:00 +0100, 2022-11-11 06:57:15 +0100
2022-11-11T06:57:16 [I|bac|9fc2be7f] Task {label: Actions::Katello::Repository::Sync, id: e88495af-3cc3-4d83-9719-6e6e4b64555e, execution_plan_id: f4259d60-b36f-4360-9f1c-d1d154606fa3} state changed: stopped  result: warning



Expected results:
----------------

No error and successful repository synchronization.



Additional info:
---------------

- We previously had a similar Bugzilla bug that was filed for errata; this issue occurs for modules. The previous bug is linked below:

	https://bugzilla.redhat.com/show_bug.cgi?id=2013320


- The customer has confirmed that the issue occurs even after upgrading to Satellite 6.11.4 (at the time of reporting they were running 6.11.3).

Comment 6 Daniel Alley 2022-11-18 15:23:51 UTC
We've been investigating; the discussion is happening on Matrix rather than Bugzilla. I will mark it as assigned.

We have not yet been able to reproduce this failure, so if support can help us get a reliable reproducer, that would be very helpful.

Comment 7 Jayant Bhatia 2022-11-18 15:46:23 UTC
I tried to set up a reproducer on my end but could not reproduce it. Please let me know how I can help with setting up the reproducer. If you need any specific config files or data from the customer's Satellite, I can get them for you to debug the issue.

Comment 24 Pavel Moravec 2022-12-16 14:27:51 UTC
Great finding! I can confirm that the unique index "rpm_modulemd_name_stream_version_context_arch_f6598e4e_uniq" violation on our Satellite (https://bugzilla.redhat.com/show_bug.cgi?id=2044631#c52) coincides with, or follows, the OS upgrade from RHEL7.

Now the questions come:
- how to prevent such index violation?
- some customers already got into the broken index state - is the orphan cleanup a valid workaround / remedy every time?
- can I close mine https://bugzilla.redhat.com/show_bug.cgi?id=2154289 as duplicate BZ? :)
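
One way to check whether that unique key is actually duplicated is a grouped lookup through Pulp's Django models. A hedged sketch (the model and field names are assumptions derived from the index name and the pulp_rpm plugin; run it from something like "pulpcore-manager shell"):

    # Assumed model/fields; lists every (name, stream, version, context, arch)
    # key that occurs more than once, i.e. every violation of the unique index.
    from django.db.models import Count
    from pulp_rpm.app.models import Modulemd

    duplicates = (
        Modulemd.objects
        .values("name", "stream", "version", "context", "arch")
        .annotate(copies=Count("pk"))
        .filter(copies__gt=1)
    )
    for row in duplicates:
        print(row)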

Comment 25 Pavel Moravec 2022-12-16 14:44:26 UTC
(In reply to Pavel Moravec from comment #24)
> Now the questions come:
> - how to prevent such index violation?
> - some customers already got into the broken index state - is the orphan cleanup a valid workaround / remedy every time?
> - can I close mine https://bugzilla.redhat.com/show_bug.cgi?id=2154289 as duplicate BZ? :)

Needinfo on Daniel to confirm the action plan.

Comment 26 Pavel Moravec 2022-12-18 21:20:12 UTC
(In reply to Pavel Moravec from comment #24)
> - how to prevent such index violation?

Self-answering this question with a proposal: during the Leapp upgrade of Satellite from RHEL7 to RHEL8, add a new mandatory step that reindexes the table (or, just to be sure, all databases). A sketch of what such a step could look like follows below.
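
For a rough idea, here is an illustrative sketch of a leapp actor that reindexes all PostgreSQL databases. This is not the actual implementation (that landed via the leapp-repository pull requests linked above); the actor name, phase tags, and error handling are all assumptions:

    # Illustrative sketch only -- all names here are hypothetical; the real
    # actors live in the linked leapp-repository pull requests.
    from leapp.actors import Actor
    from leapp.libraries.stdlib import CalledProcessError, run
    from leapp.tags import ApplicationsPhaseTag, IPUWorkflowTag


    class ReindexPostgresDatabases(Actor):
        """
        Rebuild all PostgreSQL indexes after the RHEL7 -> RHEL8 upgrade.

        glibc 2.28 changed collation data, so btree indexes built on RHEL7
        may silently violate their own ordering (and unique constraints)
        once the server runs on RHEL8.
        """

        name = 'reindex_postgres_databases'
        consumes = ()
        produces = ()
        tags = (ApplicationsPhaseTag, IPUWorkflowTag)

        def process(self):
            try:
                run(['runuser', '-u', 'postgres', '--', 'reindexdb', '-a'])
            except CalledProcessError as err:
                self.log.error('reindexdb failed: %s', err)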

Comment 27 Daniel Alley 2023-01-05 14:26:42 UTC
@pmoravec Yes I think the other issue (#2154289) can be closed as a duplicate.  I'll defer to Hao on the prevention / workaround since he knows way more on this topic than me :)  I assume that it will require some kind of change to the upgrade procedure and that the component should change?

Comment 28 Pavel Moravec 2023-01-05 14:59:28 UTC
*** Bug 2154289 has been marked as a duplicate of this bug. ***

Comment 29 Daniel Alley 2023-01-10 19:55:53 UTC
Moving this back to NEW and switching the component since it is an upgrade / installer issue rather than a code issue, and changing the title to better reflect what we know about the problem.

Comment 30 Evgeni Golov 2023-01-12 11:22:24 UTC
I agree; this is https://wiki.postgresql.org/wiki/Locale_data_changes, and we should automatically run a re-index during the upgrade.
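
The wiki page above recommends the amcheck extension for finding btree indexes broken by the collation change. A hedged sketch of such a check driven from Python (the psycopg2 driver, database name, and privileges are assumptions; this only assesses the damage, it does not repair anything):

    # Assumed connection details; needs a role allowed to run amcheck
    # (e.g. postgres) against the affected database.
    import psycopg2

    conn = psycopg2.connect(dbname="pulpcore", user="postgres")
    conn.autocommit = True  # keep going after a failed check
    cur = conn.cursor()
    cur.execute("CREATE EXTENSION IF NOT EXISTS amcheck")
    cur.execute("""
        SELECT c.oid::regclass::text
        FROM pg_index i
        JOIN pg_class c ON c.oid = i.indexrelid
        JOIN pg_am am ON am.oid = c.relam
        WHERE am.amname = 'btree'
    """)
    for (index_name,) in cur.fetchall():
        try:
            cur.execute("SELECT bt_index_check(%s::regclass)", (index_name,))
        except psycopg2.Error as exc:
            print("possibly corrupted:", index_name, exc)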

Comment 31 Evgeni Golov 2023-01-12 11:42:45 UTC
Running "runuser -u postgres --  reindexdb -a" should fix all indexes

Comment 65 Lukas Pramuk 2023-02-15 17:12:26 UTC
VERIFIED.

@Satellite 6.11.4.1 CDN
leapp-0.15.0-2.el7_9.noarch
leapp-upgrade-el7toel8-0.17.0-8.el7_9.noarch (brew)

by the following manual reproducer:

1) Have a Satellite 6.11.z (mine was 6.11.4.1) on RHEL7

2) Perform LEAPP Upgrade 

3) Wait till leapp_resume.service is finished (=inactive)

4) Run DB re-index on RHEL8 Satellite to detect a possible corruption of unique indices

# runuser - postgres -c "reindexdb -a"
reindexdb: reindexing database "candlepin"
reindexdb: reindexing database "foreman"
reindexdb: reindexing database "postgres"
reindexdb: reindexing database "pulpcore"
reindexdb: reindexing database "template1"


>>> the in-place upgraded Satellite shows no unique index corruption on RHEL8

Comment 67 errata-xmlrpc 2023-03-14 13:56:57 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (leapp-repository bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1228

Comment 69 Red Hat Bugzilla 2023-12-13 04:25:09 UTC
The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 120 days.

