Bug 1981225 - Unable to sync docker images
Summary: Unable to sync docker images
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Pulp
Version: 6.10.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: 6.10.0
Assignee: Justin Sherrill
QA Contact: Lai
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-07-12 05:49 UTC by Imaan
Modified: 2021-11-16 14:12 UTC
CC List: 11 users

Fixed In Version: python-pulp-container-2.8.1
Doc Type: Known Issue
Doc Text:
Cause: Synchronizing repositories that contain the same tags in parallel triggers a race condition.
Consequence: One of the tags is not synced correctly.
Workaround (if any): Synchronize such repositories one by one rather than in parallel (illustrated below).
Clone Of:
Environment:
Last Closed: 2021-11-16 14:12:32 UTC
Target Upstream Version:
Embargoed:
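
The Doc Text workaround, illustrated (not part of the original report; the repository IDs are placeholders): a plain shell loop syncs the repositories one at a time, since hammer waits for each sync task to finish unless --async is passed.

    # Illustrative only: replace the placeholder IDs with the real repository IDs.
    for id in 101 102 103; do
        hammer repository synchronize --id "$id"
    done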


Attachments
error (43.67 KB, image/png), attached 2021-07-12 05:49 UTC by Imaan


Links
Pulp Redmine 9334 (Normal, CLOSED - CURRENTRELEASE): Backport 2.8: race condition syncing repositories with the same tags results in bad data within the database. Last updated 2021-09-07 19:06:36 UTC
Red Hat Product Errata RHSA-2021:4702. Last updated 2021-11-16 14:12:42 UTC

Description Imaan 2021-07-12 05:49:00 UTC
Created attachment 1800652 [details]
error

Description of problem:

During performance testing of Pulp 3, we are unable to sync docker images. We tried to sync 20 docker images; 7 synced successfully and 13 failed with the following error:

"undefined method `schema_version' for nil:NilClass."

Version-Release number of selected component (if applicable):

Red Hat Satellite (build: 6.10.0 Beta)


How reproducible:


Steps to Reproduce:

1. Sync 20 docker images using this playbook: https://github.com/redhat-performance/satperf/blob/master/playbooks/tests/sync-docker.yaml


Actual results: Only 7 images synced; the other 13 failed.


Expected results: All 20 docker images should sync successfully.


Additional info:

Comment 1 Ina Panova 2021-07-13 11:48:41 UTC
You seem to be syncing repos from Docker Hub, can you confirm? They have introduced pull rate limits.

Comment 2 Tanya Tereshchenko 2021-07-23 15:47:03 UTC
Also, what is 6.10.0 Beta? To my knowledge, Beta is not out yet.
Please share which pulp packages you have installed on your system.
Thank you.

Comment 3 Tanya Tereshchenko 2021-07-26 07:43:28 UTC
Imaan, please reply to the comments.
Currently it looks like an issue with using Docker Hub in testing.

Comment 4 Imaan 2021-07-27 14:53:01 UTC
Hi Tanya, 

I have tested on the current snap 9 build of Red Hat Satellite 6.10.0 Beta.

I tried to sync 20 repositories of docker type in parallel, assigned each of them to its own content view, then published and promoted the content views in parallel. Every repo contains almost 4000 packages. Observation: out of 20, only 10 syncs were successful and the rest failed with the same error during sync, while the publish and promote steps completed successfully.

Let me check about Docker Hub; I will update.

Comment 8 Ina Panova 2021-08-03 11:18:04 UTC
I logged into the instance and could not find anything Pulp related. All tasks were in the completed state, repo versions were created, and the content had properly associated artifacts. The logs also did not show anything.
This is a Katello error and it seems to be surfaced not for Pulp reasons.
@justin can you take a look please?

Note: the repos were created and synced *not* from docker hub.

Comment 10 Justin Sherrill 2021-08-18 00:43:16 UTC
Relevant traceback:

 NoMethodError

undefined method `schema_version' for nil:NilClass

---
- "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-4.1.1/app/models/katello/docker_meta_tag.rb:177:in
  `block in get_tag_table_values'"
- "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-4.1.1/app/models/katello/docker_meta_tag.rb:176:in
  `each'"
- "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-4.1.1/app/models/katello/docker_meta_tag.rb:176:in
  `map'"
- "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-4.1.1/app/models/katello/docker_meta_tag.rb:176:in
  `get_tag_table_values'"
- "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-4.1.1/app/models/katello/docker_meta_tag.rb:138:in
  `block in import_meta_tags'"
- "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-4.1.1/app/models/katello/docker_meta_tag.rb:137:in
  `each'"
- "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-4.1.1/app/models/katello/docker_meta_tag.rb:137:in
  `import_meta_tags'"
- "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-4.1.1/app/models/katello/docker_tag.rb:63:in
  `import_for_repository'"
- "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-4.1.1/app/models/katello/repository.rb:904:in
  `block (2 levels) in index_content'"
- "/opt/theforeman/tfm/root/usr/share/gems/gems/katello-4.1.1/app/lib/katello/logging.rb:8:in
  `time'"

Comment 11 Justin Sherrill 2021-08-18 00:56:50 UTC
Resyncing the repo seems to result in a successful sync, so this is likely a race condition.

Comment 13 Justin Sherrill 2021-08-24 02:01:32 UTC
Created redmine issue https://projects.theforeman.org/issues/33326 from this bug

Comment 14 Justin Sherrill 2021-08-24 14:27:36 UTC
I am proposing we remove this as a beta blocker. Syncing 10 very large docker repos (1000 tags each) at the same time, I could not reproduce the issue, but I was able to reproduce it when syncing 20 very large docker repos at the same time.

Comment 16 pulp-infra@redhat.com 2021-08-25 16:11:30 UTC
The Pulp upstream bug status is at NEW. Updating the external tracker on this bug.

Comment 17 pulp-infra@redhat.com 2021-08-25 16:11:32 UTC
The Pulp upstream bug priority is at Normal. Updating the external tracker on this bug.

Comment 18 Justin Sherrill 2021-08-25 16:30:19 UTC
Moving to the Pulp component, as the underlying data issues I see are quite bad and could be the cause of the Katello error, although it's not clear. I can re-test once we have a fix in Pulp.

Comment 19 Bryan Kearney 2021-08-25 20:05:25 UTC
Upstream bug assigned to jsherril

Comment 21 Bryan Kearney 2021-08-26 00:05:18 UTC
Upstream bug assigned to jsherril

Comment 23 pulp-infra@redhat.com 2021-08-30 15:08:24 UTC
The Pulp upstream bug status is at ASSIGNED. Updating the external tracker on this bug.

Comment 24 pulp-infra@redhat.com 2021-08-30 17:14:52 UTC
The Pulp upstream bug status is at POST. Updating the external tracker on this bug.

Comment 25 pulp-infra@redhat.com 2021-09-01 13:15:32 UTC
The Pulp upstream bug status is at MODIFIED. Updating the external tracker on this bug.

Comment 26 pulp-infra@redhat.com 2021-09-03 21:07:15 UTC
The Pulp upstream bug status is at MODIFIED. Updating the external tracker on this bug.

Comment 27 pulp-infra@redhat.com 2021-09-03 21:07:17 UTC
The Pulp upstream bug priority is at Normal. Updating the external tracker on this bug.

Comment 28 pulp-infra@redhat.com 2021-09-07 19:06:37 UTC
The Pulp upstream bug status is at CLOSED - CURRENTRELEASE. Updating the external tracker on this bug.

Comment 29 Lai 2021-09-27 18:56:34 UTC
Steps to retest

1. Create docker repo with 20+ images
2. Sync the repo
3. Check status of sync

Expected:
Sync should complete successfully

Actual:
Sync completes successfully.


I created a docker repo that has 20+ images and was able to sync it successfully without issues (illustrative hammer commands below).
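
Illustrative hammer commands for the verification setup (organization, product, name, URL, and upstream name are placeholders; exact options may differ by Satellite version):

    hammer repository create \
      --organization "Default Organization" \
      --product "Test Product" \
      --name "docker-many-tags" \
      --content-type docker \
      --url "https://registry.example.com" \
      --docker-upstream-name "library/example"

    hammer repository synchronize \
      --organization "Default Organization" \
      --product "Test Product" \
      --name "docker-many-tags"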

Verified on 6.10 snap 20 with python3-pulp-container-2.8.1-0.1.el7pc.noarch

Comment 32 errata-xmlrpc 2021-11-16 14:12:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Satellite 6.10 Release), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:4702

