Bug 1223023 - [RFE] Allow Pulp to force sync and verify/repair corrupted packages in a repository
Summary: [RFE] Allow Pulp to force sync and verify/repair corrupted packages in a repository
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Pulp
Version: 6.1.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: Unspecified
Assignee: Justin Sherrill
QA Contact: Sachin Ghai
URL:
Whiteboard:
Duplicates: 1330042 1380765 1421232 1421250 1427165 (view as bug list)
Depends On:
Blocks: 1353215 1399395 1316897 1317008 CEE_Sat6_Top_BZs, GSS_Sat6_Top_Bugs 1427618 1430879 1530485 1686971
 
Reported: 2015-05-19 15:43 UTC by Dirk Herrmann
Modified: 2021-04-06 18:07 UTC
CC List: 54 users

Fixed In Version: tfm-rubygem-runcible-1.9.3-1 pulp-2.8.7.10-2 pulp-rpm-2.8.7.12-1 rubygem-katello-3.0.0.123-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:
Embargoed:


Attachments
patch adding force_full to yum importer (1.87 KB, patch)
2017-02-21 15:58 UTC, Michael Hrivnak
Screenshot of new UI option for verification options during a sync operation (73.34 KB, image/png)
2017-02-28 18:23 UTC, Mike McCune


Links
System ID Private Priority Status Summary Last Updated
Foreman Issue Tracker 17941 0 Normal Closed allow for force generation of repo metadata 2021-02-03 06:56:19 UTC
Foreman Issue Tracker 18558 0 Normal Closed Add option to force sync repository 2021-02-03 06:56:19 UTC
Foreman Issue Tracker 19250 0 Normal Closed Repo sync: several parameters not passed to pulp 2021-02-03 06:56:20 UTC
Pulp Redmine 1158 0 Normal CLOSED - CURRENTRELEASE As a user, I can force full/fresh publish of rpms and not do an incremental publish 2017-03-07 21:34:44 UTC
Pulp Redmine 1823 0 High CLOSED - CURRENTRELEASE RPMs partially downloaded 2017-02-01 21:03:11 UTC
Pulp Redmine 1982 0 Normal CLOSED - CURRENTRELEASE As a user, I can force a full sync 2016-10-14 19:01:17 UTC
Pulp Redmine 2621 0 Normal CLOSED - CURRENTRELEASE Syncing an immediate repo with 'on_demand' overridden no longer populates the catalog 2017-04-12 14:33:23 UTC
Pulp Redmine 2660 0 High CLOSED - COMPLETE Backport #1158, As a user, I can force full/fresh publish of rpms 2017-03-30 23:05:21 UTC
Pulp Redmine 2661 0 High CLOSED - COMPLETE Backport #1823, Verify RPM/SRPM/DRPM unit at its final location 2017-03-30 23:04:40 UTC
Pulp Redmine 2662 0 High CLOSED - COMPLETE Backport #1982, force full sync 2017-03-30 23:03:51 UTC
Pulp Redmine 2663 0 High CLOSED - COMPLETE Backport #2621, Syncing an immediate repo with 'on_demand' overridden no longer populates the catalog 2017-03-30 23:02:44 UTC
Pulp Redmine 2704 0 High CLOSED - CURRENTRELEASE Catalog entries for existing content units with old-style storage path get created with new-style storage path. 2017-05-24 18:34:22 UTC
Red Hat Bugzilla 1330042 0 medium CLOSED allow pulp to force re-sync of a repository 2021-08-30 13:40:09 UTC
Red Hat Bugzilla 1344524 0 high CLOSED [RFE] Provide the ability to force a verification and synchronization of Capsule repository content 2021-12-10 14:54:32 UTC
Red Hat Knowledge Base (Solution) 2598981 0 None None None 2016-09-02 08:44:29 UTC
Red Hat Knowledge Base (Solution) 2653831 0 None None None 2016-09-28 16:24:40 UTC

Internal Links: 1330042 1344524

Description Dirk Herrmann 2015-05-19 15:43:09 UTC
Description of problem:

We've synced various Red Hat repositories. Some repo syncs ran into errors and had to be resumed. After that, the nightly sync status was "Sync complete". While debugging a provisioning problem we saw that there were 183 packages with a size of 0 bytes. Even if a repo sync fails, we should compare the md5sums of the packages to ensure they are re-synced during the next repo sync.

Version-Release number of selected component (if applicable):

Currently 6.1, but we already saw the same outcome with 6.0; we are not sure whether it had the same root cause.

How reproducible:

We don't know exactly what the root cause of this issue was; possibly the repo sync stopped with failures.

Steps to Reproduce:
1.
2.
3.

Actual results:

Broken packages (size 0 bytes) will not automatically be re-synced.

Expected results:

Verify (e.g. via md5sum) that the package stored in pulp really is the expected package, to ensure it is re-synced if a previous sync failed and the package is missing or unusable.

Additional info:

Comment 1 RHEL Program Management 2015-05-19 15:52:31 UTC
Since this issue was entered in Red Hat Bugzilla, the release flag has been
set to ? to ensure that it is properly evaluated for this release.

Comment 3 Michael Hrivnak 2015-07-07 18:38:07 UTC
Please verify that the importer setting "validate" was set to True when this occurred.

https://pulp-rpm.readthedocs.org/en/2.6-release/tech-reference/yum-plugins.html#configuration-parameters

The installer should set that in /etc/pulp/server/plugins.conf.d/yum_importer.json
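For reference, a minimal version of that file with validation enabled would look something like this (a sketch; any other importer settings, such as proxy configuration, would sit alongside it):

{
    "validate": true
}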

If it is there, please provide the output of "ls -lZ" so we can make sure that file is readable by the right processes.

Just as a hunch, is it possible that the system ran out of disk space during the sync?

FWIW, the expected behavior is that the sync will verify each RPM's checksum, and if verification fails, the sync will throw it away instead of adding it to the repo (and of course report that error). We should not end up with corrupt RPMs in the repo.

Comment 4 Dirk Herrmann 2015-07-08 11:35:03 UTC
Hi Michael,

unfortunately I had to move forward with my setup and therefore reinstalled and reconfigured my Sat6. That means I'm neither able to check the settings you mentioned nor to reproduce the issue. We definitely never ran out of disk space or inodes. And since the sync had run multiple times before we noticed this issue, the behavior was not as you described. Maybe it has been fixed by one of the updates shipped in the meantime; it never happened again.

Dirk

Comment 5 Michael Hrivnak 2015-07-14 18:23:00 UTC
Let me know if you want any further investigation from the pulp team. If so, answers to the above questions about the "validate" setting would be helpful.

Comment 6 Sean O'Keeffe 2015-09-11 10:44:34 UTC
Hi,

I've also had a similar issue: so far we have found one rpm of which only two thirds was downloaded, yet it was added to our repos. Ours is a disconnected install, if that makes any difference. My /etc/pulp/server/plugins.conf.d/yum_importer.json file only has proxy settings as per https://access.redhat.com/documentation/en-US/Red_Hat_Satellite/6.0/html/Release_Notes/Disconnected.html. No validate option.

Sean

Comment 8 Justin Sherrill 2015-12-04 04:44:06 UTC
Michael,

We do not set 'validate' in that yum_importer.json.  Instead we set it at sync time when initiating the sync.  This change was made as part of https://bugzilla.redhat.com/show_bug.cgi?id=1139896

with the change being:  https://github.com/Katello/katello/pull/4747/files

So all Satellite 6.1 installations should have this set by default.

-Justin

Comment 9 Justin Sherrill 2015-12-04 04:54:07 UTC
And here is an example of the request:

RestClient.post "https://katello-devbox.example.com/pulp/api/v2/repositories/Default_Organization-TestProduct-TestRepo/actions/sync/", 

"{\"override_config\":{\"num_threads\":4,\"validate\":true}}"

"Accept"=>"*/*; q=0.5, application/xml", "Accept-Encoding"=>"gzip, deflate", "Authorization"=>"OAuth oauth_body_hash=\"2jmj7l5rSw0yVb%2FvlWAYkK%2FYBwk%3D\", oauth_consumer_key=\"katello\", oauth_nonce=\"D7IO753MImqP2YeCKp1AcdYngVbG04OBOtUuSiAlIQ\", oauth_signature=\"ahRXoqPLKfyj0VAZl9RQ%2BwWaTzs%3D\", oauth_signature_method=\"HMAC-SHA1\", oauth_timestamp=\"1449204704\", oauth_version=\"1.0\"", "Content-Length"=>"53", "accept"=>"application/json", "content_type"=>"application/json", "pulp-user"=>"admin"

Comment 13 Michael Hrivnak 2016-04-06 19:18:52 UTC
I created an upstream bug, but any additional info on how to reproduce this would be helpful. Looking at the pulp code that shipped in 6.1, it's hard to see how this is happening.

I see that the bug was originally seen before the 6.1 release, presumably with an early build? Would that build have had the katello change that turned on validation? If not, that could definitely explain everything.

Comment 14 pulp-infra@redhat.com 2016-04-06 19:33:43 UTC
The Pulp upstream bug status is at NEW. Updating the external tracker on this bug.

Comment 15 pulp-infra@redhat.com 2016-04-06 19:33:47 UTC
The Pulp upstream bug priority is at High. Updating the external tracker on this bug.

Comment 17 Pavel Moravec 2016-04-28 08:01:40 UTC
Is this really fixed in 6.2 (or anywhere)? My understanding is that:

- when enabling validate=true in pulp repo sync, any unit already present on the disk should have been verified against its checksum
- validate=true is set by default in 6.1 and 6.2beta (I have verified sync_options contains :validate=>true)
- BUT it apparently does not work:

- sync Sat6.1 tools for RHEL7
- ensure /var/lib/pulp/content/units/rpm/37/3f7ae5ab5cbe8b82529071c0b0892f1a75e8bc0f691f3cf123e739531a4a86/qpid-proton-c-0.9-4.el7.x86_64.rpm is only referred to by that repo (a few mongo or pulp-admin queries)
- damage the file
- sync Sat6.1 tools for RHEL7 again
- check the file (rpm -K $file)

by "damage", I tried all these ways:
1) rm -rf $file; touch $file; chown apache:apache $file
2) rm -rf $file
3) echo > $file
4) echo >> $file

After damaging the file in _any_ of these ways and re-syncing the repo, the file was untouched, still in the same damaged state.
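Condensed into a shell session, my test was roughly (a sketch; the storage path is the one mentioned above and will differ per install):

FILE=/var/lib/pulp/content/units/rpm/37/3f7ae5ab5cbe8b82529071c0b0892f1a75e8bc0f691f3cf123e739531a4a86/qpid-proton-c-0.9-4.el7.x86_64.rpm
rpm -K $FILE           # baseline: digests OK
echo garbage >> $FILE  # damage variant 4 from above
# ... re-sync the Sat6.1 tools for RHEL7 repo from the UI or hammer ...
rpm -K $FILE           # still reports bad digests; the file was not repaired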


Michael, could you confirm this bug (I tested it mainly in 6.2Beta /  pulp-server-2.8.1.3-1.el7sat.noarch)?


Further, the same problem can also appear on Capsules, where a Sat->Caps sync ends with a repo damaged on the Capsule. Again, it is the same pain to fix it there (an even bigger pain due to another pulp issue, though). Please fix this bug such that capsule sync also does the verification.

Comment 18 Michael Hrivnak 2016-05-06 13:56:11 UTC
"validate=true" as an importer setting does not cause pulp to re-verify the checksum of every file that is already in the repo. It does verify the size and checksum of each file that gets downloaded during a sync.

Put another way, every rpm has its checksum verified on the way in, before it gets saved to the database. But once it's been added to pulp, nothing goes back to periodically look for corruption.

Comment 20 Pavel Moravec 2016-05-09 08:38:22 UTC
See also: https://bugzilla.redhat.com/show_bug.cgi?id=1330042 :

allow pulp to force re-sync of a repository

Or in general:
assume someone accidentally deletes a file under /var/lib/pulp/content . How do we recover from that?

Comment 21 Brad Buckingham 2016-05-10 17:45:46 UTC
*** Bug 1329334 has been marked as a duplicate of this bug. ***

Comment 27 pulp-infra@redhat.com 2016-07-14 15:00:53 UTC
The Pulp upstream bug status is at ASSIGNED. Updating the external tracker on this bug.

Comment 30 Reartes Guillermo 2016-08-08 15:46:35 UTC
I found one package that was truncated on Sat 6.1 and could not be installed on any client.

Both yum and yumdownloader failed with: "[Errno -1] Package does not match intended download. Suggestion: run yum --enablerepo=rhel-7-server-rpms clean metadata"

I checked the Satellite and found that the checksum of the package was different from the one I manually downloaded with yumdownloader. The package was truncated (5.0 MB for the bad rpm vs 5.2 MB).

I manually replaced the faulty file with the correct one, and then the clients were able to install and apply updates correctly (the rpm was systemd, so it could not simply be skipped).

A daily sync plan exists for that product and it executed correctly for 2 days after the bad rpm appeared, but it never detected nor corrected the truncated rpm.

I wonder if there are more rpms in that state.

Comment 32 pulp-infra@redhat.com 2016-09-02 16:01:02 UTC
The Pulp upstream bug status is at POST. Updating the external tracker on this bug.

Comment 33 Bryan Kearney 2016-09-28 14:03:12 UTC
*** Bug 1330042 has been marked as a duplicate of this bug. ***

Comment 36 Michael Hrivnak 2016-10-14 18:37:07 UTC
This title change was a little surprising: "Summary: Packages with 0 bytes won't be synced again → Allow Pulp to force resync a repository"

It changed from reporting a problem to requesting a specific solution.

The associated upstream pulp issue, #1823, addresses the original problem, but not with the specific solution in the new title. Instead it adds extra protection to make sure we don't end up with 0 byte files to begin with.

However, you're in luck! We also implemented a force sync option, so I am associating that redmine issue also.

I think the word of caution we can take away is: if you change the scope of a BZ issue, make sure any related upstream issues are still relevant and satisfactory, especially if they're already in POST or beyond.

Comment 37 pulp-infra@redhat.com 2016-10-14 19:01:19 UTC
The Pulp upstream bug status is at CLOSED - CURRENTRELEASE. Updating the external tracker on this bug.

Comment 38 pulp-infra@redhat.com 2016-10-14 19:01:30 UTC
The Pulp upstream bug priority is at Normal. Updating the external tracker on this bug.

Comment 40 Xixi 2016-10-26 20:59:28 UTC
(In reply to Michael Hrivnak from comment #36)
> This title change was a little surprising: "Summary: Packages with 0 bytes
> won't be synced again → Allow Pulp to force resync a repository"
> 
> It changed from reporting a problem to requesting a specific solution.
> 
> The associated upstream pulp issue, #1823, addresses the original problem,
> but not with the specific solution in the new title. Instead it adds extra
> protection to make sure we don't end up with 0 byte files to begin with.
> 
> However, you're in luck! We also implemented a force sync option, so I am
> associating that redmine issue also.
> 
Thank you Michael, will this force sync option include the functionality described in https://access.redhat.com/solutions/2038473 "[Satellite6] How to forcefully regenerate metadata for a particular repository?" 

> I think the word of caution we can take away is: if you change the scope of
> a BZ issue, make sure any related upstream issues are still relevant and
> satisfactory, especially if they're already in POST or beyond.
+1, there seem to be multiple issues in this single BZ

Comment 41 Michael Hrivnak 2016-10-31 13:03:29 UTC
(In reply to Xixi from comment #40)
> Thank you Michael, will this force sync option include the functionality
> described in https://access.redhat.com/solutions/2038473 "[Satellite6] How
> to forcefully regenerate metadata for a particular repository?" 

Short answer: no. Pulp separates the idea of a sync (content coming in) vs. a publish (content going out). But Satellite puts them together in one workflow, so that isn't obvious to users. The sync can sometimes take a shortcut to speed things up when it has an opportunity to do so; this issue is about forcing it to not take any shortcuts during sync.

The issue you linked to is about forcing a publish to happen.

Comment 44 pulp-infra@redhat.com 2016-11-14 13:01:58 UTC
The Pulp upstream bug status is at MODIFIED. Updating the external tracker on this bug.

Comment 45 Brad Buckingham 2016-11-21 16:56:53 UTC
Created redmine issue http://projects.theforeman.org/issues/17418 from this bug

Comment 46 Michael Hrivnak 2016-12-09 18:24:18 UTC
Oops, moving back to NEW since there is a foreman issue.

Comment 47 Bryan Kearney 2016-12-16 19:18:29 UTC
Upstream bug component is Repositories

Comment 53 pulp-infra@redhat.com 2017-01-18 00:03:23 UTC
The Pulp upstream bug status is at ON_QA. Updating the external tracker on this bug.

Comment 54 pulp-infra@redhat.com 2017-02-01 21:03:12 UTC
The Pulp upstream bug status is at CLOSED - CURRENTRELEASE. Updating the external tracker on this bug.

Comment 56 Justin Sherrill 2017-02-17 22:34:33 UTC
Michael,

To clarify, in order to repair a repository with an immediate download policy, we would need to:

1) Change the repository to on_demand
2) Sync the repository (Presumably with force_full = true)
3) Kick off a download_repo task with verify_all_units = true
4) Change the repository back to immediate

Questions:

1) If the repository is already on_demand, (skipping steps 1 and 4), would step 3 cause the entire repo to be downloaded? 

2) If the repository is set to the 'background' download policy, I assume steps 1 and 4 wouldn't be needed?

3) This BZ is marked against 6.2.z, but step 2 requires a pulp newer than 2.8.7 (for force_full); is that easily backportable?

Comment 57 Justin Sherrill 2017-02-17 22:40:15 UTC
*** Bug 1421232 has been marked as a duplicate of this bug. ***

Comment 58 Justin Sherrill 2017-02-20 14:24:31 UTC
One more question:

4) Is it possible to kick off a sync with a download_policy override, so that we can skip steps 1 and 4?

Comment 59 Michael Hrivnak 2017-02-20 14:31:38 UTC
(In reply to Justin Sherrill from comment #56)
> Michael,
> 
> To clarify, In order to repair a repository with an immediate download
> policy, we would need to:
> 
> 1) Change the repository to on_demand
> 2) Sync the repository (Presumably with force_full = true)
> 3) Kick off a download_repo task with verify_all_units = true
> 4) Change the repository back to immediate
> 
> Questions:
> 
> 1) If the repository is already on_demand, (skipping steps 1 and 4), would
> step 3 cause the entire repo to be downloaded?

No. It verifies each file by size and checksum, and only re-downloads ones that are missing or corrupt.

> 
> 2) If the repository is set to 'background' download policy, i assume steps
> 1 and 4 wouldn't be needed?

Correct.

> 
> 3) This BZ is marked against 6.2.z, but step 2 requires a pulp newer than
> 2.8.7 (for force_full), is that easily backportable?

I'm not actually sure if it needs force_full. Jeff, what do you think?

> 4)  is it possible to kick off a sync with an download_policy override so that we can skip steps 1 and 4

Yes, you should be able to pass it as an argument when you queue the task. All importer settings can be overridden that way.
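For example (a sketch; the values are illustrative), the body queued for the sync task would just carry the overrides:

{"override_config": {"download_policy": "on_demand", "validate": true}}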

Comment 60 Jeff Ortel 2017-02-20 17:08:19 UTC
Yes, I think the force_full flag is needed. Otherwise, the yum importer may determine that the metadata has not changed, causing the sync to stop before generating the catalog entries.

Comment 61 Michael Hrivnak 2017-02-21 15:58:32 UTC
Created attachment 1256187 [details]
patch adding force_full to yum importer

Looks like the diff for adding force_full is super simple. I tweaked it a touch, and I'm attaching a patch that should apply to 2.8.7.

Origin: https://github.com/pulp/pulp_rpm/pull/935/files

Comment 62 Justin Sherrill 2017-02-21 16:27:31 UTC
Thanks Michael!

Comment 63 Satellite Program 2017-02-21 17:18:06 UTC
Upstream bug assigned to mmccune

Comment 66 Mike McCune 2017-02-28 18:23:35 UTC
Created attachment 1258472 [details]
Screenshot of new UI option for verification options during a sync operation

Attached is a screenshot of the new options we will present in an 'advanced' mode for initiating a synchronization.

These new options will also be available with hammer.
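From the CLI, this should look roughly like the following (a sketch; exact flag names may still shift before release, and <ID> is a placeholder for the repository ID):

hammer repository synchronize --validate-contents=true --id=<ID>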

Comment 67 Peter Vreman 2017-03-01 15:34:22 UTC
I checked the new UI for sync operations. I think the text could use a bit of improvement:

- the second option, "skip ...", does not really convey to me as a user that it in fact does more work; "skip" normally suggests being faster by not doing things.
- inconsistent use of "standard sync" and "normal sync"

Suggestion: name the options with "increasing" wording:

- optimized sync
- normal sync
- full sync - including content validation

Comment 70 pulp-infra@redhat.com 2017-03-06 21:01:41 UTC
The Pulp upstream bug status is at NEW. Updating the external tracker on this bug.

Comment 71 pulp-infra@redhat.com 2017-03-06 21:01:53 UTC
The Pulp upstream bug priority is at Normal. Updating the external tracker on this bug.

Comment 74 Justin Sherrill 2017-03-07 16:22:15 UTC
*** Bug 1421250 has been marked as a duplicate of this bug. ***

Comment 75 Satellite Program 2017-03-07 17:17:51 UTC
Upstream bug assigned to jsherril

Comment 76 Satellite Program 2017-03-07 17:18:05 UTC
Upstream bug assigned to jsherril

Comment 77 pulp-infra@redhat.com 2017-03-07 21:34:45 UTC
The Pulp upstream bug status is at CLOSED - CURRENTRELEASE. Updating the external tracker on this bug.

Comment 78 pulp-infra@redhat.com 2017-03-07 21:35:00 UTC
The Pulp upstream bug priority is at Normal. Updating the external tracker on this bug.

Comment 79 Satellite Program 2017-03-07 23:17:39 UTC
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/17941 has been resolved.

Comment 81 Satellite Program 2017-03-08 17:18:05 UTC
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/17941 has been resolved.

Comment 82 Satellite Program 2017-03-08 19:18:00 UTC
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/17941 has been resolved.

Comment 83 pulp-infra@redhat.com 2017-03-10 03:31:52 UTC
The Pulp upstream bug status is at POST. Updating the external tracker on this bug.

Comment 84 pulp-infra@redhat.com 2017-03-10 17:31:58 UTC
The Pulp upstream bug status is at MODIFIED. Updating the external tracker on this bug.

Comment 85 Michael Hrivnak 2017-03-12 17:08:30 UTC
*** Bug 1427165 has been marked as a duplicate of this bug. ***

Comment 86 Dennis Kliban 2017-03-14 02:23:23 UTC
Requesting needsinfo from upstream developer jortel because the 'FailedQA' flag is set.

Comment 87 Dennis Kliban 2017-03-14 02:23:53 UTC
Requesting needsinfo from upstream developer fdobrovo because the 'FailedQA' flag is set.

Comment 88 Dennis Kliban 2017-03-14 02:24:25 UTC
Requesting needsinfo from upstream developer ttereshc because the 'FailedQA' flag is set.

Comment 89 Dennis Kliban 2017-03-14 02:24:55 UTC
Requesting needsinfo from upstream developer ttereshc because the 'FailedQA' flag is set.

Comment 90 Jeff Ortel 2017-03-14 15:02:54 UTC
Pulp fix merged.

Comment 91 Justin Sherrill 2017-03-14 16:03:40 UTC
Moving to POST

Comment 92 Satellite Program 2017-03-14 16:17:29 UTC
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/18558 has been resolved.

Comment 96 pulp-infra@redhat.com 2017-03-30 02:04:29 UTC
The Pulp upstream bug status is at POST. Updating the external tracker on this bug.

Comment 97 pulp-infra@redhat.com 2017-03-30 02:04:42 UTC
The Pulp upstream bug priority is at High. Updating the external tracker on this bug.

Comment 98 pulp-infra@redhat.com 2017-03-30 02:05:00 UTC
The Pulp upstream bug status is at POST. Updating the external tracker on this bug.

Comment 99 pulp-infra@redhat.com 2017-03-30 02:05:16 UTC
The Pulp upstream bug priority is at High. Updating the external tracker on this bug.

Comment 100 pulp-infra@redhat.com 2017-03-30 02:05:34 UTC
The Pulp upstream bug status is at POST. Updating the external tracker on this bug.

Comment 101 pulp-infra@redhat.com 2017-03-30 02:05:49 UTC
The Pulp upstream bug priority is at High. Updating the external tracker on this bug.

Comment 102 pulp-infra@redhat.com 2017-03-30 02:06:07 UTC
The Pulp upstream bug status is at POST. Updating the external tracker on this bug.

Comment 103 pulp-infra@redhat.com 2017-03-30 02:06:23 UTC
The Pulp upstream bug priority is at High. Updating the external tracker on this bug.

Comment 104 pulp-infra@redhat.com 2017-03-30 23:02:46 UTC
The Pulp upstream bug status is at CLOSED - COMPLETE. Updating the external tracker on this bug.

Comment 105 pulp-infra@redhat.com 2017-03-30 23:03:52 UTC
The Pulp upstream bug status is at CLOSED - COMPLETE. Updating the external tracker on this bug.

Comment 106 pulp-infra@redhat.com 2017-03-30 23:04:42 UTC
The Pulp upstream bug status is at CLOSED - COMPLETE. Updating the external tracker on this bug.

Comment 107 pulp-infra@redhat.com 2017-03-30 23:05:22 UTC
The Pulp upstream bug status is at CLOSED - COMPLETE. Updating the external tracker on this bug.

Comment 108 pulp-infra@redhat.com 2017-04-05 19:03:18 UTC
The Pulp upstream bug status is at ON_QA. Updating the external tracker on this bug.

Comment 111 Sachin Ghai 2017-04-10 09:38:23 UTC
Thanks Justin for the detailed steps.

First of all, 6.2.9 snap2 doesn't fix the corrupted package. To double-check, I followed all of scenario 1) in comment 110.

Here is the md5sum of the package:
===========================
[root@qe-sat6-upgrade-rhel6 ~]# md5sum /var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm

0df34b6066be26c358b2df1a03268aac  /var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm


Corrupted the package: you can see the checksum changed
=====================

[root@qe-sat6-upgrade-rhel6 ~]# echo "FOO" >> /var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm
[root@qe-sat6-upgrade-rhel6 ~]# md5sum /var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm
00f60a580fa33aa54325d64532b3d3bc  /var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm



Re-sync the rhel6 6Server x86_64 repo:
=============

[root@qe-sat6-upgrade-rhel6 ~]# hammer -u admin -p changeme repository synchronize --validate-contents=true --id=2
[............................................................................................................................................] [100%]
No new packages.


After syncing, the md5sum is the same as that of the corrupted package
=======================================================================

 ~]# md5sum /var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm
00f60a580fa33aa54325d64532b3d3bc  /var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm


Since the corrupted package came from 6.6, I re-synced the 6.6 repo
===================================================================
 ~]# yum list python-lxml
Installed Packages
python-lxml.x86_64                                2.2.3-1.1.el6                                @anaconda-RedHatEnterpriseLinux-201409260744.x86_64/6.6
[root@qe-sat6-upgrade-rhel6 ~]# hammer -u admin -p changeme repository synchronize --validate-contents=true --id=54
[............................................................................................................................................] [100%]


But the md5sum of the package still matches the corrupted rpm:

# md5sum python-lxml-2.2.3-1.1.el6.x86_64.rpm 
00f60a580fa33aa54325d64532b3d3bc  python-lxml-2.2.3-1.1.el6.x86_64.rpm


Not sure if I missed anything, but I don't see the expected result as per comment 110.

Comment 112 pulp-infra@redhat.com 2017-04-10 10:03:41 UTC
Requesting needsinfo from upstream developer jortel because the 'FailedQA' flag is set.

Comment 113 pulp-infra@redhat.com 2017-04-10 10:04:17 UTC
Requesting needsinfo from upstream developer fdobrovo because the 'FailedQA' flag is set.

Comment 114 pulp-infra@redhat.com 2017-04-10 10:04:53 UTC
Requesting needsinfo from upstream developer ttereshc because the 'FailedQA' flag is set.

Comment 115 pulp-infra@redhat.com 2017-04-10 10:05:27 UTC
Requesting needsinfo from upstream developer ttereshc because the 'FailedQA' flag is set.

Comment 116 Tanya Tereshchenko 2017-04-10 23:56:19 UTC
Checking the Pulp side.
I am not sure which pulp operations the --validate-contents option performs.

From the comment #56, I expect it to do the following:
> 1) Change the repository to on_demand
> 2) Sync the repository (Presumably with force_full = true)
> 3) Kick off a download_repo task with verify_all_units = true
> 4) Change the repository back to immediate

Can someone confirm that^?

I checked the 6.2.9 branch and all related pulp cherry-picks are in place, so if the steps described above are performed, I would expect corrupted packages to be fixed.

It is important that those steps are applied to the original repo which downloaded content from some feed/url, and not to copies of this repo (CVs?).


At least one thing can be checked, though (just an indication that we are on the right track): see that the catalog entry for the package of interest is correct in mongodb:

$ mongo pulp_database
> db.lazy_content_catalog.find({"path": "/var/lib/pulp/content/path_to_the_corrupted_rpm"}).pretty()

Among other values, the correct url from which the package can be downloaded should be printed.

Comment 117 Justin Sherrill 2017-04-11 01:10:42 UTC
The list of steps is almost correct; changed slightly:


1) Sync the repository (force_full = true and overriding download_policy to on_demand, overriding auto_publish to false)
2) Kick off a download_repo task with verify_all_units = true
3) Publish the repository explicitly with (force_full = true)
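Against the raw Pulp API, those steps map onto roughly these calls (a sketch using basic auth; <repo_id> is a placeholder, "yum_distributor" is the usual distributor id but may differ, and suppressing auto_publish is omitted here):

# 1) full sync, forcing on_demand for the duration of the sync
curl -k -u admin:<password> -H "Content-Type: application/json" -X POST \
     -d '{"override_config": {"force_full": true, "download_policy": "on_demand"}}' \
     https://localhost/pulp/api/v2/repositories/<repo_id>/actions/sync/

# 2) download all units, re-verifying those already on disk
curl -k -u admin:<password> -H "Content-Type: application/json" -X POST \
     -d '{"verify_all_units": true}' \
     https://localhost/pulp/api/v2/repositories/<repo_id>/actions/download/

# 3) explicit full publish
curl -k -u admin:<password> -H "Content-Type: application/json" -X POST \
     -d '{"id": "yum_distributor", "override_config": {"force_full": true}}' \
     https://localhost/pulp/api/v2/repositories/<repo_id>/actions/publish/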

I'm on PTO, but I checked the reproducer and everything seemed okay from the katello side (although I didn't do a complete investigation). From the behavior it almost seemed like https://pulp.plan.io/issues/2663 wasn't backported, but we'd need to investigate whether the catalog entries are being created, as you suggested.

Comment 118 Sachin Ghai 2017-04-11 12:30:32 UTC
As per comment 116, I tried to check the catalog entry as below but didn't get anything:

mongo pulp_database
> db.lazy_content_catalog.find({"path": "var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm"}).pretty()
>

Comment 119 Sachin Ghai 2017-04-11 14:05:15 UTC
Just to correct my previous comment: I missed the slash before 'var'; that was a typo while pasting here. Sorry for that.

Here is the correct path, but no change in the result:


> db.lazy_content_catalog.find({"path": "/var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm"}).pretty()

Comment 120 Tanya Tereshchenko 2017-04-11 14:17:02 UTC
That explains why this file was not fixed.

So far the first step from comment #117 is unsuccessful.

Before looking for a bug you can check:
1. Whether the 6.2.9 snap2 contains this fix https://github.com/pulp/pulp_rpm/commit/8058ec8ee34736e83243a8d5096b2caa7626dc61 . It is a one-line change, so it should be easy to check.

2. Whether the repo you are syncing is the correct one, as well as the file which you changed.
In mongo shell:

a) find _id for the rpm of interest
> db.units_rpm.find({_storage_path: "/var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm"}, {_storage_path: 1}).pretty()

# output would be something like this 
{
	"_id" : "300292bd-2b82-4060-87f5-09deefbdbd4b",
	"_storage_path" : "/var/lib/pulp/..."
}

b) look if this rpm (by its _id from previous step) is in the repo which you are trying to re-sync
> db.repo_content_units.find({unit_id: "300292bd-2b82-4060-87f5-09deefbdbd4b"}, {repo_id: 1}).pretty()


c) Take the repo_id which you were trying to sync and check that it has a feed to sync from. It should be in the feed section of config.
> db.repo_importers.find({repo_id: "put_repo_id_here"}).pretty()

Comment 121 Sachin Ghai 2017-04-11 14:38:55 UTC
(In reply to Tanya Tereshchenko from comment #120)
> That explains why this file was not fixed.
> 
> So far the first step from comment #117 is unsuccessful.
> 
> Before looking for a bug you can check:
> 1. If the 6.2.9 snap2 contains this fix
> https://github.com/pulp/pulp_rpm/commit/
> 8058ec8ee34736e83243a8d5096b2caa7626dc61 . It is a one-line change, so it
> should be easy to check.
> 

Yes, I can see the one-line change.

> 2. If the repo you are syncing is a correct one as well as the file which
> you change.
> In mongo shell:
> 
> a) find _id for the rpm of interest
> > db.units_rpm.find({_storage_path: "/var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm"}, {_storage_path: 1}).pretty()
> 
> # output would be something like this 
> {
> 	"_id" : "300292bd-2b82-4060-87f5-09deefbdbd4b",
> 	"_storage_path" : "/var/lib/pulp/..."
> }
> 


got the id:

> db.units_rpm.find({_storage_path: "/var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm"}, {_storage_path: 1}).pretty()
{
	"_id" : "89476887-8b3f-455a-aadf-a8e5007c25a1",
	"_storage_path" : "/var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm"
}


> b) look if this rpm (by its _id from previous step) is in the repo which you
> are trying to re-sync
> > db.repo_content_units.find({unit_id: "300292bd-2b82-4060-87f5-09deefbdbd4b"}, {repo_id: 1}).pretty()

Yes..

> db.repo_content_units.find({unit_id: "89476887-8b3f-455a-aadf-a8e5007c25a1"}, {repo_id: 1}).pretty()
{
	"_id" : ObjectId("58401d32b32a5d0a97754283"),
	"repo_id" : "Default_Organization-Red_Hat_Enterprise_Linux_Server-Red_Hat_Enterprise_Linux_6_Server_RPMs_x86_64_6_7"
}
{
	"_id" : ObjectId("58323790b32a5d0a8e7987ea"),
	"repo_id" : "Default_Organization-Red_Hat_Enterprise_Linux_Server-Red_Hat_Enterprise_Linux_6_Server_RPMs_x86_64_6Server"
}


> 
> 
> c) Take the repo_id which you were trying to sync and check that it has a
> feed to sync from. It should be in the feed section of config.
> > db.repo_importers.find({repo_id: "put_repo_id_here"}).pretty()

yeah..


> db.repo_importers.find({repo_id: "Default_Organization-Red_Hat_Enterprise_Linux_Server-Red_Hat_Enterprise_Linux_6_Server_RPMs_x86_64_6Server"})
{ "_id" : ObjectId("58322dfcb32a5d07e519fd8a"), "_ns" : "repo_importers", "config" : { "feed" : "https://cdn.redhat.com/content/dist/rhel/server/6/6Server/x86_64/os", "ssl_ca_cert" : "-----BEGIN CERTIFICATE-----

Comment 122 Brian Bouterse 2017-04-11 18:03:20 UTC
Requesting needsinfo from upstream developer jortel because the 'FailedQA' flag is set.

Comment 123 Brian Bouterse 2017-04-11 18:03:50 UTC
Requesting needsinfo from upstream developer fdobrovo because the 'FailedQA' flag is set.

Comment 124 Brian Bouterse 2017-04-11 18:04:23 UTC
Requesting needsinfo from upstream developer ttereshc because the 'FailedQA' flag is set.

Comment 125 Brian Bouterse 2017-04-11 18:04:56 UTC
Requesting needsinfo from upstream developer ttereshc because the 'FailedQA' flag is set.

Comment 128 pulp-infra@redhat.com 2017-04-12 14:33:24 UTC
The Pulp upstream bug status is at CLOSED - CURRENTRELEASE. Updating the external tracker on this bug.

Comment 129 Satellite Program 2017-04-12 18:17:44 UTC
Upstream bug assigned to bbuckingham

Comment 130 Satellite Program 2017-04-12 18:18:01 UTC
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/19250 has been resolved.

Comment 131 pulp-infra@redhat.com 2017-04-12 20:02:44 UTC
The Pulp upstream bug status is at NEW. Updating the external tracker on this bug.

Comment 132 pulp-infra@redhat.com 2017-04-12 20:03:00 UTC
The Pulp upstream bug priority is at High. Updating the external tracker on this bug.

Comment 133 pulp-infra@redhat.com 2017-04-12 20:32:44 UTC
The Pulp upstream bug status is at POST. Updating the external tracker on this bug.

Comment 134 Satellite Program 2017-04-12 22:22:04 UTC
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/19250 has been resolved.

Comment 136 Satellite Program 2017-04-17 14:17:58 UTC
Upstream bug assigned to jsherril

Comment 137 Satellite Program 2017-04-17 14:18:14 UTC
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/17941 has been resolved.

Comment 138 pulp-infra@redhat.com 2017-04-17 14:32:31 UTC
The Pulp upstream bug status is at MODIFIED. Updating the external tracker on this bug.

Comment 139 Sachin Ghai 2017-04-24 11:39:36 UTC
Verified with sat6.2.9 snap4; the corrupted package now gets repaired.

original package's md5sum
=================================
[root@qe-sat6-upgrade-rhel6 custom_repo_zoo]# md5sum zebra-0.1-2.noarch.rpm 
74bfce0606aadbb5d3e7762192742e94  zebra-0.1-2.noarch.rpm

Corrupted the package and md5sum changed
===========================================
[root@qe-sat6-upgrade-rhel6 custom_repo_zoo]# echo "FOO" >> zebra-0.1-2.noarch.rpm 
[root@qe-sat6-upgrade-rhel6 custom_repo_zoo]# md5sum zebra-0.1-2.noarch.rpm 
f66b4322e1835d5a117cf5eabbb30597  zebra-0.1-2.noarch.rpm

Changed the download_policy to on_demand and re-synced the repo
==============================================================
[root@qe-sat6-upgrade-rhel6 custom_repo_zoo]#  hammer -u admin -p changeme repository synchronize --validate-contents=true --id=5
[............................................................................................................................................] [100%]
No new packages.

Checked the md5sum of the package again; it now matches the original md5sum.
===========================================================================
_repo_zoo]# md5sum zebra-0.1-2.noarch.rpm 
74bfce0606aadbb5d3e7762192742e94  zebra-0.1-2.noarch.rpm

Comment 140 Sachin Ghai 2017-04-24 11:53:24 UTC
As per comment https://bugzilla.redhat.com/show_bug.cgi?id=1223023#c110, I tried scenario 2): deleted the rpm, re-synced, and checked that it re-appears.


[root@qe-sat6-upgrade-rhel6 3ce65e74a3028e76d022d0d974a172a00b4a7f3599b40a6b0ae553bb20e5efe9]# pwd
/var/lib/pulp/content/rpm/tiger/1.0/4/noarch/3ce65e74a3028e76d022d0d974a172a00b4a7f3599b40a6b0ae553bb20e5efe9
[root@qe-sat6-upgrade-rhel6 3ce65e74a3028e76d022d0d974a172a00b4a7f3599b40a6b0ae553bb20e5efe9]# ll
total 4
-rw-r--r--. 1 apache apache 2833 Apr 24 03:50 tiger-1.0-4.noarch.rpm

removed the rpm:
====================
[root@qe-sat6-upgrade-rhel6 3ce65e74a3028e76d022d0d974a172a00b4a7f3599b40a6b0ae553bb20e5efe9]# rm -rf tiger-1.0-4.noarch.rpm 

re-sync the repo
===================
[root@qe-sat6-upgrade-rhel6 3ce65e74a3028e76d022d0d974a172a00b4a7f3599b40a6b0ae553bb20e5efe9]# cd
[root@qe-sat6-upgrade-rhel6 ~]#  hammer -u admin -p changeme repository synchronize --validate-contents=true --id=5
[............................................................................................................................................] [100%]
No new packages.


package re-appears:
=====================
~]# cd /var/lib/pulp/content/rpm/tiger/1.0/4/noarch/3ce65e74a3028e76d022d0d974a172a00b4a7f3599b40a6b0ae553bb20e5efe9/
[root@qe-sat6-upgrade-rhel6 3ce65e74a3028e76d022d0d974a172a00b4a7f3599b40a6b0ae553bb20e5efe9]# ll
total 4
-rw-r--r--. 1 apache apache 2833 Apr 24 07:47 tiger-1.0-4.noarch.rpm
[root@qe-sat6-upgrade-rhel6 3ce65e74a3028e76d022d0d974a172a00b4a7f3599b40a6b0ae553bb20e5efe9]#

Comment 141 Bryan Kearney 2017-05-01 14:29:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1191

Comment 143 pulp-infra@redhat.com 2017-05-17 13:34:06 UTC
The Pulp upstream bug status is at ON_QA. Updating the external tracker on this bug.

Comment 144 pulp-infra@redhat.com 2017-05-24 18:34:24 UTC
The Pulp upstream bug status is at CLOSED - CURRENTRELEASE. Updating the external tracker on this bug.

Comment 148 François Cami 2017-08-29 11:35:48 UTC
*** Bug 1380765 has been marked as a duplicate of this bug. ***

