Description of problem:
We've synced various Red Hat repositories. Some repo syncs ran into errors and had to be resumed. After that, the nightly sync status was "Sync complete". While debugging a provisioning problem we found that there were 183 packages with a size of 0 bytes. Even if a repo sync fails, the md5sums of packages should be compared to ensure they are re-synced during the next repo sync.

Version-Release number of selected component (if applicable):
Currently 6.1, but we've seen the same outcome with 6.0 as well, though we're not sure it had the same root cause.

How reproducible:
We don't know exactly what the root cause of this issue was; possibly the repo sync stopped with failures.

Steps to Reproduce:
1.
2.
3.

Actual results:
Broken packages (size 0 bytes) are not automatically re-synced.

Expected results:
Check (e.g. via md5sum) whether the package in pulp really is the package that won't be synced again, to ensure it is re-synced if a sync failed and the package is missing or unusable.

Additional info:
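For anyone hitting the same symptom, the damaged units can be located with a quick scan of pulp's content store; a minimal sketch (assuming the default /var/lib/pulp/content storage root; the function name is ours, not part of any pulp tooling):

```python
import os

def find_zero_byte_rpms(root):
    """Walk the pulp content store and report RPM files with a size of 0 bytes."""
    broken = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(".rpm"):
                path = os.path.join(dirpath, name)
                if os.path.getsize(path) == 0:
                    broken.append(path)
    return broken

if __name__ == "__main__":
    for path in find_zero_byte_rpms("/var/lib/pulp/content"):
        print(path)
```

This only catches fully empty files; truncated-but-nonempty packages (as reported later in this bug) would still need a checksum comparison against the repo metadata.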
Since this issue was entered in Red Hat Bugzilla, the release flag has been set to ? to ensure that it is properly evaluated for this release.
Please verify that the importer setting "validate" was set to True when this occurred: https://pulp-rpm.readthedocs.org/en/2.6-release/tech-reference/yum-plugins.html#configuration-parameters

The installer should set that in /etc/pulp/server/plugins.conf.d/yum_importer.json. If it is there, please provide the output of "ls -lZ" so we can make sure that file is readable by the right processes.

Just as a hunch, is it possible that the system ran out of disk space during the sync?

FWIW, the expected behavior is that the sync will verify each RPM's checksum, and if verification fails, the sync will throw it away instead of adding it to the repo (and of course report that error). We should not end up with corrupt RPMs in the repo.
Hi Michael, unfortunately I had to proceed with my setup and therefore reinstalled and reconfigured my Sat6. That means I'm neither able to check the settings you mentioned nor to reproduce the issue. We definitely never ran out of disk space or inodes. And since the sync had been running multiple times before we discovered this issue, the behavior was not as you described. Maybe it has been fixed by one of the updates shipped in the meantime. It never happened again. Dirk
Let me know if you want any further investigation from the pulp team. If so, answers to the above questions about the "validate" setting would be helpful.
Hi, I've also had a similar issue. So far we have found one rpm of which only two thirds was downloaded, yet it was added to our repos. Ours is a disconnected install, if that makes any difference. My /etc/pulp/server/plugins.conf.d/yum_importer.json file only has proxy settings as per https://access.redhat.com/documentation/en-US/Red_Hat_Satellite/6.0/html/Release_Notes/Disconnected.html. No validate option. Sean
Michael, we do not set 'validate' in that yum_importer.json. Instead we set it at sync time when initiating the sync. This change was made as part of https://bugzilla.redhat.com/show_bug.cgi?id=1139896, with the change being: https://github.com/Katello/katello/pull/4747/files So all Satellite 6.1 installations should have this set by default. -Justin
And here is an example of the request: RestClient.post "https://katello-devbox.example.com/pulp/api/v2/repositories/Default_Organization-TestProduct-TestRepo/actions/sync/", "{\"override_config\":{\"num_threads\":4,\"validate\":true}}" "Accept"=>"*/*; q=0.5, application/xml", "Accept-Encoding"=>"gzip, deflate", "Authorization"=>"OAuth oauth_body_hash=\"2jmj7l5rSw0yVb%2FvlWAYkK%2FYBwk%3D\", oauth_consumer_key=\"katello\", oauth_nonce=\"D7IO753MImqP2YeCKp1AcdYngVbG04OBOtUuSiAlIQ\", oauth_signature=\"ahRXoqPLKfyj0VAZl9RQ%2BwWaTzs%3D\", oauth_signature_method=\"HMAC-SHA1\", oauth_timestamp=\"1449204704\", oauth_version=\"1.0\"", "Content-Length"=>"53", "accept"=>"application/json", "content_type"=>"application/json", "pulp-user"=>"admin"
I created an upstream bug, but any additional info on how to reproduce this would be helpful. Looking at the pulp code that shipped in 6.1, it's hard to see how this is happening. I see that the bug was originally seen before the 6.1 release, presumably with an early build? Would that build have had the katello change that turned on validation? If not, that could definitely explain everything.
The Pulp upstream bug status is at NEW. Updating the external tracker on this bug.
The Pulp upstream bug priority is at High. Updating the external tracker on this bug.
Is this really fixed in 6.2 (or anywhere)? My understanding is that:
- when enabling validate=true in a pulp repo sync, any unit already present on disk should be verified against its checksum
- validate=true is set by default in 6.1 and 6.2beta (I have verified sync_options contains :validate=>true)
- BUT it apparently does not work:
  - sync Sat6.1 tools for RHEL7
  - ensure /var/lib/pulp/content/units/rpm/37/3f7ae5ab5cbe8b82529071c0b0892f1a75e8bc0f691f3cf123e739531a4a86/qpid-proton-c-0.9-4.el7.x86_64.rpm is only referred to by that repo (a few mongo or pulp-admin queries)
  - damage the file
  - sync Sat6.1 tools for RHEL7 again
  - check the file (rpm -K $file)

By "damage", I tried all of these ways:
1) rm -rf $file; touch $file; chown apache:apache $file
2) rm -rf $file
3) echo > $file
4) echo >> $file

After damaging the file in any of these ways and re-syncing the repo, the file was untouched, still in the same damaged state. Michael, could you confirm this bug (I tested it mainly on 6.2Beta / pulp-server-2.8.1.3-1.el7sat.noarch)?

Further, the same problem can also appear on Capsules, where a Sat->Caps sync ends with a repo damaged on the Capsule. Again, it is the same pain to fix it there (an even bigger pain due to another pulp issue, though). Please fix this bug such that capsule sync also does the verification.
"validate=true" as an importer setting does not cause pulp to re-verify the checksum of every file that is already in the repo. It does verify the size and checksum of each file that gets downloaded during a sync. Put another way, every rpm has its checksum verified on the way in, before it gets saved to the database. But once it's been added to pulp, nothing goes back to periodically look for corruption.
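In other words, the check happens only on the way in. That download-time verification amounts to comparing the received bytes against the size and checksum advertised in the repo metadata; a simplified sketch of the logic (the function name and signature are illustrative, not pulp's actual API):

```python
import hashlib
import os

def verify_download(path, expected_size, expected_checksum, algorithm="sha256"):
    """Return True only if the file matches both the advertised size and checksum.

    Mirrors the validate=true behavior described above: a unit failing this
    check should be discarded rather than saved to the database.
    """
    if os.path.getsize(path) != expected_size:
        return False
    digest = hashlib.new(algorithm)
    with open(path, "rb") as fh:
        # Hash in 1 MiB chunks so large RPMs don't need to fit in memory.
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_checksum
```

The key consequence, as stated above, is that nothing re-runs this check against files already sitting on disk.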
See also: https://bugzilla.redhat.com/show_bug.cgi?id=1330042 (allow pulp to force re-sync of a repository). Or in general: assume someone accidentally deletes a file under /var/lib/pulp/content. How can one recover from that?
*** Bug 1329334 has been marked as a duplicate of this bug. ***
The Pulp upstream bug status is at ASSIGNED. Updating the external tracker on this bug.
I found one package that was truncated on the Sat 6.1 and could not be installed on any client. Both yum and yumdownloader failed with: "[Errno -1] Package does not match intended download. Suggestion: run yum --enablerepo=rhel-7-server-rpms clean metadata" (message originally in Spanish). I checked the Satellite and found that the checksum of the package was different from the one I manually downloaded with yumdownloader. It was truncated (5.2 MB for the good rpm vs 5.0 MB for the bad one). I manually replaced the faulty file with the correct one and then the clients were able to install and apply updates correctly (the rpm was systemd, so it could not be skipped). A sync plan exists (daily) for that product and it executed correctly for 2 days after the bad rpm appeared, but it never detected nor corrected the truncated rpm. I wonder if there are more rpms in that state.
The Pulp upstream bug status is at POST. Updating the external tracker on this bug.
*** Bug 1330042 has been marked as a duplicate of this bug. ***
This title change was a little surprising: "Summary: Packages with 0 bytes won't be synced again → Allow Pulp to force resync a repository" It changed from reporting a problem to requesting a specific solution. The associated upstream pulp issue, #1823, addresses the original problem, but not with the specific solution in the new title. Instead it adds extra protection to make sure we don't end up with 0 byte files to begin with. However, you're in luck! We also implemented a force sync option, so I am associating that redmine issue also. I think the word of caution we can take away is: if you change the scope of a BZ issue, make sure any related upstream issues are still relevant and satisfactory, especially if they're already in POST or beyond.
The Pulp upstream bug status is at CLOSED - CURRENTRELEASE. Updating the external tracker on this bug.
The Pulp upstream bug priority is at Normal. Updating the external tracker on this bug.
(In reply to Michael Hrivnak from comment #36)
> This title change was a little surprising: "Summary: Packages with 0 bytes
> won't be synced again → Allow Pulp to force resync a repository"
>
> It changed from reporting a problem to requesting a specific solution.
>
> The associated upstream pulp issue, #1823, addresses the original problem,
> but not with the specific solution in the new title. Instead it adds extra
> protection to make sure we don't end up with 0 byte files to begin with.
>
> However, you're in luck! We also implemented a force sync option, so I am
> associating that redmine issue also.

Thank you Michael. Will this force sync option include the functionality described in https://access.redhat.com/solutions/2038473 "[Satellite6] How to forcefully regenerate metadata for a particular repository?"

> I think the word of caution we can take away is: if you change the scope of
> a BZ issue, make sure any related upstream issues are still relevant and
> satisfactory, especially if they're already in POST or beyond.

+1, there seem to be multiple issues in this single BZ
(In reply to Xixi from comment #40) > Thank you Michael, will this force sync option include the functionality > described in https://access.redhat.com/solutions/2038473 "[Satellite6] How > to forcefully regenerate metadata for a particular repository?" Short answer: no. Pulp separates the idea of a sync (content coming in) vs. a publish (content going out). But Satellite puts them together in one workflow, so that isn't obvious to users. The sync can sometimes take a shortcut to speed things up when it has an opportunity to do so; this issue is about forcing it to not take any shortcuts during sync. The issue you linked to is about forcing a publish to happen.
The Pulp upstream bug status is at MODIFIED. Updating the external tracker on this bug.
Created redmine issue http://projects.theforeman.org/issues/17418 from this bug
Oops, moving back to NEW since there is a foreman issue.
Upstream bug component is Repositories
The Pulp upstream bug status is at ON_QA. Updating the external tracker on this bug.
Michael,

To clarify: in order to repair a repository with an immediate download policy, we would need to:

1) Change the repository to on_demand
2) Sync the repository (presumably with force_full = true)
3) Kick off a download_repo task with verify_all_units = true
4) Change the repository back to immediate

Questions:

1) If the repository is already on_demand (skipping steps 1 and 4), would step 3 cause the entire repo to be downloaded?
2) If the repository is set to the 'background' download policy, I assume steps 1 and 4 wouldn't be needed?
3) This BZ is marked against 6.2.z, but step 2 requires a pulp newer than 2.8.7 (for force_full); is that easily backportable?
*** Bug 1421232 has been marked as a duplicate of this bug. ***
One more question: 4) is it possible to kick off a sync with a download_policy override so that we can skip steps 1 and 4?
(In reply to Justin Sherrill from comment #56)
> Michael,
>
> To clarify, In order to repair a repository with an immediate download
> policy, we would need to:
>
> 1) Change the repository to on_demand
> 2) Sync the repository (Presumably with force_full = true)
> 3) Kick off a download_repo task with verify_all_units = true
> 4) Change the repository back to immediate
>
> Questions:
>
> 1) If the repository is already on_demand, (skipping steps 1 and 4), would
> step 3 cause the entire repo to be downloaded?

No. It verifies each file by size and checksum, and only re-downloads ones that are missing or corrupt.

> 2) If the repository is set to 'background' download policy, i assume steps
> 1 and 4 wouldn't be needed?

Correct.

> 3) This BZ is marked against 6.2.z, but step 2 requires a pulp newer than
> 2.8.7 (for force_full), is that easily backportable?

I'm not actually sure if it needs force_full. Jeff, what do you think?

> 4) is it possible to kick off a sync with a download_policy override so that we can skip steps 1 and 4

Yes, you should be able to pass it as an argument when you queue the task. All importer settings can be overridden that way.
Yes, I think the force_full flag is needed. Otherwise, the yum importer may determine that the metadata has not changed causing the sync to stop before generating the catalog entries.
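A simplified model of the shortcut Jeff describes, and how force_full disables it (illustrative only, not pulp's actual code; the function and argument names are ours):

```python
def should_run_full_sync(local_revision, remote_revision, force_full=False):
    """Decide whether the importer does a full sync pass.

    The yum importer may skip most of a sync when the remote metadata
    (repomd.xml revision) is unchanged. force_full disables that
    optimization so catalog entries are always regenerated, which is
    what the repair workflow in this bug relies on.
    """
    if force_full:
        return True
    return local_revision != remote_revision
```

Without force_full, a repo whose upstream metadata has not changed would short-circuit before ever re-examining the units on disk.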
Created attachment 1256187 [details] patch adding force_full to yum importer Looks like the diff for adding force_full is super simple. I tweaked it a touch, and I'm attaching a patch that should apply to 2.8.7. Origin: https://github.com/pulp/pulp_rpm/pull/935/files
Thanks Michael!
Upstream bug assigned to mmccune
Created attachment 1258472 [details] Screenshot of new UI option for verification options during a sync operation Attached is a screenshot of the new options we will present in an 'advanced' mode for initiating a synchronization. These new options will also be available with hammer.
I checked the new UI for sync operations. I think the texts could use a bit of improvement:
- the second option with "skip ..." does not make clear to me as a user that it in fact does more work. "Skip" normally implies being faster by not doing things.
- inconsistent use of "standard sync" and "normal sync"

Suggestion: name the options in "increasing" wording:
- optimized sync
- normal sync
- full sync, including content validation
*** Bug 1421250 has been marked as a duplicate of this bug. ***
Upstream bug assigned to jsherril
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/17941 has been resolved.
*** Bug 1427165 has been marked as a duplicate of this bug. ***
Requesting needsinfo from upstream developer jortel because the 'FailedQA' flag is set.
Requesting needsinfo from upstream developer fdobrovo because the 'FailedQA' flag is set.
Requesting needsinfo from upstream developer ttereshc because the 'FailedQA' flag is set.
Pulp fix merged.
Moving to POST
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/18558 has been resolved.
The Pulp upstream bug status is at CLOSED - COMPLETE. Updating the external tracker on this bug.
Thanks Justin for the detailed steps. First off, 6.2.9 snap2 doesn't fix the problem of repairing the corrupt package. To double-check, I followed the steps from 1) in comment 110.

Here is the md5sum of the package:
===========================
[root@qe-sat6-upgrade-rhel6 ~]# md5sum /var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm
0df34b6066be26c358b2df1a03268aac /var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm

Corrupted the package; you can see the checksum changed:
=====================
[root@qe-sat6-upgrade-rhel6 ~]# echo "FOO" >> /var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm
[root@qe-sat6-upgrade-rhel6 ~]# md5sum /var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm
00f60a580fa33aa54325d64532b3d3bc /var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm

Re-synced the rhel6 6Server x86_64 repo:
=============
[root@qe-sat6-upgrade-rhel6 ~]# hammer -u admin -p changeme repository synchronize --validate-contents=true --id=2
[............................................................................................................................................] [100%]
No new packages.
After syncing, the md5sum is the same as that of the corrupted package:
=======================================================================
~]# md5sum /var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm
00f60a580fa33aa54325d64532b3d3bc /var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm

Since the corrupted package came from 6.6, I also re-synced the 6.6 repo:
===================================================================
~]# yum list python-lxml
Installed Packages
python-lxml.x86_64 2.2.3-1.1.el6 @anaconda-RedHatEnterpriseLinux-201409260744.x86_64/6.6
[root@qe-sat6-upgrade-rhel6 ~]# hammer -u admin -p changeme repository synchronize --validate-contents=true --id=54
[............................................................................................................................................] [100%]

But the md5sum of the package still matches the corrupted rpm:
# md5sum python-lxml-2.2.3-1.1.el6.x86_64.rpm
00f60a580fa33aa54325d64532b3d3bc python-lxml-2.2.3-1.1.el6.x86_64.rpm

Not sure if I missed anything, but I don't see the expected result as per comment 110.
Checking the Pulp side. I am not sure what pulp operations the --validate-contents option performs. From comment #56, I expect it to do the following:

> 1) Change the repository to on_demand
> 2) Sync the repository (Presumably with force_full = true)
> 3) Kick off a download_repo task with verify_all_units = true
> 4) Change the repository back to immediate

Can someone confirm that?

I checked that the 6.2.9 branch has all related pulp cherry-picks in place, so if the steps described above are performed I would expect corrupted packages to be fixed. It is important that those steps are applied to the original repo which downloaded content from some feed/url, and not to copies of this repo (CVs?).

At least one thing can be checked, though (as an indication that we are on the right track): that the catalog entry for the package of interest is correct in mongodb:

$ mongo pulp_database
> db.lazy_content_catalog.find({"path": "/var/lib/pulp/content/path_to_the_corrupted_rpm"}).pretty()

Among other values, the correct url from which the package can be downloaded should be printed.
The list of steps is almost correct; it changed slightly:

1) Sync the repository (force_full = true, overriding download_policy to on_demand and auto_publish to false)
2) Kick off a download_repo task with verify_all_units = true
3) Publish the repository explicitly (force_full = true)

I'm on PTO, but I checked the reproducer and everything seemed okay from the katello side (although I didn't do a complete investigation). From the behavior it almost seemed like https://pulp.plan.io/issues/2663 wasn't backported, but we'd need to investigate whether the catalog entries are being created, as you suggested.
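The three steps above can be sketched as pulp REST task requests. The sync endpoint and override_config shape match the request quoted earlier in this bug; the download and publish paths and the option names are assumptions pieced together from the comments here, not authoritative API documentation:

```python
import json

def repair_payloads(repo_id):
    """Build the three task requests for repairing a repo, per the steps above.

    Returns (url_path, json_body) pairs; actually queuing them would be done
    with authenticated POSTs against the pulp server.
    """
    base = "/pulp/api/v2/repositories/%s/actions" % repo_id
    return [
        # 1) full sync, forcing on_demand so nothing is eagerly re-downloaded
        #    (auto_publish would also be overridden to false per the steps above)
        (base + "/sync/", {"override_config": {
            "force_full": True,
            "download_policy": "on_demand",
        }}),
        # 2) download task that re-verifies every unit already on disk
        (base + "/download/", {"verify_all_units": True}),
        # 3) explicit full publish
        (base + "/publish/", {"override_config": {"force_full": True}}),
    ]

for url, body in repair_payloads("Default_Organization-TestProduct-TestRepo"):
    print(url, json.dumps(body, sort_keys=True))
```

Per comment #56's answer, step 2 verifies each file by size and checksum and only re-downloads units that are missing or corrupt.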
As per comment 116, I tried to check the catalog entry as below but didn't get anything:

mongo pulp_database
> db.lazy_content_catalog.find({"path": "var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm"}).pretty()
>
Just to correct the previous comment: I missed the slash before 'var'; that was a typo while pasting here, sorry for that. Here is the correct path, but there is no change in the result:

> db.lazy_content_catalog.find({"path": "/var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm"}).pretty()
That explains why this file was not fixed. So far the first step from comment #117 is unsuccessful.

Before looking for a bug, you can check:

1. Whether 6.2.9 snap2 contains this fix: https://github.com/pulp/pulp_rpm/commit/8058ec8ee34736e83243a8d5096b2caa7626dc61 . It is a one-line change, so it should be easy to check.

2. Whether the repo you are syncing is the correct one, as well as the file which you change. In the mongo shell:

a) find the _id for the rpm of interest
> db.units_rpm.find({_storage_path: "/var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm"}, {_storage_path: 1}).pretty()

# output would be something like this
{
    "_id" : "300292bd-2b82-4060-87f5-09deefbdbd4b",
    "_storage_path" : "/var/lib/pulp/..."
}

b) look whether this rpm (by its _id from the previous step) is in the repo which you are trying to re-sync
> db.repo_content_units.find({unit_id: "300292bd-2b82-4060-87f5-09deefbdbd4b"}, {repo_id: 1}).pretty()

c) take the repo_id which you were trying to sync and check that it has a feed to sync from. It should be in the feed section of the config.
> db.repo_importers.find({repo_id: "put_repo_id_here"}).pretty()
(In reply to Tanya Tereshchenko from comment #120)
> That explains why this file was not fixed.
>
> So far the first step from comment #117 is unsuccessful.
>
> Before looking for a bug you can check:
> 1. If the 6.2.9 snap2 contains this fix
> https://github.com/pulp/pulp_rpm/commit/8058ec8ee34736e83243a8d5096b2caa7626dc61 .
> It is a one-line change, so it should be easy to check.

Yes, I can see the one-line change.

> 2. If the repo you are syncing is a correct one as well as the file which
> you change.
> In mongo shell:
>
> a) find _id for the rpm of interest
> > db.units_rpm.find({_storage_path: "..."}, {_storage_path: 1}).pretty()

Got the id:

> db.units_rpm.find({_storage_path: "/var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm"}, {_storage_path: 1}).pretty()
{
    "_id" : "89476887-8b3f-455a-aadf-a8e5007c25a1",
    "_storage_path" : "/var/lib/pulp/content/rpm/python-lxml/2.2.3/1.1.el6/x86_64/19f25fe1d72f88d14f75952414b991c67ca5ee9e/python-lxml-2.2.3-1.1.el6.x86_64.rpm"
}

> b) look if this rpm (by its _id from previous step) is in the repo which you
> are trying to re-sync

Yes:

> db.repo_content_units.find({unit_id: "89476887-8b3f-455a-aadf-a8e5007c25a1"}, {repo_id: 1}).pretty()
{
    "_id" : ObjectId("58401d32b32a5d0a97754283"),
    "repo_id" : "Default_Organization-Red_Hat_Enterprise_Linux_Server-Red_Hat_Enterprise_Linux_6_Server_RPMs_x86_64_6_7"
}
{
    "_id" : ObjectId("58323790b32a5d0a8e7987ea"),
    "repo_id" : "Default_Organization-Red_Hat_Enterprise_Linux_Server-Red_Hat_Enterprise_Linux_6_Server_RPMs_x86_64_6Server"
}

> c) Take the repo_id which you were trying to sync and check that it has a
> feed to sync from. It should be in the feed section of config.

Yes:

> db.repo_importers.find({repo_id: "Default_Organization-Red_Hat_Enterprise_Linux_Server-Red_Hat_Enterprise_Linux_6_Server_RPMs_x86_64_6Server"})
{
    "_id" : ObjectId("58322dfcb32a5d07e519fd8a"),
    "_ns" : "repo_importers",
    "config" : {
        "feed" : "https://cdn.redhat.com/content/dist/rhel/server/6/6Server/x86_64/os",
        "ssl_ca_cert" : "-----BEGIN CERTIFICATE-----
Upstream bug assigned to bbuckingham
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/19250 has been resolved.
Verified with Sat 6.2.9 snap4; it looks like the corrupted package now gets repaired.

Original package's md5sum:
=================================
[root@qe-sat6-upgrade-rhel6 custom_repo_zoo]# md5sum zebra-0.1-2.noarch.rpm
74bfce0606aadbb5d3e7762192742e94 zebra-0.1-2.noarch.rpm

Corrupted the package, and the md5sum changed:
===========================================
[root@qe-sat6-upgrade-rhel6 custom_repo_zoo]# echo "FOO" >> zebra-0.1-2.noarch.rpm
[root@qe-sat6-upgrade-rhel6 custom_repo_zoo]# md5sum zebra-0.1-2.noarch.rpm
f66b4322e1835d5a117cf5eabbb30597 zebra-0.1-2.noarch.rpm

Changed the download_policy to on_demand and re-synced the repo:
==============================================================
[root@qe-sat6-upgrade-rhel6 custom_repo_zoo]# hammer -u admin -p changeme repository synchronize --validate-contents=true --id=5
[............................................................................................................................................] [100%]
No new packages.

Checked the md5sum of the package again, and now it matches the original md5sum:
===========================================================================
_repo_zoo]# md5sum zebra-0.1-2.noarch.rpm
74bfce0606aadbb5d3e7762192742e94 zebra-0.1-2.noarch.rpm
As per comment https://bugzilla.redhat.com/show_bug.cgi?id=1223023#c110, tried scenario 2), where we deleted the rpm and checked after re-sync, and it re-appears.

[root@qe-sat6-upgrade-rhel6 3ce65e74a3028e76d022d0d974a172a00b4a7f3599b40a6b0ae553bb20e5efe9]# pwd
/var/lib/pulp/content/rpm/tiger/1.0/4/noarch/3ce65e74a3028e76d022d0d974a172a00b4a7f3599b40a6b0ae553bb20e5efe9
[root@qe-sat6-upgrade-rhel6 3ce65e74a3028e76d022d0d974a172a00b4a7f3599b40a6b0ae553bb20e5efe9]# ll
total 4
-rw-r--r--. 1 apache apache 2833 Apr 24 03:50 tiger-1.0-4.noarch.rpm

Removed the rpm:
====================
[root@qe-sat6-upgrade-rhel6 3ce65e74a3028e76d022d0d974a172a00b4a7f3599b40a6b0ae553bb20e5efe9]# rm -rf tiger-1.0-4.noarch.rpm

Re-synced the repo:
===================
[root@qe-sat6-upgrade-rhel6 3ce65e74a3028e76d022d0d974a172a00b4a7f3599b40a6b0ae553bb20e5efe9]# cd
[root@qe-sat6-upgrade-rhel6 ~]# hammer -u admin -p changeme repository synchronize --validate-contents=true --id=5
[............................................................................................................................................] [100%]
No new packages.

The package re-appears:
=====================
~]# cd /var/lib/pulp/content/rpm/tiger/1.0/4/noarch/3ce65e74a3028e76d022d0d974a172a00b4a7f3599b40a6b0ae553bb20e5efe9/
[root@qe-sat6-upgrade-rhel6 3ce65e74a3028e76d022d0d974a172a00b4a7f3599b40a6b0ae553bb20e5efe9]# ll
total 4
-rw-r--r--. 1 apache apache 2833 Apr 24 07:47 tiger-1.0-4.noarch.rpm
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:1191
*** Bug 1380765 has been marked as a duplicate of this bug. ***