Bug 2127537
| Summary: | [RFE] If an artifact goes missing from the filesystem for any reason, there is no easy way to re-download it in Capsule 6.12 | | |
|---|---|---|---|
| Product: | Red Hat Satellite | Reporter: | Sayan Das <saydas> |
| Component: | Capsule - Content | Assignee: | Samir Jha <sajha> |
| Status: | CLOSED MIGRATED | QA Contact: | Satellite QE Team <sat-qe-bz-list> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 6.10.0 | CC: | ahumbe, dalley, dsinglet, iballou, paji, rlavi, sajha |
| Target Milestone: | Unspecified | Keywords: | FutureFeature, MigratedToJIRA, Triaged |
| Target Release: | Unused | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2024-06-06 12:34:34 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Bulk setting Target Milestone = 6.15.0 where sat-6.15.0+ is set.

Moving this bug to POST for triage into Satellite since the upstream issue https://projects.theforeman.org/issues/36803 has been resolved.

This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there. Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it and begin with "SAT-" followed by an integer. You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like: "Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.
Description of problem:

If the artifacts backing one or more RPMs of a repository go missing from the /var/lib/pulp filesystem of the Satellite for any reason, the UI and hammer both offer a "Verify Checksum" option to restore them. But if the same thing happens to a Capsule's Pulp, there is no easy or supported way to download them back (without manually using the Pulp 3 API).

Version-Release number of selected component (if applicable):
Red Hat Satellite/Capsule 6.12

How reproducible:
Always

Steps to Reproduce and Actual Results:

1. Install a Satellite and Capsule 6.12.

2. Import a manifest into Satellite and set the download policy of the Capsule to immediate.

3. Enable the repository "Red Hat Satellite Tools 6.10 for RHEL 8 x86_64 RPMs" and sync it with the immediate download policy.

4. Register a RHEL 8 system with the Capsule, enable "Red Hat Satellite Tools 6.10 for RHEL 8 x86_64 RPMs" on that system, and install the katello-host-tools package.

5. Back on the Capsule, identify the artifact backing the katello-host-tools package:

```
# echo "select ca.pulp_id, ca.file, cca.relative_path from core_artifact ca LEFT JOIN core_contentartifact cca on cca.artifact_id = ca.pulp_id where cca.relative_path = 'katello-host-tools-3.5.4-1.el8sat.noarch.rpm';" | su - postgres -c "psql pulpcore"

               pulp_id                |                                    file                                    |                relative_path
--------------------------------------+----------------------------------------------------------------------------+----------------------------------------------
 9d34ecf3-c89a-49f2-95b4-e1b6b6cc80d4 | artifact/bb/592c8ada2ee35346cbccebb3df7916f512e39995712389aa05a3d6ed35c60a | katello-host-tools-3.5.4-1.el8sat.noarch.rpm
(1 row)
```

6. Check the artifact on the filesystem, then delete it from /var/lib/pulp on the Capsule:

```
# file /var/lib/pulp/media/artifact/bb/592c8ada2ee35346cbccebb3df7916f512e39995712389aa05a3d6ed35c60a
/var/lib/pulp/media/artifact/bb/592c8ada2ee35346cbccebb3df7916f512e39995712389aa05a3d6ed35c60a: RPM v3.0 bin noarch katello-host-tools-3.5.4-1.el8sat

# sha256sum /var/lib/pulp/media/artifact/bb/592c8ada2ee35346cbccebb3df7916f512e39995712389aa05a3d6ed35c60a
bb592c8ada2ee35346cbccebb3df7916f512e39995712389aa05a3d6ed35c60a  /var/lib/pulp/media/artifact/bb/592c8ada2ee35346cbccebb3df7916f512e39995712389aa05a3d6ed35c60a

# rm -f /var/lib/pulp/media/artifact/bb/592c8ada2ee35346cbccebb3df7916f512e39995712389aa05a3d6ed35c60a
```

7. Re-sync the Capsule in every possible manner (Optimized Sync, Complete Sync): every sync completes successfully, but nothing downloads the package back onto the Capsule:

```
# file /var/lib/pulp/media/artifact/bb/592c8ada2ee35346cbccebb3df7916f512e39995712389aa05a3d6ed35c60a
/var/lib/pulp/media/artifact/bb/592c8ada2ee35346cbccebb3df7916f512e39995712389aa05a3d6ed35c60a: cannot open (No such file or directory)
```

8.
Try a yum command on the client system to reinstall the package, hoping Pulp will do some magic, but that is not the case:

```
# yum reinstall katello-host-tools -y
[MIRROR] katello-host-tools-3.5.4-1.el8sat.noarch.rpm: Status code: 502 for https://capsule612.test.lan/pulp/content/RedHat/Library/RHEL8/content/dist/layered/rhel8/x86_64/sat-tools/6.10/os/Packages/k/katello-host-tools-3.5.4-1.el8sat.noarch.rpm (IP: 192.168.126.2)
```

From the Pulp logs of the Capsule:

```
Jan 21 18:05:24 capsule612 pulpcore-content: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/pulp/media/artifact/bb/592c8ada2ee35346cbccebb3df7916f512e39995712389aa05a3d6ed35c60a'
```

Manual Workaround (to be executed on the Satellite, targeting the Capsule's Pulp API):

```
# curl -s --cert /etc/pki/katello/certs/pulp-client.crt --key /etc/pki/katello/private/pulp-client.key -H "Content-Type: application/json" -X POST https://capsule612.example.com/pulp/api/v3/repair/ | json_reformat
{
    "task": "/pulp/api/v3/tasks/36d19d67-6472-410b-ac0c-f87880b23006/"
}
```

The repair runs on all artifacts, so you need to wait and monitor the status of the task. Upon completion it looks like this:

```
# curl -s --cert /etc/pki/katello/certs/pulp-client.crt --key /etc/pki/katello/private/pulp-client.key https://capsule612.example.com/pulp/api/v3/tasks/36d19d67-6472-410b-ac0c-f87880b23006/ | jq .progress_reports
[
  {
    "message": "Identify missing units",
    "code": "repair.missing",
    "state": "completed",
    "total": null,
    "done": 1,          --------------> Downloaded the missing unit
    "suffix": null
  },
  {
    "message": "Identify corrupted units",
    "code": "repair.corrupted",
    "state": "completed",
    "total": null,
    "done": 0,
    "suffix": null
  },
  {
    "message": "Repair corrupted units",
    "code": "repair.repaired",
    "state": "completed",
    "total": null,
    "done": 1,
    "suffix": null
  }
]
```

Expectations:

* Sat UI --> Infrastructure --> Capsules --> click open the Capsule --> click the "Synchronize" dropdown: a "Verify Content Checksum" option should be present here as well, allowing the repair API to be run against the Capsule's Pulp content.
* The same option should also be available via "hammer capsule content synchronize".

Feel free to consider this a bug or an RFE, as applicable.
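For context on steps 5 and 6: Pulp 3 derives an artifact's on-disk location from its SHA-256 checksum, sharding files under media/artifact/<first two hex digits>/<remaining 62 digits>, which is exactly the path shape seen in the core_artifact.file column and the sha256sum output above. A minimal sketch of that mapping (the helper name and media-root default are my own; only the layout itself is taken from the reproducer):

```python
import os

# Default Pulp MEDIA_ROOT on a Satellite/Capsule, per the paths in this bug.
PULP_MEDIA_ROOT = "/var/lib/pulp/media"

def artifact_path(sha256_hex: str, media_root: str = PULP_MEDIA_ROOT) -> str:
    """Map a SHA-256 hex digest to Pulp 3's artifact storage path.

    Pulp shards artifacts into directories keyed by the first two hex
    digits of the checksum; the remaining 62 digits form the file name.
    """
    if len(sha256_hex) != 64:
        raise ValueError("expected a 64-character hex SHA-256 digest")
    return os.path.join(media_root, "artifact", sha256_hex[:2], sha256_hex[2:])

# The checksum from the reproducer above:
digest = "bb592c8ada2ee35346cbccebb3df7916f512e39995712389aa05a3d6ed35c60a"
print(artifact_path(digest))
# /var/lib/pulp/media/artifact/bb/592c8ada2ee35346cbccebb3df7916f512e39995712389aa05a3d6ed35c60a
```

This also means a missing artifact can be located (or confirmed missing) directly from its checksum, without querying the pulpcore database first.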
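Because the repair task runs over every artifact, reading its progress_reports by hand gets tedious. A small helper to condense the task JSON shown in the workaround into a one-line summary (the function and its output format are my own sketch; the report fields match the Pulp task output above):

```python
import json

def summarize_repair(progress_reports_json: str) -> str:
    """Condense a Pulp repair task's progress_reports into one line."""
    reports = json.loads(progress_reports_json)
    return "; ".join(
        f"{r['message']}: {r['done']} ({r['state']})" for r in reports
    )

# progress_reports as returned by the repair task in the workaround above:
reports = """[
  {"message": "Identify missing units", "code": "repair.missing",
   "state": "completed", "total": null, "done": 1, "suffix": null},
  {"message": "Identify corrupted units", "code": "repair.corrupted",
   "state": "completed", "total": null, "done": 0, "suffix": null},
  {"message": "Repair corrupted units", "code": "repair.repaired",
   "state": "completed", "total": null, "done": 1, "suffix": null}
]"""
print(summarize_repair(reports))
# Identify missing units: 1 (completed); Identify corrupted units: 0 (completed); Repair corrupted units: 1 (completed)
```

A "done" count above zero under "Identify missing units" is the signal that the repair actually re-downloaded content, as in the task output above.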