Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1775154

Summary: The directory modulemd and subdirectories are not re-created for capsules if pulp data was lost
Product: Red Hat Satellite
Reporter: Kenny Tordeurs <ktordeur>
Component: Pulp
Assignee: satellite6-bugs <satellite6-bugs>
Status: CLOSED NOTABUG
QA Contact: Bruno Rocha <rochacbruno>
Severity: medium
Priority: unspecified
Version: 6.6.0
CC: bmbouter, daviddavis, dkliban, ggainey, ipanova, mmccune, rchan, swadeley, ttereshc
Target Milestone: Unspecified
Keywords: Triaged
Target Release: Unused
Hardware: x86_64
OS: Linux
Doc Type: If docs needed, set a value
Story Points: ---
Last Closed: 2020-03-13 15:22:58 UTC
Type: Bug
Regression: ---

Description Kenny Tordeurs 2019-11-21 13:51:53 UTC
Description of problem:
After restoring a Capsule without its pulp data and launching a new Complete Sync from the Satellite, the directory /var/lib/pulp/content/units/modulemd and its subdirectories are not re-created on the Capsule.

Version-Release number of selected component (if applicable):
satellite-6.6.0-7.el7sat.noarch
satellite-capsule-6.6.0-7.el7sat.noarch

How reproducible:
100%

Steps to Reproduce:
1. Take a Capsule that has lost its pulp data (for example, `rm -rf /var/lib/pulp/*`) while MongoDB is still intact
2. Launch a Complete Sync from the Satellite

Actual results:
Sync fails with:
~~~
PLP0000: [Errno 2] No such file or directory: u'/var/lib/pulp/content/units/modulemd/74/e082fd1a9b0fc5324c19b5c6dae731095e7b6ce102ae3b8b480070f6a06493'
PLP0000: [Errno 2] No such file or directory: u'/var/lib/pulp/content/units/modulemd/52/70496a596fa25e3b94a79ce8b1862811d981d029140fd56d0f1054e7ff58e7'
~~~
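For reference, the failing paths follow Pulp 2's content-addressed on-disk layout: the unit's checksum is split so that the first two hex characters become a subdirectory under the unit-type directory. A minimal sketch of that layout (the helper name is ours for illustration, not a Pulp API):

```python
from pathlib import PurePosixPath

# Hypothetical helper (not a Pulp API): reconstruct the on-disk path of a
# modulemd unit from its checksum, matching the paths in the errors above.
def modulemd_unit_path(digest, root="/var/lib/pulp/content/units/modulemd"):
    # The first two hex characters form a subdirectory, the rest the filename.
    return str(PurePosixPath(root, digest[:2], digest[2:]))

print(modulemd_unit_path(
    "74e082fd1a9b0fc5324c19b5c6dae731095e7b6ce102ae3b8b480070f6a06493"))
# -> /var/lib/pulp/content/units/modulemd/74/e082fd1a9b0fc5324c19b5c6dae731095e7b6ce102ae3b8b480070f6a06493
```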

The directory structure that gets re-created does not contain the modulemd directory:

~~~
# ls -ltr /var/lib/pulp/content/units
total 4
drwxr-xr-x.  8 apache apache   66 Nov 20 15:48 yum_repo_metadata_file
drwxr-xr-x. 71 apache apache 4096 Nov 20 15:52 modulemd_defaults
drwxr-xr-x.  7 apache apache   56 Nov 20 18:06 distribution
~~~

Expected results:
Sync to work and all data to be re-synced

Additional info:
All the data is available on the Satellite

Comment 7 Kenny Tordeurs 2020-03-11 20:05:18 UTC
This issue can also occur if the Capsule runs out of disk space: after extending the disk and attempting a complete Capsule sync, you end up with the same errors.

Comment 13 Kenny Tordeurs 2020-03-13 08:22:58 UTC
@Grant,

Just to let you know that enabling `validate` on the Capsules and launching a complete Capsule sync resolved the problem and pulled in the content properly.

Comment 16 pulp-infra@redhat.com 2020-03-13 14:12:44 UTC
The Pulp upstream bug status is at NEW. Updating the external tracker on this bug.

Comment 17 pulp-infra@redhat.com 2020-03-13 14:12:45 UTC
The Pulp upstream bug priority is at Normal. Updating the external tracker on this bug.

Comment 18 Grant Gainey 2020-03-13 15:22:58 UTC
As noted upstream, Pulp 2 cannot prevent the user from inadvertently destroying /var/lib/pulp, and it already has code available that will try to fix things when that happens (see the attached KCS). That code is not enabled by default, because invoking it imposes a significant performance penalty on all syncs.

Closing as NOTABUG.

Comment 19 pulp-infra@redhat.com 2020-03-13 18:13:16 UTC
The Pulp upstream bug status is at CLOSED - NOTABUG. Updating the external tracker on this bug.