Bug 1550553 - [RFE] How to reclaim space after migrating repo from immediate to on-demand policy
Keywords:
Status: CLOSED DUPLICATE of bug 1459231
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Pulp
Version: 6.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: Unspecified
Assignee: satellite6-bugs
QA Contact: Katello QA List
URL:
Whiteboard:
Depends On:
Blocks: 1399395
 
Reported: 2018-03-01 12:51 UTC by Andrea Perotti
Modified: 2021-09-09 13:21 UTC
CC: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-05-21 23:23:54 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
  Pulp Redmine 2813 (Normal, NEW): As a user, I can delete already downloaded files for a particular repo(s) in case of on_demand policy (last updated 2018-03-01 15:32:41 UTC)
  Red Hat Bugzilla 1394013 (medium, CLOSED): [RFE] Provide task to clean pulp content cache when repository has switched to on_demand syncing (last updated 2023-03-24 13:44:16 UTC)
  Red Hat Bugzilla 1459231 (medium, CLOSED): [RFE] Support 'cleaning' a repo of downloaded on_demand content (last updated 2022-07-05 14:28:20 UTC)
  Red Hat Issue Tracker SAT-5014 (last updated 2021-09-09 13:21:00 UTC)

Internal Links: 1394013 1459231

Description Andrea Perotti 2018-03-01 12:51:38 UTC
Description of problem:
Customers using Satellite prior to the introduction of the on-demand download policy for repos are consuming a **lot** of space on their Satellites, mirroring not only metadata but also binary RPMs.

Migrating all the existing repos to the on-demand policy does not affect the previously downloaded content.

At the moment, the only way to get rid of the previously downloaded content is to remove the repo synced into Satellite 6, but this requires deleting all the content views (CVs) that make use of that repo.
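How much space those previously downloaded units consume can be checked directly on the Satellite; the path below is the one referenced later in this report:

    # Size of Pulp's on-disk RPM content unit store
    du -sh /var/lib/pulp/content/units/rpm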

Version-Release number of selected component (if applicable):
sat6.2.latest
sat6.3

How reproducible:
always

Steps to Reproduce:
1. Sync a repo with the immediate policy
2. Watch /var/lib/pulp/content/units/rpm start growing
3. Change the policy to on-demand (a hammer sketch of these steps follows below)
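A minimal sketch of these steps with the hammer CLI, assuming a hypothetical repository ID of 42 (--download-policy accepts immediate, on_demand, and background):

    # 1. Set the immediate policy and sync; binary RPMs are mirrored locally
    hammer repository update --id 42 --download-policy immediate
    hammer repository synchronize --id 42

    # 2. The on-disk unit store grows with each synced package
    du -sh /var/lib/pulp/content/units/rpm

    # 3. Switch the repo to on-demand downloading
    hammer repository update --id 42 --download-policy on_demand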

Actual results:
/var/lib/pulp/content/units/rpm stops growing, but content that is no longer needed is not deleted

Expected results:
Space is freed up in /var/lib/pulp/content/units/rpm

If all the needed bits are already in place, this could be just a documentation bug; if there are technical elements still to be implemented, let's implement them: that's the goal.
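For context on why a dedicated task is being requested: the existing orphan cleanup only removes content units that are no longer associated with any repository, so units still attached to a repo that merely switched to on_demand are not reclaimed by it. A sketch of the closest existing workflow (an assumption about its applicability, not a fix for this RFE):

    # Orphan cleanup removes units with no repository association;
    # units still associated with an on_demand repo are NOT orphans,
    # which is exactly the gap this RFE describes.
    foreman-rake katello:delete_orphaned_content

    # Re-check the unit store afterwards
    du -sh /var/lib/pulp/content/units/rpm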

Comment 2 pulp-infra@redhat.com 2018-03-01 15:32:42 UTC
The Pulp upstream bug status is at NEW. Updating the external tracker on this bug.

Comment 3 pulp-infra@redhat.com 2018-03-01 15:32:45 UTC
The Pulp upstream bug priority is at Normal. Updating the external tracker on this bug.

