Bug 1459231 - [RFE] Support 'cleaning' a repo of downloaded on_demand content
Summary: [RFE] Support 'cleaning' a repo of downloaded on_demand content
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Repositories
Version: 6.3.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: 6.11.0
Assignee: Ian Ballou
QA Contact: Sam Bible
URL:
Whiteboard:
Duplicates: 1550553
Depends On:
Blocks: 1394013 1399395 1459226
 
Reported: 2017-06-06 15:02 UTC by Justin Sherrill
Modified: 2022-07-05 14:28 UTC
CC: 20 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-07-05 14:27:51 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID | Priority | Status | Summary | Last Updated
Foreman Issue Tracker 33919 | Normal | New | Support 'cleaning' a repo of downloaded on_demand content | 2021-11-15 13:56:48 UTC
Pulp Redmine 2813 | Normal | CLOSED - WONTFIX | As a user, I can delete already downloaded files for a particular repo(s) in case of on_demand policy | 2020-01-03 15:03:40 UTC
Pulp Redmine 5926 | Low | CLOSED - DUPLICATE | As a user, I can clear out downloaded files from on_demand repos | 2021-06-07 10:56:14 UTC
Pulp Redmine 8459 | Normal | CLOSED - CURRENTRELEASE | As a user I want to reclaim disk space for a list of repositories | 2021-08-26 13:16:34 UTC
Red Hat Bugzilla 1394013 | medium | CLOSED | [RFE] Provide task to clean pulp content cache when repository has switched to on_demand syncing | 2023-03-24 13:44:16 UTC
Red Hat Bugzilla 1550553 | unspecified | CLOSED | [RFE] How to reclaim space after migrating repo from immediate to on-demand policy | 2021-09-09 13:21:08 UTC
Red Hat Product Errata RHSA-2022:5498 | None | None | None | 2022-07-05 14:28:20 UTC

Internal Links: 1394013 1550553

Description Justin Sherrill 2017-06-06 15:02:43 UTC
Description of problem:
There is a desire to be able to 'clean' a repository of RPMs to reduce disk usage on both the Satellite and Capsule when the RPMs aren't actually needed.

This would involve deleting all of the downloaded RPMs for a repository so that the on_demand download policy can fetch them again on the next request.

In addition, we'd likely want to be able to run this clean operation on an entire Capsule (if the Capsule is set to on_demand).

This would require both Pulp and Katello changes.

This is a 6.3.z feature requested by dcaplan.
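(For scale, the space at stake can be checked on disk before any cleanup exists. The snippet below is a minimal, read-only sketch; it assumes Pulp's default content location under /var/lib/pulp on the Satellite or Capsule and only measures usage, it does not clean anything.)

import os

def dir_size_gib(path="/var/lib/pulp"):
    """Walk the assumed Pulp content directory and sum file sizes (read-only)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            try:
                total += os.path.getsize(full)
            except OSError:
                pass  # file removed or unreadable while walking; skip it
    return total / 1024 ** 3

if __name__ == "__main__":
    print(f"~{dir_size_gib():.1f} GiB under /var/lib/pulp")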

Comment 2 pulp-infra@redhat.com 2017-06-13 12:36:37 UTC
The Pulp upstream bug status is at NEW. Updating the external tracker on this bug.

Comment 3 pulp-infra@redhat.com 2017-06-13 12:36:42 UTC
The Pulp upstream bug priority is at Normal. Updating the external tracker on this bug.

Comment 4 Andrea Perotti 2018-05-21 23:23:54 UTC
*** Bug 1550553 has been marked as a duplicate of this bug. ***

Comment 5 Julio Entrena Perez 2019-03-20 16:37:09 UTC
While we wait for this functionality to be developed, is there a command or API call that customers can run to clean an on_demand repository?

Comment 13 pulp-infra@redhat.com 2020-01-03 15:01:37 UTC
The Pulp upstream bug status is at NEW. Updating the external tracker on this bug.

Comment 14 pulp-infra@redhat.com 2020-01-03 15:01:38 UTC
The Pulp upstream bug priority is at Normal. Updating the external tracker on this bug.

Comment 15 pulp-infra@redhat.com 2020-01-03 15:03:41 UTC
The Pulp upstream bug status is at CLOSED - WONTFIX. Updating the external tracker on this bug.

Comment 20 pulp-infra@redhat.com 2020-05-08 19:30:54 UTC
The Pulp upstream bug priority is at Low. Updating the external tracker on this bug.

Comment 21 pulp-infra@redhat.com 2021-04-19 12:16:08 UTC
The Pulp upstream bug status is at NEW. Updating the external tracker on this bug.

Comment 22 pulp-infra@redhat.com 2021-04-19 12:16:10 UTC
The Pulp upstream bug priority is at Normal. Updating the external tracker on this bug.

Comment 23 pulp-infra@redhat.com 2021-06-07 10:56:15 UTC
The Pulp upstream bug status is at CLOSED - DUPLICATE. Updating the external tracker on this bug.

Comment 24 Tanya Tereshchenko 2021-06-07 15:46:04 UTC
Currently, this is tentatively planned to be done in Pulp upstream by mid-August 2021.
It will land with the completion of this upstream story: https://pulp.plan.io/issues/8459.

Comment 25 pulp-infra@redhat.com 2021-06-25 16:24:00 UTC
The Pulp upstream bug status is at ASSIGNED. Updating the external tracker on this bug.

Comment 26 pulp-infra@redhat.com 2021-07-01 14:07:43 UTC
The Pulp upstream bug status is at POST. Updating the external tracker on this bug.

Comment 27 pulp-infra@redhat.com 2021-08-04 10:09:07 UTC
The Pulp upstream bug status is at MODIFIED. Updating the external tracker on this bug.

Comment 28 pulp-infra@redhat.com 2021-08-26 13:16:36 UTC
The Pulp upstream bug status is at CLOSED - CURRENTRELEASE. Updating the external tracker on this bug.

Comment 29 Tanya Tereshchenko 2021-09-10 10:49:49 UTC
On the Pulp side this feature is supported starting with pulpcore 3.15.
Moving to Katello to add support for it and use the /pulp/api/v3/repositories/reclaim_space/ endpoint.
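(A minimal sketch of exercising that endpoint directly, assuming pulpcore >= 3.15. The hostname, credentials, CA path, placeholder repository href, and the request-body field name repo_hrefs are assumptions for illustration only; on a real Satellite this call is normally driven by Katello rather than made by hand.)

import requests

PULP_BASE = "https://satellite.example.com"                  # hypothetical host
AUTH = ("admin", "changeme")                                 # hypothetical credentials
CA_BUNDLE = "/etc/pki/katello/certs/katello-server-ca.crt"   # assumed CA path

# hrefs of the repositories whose downloaded on_demand content should be freed
repo_hrefs = ["/pulp/api/v3/repositories/rpm/rpm/<uuid>/"]   # placeholder href

resp = requests.post(
    f"{PULP_BASE}/pulp/api/v3/repositories/reclaim_space/",
    json={"repo_hrefs": repo_hrefs},
    auth=AUTH,
    verify=CA_BUNDLE,
)
resp.raise_for_status()
print("spawned Pulp task:", resp.json().get("task"))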

Comment 30 Justin Sherrill 2021-09-10 11:45:09 UTC
Proposing for 7.0

Comment 33 Justin Sherrill 2021-11-15 13:56:47 UTC
Created redmine issue https://projects.theforeman.org/issues/33919 from this bug

Comment 34 Bryan Kearney 2021-11-24 04:00:56 UTC
Upstream bug assigned to iballou

Comment 35 Bryan Kearney 2021-11-24 04:00:59 UTC
Upstream bug assigned to iballou

Comment 36 Bryan Kearney 2021-12-06 20:00:58 UTC
Moving this bug to POST for triage into Satellite since the upstream issue https://projects.theforeman.org/issues/33919 has been resolved.

Comment 37 Sam Bible 2022-04-28 20:37:10 UTC
Verified on:
6.11 - 18 

Steps to Verify:
1) Enable a repository with the on_demand download policy. I used Red Hat Ansible Engine 2.9 RPMs for Red Hat Enterprise Linux 7 Server x86_64.
2) Sync the repo.
3) Register a host and yum install some of the repo content on that host.
4) Navigate to Products > Red Hat Ansible Engine > Repositories.
5) Select Red Hat Ansible Engine 2.9 RPMs for Red Hat Enterprise Linux 7 Server x86_64 via the checkbox and click the Reclaim Space button.

Expected Results:
When inspecting the DynFlow console, in the Pulp steps, total and done should both equal the number of repositories whose content was yum installed onto the host.

Actual Results:
When inspecting the DynFlow console, in the Pulp steps, total and done both equaled the number of repositories whose content was yum installed onto the host.
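(For an API-side cross-check of the same run, one could poll the Pulp task spawned by the reclaim call shown in the sketch after comment 29 until it finishes. The host, credentials, CA path, task field name "state", and the terminal state values below are assumptions based on pulpcore's task API, not something confirmed in this bug.)

import time
import requests

PULP_BASE = "https://satellite.example.com"                  # hypothetical host
AUTH = ("admin", "changeme")                                 # hypothetical credentials
CA_BUNDLE = "/etc/pki/katello/certs/katello-server-ca.crt"   # assumed CA path

def wait_for_task(task_href, timeout=600, poll=5):
    """Poll a Pulp task href until it reaches an assumed terminal state or times out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        task = requests.get(f"{PULP_BASE}{task_href}", auth=AUTH, verify=CA_BUNDLE).json()
        if task.get("state") in ("completed", "failed", "canceled"):
            return task
        time.sleep(poll)
    raise TimeoutError(f"task {task_href} still running after {timeout}s")

# Example usage (placeholder href): wait_for_task("/pulp/api/v3/tasks/<uuid>/")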

Comment 40 errata-xmlrpc 2022-07-05 14:27:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Satellite 6.11 Release), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5498

