Bug 1772887

Summary: [DDF] I haven't looked yet, but I'm hoping there is a way to limit how far it goes back when syncing containers. My Satellite got crushed syncing all versions of RHOSP13 and Ceph3 Containers.
Product: Red Hat OpenStack
Component: documentation
Version: 13.0 (Queens)
Status: CLOSED DUPLICATE
Severity: unspecified
Priority: unspecified
Reporter: Direct Docs Feedback <ddf-bot>
Assignee: RHOS Documentation Team <rhos-docs>
QA Contact: RHOS Documentation Team <rhos-docs>
CC: amcleod, astillma, enothen, jbadiapa, kgilliga
Keywords: Triaged
Target Milestone: ---
Target Release: ---
Hardware: All
OS: All
Last Closed: 2023-02-21 02:41:12 UTC

Description Direct Docs Feedback 2019-11-15 13:06:12 UTC
I haven't looked yet, but I'm hoping there is a way to limit how far it goes back when syncing containers. My Satellite got crushed syncing all versions of RHOSP13 and Ceph3 Containers. And disk consumption went from 59G to 120G

Reported by: kejones

https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html/keeping_red_hat_openstack_platform_updated/updating_your_container_image_source#annotations:39df1486-8835-42a1-b9e4-b25db2746490

Comment 1 Eric Nothen 2021-09-08 11:57:26 UTC
(In reply to Direct Docs Feedback from comment #0)
> I haven't looked yet, but I'm hoping there is a way to limit how far it goes
> back when syncing containers. My Satellite got crushed syncing all versions
> of RHOSP13 and Ceph3 Containers. And disk consumption went from 59G to 120G

There is currently no way to limit this, but I have opened BZ #2001517 to allow using the "on_demand" download policy for content_type docker, the same as you would for RPM content.
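
[Editorial note] For context, the RPM behaviour that the comment compares against can be driven through the Katello REST API (PUT /katello/api/repositories/:id with a download_policy field). The sketch below is illustrative only: the hostname, credentials, and repository ID are placeholders, and, as the comment states, docker-type repositories do not accept this policy until BZ #2001517 is addressed.

#!/usr/bin/env python3
"""Sketch: switch a Satellite/Katello repository to the on_demand download policy.

Placeholders (not taken from this bug): Satellite hostname, admin credentials,
and the numeric repository ID. Per comment 1, this currently applies only to
yum (RPM) repositories, not to content_type docker.
"""
import requests

SATELLITE = "https://satellite.example.com"   # placeholder hostname
REPO_ID = 42                                  # placeholder repository ID
AUTH = ("admin", "changeme")                  # placeholder credentials

def set_on_demand(repo_id: int) -> dict:
    """Update one repository's download_policy via the Katello repositories API."""
    resp = requests.put(
        f"{SATELLITE}/katello/api/repositories/{repo_id}",
        json={"download_policy": "on_demand"},
        auth=AUTH,
        verify="katello-server-ca.crt",  # path to the Satellite CA certificate (placeholder)
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    repo = set_on_demand(REPO_ID)
    print(repo.get("name"), "->", repo.get("download_policy"))

The hammer CLI exposes the same setting for yum repositories (hammer repository update ... --download-policy on_demand), which is the RPM workflow the comment refers to.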

Comment 2 kgilliga 2022-03-24 19:12:28 UTC
*** Bug 1772886 has been marked as a duplicate of this bug. ***