Bug 1021070 - Repositories for removed nodes remain/are referenced in all existing valid nodes.
Status: CLOSED NOTABUG
Product: Red Hat Satellite 6
Classification: Red Hat
Component: Content Management
Version: 6.0.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: Unspecified
Target Release: --
Assigned To: Justin Sherrill
QA Contact: Katello QA List
Whiteboard: Triaged
Depends On:
Blocks:
 
Reported: 2013-10-18 22:04 EDT by Corey Welton
Modified: 2017-02-23 16:19 EST
CC List: 3 users

Doc Type: Bug Fix
Last Closed: 2015-03-02 14:23:11 EST
Type: Bug


Attachments: None
Description Corey Welton 2013-10-18 22:04:01 EDT
Description of problem:
When a Satellite has had a number of nodes registered and subsequently removed, the repositories for those nodes remain and show up in yum queries across the other, still-valid nodes. It is conceivable that this could get unwieldy very quickly.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.  Register three nodes to parent sat
2.  Remove these nodes and register three more
3.  yum install <some package> on a child node

Actual results:

[root@hp-dl320egen8-01 ~]# yum install screen
Loaded plugins: product-id, security, subscription-manager
This system is receiving updates from Red Hat Subscription Management.
Katello_Infrastructure_node-certs_cloud-qe-2_idm_lab_bos_redhat_com                                               | 2.9 kB     00:00     
Katello_Infrastructure_node-certs_hp-dl320egen8-01_rhts_eng_bos_redhat_com                                        | 2.9 kB     00:00     
Katello_Infrastructure_node-certs_ibm-x3250m4-04_lab_eng_rdu2_redhat_com                                          | 2.9 kB     00:00     
Katello_Infrastructure_node-certs_ibm-x3550m3-13_lab_eng_brq_redhat_com                                           | 2.9 kB     00:00     
Katello_Infrastructure_node-certs_mgmt12_rhq_lab_eng_bos_redhat_com                                               | 2.9 kB     00:00     
Katello_Infrastructure_node-certs_mgmt12_rhq_lab_eng_bos_redhat_com/primary_db                                    | 2.7 kB     00:00     
Katello_Infrastructure_node-certs_mgmt8_rhq_lab_eng_bos_redhat_com                                                | 2.9 kB     00:00     
Katello_Infrastructure_node-certs_mgmt8_rhq_lab_eng_bos_redhat_com/primary_db                                     | 2.7 kB     00:00   


This, despite:

[root@mgmt2 ~]# katello -u admin -p admin node list
-----------------------------------------------------------------------------------------------------------------------------------------
                                                                Node List

ID Name                                     Environments                                  
-----------------------------------------------------------------------------------------------------------------------------------------
4  hp-dl320egen8-01.rhts.eng.bos.redhat.com Katello Infrastructure: [Library,DEV,QA,GA]   
5  mgmt12.rhq.lab.eng.bos.redhat.com        Katello Infrastructure: [Library,DEV,QA,GA]   
6  mgmt8.rhq.lab.eng.bos.redhat.com         Katello Infrastructure: [Library,DEV,QA,GA]   
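The discrepancy above can be checked mechanically: each node-certs repo ID appears to encode the node's FQDN with underscores standing in for dots, so a repo whose decoded FQDN matches no currently registered node is stale. A minimal sketch under that naming assumption, using sample values taken from the two outputs in this report:

```shell
#!/bin/sh
# Illustrative check of the discrepancy above: repo IDs seen by yum vs.
# nodes the Satellite still knows about. Sample values come from the
# yum and `katello node list` outputs in this report.
yum_repos='Katello_Infrastructure_node-certs_cloud-qe-2_idm_lab_bos_redhat_com
Katello_Infrastructure_node-certs_hp-dl320egen8-01_rhts_eng_bos_redhat_com
Katello_Infrastructure_node-certs_mgmt8_rhq_lab_eng_bos_redhat_com'

valid_nodes='hp-dl320egen8-01.rhts.eng.bos.redhat.com
mgmt12.rhq.lab.eng.bos.redhat.com
mgmt8.rhq.lab.eng.bos.redhat.com'

# A repo is stale when its encoded FQDN (underscores standing in for
# dots) matches no currently registered node.
stale=''
for repo in $yum_repos; do
  fqdn=$(printf '%s\n' "$repo" \
    | sed -e 's/^Katello_Infrastructure_node-certs_//' -e 's/_/./g')
  if ! printf '%s\n' "$valid_nodes" | grep -qxF "$fqdn"; then
    stale="$stale $repo"
    echo "stale repo: $repo"
  fi
done
```

With the sample data, only the cloud-qe-2 repo is flagged, matching the report: that node appears in the yum output but not in the node list.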

Expected results:

We should probably remove whatever repos are there and being spread across nodes once those repos are no longer valid or useful. If a user has a number of nodes and then, say, gets a big hardware refresh and no longer needs the old nodes, all the stale repo data for those old nodes remains. Yes, they can probably be removed manually, but that could be annoying.
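Until such cleanup is automated, one possible workaround is to disable the stale per-node cert repos on each client with `subscription-manager repos --disable`. A minimal sketch, with two assumptions made explicit: the sample text stands in for real `subscription-manager repos --list-enabled` output (which requires a registered system), and the pattern catches every node-certs repo, so on a real system the list would still need filtering down to the removed nodes' repos before disabling:

```shell
#!/bin/sh
# Parse repo IDs out of `subscription-manager repos --list-enabled`
# style output and pick the per-node cert repos. The sample text below
# stands in for output from a real, registered client.
repo_list='Repo ID:   Katello_Infrastructure_node-certs_ibm-x3250m4-04_lab_eng_rdu2_redhat_com
Repo ID:   rhel-6-server-rpms
Repo ID:   Katello_Infrastructure_node-certs_cloud-qe-2_idm_lab_bos_redhat_com'

node_repos=$(printf '%s\n' "$repo_list" \
  | awk '/^Repo ID:/ {print $3}' \
  | grep '^Katello_Infrastructure_node-certs_')

# On a real system, drop the echo to actually disable each repo.
for repo in $node_repos; do
  echo "subscription-manager repos --disable=$repo"
done
```

The sample repo IDs here are the ones for nodes that are absent from the node list above, i.e. exactly the stale repos this bug is about.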

Additional info:
Comment 2 Justin Sherrill 2014-06-02 21:32:40 EDT
Reason for moving to 6.0.4:

We don't currently create capsule cert repos, since we can't upload RPMs via the API/CLI at the moment.

This functionality is likely to return, so this bug will probably become applicable again in the future.
Comment 3 Justin Sherrill 2015-03-02 14:23:11 EST
We still have not had this functionality return, so I am closing this. This functionality is not present in 6.0 or 6.1.
