Bug 1819309

Summary: [RFE] Load balanced capsules without using sticky sessions
Product: Red Hat Satellite
Reporter: Robin Chan <rchan>
Component: Repositories
Assignee: satellite6-bugs <satellite6-bugs>
Status: CLOSED ERRATA
QA Contact: Akhil Jha <akjha>
Severity: low
Docs Contact:
Priority: unspecified
Version: 6.11.0
CC: akjha, bmbouter, dkliban, ggainey, gscarbor, ipanova, jentrena, jsherril, ltran, myarboro, pcreech, rchan, sokeeffe, ttereshc
Target Milestone: 6.11.0
Keywords: FutureFeature, Triaged
Target Release: Unused
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-07-05 14:27:59 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Robin Chan 2020-03-31 16:30:06 UTC
Description of problem:
Today, load-balanced Capsules require sticky sessions to be enabled on the load balancer, and when one Capsule goes down, clients must run "yum clean all" to fetch metadata from the other Capsule. This probably means that the same RPM repository on two different Capsules should have the same metadata path and the same actual metadata, so the client does not warn about newer/older repository metadata when a request lands on the other Capsule.

As a user, I should not have to refresh a client's yum metadata when a request hits a different Capsule than it did previously, so that Capsule fail-overs are transparent to the client.
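
To make the current behavior concrete, this is roughly the client-side workaround the report refers to (a sketch only; which repositories are affected depends on the host's subscriptions):

  # Run on a content host after the load balancer fails over to another
  # Capsule; without it, yum keeps using the repodata cached from the
  # previous Capsule and may warn about mismatched metadata.
  yum clean all
  yum makecache    # re-fetch repository metadata from the now-active Capsule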

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
Here is the document describing how to configure a load balancer with sticky sessions:
https://access.redhat.com/documentation/en-us/red_hat_satellite/6.6/html-single/load_balancing_guide/index#installing-the-load-balancer
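
For illustration only, a minimal sketch of the kind of source-IP "sticky" persistence the guide relies on, assuming HAProxy as the load balancer; the hostnames, port, and directives here are placeholders, and the linked guide remains the authoritative configuration:

  # Append a sticky (source-hash) frontend/backend pair for Capsule content
  # traffic, then restart HAProxy. Placeholder hosts and port 443 only.
  cat >> /etc/haproxy/haproxy.cfg <<'EOF'

  frontend capsule_https
      bind *:443
      mode tcp
      default_backend capsule_https_back

  backend capsule_https_back
      mode tcp
      balance source          # pin each client (by source IP) to one Capsule
      server capsule01 capsule01.example.com:443 check
      server capsule02 capsule02.example.com:443 check
  EOF
  systemctl restart haproxy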

Comment 4 pulp-infra@redhat.com 2020-03-31 17:48:07 UTC
The Pulp upstream bug status is at NEW. Updating the external tracker on this bug.

Comment 5 pulp-infra@redhat.com 2020-03-31 17:48:09 UTC
The Pulp upstream bug priority is at Normal. Updating the external tracker on this bug.

Comment 6 Tanya Tereshchenko 2020-05-01 13:25:56 UTC
FYI, here is the same request for Pulp 2, in case more details/customer cases are available there: https://bugzilla.redhat.com/show_bug.cgi?id=1717456

Comment 7 pulp-infra@redhat.com 2020-05-08 19:30:42 UTC
The Pulp upstream bug priority is at High. Updating the external tracker on this bug.

Comment 8 pulp-infra@redhat.com 2020-05-27 16:58:11 UTC
The Pulp upstream bug priority is at Normal. Updating the external tracker on this bug.

Comment 11 pulp-infra@redhat.com 2021-05-18 04:19:32 UTC
The Pulp upstream bug status is at ASSIGNED. Updating the external tracker on this bug.

Comment 12 pulp-infra@redhat.com 2021-05-27 03:07:25 UTC
The Pulp upstream bug status is at POST. Updating the external tracker on this bug.

Comment 13 pulp-infra@redhat.com 2021-06-02 17:41:45 UTC
The Pulp upstream bug status is at MODIFIED. Updating the external tracker on this bug.

Comment 14 pulp-infra@redhat.com 2021-06-17 21:07:47 UTC
The Pulp upstream bug status is at CLOSED - CURRENTRELEASE. Updating the external tracker on this bug.

Comment 17 Akhil Jha 2022-05-02 06:29:39 UTC
Verified. 
Satellite 6.11.0-18.0

Comment 20 errata-xmlrpc 2022-07-05 14:27:59 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: Satellite 6.11 Release), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5498