Bug 1406555 - Load Balanced Capsules require either Sticky Sessions or Shared File systems
Summary: Load Balanced Capsules require either Sticky Sessions or Shared File systems
Keywords:
Status: CLOSED DUPLICATE of bug 1579159
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Docs Architecture Guide
Version: 6.2.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: Unspecified
Assignee: satellite-doc-list
QA Contact: satellite-doc-list
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-12-20 22:26 UTC by Mike McCune
Modified: 2019-09-04 19:01 UTC
CC: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-09-04 19:01:40 UTC
Target Upstream Version:
Embargoed:


Attachments

Description Mike McCune 2016-12-20 22:26:44 UTC
If a user is implementing any kind of highly available Capsule setup, they need to know that it requires either a shared filesystem for /var/lib/pulp or 'sticky sessions' granted by the load balancer, so that clients do not rapidly rotate between different Capsules during yum operations.

Some background: when setting up a series of Capsules behind a load balancer, the user can run into the following scenario.

We set up a Satellite 6.2 Server and 6.2 Capsules. After completing the setup, we synchronized the same repository and content to each Capsule's Library environment. We then observed that the repodata files in /var/lib/pulp on the two Capsules contained different metadata.

Capsule A:

9ab1052ece4a7818d385abca3a96e053bb6396a4380ea00df20aeb420c0ae3c7-comps.xml
2a8c1a2296c0cd18070bc15ccc98221ccdd566a3734a0d6ea6b134c2718e4e8b-filelists.xml.gz
6af930e4cb005a0c72e3329336d25b9d3de6b9365d4e001f3d718a1b960fed7f-other.xml.gz
5c4fc48a65f2f5c94ddeb8ffd037948421f5378f59c69326e7dad561545351fa-updateinfo.xml.gz 
4ebbc43a6faa0bf678e30a53fab2d7a091d597824d843b4806ad978b09619549-primary.xml.gz
repomd.xml

Capsule B:

dfc5c7f7414400787ee1ba20e575040bbff84c513cfafbfe56bee8f6d5afae0c-comps.xml
21429ffee8b8599751d511907129cad81d9f9e1db9389bc3782b0f04e67b40c9-filelists.xml.gz
b204bfdbefef3a376777af6c8b26326fe0f72a003b83ffce59c9fb0a290ec5cf-other.xml.gz
ab38eaf3737bf25c1e1ce2be3c868ceef60e912302359d919c96023142e7b22f-updateinfo.xml.gz  
e79280ed37b14b838d41240c07493b529d6b86ce15fdf4322023e1f62ad97ba8-primary.xml.gz
repomd.xml
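
The differing filenames above follow the repodata convention of prefixing each metadata file with the SHA-256 checksum of its contents. Because each sync regenerates the metadata bytes (revision timestamps and so on), the checksums, and therefore the filenames, differ between Capsules even for identical content. A minimal sketch of that naming scheme (the metadata strings below are hypothetical, not taken from these Capsules):

```python
import hashlib

def repodata_filename(content: bytes, base: str) -> str:
    """Return a checksum-prefixed repodata filename: <sha256>-<base>."""
    digest = hashlib.sha256(content).hexdigest()
    return f"{digest}-{base}"

# The same logical repository, regenerated at two different moments, produces
# slightly different bytes and hence entirely different filenames.
capsule_a = repodata_filename(b'<metadata revision="1482272000"/>', "primary.xml.gz")
capsule_b = repodata_filename(b'<metadata revision="1482272060"/>', "primary.xml.gz")

# A client that downloaded repomd.xml from Capsule A and is then routed to
# Capsule B will request filenames that do not exist there.
print(capsule_a)
print(capsule_b)
```

This is why a yum transaction must stay pinned to one Capsule: repomd.xml lists the checksum-named files, and those names are only valid on the Capsule that generated them.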


Because these files differ, we need to require sticky load-balancer sessions per yum transaction; differing yum metadata is expected across Capsules that carry the same content. We worked directly with the Pulp engineering team, and they confirmed that any Capsule sync from the Satellite will always generate new metadata files; this is working as expected.
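
One common way to grant such sticky sessions is source-IP persistence on the load balancer. A minimal HAProxy sketch, reusing the cap2/3/4 names from the reference architecture quoted below (hostnames and the plain-HTTP port are illustrative assumptions, not part of this bug):

```
frontend capsule_content
    bind *:80
    default_backend capsules

backend capsules
    # Pin each client to a single Capsule so one yum transaction always sees
    # a consistent set of checksum-named repodata files; without this, files
    # listed in repomd.xml from one Capsule will be missing on another.
    balance source
    hash-type consistent
    server cap2 cap2.example.com:80 check
    server cap3 cap3.example.com:80 check
    server cap4 cap4.example.com:80 check
```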

If you look at our HA Reference Architecture here:

https://access.redhat.com/articles/2474581

we do mention that a shared filesystem is used for the Capsules:

"""
The reference architecture details a capsule environment that has a capsule “cap1” rendering
foreman-proxy services and capsules “cap2”,”cap3” and “cap4” rendering content services.
“cap2/3/4” capsules have identical configuration and offer redundancy via HAProxy load
balancer. “cap1” is unique and is made highly available by an active-passive pacemaker
setup across nodes “cap1a”, “cap1b” and “cap1c” with shared filesystems.
"""

One of the main reasons we advocate a shared filesystem in the above configuration is this mismatch in metadata filenames.
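
The shared-filesystem alternative makes every node serve byte-identical metadata, so no stickiness is needed. As a rough sketch only (the reference architecture uses pacemaker-managed storage; the NFS server name and export path here are purely illustrative):

```
# /etc/fstab fragment on each node sharing content: all nodes mount the same
# /var/lib/pulp, so the checksum-named repodata files are identical behind
# the load balancer.
nfs.example.com:/exports/pulp  /var/lib/pulp  nfs  defaults  0 0
```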

