Bug 1406555

Summary: Load Balanced Capsules require either Sticky Sessions or Shared File systems
Product: Red Hat Satellite Reporter: Mike McCune <mmccune>
Component: Docs Architecture Guide Assignee: satellite-doc-list
Status: CLOSED DUPLICATE QA Contact: satellite-doc-list
Severity: medium Docs Contact:
Priority: unspecified    
Version: 6.2.5 CC: andrew.schofield, dvoss
Target Milestone: Unspecified   
Target Release: Unused   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2019-09-04 19:01:40 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Mike McCune 2016-12-20 22:26:44 UTC
If a user is implementing any kind of Highly Available Capsule setup, they need to know that this requires either a shared filesystem for /var/lib/pulp or that clients are granted a 'sticky session' by the Load Balancer, so that they do not rapidly rotate between different Capsules during yum operations.
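
For the shared filesystem option, the idea is that every content-serving Capsule mounts the same backing storage at /var/lib/pulp, so all of them publish identical repodata. A minimal sketch of what that might look like, assuming a hypothetical NFS export nfs.example.com:/exports/pulp (the export name and mount options are illustrative, not taken from this bug):

    # /etc/fstab entry on each content Capsule (illustrative)
    nfs.example.com:/exports/pulp  /var/lib/pulp  nfs  defaults,_netdev  0 0

Because a mount hides whatever is already in the local directory, this is typically put in place before the Capsules are synchronized.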

Some background: when setting up a series of Capsules behind a Load Balancer, the user can run into the following scenario.

We set up a Satellite 6.2 Server and two 6.2 Capsules. After completing the setup, we synchronized the same repository and content to both Capsules' Library environments. We then observed that the repodata files in /var/lib/pulp on the two Capsules contained different metadata.

Capsule A:

9ab1052ece4a7818d385abca3a96e053bb6396a4380ea00df20aeb420c0ae3c7-comps.xml
2a8c1a2296c0cd18070bc15ccc98221ccdd566a3734a0d6ea6b134c2718e4e8b-filelists.xml.gz
6af930e4cb005a0c72e3329336d25b9d3de6b9365d4e001f3d718a1b960fed7f-other.xml.gz
5c4fc48a65f2f5c94ddeb8ffd037948421f5378f59c69326e7dad561545351fa-updateinfo.xml.gz 
4ebbc43a6faa0bf678e30a53fab2d7a091d597824d843b4806ad978b09619549-primary.xml.gz
repomd.xml

Capsule B:

dfc5c7f7414400787ee1ba20e575040bbff84c513cfafbfe56bee8f6d5afae0c-comps.xml
21429ffee8b8599751d511907129cad81d9f9e1db9389bc3782b0f04e67b40c9-filelists.xml.gz
b204bfdbefef3a376777af6c8b26326fe0f72a003b83ffce59c9fb0a290ec5cf-other.xml.gz
ab38eaf3737bf25c1e1ce2be3c868ceef60e912302359d919c96023142e7b22f-updateinfo.xml.gz  
e79280ed37b14b838d41240c07493b529d6b86ce15fdf4322023e1f62ad97ba8-primary.xml.gz
repomd.xml
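
One way to reproduce this comparison is to checksum the published repomd.xml for the same repository on each Capsule; the path below /var/lib/pulp/published/ is only illustrative, since the exact directory depends on the organization, environment, and repository:

    # run on Capsule A and on Capsule B, then compare the output
    ls /var/lib/pulp/published/yum/https/repos/<org>/Library/<repo>/repodata/
    sha256sum /var/lib/pulp/published/yum/https/repos/<org>/Library/<repo>/repodata/repomd.xml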


Because these files differ, we need to require sticky load balancer sessions for the duration of each yum transaction, as differing yum metadata is expected across Capsules serving the same content. We worked directly with the Pulp Engineering team, and they confirmed that any sync from the Satellite to a Capsule will always generate new metadata files; this is working as expected.
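
If a shared filesystem is not used, the load balancer has to pin each client to a single Capsule for at least the length of a yum transaction. A minimal sketch of how this could be done with HAProxy using source-IP persistence (the hostnames are placeholders; this is not the configuration from the reference architecture):

    # haproxy.cfg fragment (illustrative)
    frontend capsule_https
        bind *:443
        mode tcp
        default_backend capsule_content

    backend capsule_content
        mode tcp
        # 'balance source' hashes the client source IP so a given client keeps
        # hitting the same Capsule and therefore sees consistent repodata
        balance source
        server cap2 cap2.example.com:443 check
        server cap3 cap3.example.com:443 check
        server cap4 cap4.example.com:443 check

Source-IP persistence is the simplest form of stickiness for TLS traffic passed through in TCP mode; cookie-based persistence would require terminating TLS on the load balancer instead.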

If you look at our HA Reference Architecture here:

https://access.redhat.com/articles/2474581

we do mention that a shared filesystem for Capsules is used:

"""
The reference architecture details a capsule environment that has a capsule “cap1” rendering
foreman-proxy services and capsules “cap2”,”cap3” and “cap4” rendering content services.
“cap2/3/4” capsules have identical configuration and offer redundancy via HAProxy load
balancer. “cap1” is unique and is made highly available by an active-passive pacemaker
setup across nodes “cap1a”, “cap1b” and “cap1c” with shared filesystems.
"""

One of the main reasons we advocate a shared filesystem in the above configuration is precisely this mismatch in metadata filenames.