
Bug 2388477

Summary: haproxy.cfg changes for active-active deployment of NFS Ganesha
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Cephadm
Version: 9.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ---
Target Release: 9.0
Reporter: Sachin Punadikar <spunadik>
Assignee: Shweta Bhosale <shbhosal>
QA Contact: Manisha Saini <msaini>
CC: akane, cephqe-warriors, ngangadh, shbhosal
Status: CLOSED ERRATA
Fixed In Version: ceph-20.1.0-52
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2026-01-29 06:57:16 UTC

Description Sachin Punadikar 2025-08-14 07:55:16 UTC
Description of problem:

For the NFS protocol to work properly, a given NFS client must always reach the same NFS server. This ensures that the client can reclaim its state if the NFS server restarts.
An HAProxy-based configuration distributes incoming traffic across the configured set of backend servers according to the load-balancing algorithm.
The current configuration uses round-robin as the balancing algorithm, which means incoming traffic is distributed to the backend servers in round-robin fashion.
Such a setup works well only as long as none of the backend servers is stopped or restarted.
What is required is session stickiness, whereby traffic from the same sender is always directed to the same backend server.
This requires the configuration below in haproxy.cfg.
Note the added peers section, which must list all of the haproxy servers.

…
…

# ---- Stick-table replication peers (same on all nodes; names & IPs fixed) ----
peers haproxy_peers
    peer haproxy1 <IP_address>:1024
    peer haproxy2 <IP_address>:1024
    peer haproxy3 <IP_address>:1024

backend backend
    mode tcp
    balance roundrobin
    # Track client source IPs; replicate the table to all listed peers
    stick-table type ip size 200k expire 30m peers haproxy_peers
    # Pin each client IP to the backend server recorded in the stick table
    stick on src
…
…
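
As a quick sanity check, the stick table can be inspected on each node through the HAProxy runtime API. This is a minimal sketch, assuming a stats socket is configured in the global section at the illustrative path /var/run/haproxy.sock:

# Dump the stick table of the "backend" section; with working peer
# replication, entries learned on one node should show up on the others too.
echo "show table backend" | socat stdio /var/run/haproxy.sock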

The example above uses port number 1024 for haproxy peer communication; it can be changed to any other non-conflicting port. The firewall settings also need to be updated to open that port, for example as sketched below.
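
For example, with firewalld (an assumption; adjust for whatever firewall is in use), the peer port could be opened on each haproxy node as follows:

# Open the haproxy peer-communication port (1024/tcp in the example above)
firewall-cmd --permanent --add-port=1024/tcp
firewall-cmd --reload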

The current setup may place the haproxy daemons dynamically; instead, keep their placement fixed so that no change is required in the peers section.
If a backend server goes down (node-down situation), the backend server is spawned on another node, and haproxy.cfg must be updated accordingly. When updating haproxy.cfg, do not restart haproxy; instead, reload the updated configuration so that session information is not lost.
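
A minimal sketch of such a reload, assuming haproxy runs directly under systemd (a containerized deployment would target the corresponding unit instead):

# Validate the new configuration first, then reload without a full restart;
# the reload starts a new worker that resynchronizes the stick table from
# the old one via the peers protocol, so session mappings survive.
haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl reload haproxy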

Comment 10 errata-xmlrpc 2026-01-29 06:57:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 9.0 Security and Enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2026:1536