This BZ is to track the work for https://jsw.ibm.com/browse/ISCE-932. Copying the current contents of that here:

```yaml
service_type: rgw
service_id: foo
placement:
  label: rgw
  count_per_host: 2
spec:
  rgw_frontend_port: 8080
```

If you do this today, you will get 2x RGW on each host with the rgw label, one listening on 8080 and another listening on 8081. If you create an ingress service it will deploy haproxy, but the backend for that haproxy will be a list of all RGWs for a particular instance.

Apple would like the ability to have a single endpoint per host, with multiple host-local RGWs running behind it. I'm using the term "concentrator" to signify a component that runs on a host with multiple RGWs and provides a single endpoint (IP:port pairing) that balances load across the `count_per_host` RGWs running locally. My proposed syntax for this sort of configuration would be:

```yaml
service_type: rgw
service_id: foo
placement:
  label: rgw
  count_per_host: 2
spec:
  concentrator: haproxy
  concentrator_port: 80
  rgw_frontend_port: 8080
```

The expected behavior would be the configuration of 2x RGW on each host with the rgw label, one listening on 8080 and another listening on 8081, plus a haproxy listening on 80 with a backend configured for those two host-local RGWs.

By having `haproxy` be the value for `concentrator`, we leave the door open to supporting other concentrator types later (ovs, ipvs, envoy, etc.). We do not need to set up keepalived for concentrators. If an ingress service refers to an RGW service with concentrators, we could keep the existing behavior (direct from the ingress haproxy to RGW 8080/8081 per host). We could reuse the haproxy templating we already use for ingress, with mode tcp for simplicity (so we don't need HTTPS configuration) and to avoid buffering the HTTP requests.

Customer: Apple
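For reference, an ingress spec referring to the RGW service above might look like the following. This is a minimal sketch using the standard cephadm ingress fields (`backend_service`, `virtual_ip`, `frontend_port`, `monitor_port`); the placement label, virtual IP, and port values are illustrative placeholders, and such a spec would typically be applied with `ceph orch apply -i <file>.yaml`:

```yaml
service_type: ingress
service_id: rgw.foo
placement:
  label: ingress
spec:
  backend_service: rgw.foo   # the RGW service defined above
  virtual_ip: 10.0.0.100/24  # illustrative virtual IP managed by keepalived
  frontend_port: 443         # illustrative client-facing port
  monitor_port: 1967         # haproxy status/monitor port
```

Per the proposal above, when the backing RGW service has concentrators, this ingress haproxy would keep its existing behavior and point directly at the per-host RGW ports (8080/8081) rather than at the host-local concentrator endpoints.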
Basic feature verification is done. So far, we haven't seen any issues. We will move the BZ to Verified once the feature is validated in detail. The steps have been added to the document: https://docs.google.com/document/d/1VGF-ex6BBBfSX_sLb7d0Wd5TtYOsv93RlBKGe1-OKrU/edit?tab=t.fo7sn1jh8kaf
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2025:9775