This RFE is a follow-on from https://bugzilla.redhat.com/show_bug.cgi?id=2188336, targeting a specific ask to the Ceph RGW team since that RFE covers multiple requests.

The ask is to implement Ceph RGW read affinity when responding to 'get object' requests. RBD implemented a similar solution in [1][2].

[1] https://kubernetes.io/docs/concepts/services-networking/topology-aware-hints/
[2] https://blog.rook.io/rook-v1-11-storage-enhancements-8001aa67e10e

Ceph RGW read affinity was discussed on the dev list in June/July [3]. Read affinity can be turned on at a pool level; since RGW uses several pools, further investigation is needed to determine whether the flag can be enabled for all pools or only a subset. A sketch of the underlying client-side mechanism follows below.

[3] https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/42LBRPAS232JSIBMJ4RDGAUIY2HDKTM2/

The business requirement for this RFE is to reduce WAN / cross-AZ traffic when using Ceph RGW. If implemented, this will reduce customers' WAN bills and bandwidth usage. Note that the customer behind this RFE is using ODF spanned across several availability zones, which have high WAN costs.
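For context, the sketch below shows how a plain librados client can already request locality-aware reads, which is the kind of mechanism RGW would presumably apply internally when serving GetObject. It is a minimal illustration, not the proposed RGW implementation: it assumes the client-side crush_location option and the LIBRADOS_OPERATION_LOCALIZE_READS operation flag, and the pool name, object name, and CRUSH location values are hypothetical placeholders.

/*
 * localized_read.c - hedged sketch of a locality-aware read via librados.
 * Not the RGW implementation proposed by this RFE; pool/object/location
 * values are hypothetical.
 */
#include <rados/librados.h>
#include <stdio.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    char buf[4096];
    size_t bytes_read = 0;
    int rval = 0;
    int ret;

    ret = rados_create(&cluster, "admin");
    if (ret < 0) { fprintf(stderr, "rados_create failed: %d\n", ret); return 1; }

    /* Read the default ceph.conf search path. */
    rados_conf_read_file(cluster, NULL);

    /* Tell the client where it sits in the CRUSH hierarchy so that
     * "nearest replica" has a meaning (value is a hypothetical example). */
    rados_conf_set(cluster, "crush_location", "zone=us-east-1a");

    ret = rados_connect(cluster);
    if (ret < 0) { fprintf(stderr, "rados_connect failed: %d\n", ret); rados_shutdown(cluster); return 1; }

    /* Hypothetical data pool; RGW actually spreads objects over several pools,
     * which is exactly why the RFE needs per-pool investigation. */
    ret = rados_ioctx_create(cluster, "default.rgw.buckets.data", &io);
    if (ret < 0) { fprintf(stderr, "rados_ioctx_create failed: %d\n", ret); rados_shutdown(cluster); return 1; }

    /* Build a read op and ask the OSD client to direct it to the closest
     * replica instead of always going to the primary OSD. */
    rados_read_op_t op = rados_create_read_op();
    rados_read_op_read(op, 0, sizeof(buf), buf, &bytes_read, &rval);
    ret = rados_read_op_operate(op, io, "some-object",
                                LIBRADOS_OPERATION_LOCALIZE_READS);
    rados_release_read_op(op);

    if (ret == 0 && rval == 0)
        printf("read %zu bytes from the nearest replica\n", bytes_read);
    else
        fprintf(stderr, "read failed: op=%d read=%d\n", ret, rval);

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return ret == 0 ? 0 : 1;
}

Build with: gcc -o localized_read localized_read.c -lrados. Localized reads are best-effort and only apply to replicated pools; the point of the RFE is that RGW itself, not each S3 client, would arrange this for 'get object' requests.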
Please specify the severity of this bug. Severity is defined here: https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 8.0 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2024:10216