Bug 2187617

Summary: [6.1][rgw-ms] Writing on a bucket with num_shards 0 causes sync issues and rgws to segfault on the replication site.
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Vidushi Mishra <vimishra>
Component: RGW-Multisite
Assignee: shilpa <smanjara>
Status: CLOSED ERRATA
QA Contact: Vidushi Mishra <vimishra>
Severity: urgent
Docs Contact: Akash Raj <akraj>
Priority: unspecified
Version: 6.1
CC: akraj, ceph-eng-bugs, cephqe-warriors, mkasturi, smanjara, tserlin
Target Milestone: ---   
Target Release: 6.1   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: ceph-17.2.6-24.el9cp
Doc Type: Bug Fix
Doc Text:
.Segmentation fault no longer occurs when a bucket has a `num_shards` value of `0`

Previously, multi-site sync resulted in segmentation faults and inconsistent sync behavior when a bucket had a `num_shards` value of `0`. With this fix, `num_shards=0` is properly represented in data sync, and buckets with a shard value of `0` sync without issues.
Last Closed: 2023-06-15 09:17:21 UTC
Type: Bug
Bug Blocks: 2192813    
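The crash described in the Doc Text affects buckets whose index is unsharded, i.e. created with a `num_shards` value of `0`. Such buckets on a pre-fix cluster can be flagged from the JSON printed by `radosgw-admin bucket stats`; a minimal sketch, assuming the `bucket` and `num_shards` field names shown below (the sample records are hypothetical, not taken from the bug report):

```python
import json

def unsharded_buckets(stats_json: str) -> list[str]:
    """Return the names of buckets whose index is unsharded
    (num_shards == 0), the case that triggered the multisite
    sync segfault before the ceph-17.2.6-24.el9cp fix."""
    buckets = json.loads(stats_json)
    return [b["bucket"] for b in buckets if b.get("num_shards", 0) == 0]

# Hypothetical sample resembling `radosgw-admin bucket stats` output.
sample = json.dumps([
    {"bucket": "legacy-bkt", "num_shards": 0},
    {"bucket": "sharded-bkt", "num_shards": 11},
])
print(unsharded_buckets(sample))  # ['legacy-bkt']
```

Buckets flagged this way can then be resharded (or the cluster upgraded to a fixed version) before multisite sync is enabled.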

Comment 12 errata-xmlrpc 2023-06-15 09:17:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3623