Bug 2238921

Summary: [rhcs7.0][RGW-MS][Notification]: bucket owner not in event record and received object size 0 for s3:ObjectSynced:Create event
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Hemanth Sai <hmaheswa>
Component: RGW-Multisite Assignee: Yuval Lifshitz <ylifshit>
Status: CLOSED ERRATA QA Contact: Hemanth Sai <hmaheswa>
Severity: high Docs Contact: Rivka Pollack <rpollack>
Priority: unspecified    
Version: 7.0 CC: akraj, ceph-eng-bugs, cephqe-warriors, mbenjamin, rpollack, tserlin, vereddy, ylifshit
Target Milestone: --- Keywords: Automation, Regression
Target Release: 7.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: ceph-18.2.0-39.el9cp Doc Type: Bug Fix
Doc Text:
.Sync notifications are sent with the correct object size
Previously, when an object was synced between zones and sync notifications were configured, the notification was sent with zero as the size of the object. With this fix, sync notifications are sent with the correct object size.
Story Points: ---
Clone Of: Environment:
Last Closed: 2023-12-13 15:23:12 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 2237662    

Description Hemanth Sai 2023-09-14 10:46:20 UTC
Description of problem:
The event record for the s3:ObjectSynced:Create event reports an object size of 0, and the bucket owner is missing from the "ownerIdentity" field.

These test cases pass on Quincy but fail on RHCS 7.0.
Passing logs for Quincy are available at http://magna002.ceph.redhat.com/cephci-jenkins/test-runs/17.2.6-136/Weekly/rgw/9/tier-2_rgw_ms_test-bucket-notifications/

Version-Release number of selected component (if applicable):
ceph version 18.2.0-27.el9cp

How reproducible:
always

Steps to Reproduce:
1. Deploy Ceph clusters on RHCS 7.0 with RGW configured on both clusters.
2. Configure multisite between the primary and secondary clusters.
3. Configure and start a Kafka broker on both sites.
4. Create an RGW user on the primary site.
5. Create a bucket on the secondary site.
6. Place the JSON file https://github.com/ceph/ceph/blob/main/examples/boto3/service-2.sdk-extras.json under /usr/local/lib/python3.9/site-packages/botocore/data/s3/2006-03-01 (for the Ceph SDK extension).
7. Create a topic and enable bucket notifications with kafka-ack-level=broker for the event "s3:ObjectSynced:*" on the secondary cluster.
8. Upload 25 objects on the primary cluster and verify that notifications are received after the objects sync to the secondary.
9. Observe that the event record has "size":0 and "principalId":"". The same issue is seen when the scenario is run against the primary site.
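The topic-creation and notification-configuration step above can be sketched with boto3 roughly as follows. This is an illustrative sketch, not taken from the test run: the endpoint, credentials, broker address, and helper names (build_sync_notification, configure) are assumptions.

```python
# Hedged sketch: create a Kafka-backed topic on RGW and subscribe a bucket
# to the Ceph-specific s3:ObjectSynced:* event. All addresses and names
# below are placeholders.

def build_sync_notification(topic_arn, config_id="notification-MultisiteReplication"):
    """Notification configuration for the Ceph sync-event extension."""
    return {
        "TopicConfigurations": [{
            "Id": config_id,
            "TopicArn": topic_arn,
            # s3:ObjectSynced:* is a Ceph extension event; full SDK support
            # requires the service-2.sdk-extras.json model mentioned above.
            "Events": ["s3:ObjectSynced:*"],
        }]
    }

def configure(endpoint, access_key, secret_key, bucket, kafka_broker):
    import boto3  # imported here so the helper above has no SDK dependency

    common = dict(endpoint_url=endpoint,
                  aws_access_key_id=access_key,
                  aws_secret_access_key=secret_key,
                  region_name="shared")
    # RGW bucket-notification topics are managed through the SNS API;
    # kafka-ack-level=broker matches the reproducer configuration.
    sns = boto3.client("sns", **common)
    topic = sns.create_topic(
        Name="MultisiteReplication",
        Attributes={"push-endpoint": f"kafka://{kafka_broker}",
                    "kafka-ack-level": "broker"})
    s3 = boto3.client("s3", **common)
    s3.put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration=build_sync_notification(topic["TopicArn"]))
```

Running configure() against a live RGW endpoint would then reproduce the setup; the notification records arrive on the Kafka topic once objects sync.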

Actual results:
event record has "size":0 and "principalId":""
event record snippet:
{
  "Records": [
    {
      "eventVersion": "2.2",
      "eventSource": "ceph:s3",
      "awsRegion": "shared",
      "eventTime": "2023-09-12T16:45:11.486758Z",
      "eventName": "ObjectSynced:Create",
      "userIdentity": {"principalId": "rgw sync"},
      "requestParameters": {"sourceIPAddress": ""},
      "responseElements": {"x-amz-request-id": "0", "x-amz-id-2": "604f-secondary-shared"},
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "notification-MultisiteReplication",
        "bucket": {
          "name": "dorothyw.12-bucky-36-0",
          "ownerIdentity": {"principalId": ""},
          "arn": "arn:aws:s3:shared::dorothyw.12-bucky-36-0",
          "id": "2b5026af-75e3-4baa-bdbe-f0ce4862134d.15213.13"
        },
        "object": {
          "key": "prefix1key_dorothyw.12-bucky-36-0_0",
          "size": 0,
          "eTag": "d0ed5dfb863978d7901e9157de367c13",
          "versionId": "",
          "sequencer": "979500653D6B031D",
          "metadata": [],
          "tags": []
        }
      },
      "eventId": "1694537111.486763.d0ed5dfb863978d7901e9157de367c13",
      "opaqueData": ""
    }
  ]
}
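The two faulty fields can be pulled out of the record directly. The JSON below is the event record snippet from this report verbatim; the small extractor function is an illustrative sketch (the name extract_fields is not from any test code).

```python
# Parse the event record from this report and extract the two fields the
# bug is about. A correct record would carry the real object size and the
# bucket owner; this one carries 0 and an empty string.
import json

EVENT_RECORD = '{"Records":[{"eventVersion":"2.2","eventSource":"ceph:s3","awsRegion":"shared","eventTime":"2023-09-12T16:45:11.486758Z","eventName":"ObjectSynced:Create","userIdentity":{"principalId":"rgw sync"},"requestParameters":{"sourceIPAddress":""},"responseElements":{"x-amz-request-id":"0","x-amz-id-2":"604f-secondary-shared"},"s3":{"s3SchemaVersion":"1.0","configurationId":"notification-MultisiteReplication","bucket":{"name":"dorothyw.12-bucky-36-0","ownerIdentity":{"principalId":""},"arn":"arn:aws:s3:shared::dorothyw.12-bucky-36-0","id":"2b5026af-75e3-4baa-bdbe-f0ce4862134d.15213.13"},"object":{"key":"prefix1key_dorothyw.12-bucky-36-0_0","size":0,"eTag":"d0ed5dfb863978d7901e9157de367c13","versionId":"","sequencer":"979500653D6B031D","metadata":[],"tags":[]}},"eventId":"1694537111.486763.d0ed5dfb863978d7901e9157de367c13","opaqueData":""}]}'

def extract_fields(raw):
    """Return (object size, bucket owner principalId) from an event record."""
    rec = json.loads(raw)["Records"][0]
    return (rec["s3"]["object"]["size"],
            rec["s3"]["bucket"]["ownerIdentity"]["principalId"])

size, owner = extract_fields(EVENT_RECORD)
print(f"size={size}, ownerIdentity.principalId={owner!r}")
# → size=0, ownerIdentity.principalId=''
```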

Expected results:
The actual object size should be reported instead of 0, and "principalId" under "ownerIdentity" should be populated with the bucket owner instead of being blank.

Additional info:
Please find the automation failure logs below:

http://magna002.ceph.redhat.com/ceph-qe-logs/HemanthSai/bucket_notif_failures_reef/cephci-run-TERKVW_ms_repli_bkt_owner_not_in_record/

http://magna002.ceph.redhat.com/cephci-jenkins/test-runs/18.2.0-6/Weekly/rgw/4/tier-2_rgw_ms_test_bucket_notifications/

Comment 13 errata-xmlrpc 2023-12-13 15:23:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780