Bug 2358562 - [RGW][Kafka]: Events not recorded in Kafka topic
Summary: [RGW][Kafka]: Events not recorded in Kafka topic
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 7.1z4
Assignee: Yuval Lifshitz
QA Contact: Manisha
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2025-04-09 06:44 UTC by Manisha
Modified: 2025-05-07 12:49 UTC
CC: 5 users

Fixed In Version: ceph-18.2.1-325.el9cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2025-05-07 12:49:27 UTC
Embargoed:
mkasturi: needinfo+




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-11099 0 None None None 2025-04-09 06:44:48 UTC
Red Hat Product Errata RHSA-2025:4664 0 None None None 2025-05-07 12:49:30 UTC

Description Manisha 2025-04-09 06:44:09 UTC
Description of problem:
Bucket notification events are not recorded in the Kafka topic when objects are uploaded to a bucket owned by a tenanted user.
Version-Release number of selected component (if applicable):
ceph version 18.2.1-312.el9cp (e6a0fe40fa3a3c514974517dccc8bd858d781f0c) reef (stable)

How reproducible:
2/2

Steps to Reproduce:
1. Create user and tenant
2. Create topic
3. Put bucket policy
4. Put bucket notification
5. Upload objects
6. Check whether events are generated

Actual results:
sudo /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server kafka://localhost:9092 --from-beginning --topic topic1 --timeout-ms 30000

[2025-04-08 12:14:01,627] ERROR Error processing message, terminating consumer process:  (kafka.tools.ConsoleConsumer$)
org.apache.kafka.common.errors.TimeoutException
Processed a total of 0 messages

Expected results:
Events should be generated and delivered to the Kafka topic topic1

Additional info:

Create user and tenant

radosgw-admin --tenant tenant_A --uid user1 --display-name "user1" --access_key a123 --secret s123 user create --cluster ceph
 
radosgw-admin --tenant tenant_A --uid user2 --display-name "user2" --access_key a456 --secret s456 user create --cluster ceph
 
radosgw-admin --tenant tenant_B --uid user3 --display-name "user3" --access_key a567 --secret s567 user create --cluster ceph
 
radosgw-admin --tenant tenant_B --uid user4 --display-name "user4" --access_key a789 --secret s789 user create --cluster ceph
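RGW refers to users created under a tenant in the tenant$uid form on later admin and S3 operations. A small illustrative helper (the naming convention is from Ceph's multitenancy documentation; the function itself is hypothetical):

```python
# Hypothetical helper: build the tenant-qualified user id RGW uses
# to reference a user that belongs to a tenant.
def qualified_uid(tenant, uid):
    return f"{tenant}${uid}"

print(qualified_uid("tenant_A", "user1"))  # tenant_A$user1
```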
 
Create topic

aws sns create-topic --name topic1 --profile tenant_A --endpoint-url http://localhost:80  --attributes='{"push-endpoint": "kafka://localhost:9092"}'
{
	"TopicArn": "arn:aws:sns:default:tenant_A:topic1"
}
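The push-endpoint attribute passed above is a URL. A minimal stdlib-only sketch (not RGW's actual parser) of how the kafka:// endpoint string decomposes into scheme, broker host, and port:

```python
from urllib.parse import urlparse

# Hypothetical helper: split a push-endpoint value like the one in the
# create-topic attributes into its scheme, host, and port parts.
def split_push_endpoint(endpoint):
    parts = urlparse(endpoint)
    return parts.scheme, parts.hostname, parts.port

scheme, host, port = split_push_endpoint("kafka://localhost:9092")
print(scheme, host, port)  # kafka localhost 9092
```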
 
Get topic
 
radosgw-admin topic get --topic topic1 --tenant tenant_A --uid user1
{
	"user": "tenant_A",
	"name": "topic1",
	"dest": {
    	"push_endpoint": "kafka://localhost:9092",
    	"push_endpoint_args": "Version=2010-03-31&push-endpoint=kafka://localhost:9092",
    	"push_endpoint_topic": "topic1",
    	"stored_secret": false,
    	"persistent": false
	},
	"arn": "arn:aws:sns:default:tenant_A:topic1",
	"opaqueData": ""
}
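The topic ARN returned above encodes the tenant in its account field. An illustrative snippet (the field layout follows the arn:aws:sns:<region>:<tenant>:<topic> form shown in the output; this is not an official ARN parser):

```python
# Split the SNS-style ARN from the topic metadata into its components.
arn = "arn:aws:sns:default:tenant_A:topic1"
prefix, partition, service, region, tenant, topic = arn.split(":")
print(tenant, topic)  # tenant_A topic1
```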
 
Bucket policy

aws --endpoint-url http://localhost:80   s3api get-bucket-policy --bucket bkt1 --profile tenant_A
{
	"Policy": "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n	{\n  	\"Action\": [\n    	\"s3:GetBucketNotification\",\n    	\"s3:PutBucketNotification\",\n    	\"s3:ListBucket\"\n  	],\n  	\"Principal\": {\n    	\"AWS\": [\n      	\"arn:aws:iam::tenant_A:user/user2\",\n      	\"arn:aws:iam::tenant_B:user/user3\",\n      	\"arn:aws:iam::tenant_B:user/user4\"\n    	]\n  	},\n  	\"Effect\": \"Allow\",\n  	\"Resource\": [\n    	\"arn:aws:s3:::bkt1\",\n        \"arn:aws:s3:::bkt1/*\"\n  	]\n	}\n  ]\n}\n\n"
}
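The escaped policy string above is easier to read when built as a data structure. A sketch that reconstructs the same document with the standard json module (bucket and user names taken from the report):

```python
import json

# Same policy as returned by get-bucket-policy, as a Python structure.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:GetBucketNotification",
                "s3:PutBucketNotification",
                "s3:ListBucket",
            ],
            "Principal": {
                "AWS": [
                    "arn:aws:iam::tenant_A:user/user2",
                    "arn:aws:iam::tenant_B:user/user3",
                    "arn:aws:iam::tenant_B:user/user4",
                ]
            },
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::bkt1",
                "arn:aws:s3:::bkt1/*",
            ],
        }
    ],
}

# Serialized form, as it would be passed to
# s3api put-bucket-policy --policy '<json>'
print(json.dumps(policy))
```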
 
 
Bucket notification

aws s3api get-bucket-notification-configuration \
  --bucket bkt1 \
  --endpoint-url http://localhost:80 \
  --profile tenant_A
{
	"TopicConfigurations": [
    	{
        	"Id": "notif2",
        	"TopicArn": "arn:aws:sns:default:tenant_A:topic1",
        	"Events": [
            	"s3:ObjectCreated:*",
            	"s3:ObjectRemoved:*"
        	]
    	}
	]
}
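The same configuration can be expressed as the data structure that s3api put-bucket-notification-configuration accepts. A sketch using only the values shown above:

```python
import json

# Notification configuration matching the get-bucket-notification-configuration output.
notification_config = {
    "TopicConfigurations": [
        {
            "Id": "notif2",
            "TopicArn": "arn:aws:sns:default:tenant_A:topic1",
            "Events": [
                "s3:ObjectCreated:*",
                "s3:ObjectRemoved:*",
            ],
        }
    ]
}

# This JSON is what would be passed via
# --notification-configuration '<json>' on put.
print(json.dumps(notification_config))
```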


Put objects in buckets

s3cmd put obj-1 s3://bkt1/
upload: 'obj-1' -> 's3://bkt1/obj-1'  [1 of 1]
 5242880 of 5242880   100% in	6s   845.69 KB/s  done

s3cmd put obj-2 s3://bkt1/
upload: 'obj-2' -> 's3://bkt1/obj-2'  [1 of 1]
 5242880 of 5242880   100% in    6s   832.41 KB/s  done

s3cmd put obj-3 s3://bkt1/
upload: 'obj-3' -> 's3://bkt1/obj-3'  [1 of 1]
 5242880 of 5242880   100% in    6s   767.42 KB/s  done


Accessing the bucket from a user in another tenant
s3cmd -c ~/tenantB ls s3://tenant_A:bkt1
2025-04-08 12:13      5242880  s3://tenant_A:bkt1/obj-1
2025-04-08 12:12      5242880  s3://tenant_A:bkt1/obj-2
2025-04-08 12:13      5242880  s3://tenant_A:bkt1/obj-3


Events are not recorded in the Kafka topic

sudo /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server kafka://localhost:9092 --from-beginning --topic topic1 --timeout-ms 30000

[2025-04-08 12:14:01,627] ERROR Error processing message, terminating consumer process:  (kafka.tools.ConsoleConsumer$)
org.apache.kafka.common.errors.TimeoutException
Processed a total of 0 messages

Automation logs:
http://magna002.ceph.redhat.com/ceph-qe-logs/mreddem/cephci-run-ZTBI8X/Test_BucketNotification_with_users_in_same_tenant_and_different_tenant_0.log

Comment 13 errata-xmlrpc 2025-05-07 12:49:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 7.1 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:4664

