Bug 1701030 - [RFE] introduce S3 notifications as tech preview
Summary: [RFE] introduce S3 notifications as tech preview
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RGW
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Target Release: 4.0
Assignee: Yuval Lifshitz
QA Contact: Tejas
Docs Contact: Ranjini M N
Depends On:
Blocks: 1730176
Reported: 2019-04-17 20:17 UTC by Matt Benjamin (redhat)
Modified: 2020-01-31 12:46 UTC (History)
11 users

Fixed In Version: ceph-14.2.4-14.el8cp, ceph-14.2.4-2.el7cp
Doc Type: Technology Preview
Doc Text:
.S3 bucket notifications
S3 bucket notifications are now supported as a Technology Preview. When certain events are triggered on an S3 bucket, the Ceph Object Gateway can send notifications to HTTP, Advanced Message Queuing Protocol (AMQP) 0.9.1, and Kafka endpoints. Additionally, the notifications can be stored in a “PubSub” zone instead of, or in addition to, sending them to the endpoints. “PubSub” is a publish-subscribe model that enables recipients to pull notifications from Ceph. To use the S3 notifications, install the `librabbitmq` and `librdkafka` packages.
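As a rough illustration of the S3-side configuration described above, the sketch below builds the notification configuration document in the shape the S3 API expects (for example, as passed to an S3 client's put-bucket-notification-configuration call). The topic ARN and ID are placeholder values, and the exact fields accepted by the Technology Preview may differ:

```python
import json

# Shape of an S3 bucket-notification configuration as accepted by the
# S3 API. The TopicArn below is a placeholder; in RGW the topic (and its
# HTTP/AMQP/Kafka push endpoint) would be created separately.
notification_config = {
    "TopicConfigurations": [
        {
            "Id": "new-object-events",  # hypothetical configuration name
            "TopicArn": "arn:aws:sns:default::my-topic",  # placeholder ARN
            # Fire on any object-creation event in the bucket
            "Events": ["s3:ObjectCreated:*"],
        }
    ]
}

print(json.dumps(notification_config, indent=2))
```

The same document, serialized, is what a client library would send when attaching the notification to a bucket.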
Clone Of:
Last Closed: 2020-01-31 12:45:57 UTC
Target Upstream Version:

Attachments (Terms of Use)

System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:0312 0 None None None 2020-01-31 12:46:10 UTC

Description Matt Benjamin (redhat) 2019-04-17 20:17:36 UTC
Description of problem:
Backport the current S3 notification functionality and introduce it as a Technology Preview.

Comment 1 Sherard Griffin 2019-04-18 13:42:32 UTC
This feature would be fantastic.  We (the AI Center of Excellence) have been pushing Ceph as a critical piece of a reference architecture for Big Data, AI, and ML.  We have a need to be able to receive notifications of new data as it lands in Ceph so that ETL jobs can grab that data and process it.  This can be a big part of doing batch processing of streaming data as well, and would certainly work its way into our reference architecture.  One such use case is when a user uploads a new training data set, being able to fire off the training job immediately instead of having to poll constantly for changing data.  Kafka (AMQ Streams) is currently the message bus of choice for the reference architecture.
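A minimal sketch of the receiving side of this workflow: an HTTP push endpoint that collects object keys from incoming notifications, which an ETL pipeline could then act on. The event payload here is an assumption modeled on the AWS S3 event record shape; field names and the self-posted sample are illustrative only:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

received = []  # object keys collected by the endpoint

class NotificationHandler(BaseHTTPRequestHandler):
    """Minimal HTTP push endpoint for bucket notifications."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        event = json.loads(body)
        # Record which object triggered the event, e.g. to kick off an ETL job
        for record in event.get("Records", []):
            received.append(record["s3"]["object"]["key"])
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), NotificationHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate a gateway pushing a notification (payload shape is an assumption)
sample = {"Records": [{"s3": {"object": {"key": "training/dataset-v2.csv"}}}]}
req = Request(
    f"http://127.0.0.1:{server.server_port}/",
    data=json.dumps(sample).encode(),
    headers={"Content-Type": "application/json"},
)
urlopen(req).close()
server.shutdown()

print(received)  # → ['training/dataset-v2.csv']
```

In the use case above, the handler body would enqueue the key for a training or ETL job instead of appending it to a list.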

Comment 3 Yuval Lifshitz 2019-05-01 15:11:42 UTC
Currently, AMQP 0.9.1 (the version used by the RabbitMQ message broker) is implemented in the pubsub feature.

There are two options for supporting Kafka:
(1) native, by embedding a C/C++ Kafka client library into our code (e.g. librdkafka, which seems well maintained)
(2) by using a translation broker like ActiveMQ, which translates AMQP 1.0 to Kafka. This would still require the dev effort of adding a native AMQP 1.0 client (e.g. Qpid Proton, which is maintained by Red Hat)

Both options would require similar effort (since, sadly, AMQP 1.0 requires a different client than AMQP 0.9.1). If the end goal is Kafka, I would recommend (1).
We may add AMQP 1.0 later to support ActiveMQ, regardless of Kafka/AMQ Streams.
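For context on how this broker choice surfaces to users: in upstream Ceph, the transport is selected by the scheme of a topic's push-endpoint URI (e.g. `amqp://` for 0.9.1 brokers such as RabbitMQ, `kafka://` for the native client of option (1)). The hosts below are placeholders; a rough sketch of inspecting such endpoints:

```python
from urllib.parse import urlparse

# Push-endpoint URIs in the style used by RGW topics; hosts are placeholders.
endpoints = [
    "amqp://rabbitmq.example.com:5672",   # AMQP 0.9.1 broker (RabbitMQ)
    "kafka://kafka.example.com:9092",     # native Kafka client, option (1)
]

def transport_of(uri: str) -> str:
    """Return the transport implied by the endpoint's URI scheme."""
    scheme = urlparse(uri).scheme
    return {"amqp": "AMQP 0.9.1", "kafka": "Kafka"}.get(scheme, "unknown")

print([transport_of(e) for e in endpoints])  # → ['AMQP 0.9.1', 'Kafka']
```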

Comment 4 Kyle Bader 2019-05-11 03:34:59 UTC
Approach #1 sounds the most attractive to me. Kafka support would allow triggering Knative functions using KafkaSource:


Comment 5 Giridhar Ramaraju 2019-08-05 13:11:27 UTC
Updating the QA Contact to Hemant. Hemant will reroute these to the appropriate QE Associate.


Comment 6 Giridhar Ramaraju 2019-08-05 13:12:26 UTC
Updating the QA Contact to Hemant. Hemant will reroute these to the appropriate QE Associate.


Comment 20 errata-xmlrpc 2020-01-31 12:45:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

