Bug 1701030
Summary: | [RFE] introduce S3 notifications as tech preview | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Matt Benjamin (redhat) <mbenjamin> |
Component: | RGW | Assignee: | Yuval Lifshitz <ylifshit> |
Status: | CLOSED ERRATA | QA Contact: | Tejas <tchandra> |
Severity: | medium | Docs Contact: | Ranjini M N <rmandyam> |
Priority: | low | ||
Version: | 3.2 | CC: | bancinco, cbodley, ceph-eng-bugs, kbader, mbenjamin, rmandyam, shgriffi, sweil, tchandra, tserlin, ylifshit |
Target Milestone: | rc | Keywords: | FutureFeature |
Target Release: | 4.0 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | ceph-14.2.4-14.el8cp, ceph-14.2.4-2.el7cp | Doc Type: | Technology Preview |
Doc Text: |
.S3 bucket notifications
S3 bucket notifications are now supported as a Technology Preview. When certain events are triggered on an S3 bucket, the notifications can be sent from the Ceph Object Gateway to HTTP, Advanced Message Queuing Protocol (AMQP) 0.9.1, and Kafka endpoints. Additionally, the notifications can be stored in a “PubSub” zone instead of, or in addition to, sending them to the endpoints. “PubSub” is a publish-subscribe model that enables recipients to pull notifications from Ceph.
To use the S3 notifications, install the `librabbitmq` and `librdkafka` packages.
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2020-01-31 12:45:57 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1730176 |
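As a rough illustration of the Technology Preview feature described in the Doc Text above, the sketch below builds the two payloads an S3 client would send to the Ceph Object Gateway: the topic attributes (where the `push-endpoint` scheme selects HTTP, AMQP 0.9.1, or Kafka) and the bucket notification configuration. The topic name, ARN, and broker address are hypothetical, and the exact attribute and field names should be verified against the bucket-notification API of your RGW release.

```python
# Hedged sketch: payload shapes for Ceph RGW bucket notifications
# (Technology Preview). All names below (topic, ARN, endpoint) are
# hypothetical examples, not values from this bug report.

def topic_attributes(endpoint: str) -> dict:
    """Attributes for creating a notification topic. The scheme of the
    push-endpoint URI (http://, amqp://, kafka://) selects the protocol."""
    return {"push-endpoint": endpoint}

def notification_config(topic_arn: str, events: list) -> dict:
    """Request body for an S3 PutBucketNotificationConfiguration call."""
    return {
        "TopicConfigurations": [
            {"Id": "new-object-events", "TopicArn": topic_arn, "Events": events}
        ]
    }

attrs = topic_attributes("kafka://broker.example:9092")  # hypothetical broker
cfg = notification_config(
    "arn:aws:sns:default::training-data-topic",  # hypothetical topic ARN
    ["s3:ObjectCreated:*"],                      # fire on any object creation
)
print(attrs)
print(cfg)
```

With an S3/SNS client library such as boto3, these dictionaries would be passed (per the upstream Ceph documentation, not verified against this specific build) to a topic-creation call against the gateway and to `put_bucket_notification_configuration` on the bucket.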
Description
Matt Benjamin (redhat), 2019-04-17 20:17:36 UTC
This feature would be fantastic. We (the AI Center of Excellence) have been pushing Ceph as a critical piece of a reference architecture for Big Data, AI, and ML. We need to be able to receive notifications of new data as it lands in Ceph so that ETL jobs can grab that data and process it. This can be a big part of doing batch processing of streaming data as well, and would certainly work its way into our reference architecture. One such use case is when a user uploads a new training data set: being able to fire off the training job immediately instead of having to poll constantly for changing data. Kafka (AMQ Streams) is currently the message bus of choice for the reference architecture.

Currently AMQP 0.9.1 (the version used by the RabbitMQ message broker) is implemented in the pubsub feature. There are two options for supporting Kafka: (1) native, by embedding a C/C++ Kafka client library into our code (e.g. librdkafka seems well maintained); (2) by using a translation broker like ActiveMQ, which translates AMQP 1.0 to Kafka. This would still require the development effort of adding a native AMQP 1.0 client (e.g. Qpid Proton, which is maintained by Red Hat). Both options would be a similar effort (since, sadly, AMQP 1.0 would require a different client than AMQP 0.9.1). If the end goal is Kafka, I would recommend (1). We may add AMQP 1.0 later to support ActiveMQ regardless of Kafka/AMQ Streams.

Approach #1 sounds the most attractive to me. Kafka support would allow the triggering of Knative functions using KafkaSource: https://knative.dev/docs/eventing/

Updating the QA Contact to Hemant. Hemant will be rerouting them to the appropriate QE Associate. Regards, Giri

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0312
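The ETL use case discussed above ("receive notifications of new data as it lands in Ceph so that ETL jobs can grab that data") could be handled by an HTTP endpoint that parses the notification payload. The sketch below assumes the record layout mirrors the AWS S3 event structure that the Ceph Object Gateway emulates; the field names and the sample bucket/key are assumptions for illustration, not a payload captured from this bug.

```python
import json

# Hedged sketch: extracting (bucket, key) pairs from a bucket-notification
# event so that each new object can be handed to an ETL or training job.
# The record shape below is assumed to follow the AWS S3 event format;
# verify against the events your gateway actually sends.

SAMPLE_EVENT = json.dumps({
    "Records": [
        {
            "eventName": "s3:ObjectCreated:Put",
            "s3": {
                "bucket": {"name": "training-data"},           # hypothetical bucket
                "object": {"key": "datasets/run-42.parquet"},  # hypothetical key
            },
        }
    ]
})

def objects_created(event_json: str) -> list:
    """Return (bucket, key) pairs for every ObjectCreated record."""
    out = []
    for rec in json.loads(event_json).get("Records", []):
        if rec.get("eventName", "").startswith("s3:ObjectCreated"):
            out.append((rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"]))
    return out

print(objects_created(SAMPLE_EVENT))  # [('training-data', 'datasets/run-42.parquet')]
```

Such a handler, triggered on upload, is what removes the constant polling for changed data that the comment describes.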