Bug 1912538 - [rgw] S3 PUT objects using Camel throws signature of last chunk does not match
Summary: [rgw] S3 PUT objects using Camel throws signature of last chunk does not match
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 4.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.2z1
Assignee: Mark Kogan
QA Contact: Tejas
URL:
Whiteboard:
Depends On:
Blocks: 1890121 1919956
 
Reported: 2021-01-04 17:07 UTC by Shon Paz
Modified: 2021-07-12 11:45 UTC
CC List: 13 users

Fixed In Version: ceph-14.2.11-112.el8cp, ceph-14.2.11-112.el7cp
Doc Type: Bug Fix
Doc Text:
.KafkaConnect sends objects from a Kafka topic to the RGW S3 bucket

Previously, sending objects from a Kafka topic to the RGW S3 bucket failed because the chunked-encoding object signature was not calculated correctly. This produced the following error in the RADOS Gateway log:

`20 AWSv4ComplMulti: ERROR: chunk signature mismatch`

With this release, the chunked-encoding object signature is calculated correctly, allowing KafkaConnect to send objects successfully.
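
For context, the signature RGW validates here is defined by the public AWS Signature Version 4 streaming ("aws-chunked") upload scheme. The sketch below is a minimal Python illustration of that published algorithm, not RGW or SDK code (RGW implements it in C++); the key, date, scope, and chunk values are placeholders.

# Minimal illustration of the AWS SigV4 "aws-chunked" streaming chunk signature,
# as published in the AWS Signature Version 4 documentation. Both the client SDK
# (the "declared" signature in the log) and RGW (the "calculated" one) compute
# this value; the bug was a mismatch on the final chunk.
import hashlib
import hmac

EMPTY_SHA256 = hashlib.sha256(b"").hexdigest()  # SHA-256 of an empty payload


def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


def signing_key(secret_key: str, date: str, region: str, service: str = "s3") -> bytes:
    # Standard SigV4 key derivation: date -> region -> service -> "aws4_request".
    k = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k = _hmac(k, region)
    k = _hmac(k, service)
    return _hmac(k, "aws4_request")


def chunk_signature(key: bytes, amz_date: str, scope: str,
                    prev_signature: str, chunk_data: bytes) -> str:
    # Each chunk's signature is chained to the previous one; the final chunk
    # carries zero bytes of data, so its payload hash is the empty-string SHA-256.
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256-PAYLOAD",
        amz_date,                                # e.g. 20210104T173552Z (placeholder)
        scope,                                   # e.g. 20210104/us-east-1/s3/aws4_request
        prev_signature,
        EMPTY_SHA256,                            # fixed empty-hash field from the spec
        hashlib.sha256(chunk_data).hexdigest(),  # hash of this chunk's payload
    ])
    return hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).hexdigest()
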
Clone Of:
Cloned to: 1919956
Environment:
Last Closed: 2021-04-28 20:12:33 UTC
Embargoed:
mkogan: needinfo-


Attachments
RGW Log Trace (34.53 KB, text/plain), 2021-01-04 17:07 UTC, Shon Paz


Links
Red Hat Product Errata RHSA-2021:1452 (last updated 2021-04-28 20:13:06 UTC)

Description Shon Paz 2021-01-04 17:07:10 UTC
Created attachment 1744348 [details]
RGW Log Trace

Description of problem:

I'm running an AMQ Kafka cluster on OpenShift 4.6, with an external Ceph cluster running RHCS 4.1z3.

When I try to configure Camel to sink objects from a Kafka topic to the RGW S3 bucket (using KafkaConnect), I get the following error:

69c03411437d70f526efa7c2ad67e150ea5a547cb89f83563db8e86aaea39419
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
3bfd41f6bb2716df9398697bab6ee419b55a46c657f8ffde144316a5bfef06d8
2021-01-04 17:35:52.507 7fbb667da700 20 AWSv4ComplMulti: ERROR: chunk signature mismatch
2021-01-04 17:35:52.507 7fbb667da700 20 AWSv4ComplMulti: declared signature=0bb61a0806d8c0b9c1c8d8d64079dab9e7c495b45903b42237ab5fb210e062d0
2021-01-04 17:35:52.507 7fbb667da700 20 AWSv4ComplMulti: calculated signature=ee0e7b39b249e1bd96154eb7a3a99bde42b0143144ca62ed6669cc525f665ef9
2021-01-04 17:35:52.507 7fbb667da700 10 ERROR: signature of last chunk does not match
2021-01-04 17:35:52.507 7fbb667da700 20 req 646 0.002s s3:put_obj get_data() returned ret=-2040
2021-01-04 17:35:52.507 7fbb667da700  2 req 646 0.002s s3:put_obj completing
2021-01-04 17:35:52.507 7fbb667da700  2 req 646 0.002s s3:put_obj op status=-2040
2021-01-04 17:35:52.507 7fbb667da700  2 req 646 0.002s s3:put_obj http status=400
2021-01-04 17:35:52.507 7fbb667da700  1 ====== req done req=0x7fbca3992680 op status=-2040 http_status=400 latency=0.00199996s ======
2021-01-04 17:35:52.507 7fbb667da700  1 beast: 0x7fbca3992680: 192.168.1.72 - - [2021-01-04 17:35:52.0.507207s] "PUT /s3-bucket-a0221101-94e3-4132-818c-a7d43b2737e0/20210104-153552476-1E0D4A4324F2B02-0000000000000000 HTTP/1.1" 400 276 - "aws-sdk-java/2.15.43 Linux/4.18.0-193.29.1.el8_2.x86_64 OpenJDK_64-Bit_Server_VM/11.0.9+11-LTS Java/11.0.9 scala/2.12.10 kotlin/1.3.20-release-116 (1.3.20) vendor/Red_Hat__Inc. io/sync http/Apache" -
2021-01-04 17:35:52.558 7fbb89820700 20 failed to read header: end of stream

It seems I get HTTP 400 because the signature of the last chunk uploaded by the client does not match the signature RGW calculates.
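
Incidentally, the second bare hash in the trace above (e3b0c442...) is the SHA-256 of an empty byte string, which is exactly the payload hash of the zero-length final chunk in an aws-chunked upload. A one-line check (illustrative, not from the bug):

# The SHA-256 of an empty byte string matches the second hash in the log trace,
# i.e. the payload hash of the zero-length final chunk of an aws-chunked body.
import hashlib
print(hashlib.sha256(b"").hexdigest())
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
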

I've tested other S3 endpoints besides RGW and AWS, and the same setup works there.

Version-Release number of selected component (if applicable):

RHCS 4.1z3

How reproducible:

It can be reproduced easily with the specific RHCS and Camel versions listed above.

Steps to Reproduce:
1. Run an RHCS 4.1z3 cluster with the RGW Beast frontend
2. Deploy a Camel-Kafka-S3-Sink service using KafkaConnect 
3. Use the SinkConnector to write data to the RGW S3 bucket 
4. Observe that the chunk signature mismatch error occurs

Actual results:

PUT requests fail with HTTP 400 and the RGW log shows "ERROR: signature of last chunk does not match"; the objects are not written to the bucket.

Expected results:

An HTTP 200 response, with the objects written to the S3 bucket.

Additional info:

I have tested other (non-AWS) S3 endpoints and the same setup works there; objects are written to the S3 bucket as expected.

Comment 1 Matt Benjamin (redhat) 2021-01-04 17:31:25 UTC
Hi Shon,

What -are- the Kafka and Camel versions, by chance?

Matt

Comment 2 Matt Benjamin (redhat) 2021-01-04 17:41:54 UTC
(In reply to Matt Benjamin (redhat) from comment #1)
> Hi Shon,
> 
> What -are- the Kafka and Camel versions, by chance?
> 
> Matt

from Shon:

The Kafka version is 2.6
and the CamelAws2s3SinkConnector version is 0.7

Comment 3 Shon Paz 2021-01-04 17:46:31 UTC
I'm using CamelAws2s3SinkConnector version 0.7 and Kafka 2.6, deployed with the AMQ Streams operator.
The KafkaConnector config: 

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  name: s3-sink-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector
  tasksMax: 1
  config:
    key.converter: org.apache.kafka.connect.storage.StringConverter
    value.converter: org.apache.kafka.connect.storage.StringConverter
    topics: to-s3
    camel.sink.maxPollDuration: 10000
    camel.sink.endpoint.keyName: ${date:now:yyyyMMdd-HHmmssSSS}-${exchangeId}
    camel.component.aws2-s3.accessKey: xxxx
    camel.component.aws2-s3.secretKey: xxxxxx
    camel.component.aws2-s3.region: us-east-1
    camel.component.aws2-s3.overrideEndpoint: true 
    camel.component.aws2-s3.uriEndpointOverride: http://x.x.x.x
    camel.sink.path.bucketNameOrArn: processed-data
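
For triage, a direct PUT against the same RGW endpoint with a different SDK can confirm that ordinary, non-streaming uploads succeed. The sketch below is a hypothetical check, not part of the reported setup; the endpoint, credentials, bucket, and key are placeholders taken from the config above, and it relies on boto3's put_object typically signing the whole payload rather than using aws-chunked streaming signatures.

# Hypothetical sanity check (not part of the reported setup): upload one object
# with boto3 to the same endpoint and bucket. boto3 typically signs the full
# payload up front instead of streaming chunk signatures, so success here while
# the Camel sink keeps failing points at the chunked-signature path.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://x.x.x.x",   # same as camel.component.aws2-s3.uriEndpointOverride
    aws_access_key_id="xxxx",
    aws_secret_access_key="xxxxxx",
    region_name="us-east-1",
)

s3.put_object(Bucket="processed-data", Key="sanity-check", Body=b"hello rgw")
print(s3.head_object(Bucket="processed-data", Key="sanity-check")["ContentLength"])

If this succeeds while the connector still fails with the chunk signature mismatch, the problem is isolated to the streaming-signature path exercised by aws-sdk-java.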

Comment 16 errata-xmlrpc 2021-04-28 20:12:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage security, bug fix, and enhancement Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:1452

