Bug 2370002 - [rgw][checksum]: with aws-sdk-go-v2, chunked object upload with trailing checksum of SHA1 or SHA256 fails with 400 error
Status: VERIFIED
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 8.1
Assignee: Matt Benjamin (redhat)
QA Contact: Hemanth Sai
Docs Contact: Rivka Pollack
Blocks: 2351689
Reported: 2025-06-03 13:03 UTC by Hemanth Sai
Modified: 2025-06-11 14:19 UTC
CC: 5 users

Fixed In Version: ceph-19.2.1-218.el9cp
Doc Type: No Doc Update
mkasturi: needinfo+




Links:
Red Hat Issue Tracker RHCEPH-11527 (last updated 2025-06-03 13:05:05 UTC)

Description Hemanth Sai 2025-06-03 13:03:05 UTC
Description of problem:
With aws-sdk-go-v2, a chunked object upload with a trailing SHA1 or SHA256 checksum fails with a 400 error. The same chunked upload succeeds if CRC32, CRC32C, or CRC64NVME is used as the checksum algorithm.
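One observable difference between the passing (CRC) and failing (SHA) cases is the length of the trailing checksum value: the x-amz-checksum-* trailer carries the base64 of the raw digest, so the SHA trailers are far longer than the CRC ones. A minimal standalone sketch (not part of the reproducer) that prints the trailer values for this payload:

package main

import (
	"crypto/sha1"
	"crypto/sha256"
	"encoding/base64"
	"encoding/binary"
	"fmt"
	"hash/crc32"
)

func main() {
	data := []byte("This is some example chunked data to upload to S3.")

	sum256 := sha256.Sum256(data)
	sum1 := sha1.Sum(data)
	crc := make([]byte, 4)
	binary.BigEndian.PutUint32(crc, crc32.ChecksumIEEE(data))

	// The trailer value is the base64 of the raw digest, so the SHA
	// trailers are much longer than the CRC ones.
	fmt.Println("sha256:", base64.StdEncoding.EncodeToString(sum256[:])) // 44 chars
	fmt.Println("sha1:  ", base64.StdEncoding.EncodeToString(sum1[:]))   // 28 chars
	fmt.Println("crc32: ", base64.StdEncoding.EncodeToString(crc))       // 8 chars
}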



log output:

[root@ceph-hsm-cksm-81-kbqga2-node6 repro_go_scripts]# ./s3example 
SDK 2025/06/03 11:26:55 DEBUG Request
PUT /go-bkt1/chunked-upload-example?x-id=PutObject HTTP/1.1
Host: 10.0.67.186
User-Agent: aws-sdk-go-v2/1.36.3 ua/2.1 os/linux lang/go#1.24.3 md/GOOS#linux md/GOARCH#amd64 api/s3#1.79.3 m/E,Z,e
Transfer-Encoding: chunked
Accept-Encoding: identity
Amz-Sdk-Invocation-Id: a9173d35-b1bf-438d-8ceb-29eb92279930
Amz-Sdk-Request: attempt=1; max=3
Authorization: AWS4-HMAC-SHA256 Credential=abc/20250603/us-east-2/s3/aws4_request, SignedHeaders=accept-encoding;amz-sdk-invocation-id;amz-sdk-request;content-encoding;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-sdk-checksum-algorithm;x-amz-trailer, Signature=6eded4519cfdbd99509b4d51eda13feb45abc35d87746d0f9a1b37a130dab22e
Content-Encoding: aws-chunked
Content-Type: application/octet-stream
Expect: 100-continue
X-Amz-Content-Sha256: STREAMING-UNSIGNED-PAYLOAD-TRAILER
X-Amz-Date: 20250603T112655Z
X-Amz-Sdk-Checksum-Algorithm: SHA256
X-Amz-Trailer: x-amz-checksum-sha256

81
32
This is some example chunked data to upload to S3.
0
x-amz-checksum-sha256:CdXsVw36b5FPIF2kk/Wen/dyR2x82gykYgAZZ/3Dy4U=


0

SDK 2025/06/03 11:26:55 DEBUG Response
HTTP/1.1 400 Bad Request
Content-Length: 241
Accept-Ranges: bytes
Connection: Keep-Alive
Content-Type: application/xml
Date: Tue, 03 Jun 2025 11:26:55 GMT
Server: Ceph Object Gateway (squid)
X-Amz-Request-Id: tx000006c56d96fc260a019-00683edbff-34236-default

<?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidArgument</Code><Message></Message><BucketName>go-bkt1</BucketName><RequestId>tx000006c56d96fc260a019-00683edbff-34236-default</RequestId><HostId>34236-default-default</HostId></Error>
Uploaded chunked data to S3 bucket go-bkt1 with key chunked-upload-example
Error: operation error S3: PutObject, https response error StatusCode: 400, RequestID: tx000006c56d96fc260a019-00683edbff-34236-default, HostID: 34236-default-default, api error InvalidArgument: UnknownError
panic: operation error S3: PutObject, https response error StatusCode: 400, RequestID: tx000006c56d96fc260a019-00683edbff-34236-default, HostID: 34236-default-default, api error InvalidArgument: UnknownError

goroutine 1 [running]:
main.demonstrateChunkedUpload({0xa32890, 0xd6c980}, 0xc000124800, {0x93f5d4, 0x7})
	/root/repro_go_scripts/main.go:157 +0x3cf
main.main()
	/root/repro_go_scripts/main.go:37 +0x85
[root@ceph-hsm-cksm-81-kbqga2-node6 repro_go_scripts]# 
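For reference, the request body in the log above is doubly framed: the outer HTTP Transfer-Encoding: chunked layer wraps the aws-chunked payload, so the leading "81" is the HTTP chunk size (0x81 = 129 bytes) and the "32" is the aws-chunk size (0x32 = 50 bytes of data). A small sketch reconstructing the framing from the logged values:

package main

import "fmt"

func main() {
	data := "This is some example chunked data to upload to S3."
	trailer := "x-amz-checksum-sha256:CdXsVw36b5FPIF2kk/Wen/dyR2x82gykYgAZZ/3Dy4U=\r\n"

	// aws-chunked body: <hex size>\r\n<data>\r\n0\r\n<trailer>\r\n
	awsChunked := fmt.Sprintf("%x\r\n%s\r\n0\r\n%s\r\n", len(data), data, trailer)

	fmt.Printf("aws-chunk size:  %x (log shows 32)\n", len(data))       // 32
	fmt.Printf("HTTP chunk size: %x (log shows 81)\n", len(awsChunked)) // 81
}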





code snippet:

package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"os"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)


func main() {
	// Setup
	ctx := context.Background()
	s3Client := setupS3Client(ctx)

	// Config
	bucketName := "go-bkt1"
	if envBucketName, ok := os.LookupEnv("BUCKET_NAME"); ok {
		bucketName = envBucketName
	}

	// Tests
	demonstrateChunkedUpload(ctx, s3Client, bucketName)
	fmt.Println()
	fmt.Println()
	fmt.Println()
	demonstrateFixedLengthUpload(ctx, s3Client, bucketName)
	fmt.Println()
	fmt.Println()
	fmt.Println()

}


func setupS3Client(ctx context.Context) *s3.Client {
	awsConfig, err := config.LoadDefaultConfig(ctx,
		config.WithClientLogMode(aws.LogRequestWithBody|aws.LogResponseWithBody),
		config.WithRegion("us-east-2"),
		config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider("abc", "abc", "")),
	)
	if err != nil {
		fmt.Printf("failed to load AWS config: %v\n", err)
		os.Exit(1)
	}

	return s3.NewFromConfig(awsConfig, func(o *s3.Options) {
		o.BaseEndpoint = aws.String("https://10.0.67.186:443/")
	})
}


func demonstrateChunkedUpload(ctx context.Context, s3Client *s3.Client, bucketName string) {
	// Create an IO pipe. The total amount of data read isn't known to the
	// reader (S3 PutObject), so the PutObject call will use a chunked upload.
	pipeReader, pipeWriter := io.Pipe()

	dataToUpload := []byte("This is some example chunked data to upload to S3.")
	key := "chunked-upload-example"

	// Start a goroutine to write data to the pipe
	go func() {
		// Propagate any write error to the reader side;
		// CloseWithError(nil) behaves like Close().
		_, err := pipeWriter.Write(dataToUpload)
		pipeWriter.CloseWithError(err)
	}()

	// Upload the data from the pipe to S3 using a chunked upload
	_, err := s3Client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: &bucketName,
		Key:    &key,
		Body:   pipeReader,
		ChecksumAlgorithm: types.ChecksumAlgorithmSha256,
	})

	fmt.Printf("Uploaded chunked data to S3 bucket %s with key %s\n", bucketName, key)
	fmt.Printf("Error: %v\n", err)
	if err != nil {
		panic(err)
	}
}


func demonstrateFixedLengthUpload(ctx context.Context, s3Client *s3.Client, bucketName string) {
	// Create a fixed-length byte slice to upload
	dataToUpload := []byte("This is some example fixed-length data to upload to S3.")
	key := "fixed-length-upload-example"

	// Using a reader-seeker ensures that the data will be uploaded as fixed length, with the
	// content length set to the size of the byte slice.
	var readerSeeker io.ReadSeeker = bytes.NewReader(dataToUpload)

	// Upload the data directly to S3
	_, err := s3Client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: &bucketName,
		Key:    &key,
		Body:   readerSeeker,
		ChecksumAlgorithm: types.ChecksumAlgorithmSha256,
	})

	fmt.Printf("Uploaded fixed-length data to S3 bucket %s with key %s\n", bucketName, key)
	fmt.Printf("Error: %v\n", err)
	if err != nil {
		panic(err)
	}
}
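As noted above, the same chunked upload succeeds when a CRC-family algorithm is requested, so until the fix lands a workaround sketch is to switch the checksum algorithm in the reproducer:

	// Workaround sketch: request a CRC-family trailing checksum instead
	// of SHA1/SHA256 (all three CRC variants succeed per this report).
	_, err := s3Client.PutObject(ctx, &s3.PutObjectInput{
		Bucket:            &bucketName,
		Key:               &key,
		Body:              pipeReader,
		ChecksumAlgorithm: types.ChecksumAlgorithmCrc32, // or Crc32c / Crc64nvme
	})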


Version-Release number of selected component (if applicable):
ceph version 19.2.1-217.el9cp

How reproducible:
always

Steps to Reproduce:
1. Build the Go reproducer above with aws-sdk-go-v2 and point it at an RGW endpoint.
2. Run it so that PutObject performs a chunked upload (unknown-length Body from an io.Pipe) with ChecksumAlgorithm set to SHA1 or SHA256.
3. Observe the 400 InvalidArgument response; rerun with a CRC-family algorithm to see the request succeed.

Actual results:
Chunked object upload with a trailing SHA1 or SHA256 checksum fails with a 400 InvalidArgument error, but with a CRC-family checksum the request succeeds.

Expected results:
Chunked object upload with a trailing SHA1 or SHA256 checksum should also succeed, as it does with CRC-family checksums.

Additional info:
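Once fixed, the stored trailing checksum can be read back to confirm it matches what the SDK sent. A verification sketch, assuming the reproducer's client and object names:

	out, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput{
		Bucket:       aws.String("go-bkt1"),
		Key:          aws.String("chunked-upload-example"),
		ChecksumMode: types.ChecksumModeEnabled,
	})
	if err == nil && out.ChecksumSHA256 != nil {
		// Should print the same base64 digest seen in the request trailer.
		fmt.Println("stored sha256:", *out.ChecksumSHA256)
	}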

