Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 2394196

Summary: Image uploads to S3-compatible storage (RGW) fail due to boto3 checksum behavior changes
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Francesco Pantano <fpantano>
Component: RGW
Assignee: Matt Benjamin (redhat) <mbenjamin>
Status: CLOSED ERRATA
QA Contact: Hemanth Sai <hmaheswa>
Severity: high
Docs Contact: Rivka Pollack <rpollack>
Priority: unspecified
Version: 8.1
CC: akekane, ceph-eng-bugs, cephqe-warriors, cyril, gfidente, hmaheswa, rpollack, tserlin
Target Release: 9.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: ceph-20.1.0-26
Doc Type: No Doc Update
Last Closed: 2026-01-29 06:58:58 UTC
Type: Bug

Description Francesco Pantano 2025-09-09 18:23:22 UTC
Description of problem:

This bug tracks the upstream issue [1], in which Glance fails to upload an image to Ceph RGW.
Starting with boto3/botocore 1.36.0, the SDK enables new S3 "data integrity" protections by default (request checksum calculation and response checksum validation).
S3-compatible storage that does not implement these features rejects the new headers or fails validation, causing Glance uploads to fail (see also https://github.com/boto/boto3/issues/4392).

Workaround:

Set the following environment variables for the Glance API process so that checksums are computed and validated only when the S3 API requires them:

```
export AWS_REQUEST_CHECKSUM_CALCULATION=WHEN_REQUIRED
export AWS_RESPONSE_CHECKSUM_VALIDATION=WHEN_REQUIRED
```
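When editing the service environment is not practical, the same workaround can be applied from Python, provided the variables are set before boto3/botocore creates any S3 client. A minimal sketch (the environment variable names and values are those from the workaround above; the commented per-client alternative assumes botocore >= 1.36):

```python
import os

# Botocore reads these settings when a client is created; WHEN_REQUIRED
# disables the new default checksum behavior except for S3 operations
# that explicitly require a checksum.
os.environ["AWS_REQUEST_CHECKSUM_CALCULATION"] = "WHEN_REQUIRED"
os.environ["AWS_RESPONSE_CHECKSUM_VALIDATION"] = "WHEN_REQUIRED"

# With botocore >= 1.36 the same behavior can also be set per client:
#   from botocore.config import Config
#   cfg = Config(request_checksum_calculation="when_required",
#                response_checksum_validation="when_required")
#   s3 = boto3.client("s3", config=cfg)
```

Setting the variables process-wide mirrors the shell workaround exactly; the per-client form is useful when only the RGW-backed client should opt out.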

This looks like a known Ceph issue [2][3], recently fixed upstream by [4][5].

However, per [3], backports are still needed for both Squid and Reef. The purpose of this bugzilla is to track the downstream backports to RHCS 8 and RHCS 7, as deployed in combination with RHOSP 17.1 and RHOSO 18.


[1] https://bugs.launchpad.net/glance/+bug/2121144/
[2] https://tracker.ceph.com/issues/70614
[3] https://tracker.ceph.com/issues/70040
[4] https://github.com/ceph/ceph/pull/61878
[5] https://github.com/ceph/ceph/commit/ee03b5054147afcf6f174efe71e976394ede7715

Comment 1 Storage PM bot 2025-09-09 18:23:33 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 9 errata-xmlrpc 2026-01-29 06:58:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 9.0 Security and Enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2026:1536