
Bug 2139641

Summary: RGW cloud transition. When using an MCG bucket with an Azure backingstore, random objects don't get transitioned to the cloud provider
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: RGW
Version: 6.0
Target Release: 6.0
Hardware: All
OS: Linux
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Reporter: daniel parkes <dparkes>
Assignee: Soumya Koduri <skoduri>
QA Contact: Tejas <tchandra>
CC: akraj, cbodley, ceph-eng-bugs, cephqe-warriors, ekristov, kbader, kkeithle, mbenjamin, mkasturi, rmandyam, skoduri, vimishra
Fixed In Version: ceph-17.2.5-26.el9cp
Doc Type: No Doc Update
Type: Bug
Last Closed: 2023-03-20 18:58:58 UTC

Description daniel parkes 2022-11-03 07:40:11 UTC
Description of problem:

When using an MCG bucket with an Azure backingstore, random objects don't get transitioned to the cloud provider.

Typically, one object from a bucket that has an LC rule applied to transition it to the cloud does not get moved:

On-premise:

aws s3 --ca-bundle  /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw  --endpoint https://ceph-bos-2.makestoragegreatagain.com:8043 --region default ls  s3://transition/
2022-11-02 13:08:42       3847 transition1
2022-11-02 12:53:31          0 transition2
2022-11-02 12:53:31          0 transition3
2022-11-02 12:54:27          0 transition4
2022-11-02 12:53:31          0 transition5
2022-11-02 13:12:28          0 transition6
2022-11-02 13:10:15          0 transition7
2022-11-02 13:12:28          0 transition


The transition1 object remains in storage class STANDARD:


aws s3api --ca-bundle  /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://ceph-bos-2.makestoragegreatagain.com:8043 --region default  head-object --bucket transition --key transition1
{
    "AcceptRanges": "bytes",
    "LastModified": "2022-11-02T11:48:58+00:00",
    "ContentLength": 3847,
    "ETag": "\"46ecb42fd0def0e42f85922d62d06766\"",
    "ContentType": "binary/octet-stream",
    "Metadata": {}


The rest of the objects are transitioned to the cloud provider as expected:

aws s3api --ca-bundle  /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://ceph-bos-2.makestoragegreatagain.com:8043 --region default  head-object --bucket transition --key transition2
{
    "AcceptRanges": "bytes",
    "LastModified": "2022-11-02T11:53:31+00:00",
    "ContentLength": 0,
    "ETag": "\"46ecb42fd0def0e42f85922d62d06766\"",
    "ContentType": "binary/octet-stream",
    "Metadata": {},
    "StorageClass": "CLOUDTIER"


Version-Release number of selected component (if applicable):

ceph version 17.2.3-55.el9cp (e57fd6f8008c472ddf2115482308a726e8f4fc0b) quincy (stable)

How reproducible:

Random; usually the first object in the bucket is the one affected.

Steps to Reproduce:
1. Create an MCG/Azure backing store.
2. Configure an RGW storage class with the MCG credentials (a sketch follows this list).
3. Configure an LC transition rule for a bucket on-prem.
4. Observe that, randomly, an object (usually the first object in the bucket) doesn't get transitioned to MCG.
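
A minimal sketch of steps 2 and 3, for reference only: the storage-class name CLOUDTIER is taken from the head-object output above, while the endpoint, credentials and bucket names in angle brackets are placeholders, not values from this environment.

# Step 2 (sketch): define a cloud-s3 tier storage class pointing at the MCG endpoint.
radosgw-admin zonegroup placement add --rgw-zonegroup=default \
    --placement-id=default-placement \
    --storage-class=CLOUDTIER --tier-type=cloud-s3

radosgw-admin zonegroup placement modify --rgw-zonegroup=default \
    --placement-id=default-placement --storage-class=CLOUDTIER \
    --tier-config=endpoint=https://<mcg-endpoint>,access_key=<mcg-access-key>,secret=<mcg-secret-key>,target_path=<mcg-bucket>,retain_head_object=true

# Commit the change (in a single-site setup a gateway restart may be needed instead).
radosgw-admin period update --commit

# Step 3 (sketch): LC rule transitioning objects older than one day to the cloud-tier storage class.
cat > lc.json <<'EOF'
{
  "Rules": [
    {
      "ID": "transition-to-cloud",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [ { "Days": 1, "StorageClass": "CLOUDTIER" } ]
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration --bucket transition \
    --lifecycle-configuration file://lc.json \
    --profile rgw --endpoint https://<rgw-endpoint> --region default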

Actual results:

Random objects (usually the first object in the bucket) remain in storage class STANDARD on-prem and are never transitioned to the cloud provider.

Expected results:

All objects get transitioned
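
An illustrative way to verify this (same endpoint and CA options as the commands above): query the storage class of every key after the LC run and expect CLOUDTIER for all of them.

for key in transition1 transition2 transition3 transition4 transition5 transition6 transition7 transition; do
  echo -n "$key: "
  aws s3api --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw \
    --endpoint https://ceph-bos-2.makestoragegreatagain.com:8043 --region default \
    head-object --bucket transition --key "$key" --query StorageClass --output text
done
# A key that prints "None" is still in STANDARD (head-object omits StorageClass for STANDARD objects).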


Additional info:

Comment 1 RHEL Program Management 2022-11-03 07:40:25 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 6 Matt Benjamin (redhat) 2022-12-15 12:52:16 UTC
Per discussion w/ Scott, these changes are ready to go and can be included in rhcs-6.0.

Matt

Comment 21 errata-xmlrpc 2023-03-20 18:58:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360