Bug 2176045

Summary: Mirroring between clusters with MCG is not working
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Sabina Aledort <saledort>
Component: documentation
Assignee: Kusuma <kbg>
Status: ASSIGNED
QA Contact: Neha Berry <nberry>
Severity: high
Priority: high
Version: 4.12
CC: asriram, bradyjoh, kbg, lmauda, nbecker, odf-bz-bot
Target Milestone: ---
Target Release: ---
Flags: nbecker: needinfo? (saledort)
Hardware: Unspecified
OS: Unspecified
Type: Bug

Description Sabina Aledort 2023-03-07 09:11:39 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
Mirroring between clusters with MCG is not working.

Version of all relevant components (if applicable): odf-operator.v4.12.0

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?
Yes, the mirroring is needed for a partner use case.

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
2

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes

Steps to Reproduce:
1. Configure a BackingStore with an S3 storage endpoint from another cluster:
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  finalizers:
  - noobaa.io/finalizer
  labels:
    app: noobaa
  name: bs-staging01
  namespace: openshift-storage
spec:
  s3Compatible:
    endpoint: http://s3-openshift-storage.apps.staging01.esmb.bos2.lab
    secret:
      name: backingstore-staging-secret
      namespace: openshift-storage
    signatureVersion: v4
    targetBucket: oadp-bucket-1a0b629c-b9d4-451c-b073-012f6e701f5d
  type: s3-compatible

$ oc get backingstore -A
NAMESPACE           NAME                           TYPE            PHASE   AGE
openshift-storage   bs-staging                     s3-compatible   Ready   3h26m
openshift-storage   noobaa-default-backing-store   s3-compatible   Ready   3d17h
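For reference, the BackingStore above points at a Secret named backingstore-staging-secret. A sketch of what that Secret could look like is below — the credential values are placeholders, and the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY key names follow the usual NooBaa convention for S3-compatible backing store secrets; the actual secret is not shown in this report:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: backingstore-staging-secret
  namespace: openshift-storage
type: Opaque
stringData:
  # Placeholder values - use the S3 credentials of the remote (staging01) cluster
  AWS_ACCESS_KEY_ID: <access-key-for-remote-s3-endpoint>
  AWS_SECRET_ACCESS_KEY: <secret-key-for-remote-s3-endpoint>
```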

2. Configure a BucketClass with a mirroring policy:
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  labels:
    app: noobaa
  name: bucket-class-test
  namespace: openshift-storage
spec:
  placementPolicy:
    tiers:
    - backingStores:
      - noobaa-default-backing-store
      - bs-staging
      placement: Mirror

$ oc get bucketclass -A
NAMESPACE           NAME                          PLACEMENT                                                                                                                                   NAMESPACEPOLICY   QUOTA   PHASE   AGE
openshift-storage   bucket-class-test             {"tiers":[{"backingStores":["noobaa-default-backing-store","bs-staging"],"placement":"Mirror"}]}                                                                      Ready   169m
openshift-storage   noobaa-default-bucket-class   {"tiers":[{"backingStores":["noobaa-default-backing-store"]}]}      

3. Add the following lines to the ObjectBucketClaim:
spec:
  additionalConfig:
    bucketclass: bucket-class-test
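For context, a complete ObjectBucketClaim carrying those lines might look like the sketch below. The claim name and bucket name prefix are assumptions for illustration, not values taken from this report; openshift-storage.noobaa.io is the usual NooBaa object bucket storage class:

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: obc-mirror-test          # hypothetical claim name
  namespace: openshift-storage
spec:
  generateBucketName: mirror-test  # hypothetical bucket name prefix
  storageClassName: openshift-storage.noobaa.io
  additionalConfig:
    bucketclass: bucket-class-test   # the mirrored BucketClass from step 2
```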

Actual results:
The mirroring is not performed, and the NooBaa log shows:
Feb-27 11:22:08.536 [BGWorkers/37]    [L0] core.server.bg_services.mirror_writer:: no buckets with mirror policy. nothing to do

Expected results:
The data should be mirrored across both backing stores.

Additional info:
The configuration followed this doc: https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.12/html/managing_hybrid_and_multicloud_resources/mirroring-data-for-hybrid-and-multicloud-buckets

Comment 5 Brady Johnson 2023-03-08 11:58:35 UTC
Just to add some extra context: we have a telco partner waiting for information about this, and Sabina is blocked because it is not working.