Bug 2264445 - [4.15 clone][ODF Hackathon] Regional DR cephfs based application failover show warning about subscription
Keywords:
Status: ASSIGNED
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.15
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Benamar Mekhissi
QA Contact: krishnaram Karthick
URL:
Whiteboard:
Depends On:
Blocks: 2257820 2246375
 
Reported: 2024-02-15 16:39 UTC by Raghavendra Talur
Modified: 2024-09-12 13:17 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
.Regional DR CephFS-based application failover shows a warning about the subscription

After the application is failed over or relocated, the hub subscriptions show errors stating, "Some resources failed to deploy. Use View status YAML link to view the details." This is because the application persistent volume claims (PVCs) that use CephFS as the backing storage provisioner, that are deployed using Red Hat Advanced Cluster Management for Kubernetes (RHACM) subscriptions, and that are DR protected are owned by the respective DR controllers.

Workaround: There is no workaround to rectify the errors in the subscription status. However, the subscription resources that failed to deploy can be checked to confirm that they are PVCs. This ensures that the other resources do not have problems. If the only resources in the subscription that fail to deploy are the DR-protected ones, the error can be ignored.
Clone Of:
Environment:
Last Closed:
Embargoed:


Attachments (Terms of Use)

Description Raghavendra Talur 2024-02-15 16:39:05 UTC
This bug was initially created as a copy of Bug #2257820

I am copying this bug because: 



Description of problem (please be as detailed as possible and provide log
snippets):

Used the sample busybox application from https://github.com/red-hat-storage/ocm-ramen-samples.
Failover and failback work as expected. We observed one minor warning, shown below:

Error: Some resources failed to deploy. Use View status YAML link to view the 
details.

apiVersion: apps.open-cluster-management.io/v1alpha1
kind: SubscriptionStatus
metadata:
  creationTimestamp: '2024-01-10T15:48:55Z'
  generation: 1
  labels:
    apps.open-cluster-management.io/cluster: primary
    apps.open-cluster-management.io/hosting-subscription: busybox-cephfs.busybox-cephfs-subscription-1
  managedFields:
    - apiVersion: apps.open-cluster-management.io/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:apps.open-cluster-management.io/cluster: {}
            f:apps.open-cluster-management.io/hosting-subscription: {}
        f:statuses:
          .: {}
          f:packages: {}
      manager: multicluster-operators-subscription
      operation: Update
      time: '2024-01-10T15:48:55Z'
  name: busybox-cephfs-subscription-1
  namespace: busybox-cephfs
  resourceVersion: '774893'
  uid: 3fa3d5b0-3518-4d9f-8ec9-6f16e8d4a442
statuses:
  packages:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      lastUpdateTime: '2024-01-10T15:48:55Z'
      message: Obj busybox-cephfs/busybox-pvc exists and owned by others, backoff
      name: busybox-pvc
      namespace: busybox-cephfs
      phase: Failed
    - apiVersion: apps/v1
      kind: Deployment
      lastUpdateTime: '2024-01-10T15:48:55Z'
      name: busybox
      namespace: busybox-cephfs
      phase: Deployed


Type: Subscription
API Version: apps.open-cluster-management.io/v1
Namespace: busybox-cephfs
Labels: app=busybox-cephfs,app.kubernetes.io/part-of=busybox-cephfs,apps.open-cluster-management.io/reconcile-rate=medium
Channel: ggithubcom-red-hat-storage-ocm-ramen-samples-ns/ggithubcom-red-hat-storage-ocm-ramen-samples
Placement Ref: kind=Placement,name=busybox-cephfs-placement-1
Git branch: release-4.14
Git path: busybox-odr-cephfs
Cluster deploy status
primary: Subscribed
Error: Some resources failed to deploy. Use View status YAML link to view the details.
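The workaround is to verify that the only packages in the "Failed" phase are DR-protected PVCs. A minimal Python sketch of that check, using the `statuses.packages` structure from the SubscriptionStatus above (the data is copied from this report; fetching it live, for example with `oc get subscriptionstatus busybox-cephfs-subscription-1 -n busybox-cephfs -o yaml`, is assumed):

```python
def non_pvc_failures(packages):
    """Return failed packages that are NOT PVCs (i.e. real problems)."""
    return [
        p for p in packages
        if p.get("phase") == "Failed" and p.get("kind") != "PersistentVolumeClaim"
    ]

# Mirrors the statuses.packages list shown in the SubscriptionStatus above.
packages = [
    {"apiVersion": "v1", "kind": "PersistentVolumeClaim",
     "name": "busybox-pvc", "namespace": "busybox-cephfs",
     "message": "Obj busybox-cephfs/busybox-pvc exists and owned by others, backoff",
     "phase": "Failed"},
    {"apiVersion": "apps/v1", "kind": "Deployment",
     "name": "busybox", "namespace": "busybox-cephfs",
     "phase": "Deployed"},
]

if non_pvc_failures(packages):
    print("real deployment failures:", non_pvc_failures(packages))
else:
    print("only DR-protected PVCs failed; the warning can be ignored")
```

For the data in this report the check finds no non-PVC failures, which is the case where the subscription warning is benign.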

Version of all relevant components (if applicable):
ODF 4.14.3-rhodf
ACM 2.9.1
ODF Multicluster Orchestrator 4.14.3


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
No

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?

Yes
Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Followed the instructions in the customer documentation to deploy a CephFS subscription-based application:
https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.14/html-single/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/index#subscription-based-apps_manage-rdr

2.
3.


Actual results:
A minor warning is shown.

Expected results:
No warning.

Additional info:

