Bug 2148214 - [Tracker ACM] [RDR][CEPHFS] When the previous Primary cluster (C2) is powered on after the failover operation, workloads are not removed from that cluster
Summary: [Tracker ACM] [RDR][CEPHFS] When the previous Primary cluster(C2) is powered ...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.12
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.12.0
Assignee: Benamar Mekhissi
QA Contact: Pratik Surve
URL:
Whiteboard:
Depends On: 2151310
Blocks: 2155042
 
Reported: 2022-11-24 16:05 UTC by Pratik Surve
Modified: 2023-08-09 17:00 UTC
CC List: 6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
: 2151310 (view as bug list)
Environment:
Last Closed: 2023-02-08 14:06:28 UTC
Embargoed:



Description Pratik Surve 2022-11-24 16:05:19 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

[RDR][CEPHFS] When the previous Primary cluster (C2) is powered on after the failover operation, workloads are not removed from that cluster


Version of all relevant components (if applicable):

OCP version:- 4.12.0-0.nightly-2022-11-16-003434
ODF version:- 4.12.0-114
CEPH version:- ceph version 16.2.10-72.el8cp (3311949c2d1edf5cabcc20ba0f35b4bfccbf021e) pacific (stable)
ACM version:- v2.7.0
SUBMARINER version:- 0.14.0-rc4
VOLSYNC version:- volsync-product.v0.6.0


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes

Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
2

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy an RDR cluster
2. Run the CEPHFS workload
3. Power off the cluster where the workloads are running
4. After some time, perform the failover operation (a sketch of the failover trigger follows this list)
5. After 1-2 hours, power on the cluster
6. Check whether workloads are running on both clusters
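
For reference, the failover in step 4 is normally driven from the hub cluster by setting the action on the workload's DRPlacementControl. A minimal sketch, assuming a DRPC named dd-io-drpc in a namespace named dd-io (both placeholder names for this setup) and C1 as the failover target cluster as registered in ACM:

# On the hub cluster (adjust names to the actual DRPC, namespace, and cluster):
oc get drpc -A
oc patch drpc dd-io-drpc -n dd-io --type merge \
  -p '{"spec":{"action":"Failover","failoverCluster":"c1"}}'
oc get drpc dd-io-drpc -n dd-io -o wide -w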

Actual results:

# Current primary

oc get pods                                                                                                            
NAME                                  READY   STATUS    RESTARTS   AGE
dd-io-1-5857bfdcd9-h26fw              1/1     Running   0          4h29m
dd-io-2-bcd6d9f65-pnzv5               1/1     Running   0          4h29m
dd-io-3-5d6b4b84df-ffgl7              1/1     Running   0          4h29m
dd-io-4-6f6db89fbf-sx4q5              1/1     Running   0          4h29m
dd-io-5-7868bc6b5c-cld2f              1/1     Running   0          4h29m
dd-io-6-58c98598d5-57tjt              1/1     Running   0          4h29m
dd-io-7-694958ff97-skvqd              1/1     Running   0          4h29m
volsync-rsync-src-dd-io-pvc-1-fq8rw   0/1     Error     0          3m16s
volsync-rsync-src-dd-io-pvc-1-h6mwk   1/1     Running   0          75s
volsync-rsync-src-dd-io-pvc-2-n5fr4   1/1     Running   0          73s
volsync-rsync-src-dd-io-pvc-3-p4pwh   1/1     Running   0          64s
volsync-rsync-src-dd-io-pvc-4-qdbvf   1/1     Running   0          47s
volsync-rsync-src-dd-io-pvc-5-hk8xq   1/1     Running   0          73s
volsync-rsync-src-dd-io-pvc-6-7tv85   1/1     Running   0          73s
volsync-rsync-src-dd-io-pvc-7-5c5cp   1/1     Running   0          74s
volsync-rsync-src-dd-io-pvc-7-dwfwg   0/1     Error     0          3m1s


# Old Primary 

oc get pods 
NAME                                  READY   STATUS    RESTARTS   AGE
dd-io-1-5857bfdcd9-2rtzm              1/1     Running   1          24h
dd-io-2-bcd6d9f65-96762               1/1     Running   1          24h
dd-io-3-5d6b4b84df-w87w8              1/1     Running   1          24h
dd-io-4-6f6db89fbf-jdlbx              1/1     Running   1          24h
dd-io-5-7868bc6b5c-bzsfh              1/1     Running   1          24h
dd-io-6-58c98598d5-9897j              1/1     Running   1          24h
dd-io-7-694958ff97-vfv2t              1/1     Running   1          24h
volsync-rsync-dst-dd-io-pvc-1-5rcf5   1/1     Running   0          26m
volsync-rsync-dst-dd-io-pvc-2-2jjh9   1/1     Running   0          26m
volsync-rsync-dst-dd-io-pvc-3-7dtcl   1/1     Running   0          26m
volsync-rsync-dst-dd-io-pvc-4-sxz7h   1/1     Running   0          26m
volsync-rsync-dst-dd-io-pvc-5-cb5r9   1/1     Running   0          26m
volsync-rsync-dst-dd-io-pvc-6-mg56r   1/1     Running   0          26m
volsync-rsync-dst-dd-io-pvc-7-zp592   1/1     Running   0          26m


Expected results:
No workload pods should be running on the old primary cluster (C2)

Additional info:

This was the scenario:

1. The workload was running on C2 initially
2. Power off C2
3. Wait for 5-10 minutes
4. Fail over the workload to C1
5. After 1-2 hours, power on C2
6. Workloads are still seen running on C2 (see the verification sketch after this list)
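
For completeness, once C2 is powered back on, the expectation is that the DR orchestration cleans up the stale workload there. A quick verification sketch, assuming the workload namespace is dd-io (placeholder name):

# On C2 after power-on: the dd-io-* application pods should be gone;
# the VolumeReplicationGroup, if present, should no longer be primary on C2.
oc get pods -n dd-io
oc get volumereplicationgroup -n dd-io -o yaml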

