Bug 2148214

Summary: [Tracker ACM] [RDR][CEPHFS] When the previous Primary cluster (C2) is powered on post-failover, workloads are not removed from that cluster
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation Reporter: Pratik Surve <prsurve>
Component: odf-dr Assignee: Benamar Mekhissi <bmekhiss>
odf-dr sub component: ramen QA Contact: Pratik Surve <prsurve>
Status: CLOSED CURRENTRELEASE Docs Contact:
Severity: urgent    
Priority: unspecified CC: bmekhiss, ebenahar, muagarwa, ocs-bugs, odf-bz-bot, srangana
Version: 4.12 Keywords: TestBlocker
Target Milestone: ---   
Target Release: ODF 4.12.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
: 2151310 Environment:
Last Closed: 2023-02-08 14:06:28 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 2151310    
Bug Blocks: 2155042    

Description Pratik Surve 2022-11-24 16:05:19 UTC
Description of problem (please be detailed as possible and provide log
snippets):

[RDR][CEPHFS] When the previous Primary cluster (C2) is powered on post-failover, workloads are not removed from that cluster


Version of all relevant components (if applicable):

OCP version:- 4.12.0-0.nightly-2022-11-16-003434
ODF version:- 4.12.0-114
CEPH version:- ceph version 16.2.10-72.el8cp (3311949c2d1edf5cabcc20ba0f35b4bfccbf021e) pacific (stable)
ACM version:- v2.7.0
SUBMARINER version:- 0.14.0-rc4
VOLSYNC version:- volsync-product.v0.6.0


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes

Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
2

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy an RDR cluster.
2. Run the CephFS workload.
3. Power off the cluster where the workloads are running.
4. After some time, perform a failover operation.
5. After 1-2 hours, power the cluster back on.
6. Check whether workloads are running on both clusters (see the verification sketch below).
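
Verification sketch for step 6 (the kube context names <hub-context>, <c1-context>, <c2-context> and the namespace <app-namespace> are placeholders, not values taken from this report):

# On the hub, check the failover state reported by Ramen's DRPlacementControl
oc --context <hub-context> get drpc -n <app-namespace> -o wide

# On the new primary (C1), the workload pods should be Running
oc --context <c1-context> get pods -n <app-namespace>

# On the recovered old primary (C2), no workload pods are expected after cleanup
oc --context <c2-context> get pods -n <app-namespace>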

Actual results:

# Current primary

oc get pods                                                                                                            
NAME                                  READY   STATUS    RESTARTS   AGE
dd-io-1-5857bfdcd9-h26fw              1/1     Running   0          4h29m
dd-io-2-bcd6d9f65-pnzv5               1/1     Running   0          4h29m
dd-io-3-5d6b4b84df-ffgl7              1/1     Running   0          4h29m
dd-io-4-6f6db89fbf-sx4q5              1/1     Running   0          4h29m
dd-io-5-7868bc6b5c-cld2f              1/1     Running   0          4h29m
dd-io-6-58c98598d5-57tjt              1/1     Running   0          4h29m
dd-io-7-694958ff97-skvqd              1/1     Running   0          4h29m
volsync-rsync-src-dd-io-pvc-1-fq8rw   0/1     Error     0          3m16s
volsync-rsync-src-dd-io-pvc-1-h6mwk   1/1     Running   0          75s
volsync-rsync-src-dd-io-pvc-2-n5fr4   1/1     Running   0          73s
volsync-rsync-src-dd-io-pvc-3-p4pwh   1/1     Running   0          64s
volsync-rsync-src-dd-io-pvc-4-qdbvf   1/1     Running   0          47s
volsync-rsync-src-dd-io-pvc-5-hk8xq   1/1     Running   0          73s
volsync-rsync-src-dd-io-pvc-6-7tv85   1/1     Running   0          73s
volsync-rsync-src-dd-io-pvc-7-5c5cp   1/1     Running   0          74s
volsync-rsync-src-dd-io-pvc-7-dwfwg   0/1     Error     0          3m1s


# Old Primary 

oc get pods 
NAME                                  READY   STATUS    RESTARTS   AGE
dd-io-1-5857bfdcd9-2rtzm              1/1     Running   1          24h
dd-io-2-bcd6d9f65-96762               1/1     Running   1          24h
dd-io-3-5d6b4b84df-w87w8              1/1     Running   1          24h
dd-io-4-6f6db89fbf-jdlbx              1/1     Running   1          24h
dd-io-5-7868bc6b5c-bzsfh              1/1     Running   1          24h
dd-io-6-58c98598d5-9897j              1/1     Running   1          24h
dd-io-7-694958ff97-vfv2t              1/1     Running   1          24h
volsync-rsync-dst-dd-io-pvc-1-5rcf5   1/1     Running   0          26m
volsync-rsync-dst-dd-io-pvc-2-2jjh9   1/1     Running   0          26m
volsync-rsync-dst-dd-io-pvc-3-7dtcl   1/1     Running   0          26m
volsync-rsync-dst-dd-io-pvc-4-sxz7h   1/1     Running   0          26m
volsync-rsync-dst-dd-io-pvc-5-cb5r9   1/1     Running   0          26m
volsync-rsync-dst-dd-io-pvc-6-mg56r   1/1     Running   0          26m
volsync-rsync-dst-dd-io-pvc-7-zp592   1/1     Running   0          26m


Expected results:
No workload pods should be running on the old primary cluster (C2).

Additional info:

This was the scenario:

1. The workload was initially running on C2.
2. Power off C2.
3. Wait for 5-10 minutes.
4. Fail over the workload to C1.
5. After 1-2 hours, power C2 back on.
6. Observe that the workload pods are still running on C2 (see the follow-up checks below).
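
Follow-up checks on C2 after it is powered back on (sketch only; <c2-context> and <app-namespace> are placeholders): the failed-over application resources and Ramen's VolumeReplicationGroup are expected to be cleaned up on this cluster, so the commands below should return no dd-io workload pods.

# Ramen's VolumeReplicationGroup for the application on C2
oc --context <c2-context> get volumereplicationgroups.ramendr.openshift.io -n <app-namespace>

# Application resources left behind on C2 (expected to be removed)
oc --context <c2-context> get deployments,pods -n <app-namespace>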