Bug 1947487

Summary: Document known corner case with differing AZs and DVM
Product: Migration Toolkit for Containers
Component: Documentation
Version: 1.4.2
Target Release: 1.5.0
Status: CLOSED NEXTRELEASE
Severity: low
Priority: low
Hardware: Unspecified
OS: Unspecified
Reporter: Erik Nelson <ernelson>
Assignee: Avital Pinnick <apinnick>
QA Contact: Xin jiang <xjiang>
Docs Contact: Avital Pinnick <apinnick>
CC: ernelson, jmatthew
Type: Bug
Last Closed: 2021-06-03 11:16:13 UTC

Description Erik Nelson 2021-04-08 15:17:46 UTC
There is a corner case that can occur with DVM when a workload uses node scheduling configuration: the migrated workload can fail to launch because it is unable to mount a volume that resides in a different availability zone:

1. The DVM controller creates the rsync client pods on the destination cluster with spec.nodeName: "". The empty nodeName means that the kube-scheduler can place the pods on any available node in the destination cluster.

2. Assume that the kube-scheduler selects node A in availability zone 1.

3. Later, when the workloads are migrated, the application pods can have node selectors or other node scheduling configuration that forces them to be deployed on node B in availability zone 2.

4. Because the PVC was first used by the rsync pods running on node A, its volume resides in availability zone 1. When the application pod on node B tries to run, it cannot attach the volume and fails to start (see the sketch below).
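
To illustrate, here is a minimal sketch of the two conflicting objects. The names, zone values, and image are hypothetical; zonal storage (for example, AWS EBS) records its zone in the PV's nodeAffinity:

# PersistentVolume provisioned while the rsync client pod ran on node A
# in availability zone 1. Zonal storage pins the volume to that zone.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-1234                     # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - us-east-1a         # availability zone 1 (node A)
---
# Application pod migrated later. Its nodeSelector forces it onto node B
# in availability zone 2, where the volume above cannot be attached.
apiVersion: v1
kind: Pod
metadata:
  name: app                          # hypothetical name
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1b   # availability zone 2 (node B)
  containers:
    - name: app
      image: quay.io/example/app:latest        # hypothetical image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data              # claim bound to the PV above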

We'd like to document this as a known issue.
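
One way to confirm the mismatch on the destination cluster, using standard kubectl commands (the object and node names are placeholders):

# Show the zone the volume is pinned to:
kubectl get pv <pv-name> -o jsonpath='{.spec.nodeAffinity}'

# Show the zone label of the node the application pod is constrained to:
kubectl get node <node-b> -L topology.kubernetes.io/zone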

Comment 2 Xin jiang 2021-06-02 09:18:04 UTC
LGTM

Comment 3 Avital Pinnick 2021-06-03 11:16:13 UTC
Changes merged