There is a corner case in DVM: when a workload uses a node preference configuration, its pods can fail to launch because they cannot mount a volume that resides in a different availability zone. The sequence is:
1. The DVM controller creates the rsync client pods on the destination cluster with `spec.nodeName: ""`. The empty `nodeName` means the kube-scheduler is free to place them on any available node in the destination cluster.
2. Assume the kube-scheduler selects node A in availability zone 1. Because the rsync pod is the first consumer of the PVC, the backing volume ends up provisioned (and pinned) in zone 1.
3. Later, when the workloads are migrated, the application pods may carry node selectors or other scheduling constraints that force them onto node B in availability zone 2.
4. Because the PVC's backing volume is pinned to zone 1, the application pod on node B cannot attach it: the scheduler reports a volume node affinity conflict and the pod never runs.
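The conflict described above can be sketched with a minimal pair of manifests (all names, images, and zone labels here are hypothetical, for illustration only): a PersistentVolume pinned to zone 1 by node affinity, and an application pod whose node selector forces it into zone 2.

```yaml
# Hypothetical PV as provisioned in availability zone 1 after the
# rsync client pod (the PVC's first consumer) ran on node A.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: migrated-pv              # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values: ["zone-1"]
---
# Application pod migrated later. Its node selector forces zone 2,
# so the scheduler cannot satisfy the PV's node affinity above and
# the pod stays Pending with a "volume node affinity conflict".
apiVersion: v1
kind: Pod
metadata:
  name: app-pod                  # hypothetical name
spec:
  nodeSelector:
    topology.kubernetes.io/zone: "zone-2"
  containers:
    - name: app
      image: example/app:latest  # hypothetical image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: migrated-pvc  # hypothetical PVC bound to migrated-pv
```

No single node can satisfy both the pod's `nodeSelector` (zone 2) and the PV's required node affinity (zone 1), so scheduling fails.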
We'd like to document this as a known issue.