Description of problem:
The pod complains that the vSphere volume VMDK does not exist, even though the volume is already mounted to the node.

How reproducible:
When the pod is restarted or evicted and is scheduled onto the same node, it fails with the error "vmdk not found". If the pod is scheduled (or forced to schedule) onto a node other than the original one, it starts properly. For testing purposes, we force deleted a pod that was running on the node <node-name>; it was rescheduled onto the same node and we observed the same "VMDK not found" error. We also double-checked that the VMDK exists in the correct datastore and that the same datastore is attached to all the ESXi hosts in the cluster.

Additional info:
1. This happens specifically with the Kafka cluster.
2. All the VM nodes have access to the datastore "Datastore-name", and we can list the volume "xxx-6gpp5-dynamic-pvc-xxx.vmdk" in the kubevols folder.

I will add the relevant details and the necessary data in the private notes.
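The following is a minimal reproduction sketch (not from the original report) using the Kubernetes Python client: it records the node a pod is running on, force deletes the pod, and reports whether the replacement lands on the same node, which is the case where the "vmdk not found" error was observed. The namespace, label selector, and pod name ("kafka", "app=kafka", "kafka-0") are hypothetical placeholders and would need to match the actual Kafka cluster.

```python
import time
from kubernetes import client, config

NAMESPACE = "kafka"           # hypothetical namespace for the Kafka cluster
LABEL_SELECTOR = "app=kafka"  # hypothetical label selector for the affected pods


def reschedule_and_compare(pod_name: str) -> None:
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Record the node the pod is currently running on.
    old_node = v1.read_namespaced_pod(pod_name, NAMESPACE).spec.node_name

    # Force delete the pod so the controller (e.g. a StatefulSet) recreates it.
    v1.delete_namespaced_pod(pod_name, NAMESPACE, grace_period_seconds=0)

    # Wait for the replacement pod and compare its node to the old one.
    for _ in range(60):
        time.sleep(5)
        pods = v1.list_namespaced_pod(NAMESPACE, label_selector=LABEL_SELECTOR).items
        replacement = next((p for p in pods if p.metadata.name == pod_name), None)
        if replacement and replacement.spec.node_name:
            same = "same" if replacement.spec.node_name == old_node else "different"
            print(f"replacement pod scheduled on {same} node: {replacement.spec.node_name}")
            # Per the report, the mount error is expected only when the node is the same;
            # inspect the pod's events for "vmdk ... not found" in that case.
            return
    print("replacement pod did not appear within the timeout")


if __name__ == "__main__":
    reschedule_and_compare("kafka-0")  # hypothetical pod name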
Marked as verified according to https://bugzilla.redhat.com/show_bug.cgi?id=2044718#c8
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.10.9 bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2022:1241