openshift v3.7.0-0.184.0
kubernetes v1.7.6+a08f5eeb62

Getting this in the node logs every 3m while running any pod with a PVC volume bound to a dynamically provisioned glusterfs PV:

E1101 23:25:57.142060 16240 glusterfs.go:148] glusterfs: failed to get endpoints pvc-a508e939-bda1-11e7-95d2-fa163ea482fd[an empty namespace may not be set when a resource name is provided]
E1101 23:25:57.142094 16240 reconciler.go:367] Could not construct volume information: MountVolume.NewMounter failed for volume "kubernetes.io/glusterfs/bf968d61-bda1-11e7-95d2-fa163ea482fd-pvc-a508e939-bda1-11e7-95d2-fa163ea482fd" (spec.Name: "pvc-a508e939-bda1-11e7-95d2-fa163ea482fd") pod "bf968d61-bda1-11e7-95d2-fa163ea482fd" (UID: "bf968d61-bda1-11e7-95d2-fa163ea482fd") with: an empty namespace may not be set when a resource name is provided

Other occurrences:
https://github.com/kubernetes/kubernetes/issues/49376
https://github.com/kubernetes/kubernetes/issues/37625

There are two bugs in the glusterfs volume driver, afaict.

https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/pkg/volume/glusterfs/glusterfs.go#L204

First, ConstructVolumeSpec() sets the EndpointsName to the volumeName. This is not correct: the volumeName is something like "pvc-a508e939-bda1-11e7-95d2-fa163ea482fd", while the endpoints name should be something like "glusterfs-dynamic-mysql".

https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/pkg/volume/glusterfs/glusterfs.go#L803-L807

Second, the pod parameter passed to NewMounter() from reconstructVolume() is not a complete pod: only ObjectMeta.UID is set, and the pod namespace is not set.
https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go#L430-L434

That incomplete pod, in turn, is the direct cause of the error here:

https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/pkg/volume/glusterfs/glusterfs.go#L148

Fortunately, the initial mount when the pod first starts uses a different code path and is successful. Thus the bugs have no functional effect under normal conditions: just error-level log spam every 3m for each pod that uses a gluster-backed PV.
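The failure mode described above can be illustrated with a small, self-contained sketch. The types and the getEndpoints helper below are simplified stand-ins, not the real k8s.io API types or client calls; getEndpoints only mimics the client-side validation that produces the message seen in the logs:

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-in for v1.GlusterfsVolumeSource (assumption: names
// abbreviated from the vendored Kubernetes types).
type GlusterfsVolumeSource struct {
	EndpointsName string
	Path          string
}

// Simplified stand-in for v1.Volume.
type Volume struct {
	Name      string
	Glusterfs *GlusterfsVolumeSource
}

// constructVolumeSpec mirrors bug 1: the reconstruction path reuses the PV
// name ("pvc-<uid>") as the endpoints name, instead of the
// "glusterfs-dynamic-<claim>" endpoints object the provisioner created.
func constructVolumeSpec(volumeName string) *Volume {
	return &Volume{
		Name: volumeName,
		Glusterfs: &GlusterfsVolumeSource{
			EndpointsName: volumeName, // bug: should be e.g. "glusterfs-dynamic-mysql"
			Path:          volumeName,
		},
	}
}

// getEndpoints mimics the client-side validation behind the log message: a
// resource name without a namespace is rejected.
func getEndpoints(namespace, name string) error {
	if namespace == "" && name != "" {
		return errors.New("an empty namespace may not be set when a resource name is provided")
	}
	return nil
}

func main() {
	spec := constructVolumeSpec("pvc-a508e939-bda1-11e7-95d2-fa163ea482fd")

	// Bug 2: reconstructVolume() builds a pod with only ObjectMeta.UID set,
	// so the namespace handed to the endpoints lookup is always empty.
	podNamespace := ""

	if err := getEndpoints(podNamespace, spec.Glusterfs.EndpointsName); err != nil {
		fmt.Printf("glusterfs: failed to get endpoints %s[%v]\n", spec.Name, err)
	}
}
```

Even with bug 2 fixed (a pod carrying its real namespace), the lookup would still target the wrong endpoints name because of bug 1, so both need to be addressed.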
Refer https://bugzilla.redhat.com/show_bug.cgi?id=1546156
Created attachment 1407107 [details] ocp logs
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0639
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days