Description of problem:
Logs from the csi-provisioner container in the hostpath-provisioner-csi pod:

E0222 17:52:54.088950       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: unable to parse requirement: values[0][csi.storage.k8s.io/managed-by]: Invalid value: "external-provisioner-TRIMMED": must be no more than 63 characters

Version-Release number of selected component (if applicable):
OCP-4.10.0
CNV-4.10.0-686

How reproducible:
The issue reproduces with both the hpp-csi-basic and the hpp-csi-pvc-block storage classes, using WaitForFirstConsumer (WFFC) bindingMode on the storage class. Note that when I change it to Immediate it works.

Steps to Reproduce:
1. Deploy HPP-CSI on a cluster whose nodes have long FQDNs.
2. Try to bind a PVC.

Actual results:
The PVC is stuck in Pending state.

Expected results:
The PVC should bind to a PV.

Additional info:

apiVersion: v1
items:
- apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
  kind: HostPathProvisioner
  metadata:
    creationTimestamp: "2022-02-22T17:52:08Z"
    finalizers:
    - finalizer.delete.hostpath-provisioner
    generation: 12
    name: hostpath-provisioner
    resourceVersion: "28215"
    uid: 70dcf437-0b0b-4626-b6ae-fd405d397865
  spec:
    imagePullPolicy: IfNotPresent
    storagePools:
    - name: hpp-csi-local-basic
      path: /var/hpp-csi-local-basic
    - name: hpp-csi-pvc-block
      path: /var/hpp-csi-pvc-block
      pvcTemplate:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: local-block-hpp
        volumeMode: Block
    workload:
      nodeSelector:
        kubernetes.io/os: linux
  status:
    conditions:
    - lastHeartbeatTime: "2022-02-22T17:52:31Z"
      lastTransitionTime: "2022-02-22T17:52:31Z"
      message: Application Available
      reason: Complete
      status: "True"
      type: Available
    - lastHeartbeatTime: "2022-02-22T17:52:31Z"
      lastTransitionTime: "2022-02-22T17:52:31Z"
      status: "False"
      type: Progressing
    - lastHeartbeatTime: "2022-02-22T17:52:31Z"
      lastTransitionTime: "2022-02-22T17:52:09Z"
      status: "False"
      type: Degraded
    observedVersion: v4.10.0
    operatorVersion: v4.10.0
    storagePoolStatuses:
    - name: hpp-csi-local-basic
      phase: Ready
    - claimStatuses:
      - name: hpp-pool-5e8d1dd5
        status:
          accessModes:
          - ReadWriteOnce
          capacity:
            storage: 446Gi
          phase: Bound
      currentReady: 1
      desiredReady: 1
      name: hpp-csi-pvc-block
      phase: Ready
    targetVersion: v4.10.0
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  creationTimestamp: "2022-02-22T17:52:08Z"
  name: hostpath-csi-basic
  resourceVersion: "27230"
  uid: 54847180-2570-4a32-9a95-4d4d208c5d69
parameters:
  storagePool: hpp-csi-local-basic
provisioner: kubevirt.io.hostpath-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2022-02-22T19:02:09Z"
  name: hostpath-csi-pvc-block
  resourceVersion: "178561"
  uid: e3dead94-111d-469a-a8af-c8796eaf3d54
parameters:
  storagePool: hpp-csi-pvc-block
provisioner: kubevirt.io.hostpath-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
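Since the reporter notes that Immediate binding avoids the error, one possible interim workaround (a sketch based on the hostpath-csi-basic class above, not a confirmed or recommended fix) is to recreate the StorageClass with volumeBindingMode: Immediate. Note that volumeBindingMode is immutable, so the class has to be deleted and recreated, and Immediate binding gives up the topology-aware scheduling that WFFC provides:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi-basic
parameters:
  storagePool: hpp-csi-local-basic
provisioner: kubevirt.io.hostpath-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
```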
The pertinent message is definitely this:

E0222 17:52:54.088950       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: unable to parse requirement: values[0][csi.storage.k8s.io/managed-by]: Invalid value: "external-provisioner-TRIMMED": must be no more than 63 characters

Basically it is saying it is unable to watch and manage the CSIStorageCapacity objects because the value of the csi.storage.k8s.io/managed-by label is too long: Kubernetes caps label values at 63 characters, and the value here is built from the node name, so nodes with long FQDNs exceed the limit. This object is created by the CSI external-provisioner, which is one of the sidecars used by the HPP CSI driver. I have opened an issue on the external-provisioner [0].

[0] https://github.com/kubernetes-csi/external-provisioner/issues/707
Is it too late to have a release note for this?
Opened a new doc bug for this: bug 2067190.
The external-provisioner PR is merged. We just need to get that into an upstream release and then into a downstream release.
So we have to wait for upstream to release a version with the fix in it. The current latest release is 3.1.0, which does NOT contain the fix. We can't really target a release until we have the upstream release.
Alexander, could you please reply once there is an upstream release, so we can target this bug?
I will let you know when a new release happens; it likely won't happen until upstream Kubernetes 1.25 gets released.
As of https://gitlab.cee.redhat.com/cpaas-midstream/openshift-virtualization/hco-bundle-registry/-/commit/e97186308828c20bb44b3e3f122dda5614f3d1e3 this is now downstream. After some digging (the CNV version explorer being down), it seems the CNV v4.11.0-596 bundle contains this commit. This means it will make its way into CNV 4.11.0. Adjusting the target release.
Verified on CNV v4.11.0-601
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Virtualization 4.11.0 Images security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:6526