Description of problem: If "type: Directory" is not set on the hostPath volume mounts, the kubelet does not verify that the path exists before mounting it, which can cause timing issues if the directory has not yet been created on the host.
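For reference, a minimal sketch of the difference (volume name and path taken from the Multus daemonset shown in the verification below): with no type set, the kubelet performs no check on the host path, whereas type: Directory requires the directory to already exist before the pod can start.

      volumes:
      # Without the fix: no type set, so the volume is mounted even if the
      # directory has not been created on the host yet.
      - hostPath:
          path: /var/run/multus/cni/net.d
        name: multus-cni-dir

      # With the fix: type: Directory makes the kubelet require an existing
      # directory at this path before the pod can start.
      - hostPath:
          path: /var/run/multus/cni/net.d
          type: Directory
        name: multus-cni-dir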
Tested and verified in 4.11.0-0.nightly-2022-06-01-200905

[weliang@weliang multus_logs]$ oc get ds multus -o yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "1"
    kubernetes.io/description: |
      This daemon set launches the Multus networking component on each node.
    release.openshift.io/version: 4.11.0-0.nightly-2022-06-01-200905
  creationTimestamp: "2022-06-02T13:34:00Z"
  generation: 1
  labels:
    networkoperator.openshift.io/generates-operator-status: ""
  name: multus
  namespace: openshift-multus
  ownerReferences:
  - apiVersion: operator.openshift.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: Network
    name: cluster
    uid: 24135e86-9624-4be1-a4eb-a61fd750b8d4
  resourceVersion: "20264"
  uid: 37b2f806-186a-45e2-a29d-4f62cbefb3c9
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: multus
  template:
    metadata:
      annotations:
        target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
      creationTimestamp: null
      labels:
        app: multus
        component: network
        openshift.io/component: network
        type: infra
    spec:
      containers:
      - args:
        - |
          /entrypoint/cnibincopy.sh; exec /entrypoint.sh --multus-conf-file=auto --multus-autoconfig-dir=/host/var/run/multus/cni/net.d --multus-kubeconfig-file-host=/etc/kubernetes/cni/net.d/multus.d/multus.kubeconfig --readiness-indicator-file=/var/run/multus/cni/net.d/80-openshift-network.conf --cleanup-config-on-exit=true --namespace-isolation=true --multus-log-level=debug --multus-log-file=/var/run/multus/cni/net.d/multus.log --cni-version=0.3.1 --additional-bin-dir=/opt/multus/bin --skip-multus-binary-copy=true
        - "--global-namespaces=default,openshift-multus,openshift-sriov-network-operator"
        command:
        - /bin/bash
        - -ec
        - --
        env:
        - name: RHEL7_SOURCE_DIRECTORY
          value: /usr/src/multus-cni/rhel7/bin/
        - name: RHEL8_SOURCE_DIRECTORY
          value: /usr/src/multus-cni/rhel8/bin/
        - name: DEFAULT_SOURCE_DIRECTORY
          value: /usr/src/multus-cni/bin/
        - name: KUBERNETES_SERVICE_PORT
          value: "6443"
        - name: KUBERNETES_SERVICE_HOST
          value: api-int.weliang-621.qe.gcp.devcluster.openshift.com
        image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ed23c0645083bac0ed470116cf45462243db4a83ddf321cce32b2de95e02716
        imagePullPolicy: IfNotPresent
        name: kube-multus
        resources:
          requests:
            cpu: 10m
            memory: 65Mi
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /entrypoint
          name: cni-binary-copy
        - mountPath: /host/etc/os-release
          name: os-release
        - mountPath: /host/etc/cni/net.d
          name: system-cni-dir
        - mountPath: /host/var/run/multus/cni/net.d
          name: multus-cni-dir
        - mountPath: /host/opt/cni/bin
          name: cnibin
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: multus
      serviceAccountName: multus
      terminationGracePeriodSeconds: 10
      tolerations:
      - operator: Exists
      volumes:
      - hostPath:
          path: /etc/kubernetes/cni/net.d
          type: Directory
        name: system-cni-dir
      - hostPath:
          path: /var/run/multus/cni/net.d
          type: Directory
        name: multus-cni-dir
      - hostPath:
          path: /var/lib/cni/bin
          type: Directory
        name: cnibin
      - hostPath:
          path: /etc/os-release
          type: File
        name: os-release
      - configMap:
          defaultMode: 484
          name: cni-copy-resources
        name: cni-binary-copy
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 10%
    type: RollingUpdate
status:
  currentNumberScheduled: 6
  desiredNumberScheduled: 6
  numberAvailable: 6
  numberMisscheduled: 0
  numberReady: 6
  observedGeneration: 1
  updatedNumberScheduled: 6

[weliang@weliang multus_logs]$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-06-01-200905   True        False         111m    Cluster version is 4.11.0-0.nightly-2022-06-01-200905
[weliang@weliang multus_logs]$
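As a quicker spot check (an illustrative command, not part of the original verification steps), the hostPath type of each volume can be printed with jsonpath; the directory-backed volumes should report Directory and the os-release volume File:

$ oc -n openshift-multus get ds multus -o jsonpath='{range .spec.template.spec.volumes[*]}{.name}{"\t"}{.hostPath.type}{"\n"}{end}'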
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5069