Description of problem (please be as detailed as possible and provide log snippets):

The LVMO operand, topolvm-node, does not contain the packages required to mount ext4 volumes to pods.

Version of all relevant components (if applicable): All versions.

Is there any workaround available to the best of your knowledge? No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)? Complexity: 1

Is this issue reproducible? Yes, 100% of the time.

Steps to Reproduce:
1. After installing topolvm, either directly or via the LVM Operator, apply the manifest below.
2. `oc describe pod mypod` will report this event:

```
Warning  FailedMount  0s (x5 over 8s)  kubelet  MountVolume.SetUp failed for volume "pvc-382cb438-40c4-43f0-9c8c-58f9dd2cdb0e" : rpc error: code = Internal desc = mount failed: volume=427fb6ed-c0af-41b1-9d42-50bcfb571c67, error=format of disk "/dev/topolvm/427fb6ed-c0af-41b1-9d42-50bcfb571c67" failed: type:("ext4") target:("/var/lib/kubelet/pods/dc732718-d3be-4d8b-83f3-ae21d357f01c/volumes/kubernetes.io~csi/pvc-382cb438-40c4-43f0-9c8c-58f9dd2cdb0e/mount") options:("defaults") errcode:(executable file not found in $PATH) output:()
```

Manifest:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
allowVolumeExpansion: true
metadata:
  name: my-sc
provisioner: topolvm.cybozu.com
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: "Delete"
parameters:
  csi.storage.k8s.io/fstype: ext4
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mypvc1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-sc
  resources:
    requests:
      storage: 5Gi
---
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: dynamic
      image: quay.io/openshifttest/hello-openshift@sha256:b1aabe8c8272f750ce757b6c4263a2712796297511e0c6df79144ee188933623
      volumeMounts:
        - mountPath: /mnt/storage
          name: my-vol
      ports:
        - containerPort: 80
          name: "http-server"
  volumes:
    - name: my-vol
      persistentVolumeClaim:
        claimName: mypvc1
```

Can this issue reproduce from the UI? Yes.

Actual results: Pod fails to start due to a failed mount.

Expected results: Mount succeeds and the pod starts.
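The `executable file not found in $PATH` error indicates `mkfs.ext4` is missing from the topolvm-node container image. A quick way to confirm this directly is a diagnostic sketch like the following; note that the `openshift-storage` namespace, the `app=topolvm-node` label, and the `topolvm-node` container name are assumptions that may differ in your deployment:

```shell
# Locate a topolvm-node pod (namespace and label are assumptions; adjust as needed)
POD=$(oc -n openshift-storage get pods -l app=topolvm-node -o name | head -n 1)

# Check whether mkfs.ext4 is on the PATH inside the node container.
# On affected builds this prints nothing and exits non-zero, matching
# the errcode:(executable file not found in $PATH) seen in the event above.
oc -n openshift-storage exec "$POD" -c topolvm-node -- which mkfs.ext4
```

A fixed image should print the path to the binary (e.g. under `/usr/sbin/`) and exit 0.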
The Dockerfile has been updated. It should be fixed in the next nightly build.
Fixed in the current builds, moving to ON_QA.