Bug 2130654
| Summary: | Downstream TopoLVM cannot mount ext4 volume due to missing e2fsprogs package | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Jon Cope <jcope> |
| Component: | build | Assignee: | Tamil <tmuthami> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Aviad Polak <apolak> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.9 | CC: | apolak, dkhandel, muagarwa, nigoyal, ocs-bugs, odf-bz-bot, rsinghal, tmuthami |
| Target Milestone: | --- | | |
| Target Release: | ODF 4.12.0 | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-02-08 14:06:28 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
The Dockerfile is updated; it should be fixed in the next nightly build. Fixed in the current builds, moving to ON_QA.
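For context, the fix implied by the summary is to install e2fsprogs (which provides `mkfs.ext4`) into the topolvm-node image. The following is a hypothetical sketch of that kind of Dockerfile change; the actual downstream Dockerfile is not shown in this bug, and the base image and package manager here are assumptions:

```
# Hypothetical illustration only: install e2fsprogs so that mkfs.ext4 is
# available on $PATH inside the topolvm-node container.
FROM registry.access.redhat.com/ubi8/ubi-minimal
RUN microdnf install -y e2fsprogs && microdnf clean all
```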
Description of problem (please be detailed as possible and provide log snippets):

The lvmo operand, topolvm-node, does not contain the packages required to mount ext4 volumes to pods.

Version of all relevant components (if applicable): All versions.

Is there any workaround available to the best of your knowledge? No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)? Complexity: 1

Is this issue reproducible? Yes, 100% of the time.

Steps to Reproduce:

1. After installing topolvm, either directly or via the LVM-Operator, apply the following manifest:

```
kind: StorageClass
apiVersion: storage.k8s.io/v1
allowVolumeExpansion: true
metadata:
  name: my-sc
provisioner: topolvm.cybozu.com
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: "Delete"
parameters:
  csi.storage.k8s.io/fstype: ext4
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mypvc1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-sc
  resources:
    requests:
      storage: 5Gi
---
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: dynamic
      image: quay.io/openshifttest/hello-openshift@sha256:b1aabe8c8272f750ce757b6c4263a2712796297511e0c6df79144ee188933623
      volumeMounts:
        - mountPath: /mnt/storage
          name: my-vol
      ports:
        - containerPort: 80
          name: "http-server"
  volumes:
    - name: my-vol
      persistentVolumeClaim:
        claimName: mypvc1
```

2. `oc describe pod mypod` will report this event:

```
Warning  FailedMount  0s (x5 over 8s)  kubelet  MountVolume.SetUp failed for volume "pvc-382cb438-40c4-43f0-9c8c-58f9dd2cdb0e" : rpc error: code = Internal desc = mount failed: volume=427fb6ed-c0af-41b1-9d42-50bcfb571c67, error=format of disk "/dev/topolvm/427fb6ed-c0af-41b1-9d42-50bcfb571c67" failed: type:("ext4") target:("/var/lib/kubelet/pods/dc732718-d3be-4d8b-83f3-ae21d357f01c/volumes/kubernetes.io~csi/pvc-382cb438-40c4-43f0-9c8c-58f9dd2cdb0e/mount") options:("defaults") errcode:(executable file not found in $PATH) output:()
```

Can this issue reproduce from the UI? Yes.

Actual results: Pod fails to start due to a failed mount.

Expected results: Mount succeeds and the pod starts.
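The `errcode:(executable file not found in $PATH)` in the event above arises because the driver shells out to `mkfs.ext4` to format the volume, and that binary is absent from the image. A small local simulation of the same failure mode (hypothetical illustration; no cluster required):

```
# Invoke mkfs.ext4 with an empty PATH to mimic a container image that
# lacks e2fsprogs: the shell cannot locate the executable, just as the
# topolvm-node container cannot.
if ! PATH="" mkfs.ext4 --version 2>/dev/null; then
  echo "executable file not found in PATH"
fi
```

The real fix is to ship e2fsprogs in the image rather than to alter `PATH`; the simulation only demonstrates why formatting fails with that specific error.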