Created attachment 1894593 [details]
Must-gather

Description of problem (please be detailed as possible and provide log snippets):

When restoring a snapshot and connecting a pod, the files that were on the original PVC do not exist. Before quay.io/rhceph-dev/ocs-registry:4.11.0-105 there was no problem.

It's just a guess, but it looks like the restored PVC gets formatted by LVM when attached to a pod:

2022-07-05T02:40:30.295507975+00:00 stderr F {"level":"info","ts":1656988830.295328,"logger":"driver.node","msg":"NodePublishVolume called","volume_id":"fde00554-8511-485c-ae60-0fda11f5a478","publish_context":null,"target_path":"/var/lib/kubelet/pods/df9dc0a1-24f7-469d-86e4-595dceeb2c5e/volumes/kubernetes.io~csi/pvc-4a78497c-5d88-4d52-996f-1f0b64b8eb5f/mount","volume_capability":"mount:{fs_type:\"xfs\"} access_mode:{mode:SINGLE_NODE_WRITER}","read_only":false,"num_secrets":0,"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"pod-test-rbd-58daed01da384c78b1f1d409a6e","csi.storage.k8s.io/pod.namespace":"namespace-test-0bf98fd45bad4a96a6e8bd841","csi.storage.k8s.io/pod.uid":"df9dc0a1-24f7-469d-86e4-595dceeb2c5e","csi.storage.k8s.io/serviceAccount.name":"default","storage.kubernetes.io/csiProvisionerIdentity":"1656988688169-8081-topolvm.cybozu.com"}}

2022-07-05T02:40:30.495749690+00:00 stderr F I0705 02:40:30.495695 1587187 mount_linux.go:449] Disk "/dev/topolvm/fde00554-8511-485c-ae60-0fda11f5a478" appears to be unformatted, attempting to format as type: "xfs" with options: [-f /dev/topolvm/fde00554-8511-485c-ae60-0fda11f5a478]

2022-07-05T02:40:30.735811776+00:00 stderr F I0705 02:40:30.735758 1587187 mount_linux.go:459] Disk successfully formatted (mkfs): xfs - /dev/topolvm/fde00554-8511-485c-ae60-0fda11f5a478 /var/lib/kubelet/pods/df9dc0a1-24f7-469d-86e4-595dceeb2c5e/volumes/kubernetes.io~csi/pvc-4a78497c-5d88-4d52-996f-1f0b64b8eb5f/mount

2022-07-05T02:40:30.749794492+00:00 stderr F {"level":"info","ts":1656988830.749734,"logger":"driver.node","msg":"NodePublishVolume(fs) succeeded","volume_id":"fde00554-8511-485c-ae60-0fda11f5a478","target_path":"/var/lib/kubelet/pods/df9dc0a1-24f7-469d-86e4-595dceeb2c5e/volumes/kubernetes.io~csi/pvc-4a78497c-5d88-4d52-996f-1f0b64b8eb5f/mount","fstype":"xfs"}

Version of all relevant components (if applicable):
quay.io/rhceph-dev/ocs-registry:4.11.0-105

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Can't restore snapshots.

Is there any workaround available to the best of your knowledge?
NA

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Can this issue be reproduced?
Yes

Can this issue reproduce from the UI?
Yes

If this is a regression, please provide more details to justify this:
Yes. Before quay.io/rhceph-dev/ocs-registry:4.11.0-105 the restored PVC kept its content.

Steps to Reproduce:
1. Create an LVMCluster.
2. Create a PVC and attach a pod.
3. Run IO.
4. Create a snapshot from the PVC.
5. Restore a PVC from the snapshot (a sketch of steps 4-5 as manifests follows below).
6. Attach a pod and check the content.
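For steps 4-5, a minimal sketch of the snapshot and restore manifests, assuming the LVMCluster created a storage class and volume snapshot class both named odf-lvm-vg1 (all names and the size below are illustrative, not taken from the must-gather):

# All names here are hypothetical; substitute the storage class and
# volume snapshot class created by your LVMCluster.
cat <<'EOF' | oc apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pvc-snapshot
spec:
  volumeSnapshotClassName: odf-lvm-vg1   # assumed snapshot class name
  source:
    persistentVolumeClaimName: source-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  storageClassName: odf-lvm-vg1          # assumed storage class name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                       # must be >= the source PVC size
  dataSource:
    name: pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF

Attaching a pod to restored-pvc and listing the mount path is enough to hit the bug: the files written in step 3 are gone.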
Actual results:
There is no content on the restored PVC.

Expected results:
Files that were on the origin PVC should exist.

Additional info:
Looks like the volume gets formatted when attached to the pod; see the NodePublishVolume / mount_linux.go snippet in the description above.
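The mount_linux.go lines come from kubelet's format-and-mount path: before mounting, it probes the device for an existing filesystem signature and runs mkfs only when the probe comes back empty, so the restored LV is apparently presenting no xfs signature at publish time. A hedged way to check this from the node before attaching the pod (device path taken from the log above; lsblk/blkid are standard util-linux tools, not part of the product):

# An empty FSTYPE column here means the next NodePublishVolume will mkfs the device:
lsblk -f /dev/topolvm/fde00554-8511-485c-ae60-0fda11f5a478

# Probe the superblock directly; a correctly restored snapshot should report TYPE=xfs:
blkid -p -s TYPE -s PTTYPE -o export /dev/topolvm/fde00554-8511-485c-ae60-0fda11f5a478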
Merged into the release-4.11 branch: https://github.com/red-hat-storage/topolvm/pull/14
Reopening the BZ. The fix is not in TopoLVM itself; the LogicalVolume CRD has not been updated with the latest changes for the snapshot and clone CRs.
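If the missing piece is the CRD rather than the TopoLVM image, one rough way to verify a candidate build is to check whether the installed LogicalVolume CRD already carries the snapshot/clone source fields (the grep pattern is a guess at the field naming; compare the output against the CRD shipped with the patched release):

# The CRD name follows the topolvm.cybozu.com group visible in the provisioner identity above:
oc get crd logicalvolumes.topolvm.cybozu.com -o yaml | grep -i -B1 -A3 source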
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:6156