Bug 2103818 - Restored snapshot doesn't have any content
Summary: Restored snapshot doesn't have any content
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: lvm-operator
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.11.0
Assignee: N Balachandran
QA Contact: Shay Rozen
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-07-05 02:55 UTC by Shay Rozen
Modified: 2023-08-09 16:46 UTC
CC List: 8 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-08-24 13:55:09 UTC
Embargoed:


Attachments
Must-gather (247.98 KB, application/gzip), 2022-07-05 02:55 UTC, Shay Rozen


Links
System ID | Private | Priority | Status | Summary | Last Updated
Github red-hat-storage lvm-operator pull 224 | 0 | None | open | Update the LogicalVolumes.topolvm.cybozu.com CRD. | 2022-07-05 14:19:26 UTC
Github red-hat-storage lvm-operator pull 225 | 0 | None | open | Bug 2103818: [release-4.11] Update the LogicalVolumes.topolvm.cybozu.com CRD. | 2022-07-05 15:23:08 UTC
Github topolvm topolvm pull 522 | 0 | None | Merged | mount: remove new UUID generation | 2022-07-05 14:45:06 UTC
Red Hat Product Errata RHSA-2022:6156 | 0 | None | None | None | 2022-08-24 13:55:50 UTC

Description Shay Rozen 2022-07-05 02:55:16 UTC
Created attachment 1894593 [details]
Must-gather


Description of problem (please be as detailed as possible and provide log
snippets):
When restoring a snapshot and connecting a pod, the files that were on the original PVC do not exist. Before quay.io/rhceph-dev/ocs-registry:4.11.0-105 there was no problem.
It's just a guess, but it looks like the restored PVC gets formatted by LVM when it is attached to a pod (a sketch of this format-on-mount path follows the log excerpt below):

2022-07-05T02:40:30.295507975+00:00 stderr F {"level":"info","ts":1656988830.295328,"logger":"driver.node","msg":"NodePublishVolume called","volume_id":"fde00554-8511-485c-ae60-0fda11f5a478","publish_context":null,"target_path":"/var/lib/kubelet/pods/df9dc0a1-24f7-469d-86e4-595dceeb2c5e/volumes/kubernetes.io~csi/pvc-4a78497c-5d88-4d52-996f-1f0b64b8eb5f/mount","volume_capability":"mount:{fs_type:\"xfs\"}  access_mode:{mode:SINGLE_NODE_WRITER}","read_only":false,"num_secrets":0,"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"pod-test-rbd-58daed01da384c78b1f1d409a6e","csi.storage.k8s.io/pod.namespace":"namespace-test-0bf98fd45bad4a96a6e8bd841","csi.storage.k8s.io/pod.uid":"df9dc0a1-24f7-469d-86e4-595dceeb2c5e","csi.storage.k8s.io/serviceAccount.name":"default","storage.kubernetes.io/csiProvisionerIdentity":"1656988688169-8081-topolvm.cybozu.com"}}
2022-07-05T02:40:30.495749690+00:00 stderr F I0705 02:40:30.495695 1587187 mount_linux.go:449] Disk "/dev/topolvm/fde00554-8511-485c-ae60-0fda11f5a478" appears to be unformatted, attempting to format as type: "xfs" with options: [-f /dev/topolvm/fde00554-8511-485c-ae60-0fda11f5a478]
2022-07-05T02:40:30.735811776+00:00 stderr F I0705 02:40:30.735758 1587187 mount_linux.go:459] Disk successfully formatted (mkfs): xfs - /dev/topolvm/fde00554-8511-485c-ae60-0fda11f5a478 /var/lib/kubelet/pods/df9dc0a1-24f7-469d-86e4-595dceeb2c5e/volumes/kubernetes.io~csi/pvc-4a78497c-5d88-4d52-996f-1f0b64b8eb5f/mount
2022-07-05T02:40:30.749794492+00:00 stderr F {"level":"info","ts":1656988830.749734,"logger":"driver.node","msg":"NodePublishVolume(fs) succeeded","volume_id":"fde00554-8511-485c-ae60-0fda11f5a478","target_path":"/var/lib/kubelet/pods/df9dc0a1-24f7-469d-86e4-595dceeb2c5e/volumes/kubernetes.io~csi/pvc-4a78497c-5d88-4d52-996f-1f0b64b8eb5f/mount","fstype":"xfs"}
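The log shows the mount helper (mount_linux.go) concluding that the device is unformatted and running mkfs, which would explain the empty restored volume; the linked topolvm PR 522 ("mount: remove new UUID generation") points in the same direction. As a rough sketch only, not the actual kubelet or TopoLVM code, the following Go snippet illustrates that format-if-unformatted path, assuming filesystem detection via blkid; the device and target paths are copied from the log entries above purely for shape.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// detectFilesystem returns the filesystem type reported by blkid, or "" if
// the device looks unformatted (blkid exits non-zero when it finds no
// filesystem signature, which this sketch treats as "unformatted").
func detectFilesystem(device string) string {
	out, err := exec.Command("blkid", "-o", "value", "-s", "TYPE", device).Output()
	if err != nil {
		return ""
	}
	return strings.TrimSpace(string(out))
}

// formatAndMount mirrors the log sequence: "appears to be unformatted,
// attempting to format as type: xfs", followed by the mount.
func formatAndMount(device, target, fsType string) error {
	if detectFilesystem(device) == "" {
		fmt.Printf("Disk %q appears to be unformatted, attempting to format as type: %q\n", device, fsType)
		if out, err := exec.Command("mkfs."+fsType, "-f", device).CombinedOutput(); err != nil {
			return fmt.Errorf("mkfs failed: %v: %s", err, out)
		}
	}
	return exec.Command("mount", "-t", fsType, device, target).Run()
}

func main() {
	// Device and target paths taken from the log entries above, for illustration only.
	if err := formatAndMount("/dev/topolvm/fde00554-8511-485c-ae60-0fda11f5a478",
		"/var/lib/kubelet/pods/df9dc0a1-24f7-469d-86e4-595dceeb2c5e/volumes/kubernetes.io~csi/pvc-4a78497c-5d88-4d52-996f-1f0b64b8eb5f/mount",
		"xfs"); err != nil {
		fmt.Println(err)
	}
}

If the restored logical volume no longer carries a recognizable xfs signature by the time NodePublishVolume runs, a path like this reformats it and the snapshot content is lost.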



Version of all relevant components (if applicable):
quay.io/rhceph-dev/ocs-registry:4.11.0-105

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes; snapshots cannot be restored with their content.

Is there any workaround available to the best of your knowledge?
NA

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:
Yes. Snapshot restore worked before quay.io/rhceph-dev/ocs-registry:4.11.0-105.

Steps to Reproduce:
1. Create an LVMCluster.
2. Create a PVC and attach a pod.
3. Run I/O.
4. Create a snapshot from the PVC.
5. Restore a PVC from the snapshot (see the client-go sketch after these steps).
6. Attach a pod and check the content.
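
To make step 5 concrete, here is a minimal client-go sketch of the restore request. It is illustrative only: the namespace ("default"), PVC name ("pvc-restore"), snapshot name ("pvc-snap"), storage class ("odf-lvm-vg1"), and size are placeholders, the original PVC, VolumeSnapshot, and pod objects are omitted, and it assumes a 2022-era client-go where the PVC resources field is still corev1.ResourceRequirements.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig; RecommendedHomeFile is ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	apiGroup := "snapshot.storage.k8s.io"
	storageClass := "odf-lvm-vg1" // placeholder; use the LVMCluster's storage class

	restored := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-restore"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &storageClass,
			// The restore is expressed through dataSource: the new PVC is
			// provisioned from the VolumeSnapshot taken in step 4.
			DataSource: &corev1.TypedLocalObjectReference{
				APIGroup: &apiGroup,
				Kind:     "VolumeSnapshot",
				Name:     "pvc-snap", // placeholder snapshot name
			},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}

	if _, err := client.CoreV1().PersistentVolumeClaims("default").
		Create(context.TODO(), restored, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The same restore can of course be expressed as a plain PVC manifest; the point here is only that the restored PVC is wired to the snapshot through spec.dataSource.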


Actual results:
There is no content on the restored PVC.

Expected results:
Files that were on the original PVC should exist.

Additional info:
It looks like the volume gets formatted when attached to the pod; see the NodePublishVolume/mkfs log excerpt in the description above.

Comment 8 N Balachandran 2022-07-05 09:01:24 UTC
Merged into the release-4.11 branch: https://github.com/red-hat-storage/topolvm/pull/14

Comment 9 N Balachandran 2022-07-05 13:57:27 UTC
Reopening the BZ. The fix is not in TopoLVM itself: the LogicalVolume CRD has not been updated with the latest changes for the snapshot and clone CRs.

Comment 14 errata-xmlrpc 2022-08-24 13:55:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6156

