Description of problem (please be as detailed as possible and provide log snippets):

When a snapshot of a thick-provisioned PVC is restored using a thick-provisioning-enabled storage class, the restored volume is not thick provisioned. The restored PVC reaches Bound state without thick provisioning.

Parent PV:

$ oc get pv pvc-3cbd06e2-9023-4213-a973-0eabb55c23b0 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: openshift-storage.rbd.csi.ceph.com
  creationTimestamp: "2021-05-12T06:49:56Z"
  finalizers:
  - kubernetes.io/pv-protection
  # managedFields omitted for brevity
  name: pvc-3cbd06e2-9023-4213-a973-0eabb55c23b0
  resourceVersion: "875085"
  uid: ef947613-5e51-454e-a7d5-90ee50524c51
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: pvc-test-8c4ef3625b3640bb9fec1386e46dd84
    namespace: namespace-test-bb5640e23bdc499cbbda70673
    resourceVersion: "857361"
    uid: 3cbd06e2-9023-4213-a973-0eabb55c23b0
  csi:
    controllerExpandSecretRef:
      name: secret-test-rbd-c455e7170d5843a480c1b0cf
      namespace: openshift-storage
    driver: openshift-storage.rbd.csi.ceph.com
    fsType: ext4
    nodeStageSecretRef:
      name: secret-test-rbd-c455e7170d5843a480c1b0cf
      namespace: openshift-storage
    volumeAttributes:
      clusterID: openshift-storage
      csi.storage.k8s.io/pv/name: pvc-3cbd06e2-9023-4213-a973-0eabb55c23b0
      csi.storage.k8s.io/pvc/name: pvc-test-8c4ef3625b3640bb9fec1386e46dd84
      csi.storage.k8s.io/pvc/namespace: namespace-test-bb5640e23bdc499cbbda70673
      imageFeatures: layering
      imageFormat: "2"
      imageName: csi-vol-26c6e9d5-b2ee-11eb-bfa1-0a580a81020f
      journalPool: cbp-test-1de389f0a2ca479da4e8d92535bc9cf
      pool: cbp-test-1de389f0a2ca479da4e8d92535bc9cf
      storage.kubernetes.io/csiProvisionerIdentity: 1620713586686-8081-openshift-storage.rbd.csi.ceph.com
      thickProvision: "true"
    volumeHandle: 0001-0011-openshift-storage-000000000000000b-26c6e9d5-b2ee-11eb-bfa1-0a580a81020f
  persistentVolumeReclaimPolicy: Delete
  storageClassName: storageclass-test-rbd-bd929aec400b46eeb9
  volumeMode: Filesystem
status:
  phase: Bound

Restored PV:

$ oc get pv pvc-ffd0968e-f00e-4304-9cb3-1f5113d033e2 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: openshift-storage.rbd.csi.ceph.com
  creationTimestamp: "2021-05-12T10:19:08Z"
  finalizers:
  - kubernetes.io/pv-protection
  # managedFields omitted for brevity
  name: pvc-ffd0968e-f00e-4304-9cb3-1f5113d033e2
  resourceVersion: "973972"
  uid: 1e254295-a69a-4d2d-b612-3f55aa47d779
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: pvc-test-8c4ef3625b3640bb9fec1386e46dd84-snapshot-restore
    namespace: namespace-test-bb5640e23bdc499cbbda70673
    resourceVersion: "973967"
    uid: ffd0968e-f00e-4304-9cb3-1f5113d033e2
  csi:
    controllerExpandSecretRef:
      name: secret-test-rbd-c455e7170d5843a480c1b0cf
      namespace: openshift-storage
    driver: openshift-storage.rbd.csi.ceph.com
    fsType: ext4
    nodeStageSecretRef:
      name: secret-test-rbd-c455e7170d5843a480c1b0cf
      namespace: openshift-storage
    volumeAttributes:
      clusterID: openshift-storage
      csi.storage.k8s.io/pv/name: pvc-ffd0968e-f00e-4304-9cb3-1f5113d033e2
      csi.storage.k8s.io/pvc/name: pvc-test-8c4ef3625b3640bb9fec1386e46dd84-snapshot-restore
      csi.storage.k8s.io/pvc/namespace: namespace-test-bb5640e23bdc499cbbda70673
      imageFeatures: layering
      imageFormat: "2"
      imageName: csi-vol-7cc0da7a-b30b-11eb-bfa1-0a580a81020f
      journalPool: cbp-test-1de389f0a2ca479da4e8d92535bc9cf
      pool: cbp-test-1de389f0a2ca479da4e8d92535bc9cf
      storage.kubernetes.io/csiProvisionerIdentity: 1620713586686-8081-openshift-storage.rbd.csi.ceph.com
      thickProvision: "true"
    volumeHandle: 0001-0011-openshift-storage-000000000000000b-7cc0da7a-b30b-11eb-bfa1-0a580a81020f
  persistentVolumeReclaimPolicy: Delete
  storageClassName: storageclass-test-rbd-bd929aec400b46eeb9
  volumeMode: Filesystem
status:
  phase: Bound

Version of all relevant components (if applicable):
ocs-operator.v4.8.0-386.ci
Cluster version is 4.8.0-0.nightly-2021-05-10-225140
ceph version 14.2.11-147.el8cp (1f54d52f20d93c1b91f1ec6af4c67a4b81402800) nautilus (stable)

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Yes, snapshot restore does not preserve thick provisioning for thick-provisioned PVCs.

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
2

Can this issue be reproduced?
Yes

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:
New feature in OCS 4.8

Steps to Reproduce:
1. Create a PVC of size 10GiB using a thick-provisioning-enabled storage class and verify it is thick provisioned.

# rbd du -p cbp-test-1de389f0a2ca479da4e8d92535bc9cf csi-vol-26c6e9d5-b2ee-11eb-bfa1-0a580a81020f
warning: fast-diff map is not enabled for csi-vol-26c6e9d5-b2ee-11eb-bfa1-0a580a81020f. operation may be slow.
NAME                                          PROVISIONED  USED
csi-vol-26c6e9d5-b2ee-11eb-bfa1-0a580a81020f       10 GiB  10 GiB

2. Create a snapshot of the PVC.
3. Restore the snapshot using the same storage class as that of the parent PVC.
4. Verify the restored volume is thick provisioned.

Actual results:
The restored volume is not thick provisioned. The used size of the image is 0.

# rbd du -p cbp-test-1de389f0a2ca479da4e8d92535bc9cf csi-vol-7cc0da7a-b30b-11eb-bfa1-0a580a81020f
warning: fast-diff map is not enabled for csi-vol-7cc0da7a-b30b-11eb-bfa1-0a580a81020f. operation may be slow.
NAME                                          PROVISIONED  USED
csi-vol-7cc0da7a-b30b-11eb-bfa1-0a580a81020f       10 GiB     0 B

Expected results:
The restored volume should be thick provisioned. The provisioned and used sizes of the image should be the same.
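For reference, steps 2 and 3 correspond to manifests along these lines (sketched for illustration; the snapshot name and VolumeSnapshotClass are placeholders, the other names are taken from this report; older clusters may need apiVersion snapshot.storage.k8s.io/v1beta1):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pvc-test-snapshot                       # placeholder name
  namespace: namespace-test-bb5640e23bdc499cbbda70673
spec:
  volumeSnapshotClassName: <rbd-snapshotclass>  # placeholder
  source:
    persistentVolumeClaimName: pvc-test-8c4ef3625b3640bb9fec1386e46dd84
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test-8c4ef3625b3640bb9fec1386e46dd84-snapshot-restore
  namespace: namespace-test-bb5640e23bdc499cbbda70673
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: storageclass-test-rbd-bd929aec400b46eeb9
  dataSource:
    name: pvc-test-snapshot                     # matches the placeholder snapshot above
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io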
Additional info:

This bug is applicable to the following scenario as well: thin SC to create the PVC --> thick SC to restore the snapshot.
Sure, we have to add this support; in other words, it is not available yet. The upstream issue below tracks it.
https://github.com/ceph/ceph-csi/issues/2071
As suggested by Humble, using this bug to track the same issue with PVC clone because a different bug is not needed.
(In reply to Jilju Joy from comment #4)
> As suggested by Humble, using this bug to track the same issue with PVC
> clone because a different bug is not needed.

Cloned volume:

$ oc get pv pvc-f46ae24d-c55f-4e1e-aa3b-d0da921d32ba -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: openshift-storage.rbd.csi.ceph.com
  creationTimestamp: "2021-05-12T10:20:12Z"
  finalizers:
  - kubernetes.io/pv-protection
  # managedFields omitted for brevity
  name: pvc-f46ae24d-c55f-4e1e-aa3b-d0da921d32ba
  resourceVersion: "974548"
  uid: 9e5d420b-9ec0-4c05-afa1-3ed96e4a2c11
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: pvc-test-8c4ef3625b3640bb9fec1386e46dd84-clone
    namespace: namespace-test-bb5640e23bdc499cbbda70673
    resourceVersion: "974515"
    uid: f46ae24d-c55f-4e1e-aa3b-d0da921d32ba
  csi:
    controllerExpandSecretRef:
      name: secret-test-rbd-c455e7170d5843a480c1b0cf
      namespace: openshift-storage
    driver: openshift-storage.rbd.csi.ceph.com
    fsType: ext4
    nodeStageSecretRef:
      name: secret-test-rbd-c455e7170d5843a480c1b0cf
      namespace: openshift-storage
    volumeAttributes:
      clusterID: openshift-storage
      csi.storage.k8s.io/pv/name: pvc-f46ae24d-c55f-4e1e-aa3b-d0da921d32ba
      csi.storage.k8s.io/pvc/name: pvc-test-8c4ef3625b3640bb9fec1386e46dd84-clone
      csi.storage.k8s.io/pvc/namespace: namespace-test-bb5640e23bdc499cbbda70673
      imageFeatures: layering
      imageFormat: "2"
      imageName: csi-vol-a219e895-b30b-11eb-bfa1-0a580a81020f
      journalPool: cbp-test-1de389f0a2ca479da4e8d92535bc9cf
      pool: cbp-test-1de389f0a2ca479da4e8d92535bc9cf
      storage.kubernetes.io/csiProvisionerIdentity: 1620713586686-8081-openshift-storage.rbd.csi.ceph.com
      thickProvision: "true"
    volumeHandle: 0001-0011-openshift-storage-000000000000000b-a219e895-b30b-11eb-bfa1-0a580a81020f
  persistentVolumeReclaimPolicy: Delete
  storageClassName: storageclass-test-rbd-bd929aec400b46eeb9
  volumeMode: Filesystem
status:
  phase: Bound

# rbd du -p cbp-test-1de389f0a2ca479da4e8d92535bc9cf csi-vol-a219e895-b30b-11eb-bfa1-0a580a81020f
warning: fast-diff map is not enabled for csi-vol-a219e895-b30b-11eb-bfa1-0a580a81020f. operation may be slow.
NAME                                          PROVISIONED  USED
csi-vol-a219e895-b30b-11eb-bfa1-0a580a81020f       10 GiB     0 B

Parent volume: pvc-3cbd06e2-9023-4213-a973-0eabb55c23b0, the same PV shown in the Description above.

# rbd du -p cbp-test-1de389f0a2ca479da4e8d92535bc9cf csi-vol-26c6e9d5-b2ee-11eb-bfa1-0a580a81020f
warning: fast-diff map is not enabled for csi-vol-26c6e9d5-b2ee-11eb-bfa1-0a580a81020f. operation may be slow.
NAME                                          PROVISIONED  USED
csi-vol-26c6e9d5-b2ee-11eb-bfa1-0a580a81020f       10 GiB  10 GiB

Storage class used:

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2021-05-12T06:48:32Z"
  # managedFields omitted for brevity
  name: storageclass-test-rbd-bd929aec400b46eeb9
  resourceVersion: "856945"
  uid: 9326b37f-f934-4356-a38b-58a55b67ad0b
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: secret-test-rbd-c455e7170d5843a480c1b0cf
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: secret-test-rbd-c455e7170d5843a480c1b0cf
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: secret-test-rbd-c455e7170d5843a480c1b0cf
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  imageFeatures: layering
  imageFormat: "2"
  pool: cbp-test-1de389f0a2ca479da4e8d92535bc9cf
  thickProvision: "true"
provisioner: openshift-storage.rbd.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

Logs collected after PVC snapshot-restore and PVC clone:
http://rhsqe-repo.lab.eng.blr.redhat.com/OCS/ocs-qe-bugs/bz-1959793/
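For reference, the parent PVC consuming this storage class would have been created from a manifest along these lines (reconstructed for illustration, not copied from the cluster):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test-8c4ef3625b3640bb9fec1386e46dd84
  namespace: namespace-test-bb5640e23bdc499cbbda70673
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: storageclass-test-rbd-bd929aec400b46eeb9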
Proposing as a blocker because the snapshot and clone features are not supported on thick-provisioned PVCs.
We need to decide upon the changes required to fix this issue; it doesn't look like this can be addressed in the 4.8 timeframe.

Humble, can you please add more details as discussed.
(In reply to Mudit Agarwal from comment #7)
> We need to decide upon the changes required to fix this issue; it doesn't
> look like this can be addressed in the 4.8 timeframe.
>
> Humble, can you please add more details as discussed.

True. It is still an issue with the upstream code. Unfortunately there is no `one switch` option or similar mechanism available in the RBD clone operation to get this done. So we have to find a workaround by reworking the existing clone code path. This needs detailed inspection, and the fix has to make sure the existing functionality is not broken and no regression is introduced along with the RBD clone thick-provisioning support. With that, we would like to propose moving this out of OCS 4.8 and to continue fixing it upstream before we ack it downstream.
Elad/Eran, any comments before we move this out and remove the blocker flag?
Eran provided his perspective and I don't have anything to add
I do believe we need to work on a fix with high priority, then assess our ability to get it into 4.8 or 4.8.z, depending on the complexity and risk.
(In reply to Yaniv Kaul from comment #12)
> I do believe we need to work on a fix with high priority, then assess our
> ability to get it into 4.8 or 4.8.z, depending on the complexity and risk.

+1
Niels is already working on a fix.
An image that was used for development testing is available for early verification:
- in the deployment/csi-rbdplugin-provisioner use `image: quay.io/nixpanic/cephcsi:testing_rbd_thick-provisioning_clone`

A PR for upstream has been created, but it needs to wait for a new function in a released version of go-ceph.
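One way to deploy that test image on a running cluster is sketched below. It assumes the cephcsi container in the provisioner deployment is named csi-rbdplugin (check with `oc get deployment/csi-rbdplugin-provisioner -o yaml`), and note that the operator may reconcile the deployment back to the original image:

$ oc -n openshift-storage set image deployment/csi-rbdplugin-provisioner \
    csi-rbdplugin=quay.io/nixpanic/cephcsi:testing_rbd_thick-provisioning_clone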
https://github.com/ceph/ceph-csi/pull/2184 is a second PR that addresses restoring from a snapshot for thick-provisioned PVCs.
PRs for upstream release-3.3 that need to be backported to ODF 4.8:
- https://github.com/ceph/ceph-csi/pull/2187
- https://github.com/ceph/ceph-csi/pull/2202
- https://github.com/ceph/ceph-csi/pull/2211
Moving this out of 4.8 because the Ceph changes required for this are not present in RHCS 4.2z2.

Flagging this as a known issue for now; not sure whether that is required, as the thick provisioning feature is now in dev preview.
This can only be fixed when RHCS-5.x is used. Older Ceph versions do not support copying zero-filled data blocks (used when thick-provisioning).

The best we can do with RHCS-4.x based deployments is to mark PVC cloning and snapshot restoring of thick-provisioned volumes as a limitation; the newly created volumes will be thin-provisioned.

Fixes for this in OCS 4.9 are automatically included once the container images are based on RHCS-5.

Moving this to MODIFIED, as we're waiting for a build before QE can re-test this.
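Once RHCS-5 based images are in place, the result can be re-checked with the same kind of command used in the description; for a thick-provisioned restore or clone, USED should equal PROVISIONED. Expected shape of the output (sketched with placeholders, not an actual run):

# rbd du -p <pool> <csi-vol-image-of-restored-or-cloned-pvc>
NAME                  PROVISIONED  USED
<csi-vol-image-...>        10 GiB  10 GiB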
(In reply to Niels de Vos from comment #28)
> This can only be fixed when RHCS-5.x is used. Older Ceph versions do not
> support copying zero-filled data blocks (used when thick-provisioning).
>
> The best we can do with RHCS-4.x based deployments is to mark PVC cloning
> and snapshot restoring of thick-provisioned volumes as a limitation; the
> newly created volumes will be thin-provisioned.
>
> Fixes for this in OCS 4.9 are automatically included once the container
> images are based on RHCS-5.

Okay, so this bug can be verified when we have RHCS 5 in OCS 4.9 builds.

> Moving this to MODIFIED, as we're waiting for a build before QE can re-test
> this.
Niels, can we please add doc text for this as a known issue? Also, we need to create a Ceph bug for the Ceph fix and make this BZ the tracker.
Verified using the test case tests/manage/pv_services/test_expansion_snapshot_clone.py::TestExpansionSnapshotClone::test_expansion_snapshot_clone[thick-thick]

Test case logs:
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/jijoy-24th/jijoy-24th_20210824T125951/logs/ocs-ci-logs-1629825510/tests/manage/pv_services/test_expansion_snapshot_clone.py/TestExpansionSnapshotClone/test_expansion_snapshot_clone-thick-thick/logs

Verified in version:
odf operator 4.9.0-105.ci
OCP 4.9.0-0.nightly-2021-08-23-224104
Ceph version 16.2.0-81.el8cp (8908ce967004ed706acb5055c01030e6ecd06036) pacific (stable)
rbdplugin 2530cce366065b13892433ddd60ce7fcf0a8c81a756e67f3e7f13935cfcd13da
Since this bug is used to track the fix for thick-to-thick snapshot restore and clone, a new bug 1997384 has been opened to track the fix for thin-to-thick snapshot restore (mentioned in comment #2) and thick-to-thin snapshot restore.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat OpenShift Data Foundation 4.9.0 enhancement, security, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:5086