Description of problem:

Case 1: After creating a PVC/PV with a CSI driver, I tried to create a cloned PVC but did not specify a storage class. A PV was created with the default (in-tree) storage class successfully, but it is not a clone. Provisioning should return an error, since the in-tree plugin cannot provide the cloned PV.

Case 2: After creating a PVC/PV with a CSI driver, I tried to create a cloned PVC with a different storage class that uses the same CSI driver. It reports the following message, which is the expected behavior:

  Warning  ProvisioningFailed  14s (x5 over 29s)  disk.csi.azure.com_wduan-513b-tvddr-worker-westus-kzbg5_af083ba3-bdaf-4727-a6fa-0c78b17f6a58  failed to provision volume with StorageClass "sc-csi-imm": error getting handle for DataSource Type PersistentVolumeClaim by Name mypvc01: the source PVC and destination PVCs must be in the same storage class for cloning. Source is in sc-csi, but new PVC is in sc-csi-imm

Impact: When using PVC.spec.dataSource with kind PersistentVolumeClaim, the user expects a cloned PV. In case 1 the PVC binds anyway, so the user may believe the clone succeeded when no clone was actually created. I think we need to add a dataSource check.

Version-Release number of selected component (if applicable):
Server Version: 4.5.0-0.ci-2020-05-10-055905
Kubernetes Version: v1.18.0-rc.1

How reproducible:
Always

Steps to Reproduce:
1. Create a PVC/PV with a CSI driver:

[wduan@MINT clone]$ oc get pvc mypvc01 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: disk.csi.azure.com
    volume.kubernetes.io/selected-node: wduan-513b-tvddr-worker-westus-nk5b7
  creationTimestamp: "2020-05-13T11:42:45Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: mypvc01
  namespace: wduan-01
  resourceVersion: "629648"
  selfLink: /api/v1/namespaces/wduan-01/persistentvolumeclaims/mypvc01
  uid: 8dec5834-bed2-424b-b22c-7cfbe1195852
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: sc-csi
  volumeMode: Filesystem
  volumeName: pvc-8dec5834-bed2-424b-b22c-7cfbe1195852
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  phase: Bound

2. Use PVC.spec.dataSource with kind PersistentVolumeClaim and the default storage class (or another in-tree storage class):

more pvc-clone.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc01-clone-err5
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  dataSource:
    kind: PersistentVolumeClaim
    name: mypvc01

3. Check PVC mypvc01-clone-err5:

[wduan@MINT clone]$ oc get pvc mypvc01-clone-err5 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/azure-disk
    volume.kubernetes.io/selected-node: wduan-513b-tvddr-worker-westus-nk5b7
  creationTimestamp: "2020-05-13T11:57:04Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: mypvc01-clone-err5
  namespace: wduan-01
  resourceVersion: "635584"
  selfLink: /api/v1/namespaces/wduan-01/persistentvolumeclaims/mypvc01-clone-err5
  uid: d1201c07-d4c1-4355-b1aa-e9d2c67a58a7
spec:
  accessModes:
  - ReadWriteOnce
  dataSource:
    apiGroup: null
    kind: PersistentVolumeClaim
    name: mypvc01
  resources:
    requests:
      storage: 2Gi
  storageClassName: managed-premium
  volumeMode: Filesystem
  volumeName: pvc-d1201c07-d4c1-4355-b1aa-e9d2c67a58a7
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  phase: Bound
4. Check the PVCs:

[wduan@MINT clone]$ oc get pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mypvc01              Bound    pvc-8dec5834-bed2-424b-b22c-7cfbe1195852   2Gi        RWO            sc-csi            98m
mypvc01-clone-err5   Bound    pvc-d1201c07-d4c1-4355-b1aa-e9d2c67a58a7   2Gi        RWO            managed-premium   83m

Actual results:
The PVC is provisioned successfully, but the resulting volume is not a clone.

Expected results:
The PVC should stay in Pending and an error should be returned, as mentioned in the Description.

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:
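For comparison, cloning is expected to work when the destination PVC explicitly sets the same storage class as the source. A minimal sketch, reusing the sc-csi storage class and mypvc01 source PVC from this report (the clone PVC name is made up for illustration):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # illustrative name, not part of the original report
  name: mypvc01-clone
spec:
  accessModes:
  - ReadWriteOnce
  # same storage class as the source PVC, so the same CSI driver handles the clone
  storageClassName: sc-csi
  resources:
    requests:
      storage: 2Gi
  dataSource:
    kind: PersistentVolumeClaim
    name: mypvc01

With storageClassName set explicitly, the CSI provisioner for disk.csi.azure.com handles the dataSource, and the resulting PV should be an actual clone of mypvc01.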
Not sure we can catch 4.5; this needs to be fixed upstream first.
There's an open PR [1] to address this upstream. We'll monitor this PR and then backport it once it gets merged. [1] https://github.com/kubernetes/kubernetes/pull/97086/
This change has been merged in openshift/kubernetes, and we can see it present here [1]. Sending to QE to review. [1] https://github.com/openshift/kubernetes/tree/676a3a70129f06239994403770f19b94a7f77f12
Looks like the fix was introduced in https://github.com/openshift/kubernetes/tree/676a3a70129f06239994403770f19b94a7f77f12, so I understand there is no need for https://github.com/openshift/kubernetes/pull/600.

Verified as passing on 4.8.0-0.nightly-2021-04-09-222447: a PVC with a dataSource that falls back to an in-tree storage class is no longer provisioned, and the following events are reported:

  Normal   WaitForPodScheduled  <invalid>                      persistentvolume-controller  waiting for pod mypod-clone to be scheduled
  Warning  ProvisioningFailed   <invalid> (x2 over <invalid>)  persistentvolume-controller  plugin "kubernetes.io/aws-ebs" is not a CSI plugin. Only CSI plugin can provision a claim with a datasource

So marking the status "Verified".
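For anyone re-running this verification, a minimal sketch of the reproducer, assuming an in-tree default storage class (for example gp2 on AWS) and an existing source PVC named mypvc01 (the clone PVC name here is made up for illustration):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # illustrative name, not taken from the original comment
  name: mypvc01-clone-intree
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  # no storageClassName, so the in-tree default storage class is used
  dataSource:
    kind: PersistentVolumeClaim
    name: mypvc01

After creating it, oc describe pvc mypvc01-clone-intree should show the claim stuck in Pending with the ProvisioningFailed event quoted above, instead of binding to a volume that is not a clone.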
I'm moving this bug back to VERIFIED per the previous comment. The PR was closed because the fix landed during the rebase and therefore didn't need a separate PR.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438