Bug 1926054

Summary: LocalVolume CR is created successfully even when the storage class name defined in the LocalVolume already exists.
Product: OpenShift Container Platform
Reporter: Chao Yang <chaoyang>
Component: Storage
Assignee: Hemant Kumar <hekumar>
Storage sub component: Local Storage Operator
QA Contact: Chao Yang <chaoyang>
Status: CLOSED ERRATA
Docs Contact:
Severity: low
Priority: low
CC: aos-bugs, bfuru, hekumar, jsafrane
Version: 4.7
Keywords: Reopened
Target Milestone: ---
Target Release: 4.8.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-07-27 22:41:35 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Chao Yang 2021-02-08 06:05:42 UTC
Description of problem:
LocalVolume CR is created successfully even when the storage class name defined in the LocalVolume already exists.

Version-Release number of selected component (if applicable):
Clusterversion: 4.7.0-0.nightly-2021-02-06-084550
LSO version: 4.7.0-202102060108.p0

How reproducible:
Always


Steps to Reproduce:
1. Deploy the Local Storage Operator.
2. Create a LocalVolume named example1 whose storageClassName is the same as the GCP CSI driver's storage class.
3. Check the spec:
[chaoyang@dhcp-141-216 ~]$ oc get localvolume example1 -o json | jq .spec
{
  "logLevel": "Normal",
  "managementState": "Managed",
  "storageClassDevices": [
    {
      "devicePaths": [
        "/dev/sdc"
      ],
      "storageClassName": "standard-csi"
    }
  ]
}
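For reference, the spec dump above corresponds to a LocalVolume manifest roughly like the following. Only the fields shown in the dump come from this report; the namespace is an assumption (the LSO is typically installed in openshift-local-storage):

```yaml
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: example1
  namespace: openshift-local-storage   # assumed; not shown in the spec dump
spec:
  logLevel: Normal
  managementState: Managed
  storageClassDevices:
    - devicePaths:
        - /dev/sdc
      storageClassName: standard-csi   # collides with the GCP CSI storage class below
```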


oc get sc
NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
standard (default)   kubernetes.io/gce-pd    Delete          WaitForFirstConsumer   true                   22h
standard-csi         pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   18m

4. Create a PVC and a pod:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: 'standard-csi'
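Step 4 also mentions a pod; a minimal pod manifest consuming this PVC could look like the sketch below. The pod name, image, and mount path are assumptions for illustration, not taken from this report:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1                       # hypothetical name
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi-minimal   # assumed image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /mnt/data     # assumed mount path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc1            # the PVC created in step 4
```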

5. oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                          STORAGECLASS   REASON   AGE
local-pv-bba87e91                          50Gi       RWO            Delete           Available                                  standard-csi            16m
pvc-7fb133d4-8dce-4909-a356-9fce86f60cde   5Gi        RWO            Delete           Bound       openshift-local-storage/pvc1   standard-csi            14m


Actual results:
The PVC bound to a PV dynamically provisioned by the GCP CSI driver, while the local PV with the same storage class remained Available.

Expected results:
If the LocalVolume defines a storage class name that already belongs to another provisioner, installation of the local volume provisioner should be blocked.

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:

Comment 1 Jan Safranek 2021-02-09 15:25:29 UTC
This is IMO correct behavior - one can use the same storage class for several LSO CRs. And if they're crazy enough, they can use the GCE PD storage class, if they find it useful.

Comment 3 Bob Furu 2021-02-09 21:07:01 UTC
Hemant created https://github.com/openshift/openshift-docs/pull/29315 - lgtm and is ready for merge. Waiting to confirm whether this is for OCP 4.7+ only.

Comment 4 Bob Furu 2021-02-10 00:23:44 UTC
PR merged and CP'ed to 4.7 after confirming that this does not need to be backported.

Comment 6 Bob Furu 2021-02-22 17:02:50 UTC
This doc has been verified and will be live at 4.7 GA. For example: https://docs.openshift.com/container-platform/4.7/storage/persistent_storage/persistent-storage-local.html#local-volume-cr_persistent-storage-local.

Comment 9 errata-xmlrpc 2021-07-27 22:41:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438