Bug 1993918 - CSI drivers are not getting created
Summary: CSI drivers are not getting created
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: csi-driver
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Yug Gupta
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-08-16 12:11 UTC by Bipul Adhikari
Modified: 2023-08-09 16:37 UTC
5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-17 06:59:38 UTC
Embargoed:



Description Bipul Adhikari 2021-08-16 12:11:12 UTC
Description of problem (please be detailed as possible and provide log
snippets):
Installed ocs-operator 4.9 on OCP 4.9 (Kubernetes version: v1.22.0-rc.0+76ff583).
However, it did not create any CSIDriver instances.
This is probably related to the removal of the deprecated v1beta1 CSIDriver API in Kubernetes 1.22.


Version of all relevant components (if applicable):


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?


Can this issue reproduce from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.
2.
3.


Actual results:


Expected results:


Additional info:

Comment 2 Humble Chirammal 2021-08-16 12:15:07 UTC
Quick findings from this cluster, to summarize:

1) `oc get csidriver` does not list the Ceph CSI drivers.

   It does list the AWS CSI driver (note that it is a v1 object).

2) We have made enhancements in Rook so that it recreates the CSIDriver object based on the cluster version:

   it deletes the v1beta1 CSIDriver and creates the v1 version if the cluster is > 1.17.

3) We can manually create the CSI driver with a v1 YAML, which gets picked up here.
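
The manual creation mentioned in point 3 would look roughly like the manifest below. This is a sketch only: the driver name and the `attachRequired`/`podInfoOnMount`/`volumeLifecycleModes` values mirror the `kubectl get CSIDriver` output shown later in this bug, but the full spec that Rook actually generates may contain additional fields.

```yaml
# Hypothetical v1 CSIDriver manifest for the RBD driver (sketch, not
# Rook's generated object). Apply with `oc apply -f <file>`.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: rook-ceph.rbd.csi.ceph.com
spec:
  attachRequired: true
  podInfoOnMount: false
  volumeLifecycleModes:
    - Persistent
```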

Comment 3 Yug Gupta 2021-08-16 13:08:33 UTC
The v1beta1 CSIDriver object is no longer served from Kubernetes v1.22. For that, we recently added a check in Rook so that:

1. If the Kubernetes version is >= v1.18 and <= v1.21, we delete the v1beta1 CSIDriver (a no-op if it is not present).
2. If the Kubernetes version is >= v1.22, we directly create the v1 CSIDriver.
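
The version gate described above can be sketched as follows. This is an illustrative helper, not Rook's actual code; the function name and return strings are made up for this example.

```go
package main

import "fmt"

// csiDriverAction sketches the version gate from the comment above
// (hypothetical helper, not Rook's implementation): given the Kubernetes
// minor version, decide how the CSIDriver object should be handled.
func csiDriverAction(minor int) string {
	switch {
	case minor >= 22:
		// v1beta1 is no longer served; create the v1 object directly.
		return "create v1"
	case minor >= 18:
		// Both API versions exist; delete any stale v1beta1 object
		// (no-op if absent), then the v1 object can be created.
		return "delete v1beta1, create v1"
	default:
		// Older clusters keep the v1beta1 object.
		return "keep v1beta1"
	}
}

func main() {
	for _, m := range []int{16, 18, 21, 22} {
		fmt.Printf("v1.%d: %s\n", m, csiDriverAction(m))
	}
}
```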

Although the CSI driver is running, the CSIDriver object is not listed when `oc get CSIDriver` is run.
When I tested locally on minikube with Kubernetes v1.22 and Rook 1.7.0, the v1 CSIDriver object came up properly without any issues.

```
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.0", GitCommit:"c2b5237ccd9c0f1d600d3072634ca66cefdf272f", GitTreeState:"clean", BuildDate:"2021-08-04T18:03:20Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.0", GitCommit:"c2b5237ccd9c0f1d600d3072634ca66cefdf272f", GitTreeState:"clean", BuildDate:"2021-08-04T17:57:25Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
```
```
[yuggupta@fedora ceph-csi](test-rook-dep)$ kubectl get CSIDriver
NAME                            ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES        AGE
rook-ceph.cephfs.csi.ceph.com   true             false            false             <unset>         false               Persistent   2m16s
rook-ceph.rbd.csi.ceph.com      true             false            false             <unset>         false               Persistent   2m17s
```

Bipul, can you please share the exact OCP build and the rook-ceph-operator logs so that we can debug this further?

Comment 4 Humble Chirammal 2021-08-16 14:14:17 UTC
Looks like the cluster got destroyed while we started to dig further.

@Bipul, please provide the access details if you hit this again in your next test run.

Comment 5 Yug Gupta 2021-08-17 06:45:27 UTC
On checking the Rook version, the image in use is `rook/ceph:v1.6.5-2.gb78358e`, a two-month-old image that does not contain my fix for this in upstream Rook: https://github.com/rook/rook/pull/8029 .
Since this image doesn't have the fix, this behavior is expected. Let me know if you face any issues with the downstream Rook image based on v1.7.

Comment 6 Yug Gupta 2021-08-17 06:59:38 UTC
Closing this as not a bug; feel free to re-open if similar behavior is seen with downstream Rook images that contain the fix https://github.com/rook/rook/pull/8029.

