Bug 1878086

Summary: OCP 4.6 + OCS 4.6 (multiple SC), Internal Mode - UI should populate the default "Filesystem Name" instead of providing a text box, and the name should be validated

Product: OpenShift Container Platform
Reporter: Neha Berry <nberry>
Component: Console Storage Plugin
Assignee: Vineet <vbadrina>
Status: CLOSED ERRATA
QA Contact: Neha Berry <nberry>
Severity: medium
Priority: unspecified
Version: 4.6
CC: aos-bugs, nthomas, ocs-bugs
Target Milestone: ---
Target Release: 4.7.0
Hardware: Unspecified
OS: Unspecified
Last Closed: 2021-02-24 15:17:46 UTC
Type: Bug
Attachments:
  Create SC page - cephfs provisioner (Flags: none)

Description Neha Berry 2020-09-11 09:49:25 UTC
Created attachment 1714536 [details]
Create SC page- cephfs provisioner

With OCP 4.6 + OCS 4.6, we support creating custom StorageClasses on an as-needed basis.

In the Storage -> Storage Classes -> Create Storage Class page, if I select the provisioner "openshift-storage.cephfs.csi.ceph.com", the "Filesystem Name" text box gets enabled.

This expects the user to type in the Filesystem Name, which can lead to human error. For some users, the following helper text might not be enough to understand that they need to enter the CephFS name:

 "CephFS filesystem name into which the volume shall be created"

How would they know where to get this Filesystem Name in the absence of a toolbox pod?



>> Ask: 

a) IMO, we should either populate the box by default (taking the FS name from Ceph) or provide a drop-down; see the command sketch after this list. OCS 4.6 doesn't support multiple CephFS filesystems, hence for Internal Mode one can easily use the default FSNAME. For External Mode, we need to decide what the best option would be.

b) There is no validation of the Filesystem Name's existence: SC creation succeeds even if I provide a dummy name such as "abcd". The resulting PVCs from this incorrect SC end up stuck in Pending state, though.

c) I understand an SC is a static entity and doesn't throw an error if wrong inputs are provided, but to avoid situations like these, can we not provide the default FSNAME, at least for Internal Mode, in a drop-down or as a default value in the text box?
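For reference, the default filesystem name is readable from the Rook CephFilesystem CR, so no toolbox pod is needed. A minimal sketch; the name "ocs-storagecluster-cephfilesystem" shown in the output is the usual internal-mode default and is an assumption here, not taken from this cluster:

$ oc get cephfilesystem -n openshift-storage -o jsonpath='{.items[0].metadata.name}'
ocs-storagecluster-cephfilesystem

The console could issue the equivalent API query and use the result to pre-fill the text box or populate a drop-down.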



Version-Release number of selected component (if applicable):
---------------------------------------------------------------
OCP  = 4.6.0-0.nightly-2020-09-10-195619

How reproducible:
-------------------
Always

Steps to Reproduce:
1. Create an OCP 4.6 + OCS 4.5/4.6 cluster.
2. Navigate to Storage -> Storage Classes -> Create Storage Class.
3. Select "openshift-storage.cephfs.csi.ceph.com" in the Provisioner box.
4. Check the Filesystem Name box: it is not populated with any default value and expects the user to enter the name.
5. Observe that there is no validation even if one enters a dummy Filesystem Name (see the CLI sketch below).
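The same misconfiguration can be reproduced from the CLI. A sketch of the StorageClass the UI creates, reconstructed from the "oc describe sc" output in Additional info below; the dummy fsName "abcd" is accepted without any check:

$ cat <<EOF | oc create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-cephs
provisioner: openshift-storage.cephfs.csi.ceph.com
parameters:
  clusterID: openshift-storage
  fsName: abcd    # dummy filesystem name; nothing validates that it exists
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: Immediate
EOF
storageclass.storage.k8s.io/test-cephs created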

Actual results:
--------------------
The user is expected to type in the name of the filesystem, but they might not know how to procure the name in the first place.

Expected results:
---------------------
The text box should either be populated with the default FSName for Internal Mode (since we do not support more than one CephFS right now), or there should be a drop-down.

There should be validation in place for the entered FSName.
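For example, the form could verify that the entered name actually exists before creating the SC. A minimal sketch of such a check, assuming Internal Mode where Rook owns the CephFilesystem CRs in openshift-storage:

$ oc get cephfilesystem abcd -n openshift-storage
Error from server (NotFound): cephfilesystems.ceph.rook.io "abcd" not found

A NotFound result here is exactly the condition the Create Storage Class form could surface as a validation error, instead of letting provisioning fail later with a Pending PVC.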


Additional info:
=========================

$ oc get pvc -n default
NAME              STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-cephfs-pvc   Pending                                      test-cephs     13s


[nberry@localhost logs]$ oc describe sc test-cephs 
Name:                  test-cephs
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           openshift-storage.cephfs.csi.ceph.com
Parameters:            clusterID=openshift-storage,csi.storage.k8s.io/controller-expand-secret-name=rook-csi-cephfs-provisioner,csi.storage.k8s.io/controller-expand-secret-namespace=openshift-storage,csi.storage.k8s.io/node-stage-secret-name=rook-csi-cephfs-node,csi.storage.k8s.io/node-stage-secret-namespace=openshift-storage,csi.storage.k8s.io/provisioner-secret-name=rook-csi-cephfs-provisioner,csi.storage.k8s.io/provisioner-secret-namespace=openshift-storage,fsName=abcd
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Retain
VolumeBindingMode:     Immediate
Events:                <none>

>> Only by checking the PVC events would we know the cause of the failure:

[nberry@localhost logs]$ oc describe pvc test-cephfs-pvc -n default
Name:          test-cephfs-pvc
Namespace:     default
StorageClass:  test-cephs
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: openshift-storage.cephfs.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type     Reason                Age                From                                                                                                                     Message
  ----     ------                ----               ----                                                                                                                     -------
  Warning  ProvisioningFailed    60s                openshift-storage.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-dc4684678-szmb9_745f007d-020c-4724-8920-73b43b47e487  failed to provision volume with StorageClass "test-cephs": rpc error: code = InvalidArgument desc = an error occurred while running (23) ceph [-m 172.30.227.212:6789,172.30.146.114:6789,172.30.68.7:6789 --id csi-cephfs-provisioner --keyfile=***stripped*** -c /etc/ceph/ceph.conf fs get abcd --format=json]: exit status 2: Error ENOENT: filesystem 'abcd' not found
  (the same ProvisioningFailed warning repeated at 59s, 58s, 55s, 51s, and 43s, each with the identical "filesystem 'abcd' not found" error)
  Normal   Provisioning          27s (x7 over 60s)  openshift-storage.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-dc4684678-szmb9_745f007d-020c-4724-8920-73b43b47e487  External provisioner is provisioning volume for claim "default/test-cephfs-pvc"
  Warning  ProvisioningFailed    26s                openshift-storage.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-dc4684678-szmb9_745f007d-020c-4724-8920-73b43b47e487  failed to provision volume with StorageClass "test-cephs": rpc error: code = InvalidArgument desc = an error occurred while running (173) ceph [-m 172.30.227.212:6789,172.30.146.114:6789,172.30.68.7:6789 --id csi-cephfs-provisioner --keyfile=***stripped*** -c /etc/ceph/ceph.conf fs get abcd --format=json]: exit status 2: Error ENOENT: filesystem 'abcd' not found
  Normal   ExternalProvisioning  12s (x6 over 60s)  persistentvolume-controller                                                                                              waiting for a volume to be created, either by external provisioner "openshift-storage.cephfs.csi.ceph.com" or manually created by system administrator

Comment 8 errata-xmlrpc 2021-02-24 15:17:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633