Bug 2208527

Summary: ODF 4.12.2 "Create StorageSystem" wizard is missing on OCP 4.12.12
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation Reporter: Suvendu Mitra <suvmitra>
Component: management-console Assignee: Sanjal Katiyar <skatiyar>
Status: CLOSED NOTABUG QA Contact: Prasad Desala <tdesala>
Severity: medium Docs Contact:
Priority: unspecified    
Version: 4.12 CC: muagarwa, ocs-bugs, odf-bz-bot
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: All   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2023-05-25 12:39:44 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Suvendu Mitra 2023-05-19 12:15:07 UTC
Created attachment 1965654 [details]
ODF operator

Description of problem (please be as detailed as possible and provide log
snippets):
"Create StorageSystem" wizard is missing on demo.redhat.com

Version of all relevant components (if applicable):
ODF 4.12.2, OCP 4.12.12

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes

Is there any workaround available to the best of your knowledge?

Not known

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

1
Is this issue reproducible?

yes
Can this issue be reproduced from the UI?

yes
If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Create OCP 4.12 cluster on demo.redhat.com
2. Install latest ODF operator
3. Create StorageSystem


Actual results:
Wizard view is missing

Expected results:

Wizard should be present in order to deploy StorageSystem
Additional info:

Comment 3 Suvendu Mitra 2023-05-19 12:30:40 UTC
[lab-user@bastion ~]$ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.12.12   True        False         False      90m     
baremetal                                  4.12.12   True        False         False      112m    
cloud-controller-manager                   4.12.12   True        False         False      114m    
cloud-credential                           4.12.12   True        False         False      115m    
cluster-autoscaler                         4.12.12   True        False         False      112m    
config-operator                            4.12.12   True        False         False      113m    
console                                    4.12.12   True        True          False      89m     SyncLoopRefreshProgressing: Working toward version 4.12.12, 1 replicas available
control-plane-machine-set                  4.12.12   True        False         False      112m    
csi-snapshot-controller                    4.12.12   True        False         False      112m    
dns                                        4.12.12   True        False         False      112m    
etcd                                       4.12.12   True        False         False      107m    
image-registry                             4.12.12   True        False         False      98m     
ingress                                    4.12.12   True        False         False      102m    
insights                                   4.12.12   True        False         False      106m    
kube-apiserver                             4.12.12   True        False         False      103m    
kube-controller-manager                    4.12.12   True        False         False      110m    
kube-scheduler                             4.12.12   True        False         False      110m    
kube-storage-version-migrator              4.12.12   True        False         False      113m    
machine-api                                4.12.12   True        False         False      102m    
machine-approver                           4.12.12   True        False         False      112m    
machine-config                             4.12.12   True        False         False      111m    
marketplace                                4.12.12   True        False         False      112m    
monitoring                                 4.12.12   True        False         False      90m     
network                                    4.12.12   True        False         False      115m    
node-tuning                                4.12.12   True        False         False      112m    
openshift-apiserver                        4.12.12   True        False         False      88m     
openshift-controller-manager               4.12.12   True        False         False      97m     
openshift-samples                          4.12.12   True        False         False      97m     
operator-lifecycle-manager                 4.12.12   True        False         False      113m    
operator-lifecycle-manager-catalog         4.12.12   True        False         False      113m    
operator-lifecycle-manager-packageserver   4.12.12   True        False         False      103m    
service-ca                                 4.12.12   True        False         False      113m    
storage                                    4.12.12   True        False         False      112m    
[lab-user@bastion ~]$

Comment 4 Sanjal Katiyar 2023-05-23 06:41:32 UTC
This does not seem like an ODF bug; it appears to depend on how/where the OCP cluster was created (cluster on "demo.redhat.com").
As per observations, the "console" pod was in a progressing state. The "console" (OCP) pod sends requests to "odf-console" (the ODF UI pod) in order to fetch and display ODF-related UI as part of OCP. If the "console" pod itself is not running properly, it may fail to fetch from "odf-console", and you won't see any "StorageSystem" UI.

For more details refer:
https://chat.google.com/room/AAAABTq-uCM/xC6ibTutuTg
https://redhat.service-now.com/nav_to.do?uri=%2Fsc_req_item.do%3Fsys_id%3Dccf04a8297ee6994d658b82bf253afcb
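The console-to-odf-console dependency described above can be cross-checked from the CLI. A minimal sketch, assuming cluster-admin access and the standard plugin/service names (`odf-console`, `odf-console-service`):

```shell
# List the dynamic plugins enabled on the console operator config;
# "odf-console" should appear here once the ODF operator is installed.
oc get console.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'

# Check whether the OCP console deployment itself is fully rolled out;
# a console stuck in Progressing can fail to load plugin UIs.
oc rollout status deployment/console -n openshift-console

# Verify the odf-console service exists; the console pod fetches the
# plugin bundle from this service (port 9001 in the logs above).
oc get svc odf-console-service -n openshift-storage
```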

Comment 5 Sanjal Katiyar 2023-05-23 06:50:54 UTC
Were any pods other than "console" in an error/progressing state as well, e.g. any ODF-related pod/operator (odf-operator or odf-console)? Please add OCP/ODF must-gathers.
Thanks in advance.

Comment 7 Suvendu Mitra 2023-05-23 09:45:06 UTC
Except for console, all services are up; the console operator still has 1 replica available and its state is AVAILABLE: True.
[lab-user@bastion ~]$ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.12.12   True        False         False      56m     
baremetal                                  4.12.12   True        False         False      78m     
cloud-controller-manager                   4.12.12   True        False         False      80m     
cloud-credential                           4.12.12   True        False         False      79m     
cluster-autoscaler                         4.12.12   True        False         False      78m     
config-operator                            4.12.12   True        False         False      79m     
console                                    4.12.12   True        True          False      57m     SyncLoopRefreshProgressing: Working toward version 4.12.12, 1 replicas available
control-plane-machine-set                  4.12.12   True        False         False      78m     
csi-snapshot-controller                    4.12.12   True        False         False      78m     
dns                                        4.12.12   True        False         False      78m     
etcd                                       4.12.12   True        False         False      76m     
image-registry                             4.12.12   True        False         False      68m     
ingress                                    4.12.12   True        False         False      71m     
insights                                   4.12.12   True        False         False      72m     
kube-apiserver                             4.12.12   True        False         False      71m     
kube-controller-manager                    4.12.12   True        False         False      72m     
kube-scheduler                             4.12.12   True        False         False      73m     
kube-storage-version-migrator              4.12.12   True        False         False      78m     
machine-api                                4.12.12   True        False         False      70m     
machine-approver                           4.12.12   True        False         False      78m     
machine-config                             4.12.12   True        False         False      71m     
marketplace                                4.12.12   True        False         False      78m     
monitoring                                 4.12.12   True        False         False      56m     
network                                    4.12.12   True        False         False      81m     
node-tuning                                4.12.12   True        False         False      78m     
openshift-apiserver                        4.12.12   True        False         False      54m     
openshift-controller-manager               4.12.12   True        False         False      60m     
openshift-samples                          4.12.12   True        False         False      67m     
operator-lifecycle-manager                 4.12.12   True        False         False      78m     
operator-lifecycle-manager-catalog         4.12.12   True        False         False      78m     
operator-lifecycle-manager-packageserver   4.12.12   True        False         False      71m     
service-ca                                 4.12.12   True        False         False      79m     
storage                                    4.12.12   True        False         False      73m     
[lab-user@bastion ~]$ oc get all -n openshift-storage 
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/csi-addons-controller-manager-78c6d74945-fjhv7     2/2     Running   0          18m
pod/noobaa-operator-678f97f448-vsczd                   1/1     Running   0          18m
pod/ocs-metrics-exporter-7c85fbc488-tprlt              1/1     Running   0          18m
pod/ocs-operator-79bc76fcb5-22kxd                      1/1     Running   0          18m
pod/odf-console-7d65c9964d-kggwn                       1/1     Running   0          18m
pod/odf-operator-controller-manager-57c6f98c49-fdrt4   2/2     Running   0          18m
pod/rook-ceph-operator-797b9b54fd-7q6w4                1/1     Running   0          18m

NAME                                                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/csi-addons-controller-manager-metrics-service     ClusterIP   172.30.11.81    <none>        8443/TCP   18m
service/noobaa-operator-service                           ClusterIP   172.30.219.73   <none>        443/TCP    18m
service/odf-console-service                               ClusterIP   172.30.45.5     <none>        9001/TCP   18m
service/odf-operator-controller-manager-metrics-service   ClusterIP   172.30.74.167   <none>        8443/TCP   18m

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/csi-addons-controller-manager     1/1     1            1           18m
deployment.apps/noobaa-operator                   1/1     1            1           18m
deployment.apps/ocs-metrics-exporter              1/1     1            1           18m
deployment.apps/ocs-operator                      1/1     1            1           18m
deployment.apps/odf-console                       1/1     1            1           18m
deployment.apps/odf-operator-controller-manager   1/1     1            1           18m
deployment.apps/rook-ceph-operator                1/1     1            1           18m

NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/csi-addons-controller-manager-78c6d74945     1         1         1       18m
replicaset.apps/noobaa-operator-678f97f448                   1         1         1       18m
replicaset.apps/ocs-metrics-exporter-7c85fbc488              1         1         1       18m
replicaset.apps/ocs-operator-79bc76fcb5                      1         1         1       18m
replicaset.apps/odf-console-7d65c9964d                       1         1         1       18m
replicaset.apps/odf-operator-controller-manager-57c6f98c49   1         1         1       18m
replicaset.apps/rook-ceph-operator-797b9b54fd                1         1         1       18m
[lab-user@bastion ~]$

Comment 8 Sanjal Katiyar 2023-05-23 12:10:03 UTC
There was very limited data in the must-gather logs shared above (e.g. no info about pods, their logs, etc.).

Anyway, from the previous comments and the screenshots in ticket https://redhat.service-now.com/nav_to.do?uri=%2Fsc_req_item.do%3Fsys_id%3Dccf04a8297ee6994d658b82bf253afcb I can see the "odf-console" plugin is successfully enabled (screenshot attached in the ticket), and there are no errors or pending states for any ODF-related resources in the "openshift-storage" namespace.
Still, under "Home > Overview > Status" the plugins popover shows "0/1 enabled", and as mentioned above OCP's "console" pod is in a progressing state as well.
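The "0/1 enabled" plugin state can also be inspected from the CLI. A sketch, assuming the standard ConsolePlugin resource name `odf-console`:

```shell
# The ConsolePlugin resource the ODF operator registers for its UI
oc get consoleplugin odf-console

# Console pod logs often show why a plugin is counted as unavailable
# (e.g. proxy/fetch errors when contacting the odf-console service)
oc logs deployment/console -n openshift-console | grep -i plugin
```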

I already tested on OCP 4.12.0-0.nightly-2023-05-20-124205 (installed using ClusterBot) with ODF 4.12.3, which works as expected. Suvendu, can you create a cluster for me (or share the existing one, if any)? We can decide after that who should be responsible for looking into this issue.

This does not look like a product (ODF) bug.

Comment 9 Suvendu Mitra 2023-05-24 07:03:26 UTC
Can you share a must-gather command example that includes all the data captured?
I have shared the cluster details privately.
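For reference, a typical invocation looks like the following; the ODF must-gather image tag below is an assumption for the 4.12 stream and should be matched to the installed release:

```shell
# Default OCP must-gather (cluster operators, node info, pod logs, etc.)
oc adm must-gather

# ODF-specific must-gather; the image tag is assumed for ODF 4.12 and
# should be adjusted to the installed version.
oc adm must-gather --image=registry.redhat.io/odf4/ocs-must-gather-rhel8:v4.12

# Each run writes into a local must-gather.local.* directory, which can
# then be compressed and attached to the bug.
```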