Bug 1626345 - Prometheus status is Unknown in cluster web UI
Summary: Prometheus status is Unknown in cluster web UI
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Management Console
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Samuel Padgett
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-09-07 06:06 UTC by Junqi Zhao
Modified: 2020-05-01 17:07 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-01 17:07:56 UTC
Target Upstream Version:
Embargoed:


Attachments
status is unknown (79.59 KB, image/png)
2018-09-07 06:06 UTC, Junqi Zhao
status is still unknown (79.85 KB, image/png)
2019-01-29 04:44 UTC, Junqi Zhao

Description Junqi Zhao 2018-09-07 06:06:43 UTC
Created attachment 1481491 [details]
status is unknown

Description of problem:
Steps to Reproduce:
1. Enable OLM when installing OCP 3.11, as below:
openshift_ansible_vars:
  openshift_enable_olm: true

Or, you can install it by running the playbook:
ansible-playbook -i qe-inventory-host-file playbooks/olm/config.yml

2. Log in to the cluster console as an admin user.
3. Create a project, such as # oc new-project testing,
and select "testing" from the "Project" drop-down list
4. Click "Operators" >  "Catalog Sources"
5. For the Prometheus service, click "Create Subscription"
6. In the YAML editor, click "Create"
7. Click "Operators" > "Cluster Service Versions"
8. Click "View Instances" on Prometheus
9. Click "Create New" > "Prometheus"
10. Click "Create"
11. On the Prometheus instances view, check that the Status is Unknown
(url is /k8s/ns/testing/clusterserviceversions/prometheusoperator.0.22.2/instances)
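
For reference, a rough CLI sketch equivalent to steps 9-10 (the instance name "example" is inferred from the pod and StatefulSet names below; the exact spec pre-filled by the YAML editor will differ):

# "example" and the minimal spec are assumptions, not the editor's default YAML
cat <<'EOF' | oc apply -n testing -f -
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  replicas: 2
EOF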

The StatefulSet and pods are actually created:
# oc get pod -n testing
NAME                                   READY     STATUS    RESTARTS   AGE
prometheus-example-0                   3/3       Running   1          16m
prometheus-example-1                   3/3       Running   1          16m
prometheus-operator-7fccbd7c74-597rx   1/1       Running   0          17m

# oc get statefulset -n testing
NAME                 DESIRED   CURRENT   AGE
prometheus-example   2         2         17m
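
To confirm the Prometheus custom resource itself was created (again assuming the instance is named "example", based on the StatefulSet name):

oc get prometheus -n testing
oc get prometheus example -n testing -o jsonpath='{.status}'   # "example" is assumed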


Version-Release number of selected component (if applicable):
# openshift version
openshift v3.11.0-0.28.0


How reproducible:
Always

Steps to Reproduce:
1. See the Description section above

Actual results:
Prometheus status is Unknown in cluster web UI

Expected results:
Prometheus status should not be Unknown

Additional info:

Comment 1 Ivan Chavero 2019-01-28 19:55:07 UTC
Could not reproduce this problem using minishift with OKD version v3.11.0+0cbc58b-dirty.

Does the problem persist?

Comment 2 Junqi Zhao 2019-01-29 04:44:14 UTC
Created attachment 1524451 [details]
status is still unknown

Comment 3 Junqi Zhao 2019-01-29 04:46:36 UTC
(In reply to Ivan Chavero from comment #1)
> Could not reproduce this problem using minishift with OKD version
> v3.11.0+0cbc58b-dirty.
> 
> Does the problem persist?

Yes, it still exists; see the attached screenshots.
# oc version
oc v3.11.75
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

catalog-operator image:
ose-operator-lifecycle-manager:v3.11.75

olm-operator image:
ose-operator-lifecycle-manager:v3.11.75

# oc -n operator-lifecycle-manager get po 
NAME                              READY     STATUS    RESTARTS   AGE
catalog-operator-df59bcc9-b5ph6   1/1       Running   0          15m
olm-operator-665b85f7b4-svr4j     1/1       Running   0          15m

Comment 4 Ivan Chavero 2019-01-29 21:36:04 UTC
Thanks, I'm checking it out.

Comment 5 Samuel Padgett 2020-05-01 17:07:56 UTC
This is working as intended. The console looks for one of the following properties (as of OpenShift 4.4):

* status.phase
* status.status
* status.state
* status.conditions

If none of these is set (as is the case here), the console shows "Unknown."
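
As an illustration only (a sketch using the instance name from this report, which is an assumption), each of those fields can be queried directly; when they all come back empty, the console falls back to "Unknown":

# assumes the Prometheus instance "example" in namespace "testing"
for f in phase status state conditions; do
  echo -n "status.$f: "
  oc get prometheus example -n testing -o jsonpath="{.status.$f}"
  echo
done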

