Bug 1301552

Summary: [UI] - unclear WARNING sign next to cluster name
Product: Red Hat Storage Console
Reporter: Daniel Horák <dahorak>
Component: UI
Assignee: sankarshan <sankarshan>
Status: CLOSED CURRENTRELEASE
QA Contact: Martin Kudlej <mkudlej>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 2
CC: mkudlej, nthomas, sankarshan
Target Milestone: ---
Target Release: 2
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: rhscon-ceph-0.0.23-1.el7scon.x86_64, rhscon-core-0.0.24-1.el7scon.x86_64, rhscon-ui-0.0.39-1.el7scon.noarch
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-11-19 05:31:04 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Attachments:
  Warning sign on Clusters page (flags: none)
  Warning sign on Dashboard page (flags: none)

Description Daniel Horák 2016-01-25 11:27:02 UTC
Created attachment 1117941 [details]
Warning sign on Clusters page

Description of problem:
  On one installed/configured cluster I see a warning sign (an orange triangle with an exclamation mark) next to the cluster name on the "Clusters" page and also in the "Clusters" field on the Dashboard page, but it is not clear what it means and there is no explanatory tooltip.
There is also no page with cluster details (see Bug 1300986), so it would be good to have at least a quick tooltip with a simple explanation of what the warning means.


Version-Release number of selected component (if applicable):
  rhscon-ceph-0.0.5-0.1.alpha1.el7.x86_64
  rhscon-core-0.0.7-0.1.alpha1.el7.x86_64
  rhscon-ui-0.0.6-0.1.alpha1.el7.noarch

How reproducible:
  100%

Steps to Reproduce:
1. Prepare USM cluster.
2. Simulate some "warning" state (not sure how to do it).
3. Check for some explanation in the USM UI about the warning.

Actual results:
  It is not clear why the cluster is in a WARNING state.

Expected results:
  The reason the cluster is in a WARNING state is easily visible/accessible in the UI.

Additional info:
  In my case (mentioned above), the real issue is clearly visible from the cluster status information:

# ceph -s --cluster TestCluster01
    cluster 8df7b5e8-5a0c-4942-b3ae-f33bd9fbf49e
     health HEALTH_WARN
            too few PGs per OSD (21 < min 30)
     monmap e1: 1 mons at {a=172.16.180.7:6789/0}
            election epoch 2, quorum 0 a
     osdmap e29: 6 osds: 6 up, 6 in
      pgmap v44: 64 pgs, 1 pools, 0 bytes data, 0 objects
            201 MB used, 185 GB / 185 GB avail
                  64 active+clean
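
The HEALTH_WARN in the output above is raised because the average number of PG replicas per OSD falls below the monitor's warning threshold (`mon_pg_warn_min_per_osd`, 30 by default). A minimal sketch of that check, using the figures from the `ceph -s` output; the pool replica size of 2 is an assumption (it is not stated in the report, but it is the value that reproduces the reported ratio of 21):

```python
# Simplified sketch of Ceph's "too few PGs per OSD" health check.
# The real logic lives in the Ceph monitor; values here come from the
# `ceph -s` output above (64 pgs, 6 osds). pool_size=2 is an assumption.

def pgs_per_osd(num_pgs: int, pool_size: int, num_osds: int) -> int:
    """Average number of placement-group replicas held by each OSD."""
    return num_pgs * pool_size // num_osds

MON_PG_WARN_MIN_PER_OSD = 30  # Ceph's default warning threshold

ratio = pgs_per_osd(num_pgs=64, pool_size=2, num_osds=6)  # 64*2//6 == 21
if ratio < MON_PG_WARN_MIN_PER_OSD:
    print(f"HEALTH_WARN: too few PGs per OSD ({ratio} < min {MON_PG_WARN_MIN_PER_OSD})")
```

In a setup like this, increasing the pool's PG count (e.g. `ceph osd pool set <pool> pg_num <value>` followed by the matching `pgp_num` change) should raise the ratio above the threshold and clear the warning.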

Comment 1 Daniel Horák 2016-01-25 11:27:55 UTC
Created attachment 1117942 [details]
Warning sign on Dashboard page

Comment 4 Martin Kudlej 2016-07-22 06:06:49 UTC
Tested with
ceph-ansible-1.0.5-27.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
rhscon-ceph-0.0.33-1.el7scon.x86_64
rhscon-core-0.0.34-1.el7scon.x86_64
rhscon-core-selinux-0.0.34-1.el7scon.noarch
rhscon-ui-0.0.48-1.el7scon.noarch
and there is now a tooltip indicating that the icon means a warning. Details of the warning are shown on the cluster dashboard.