Bug 1357006 - cluster->osd doesn't update
Summary: cluster->osd doesn't update
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: UI
Version: 2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 2
Assignee: kamlesh
QA Contact: Martin Kudlej
URL:
Whiteboard:
Depends On:
Blocks: Console-2-GA
 
Reported: 2016-07-15 13:09 UTC by Martin Kudlej
Modified: 2016-08-23 19:57 UTC

Fixed In Version: rhscon-ui-0.0.51-1.el7scon.noarch
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-23 19:57:00 UTC
Embargoed:


Attachments
before reload (149.49 KB, image/png), 2016-07-15 13:09 UTC, Martin Kudlej
after page reload (93.47 KB, image/png), 2016-07-15 13:10 UTC, Martin Kudlej
different host storage utilization and profiles and pool on right (124.42 KB, image/png), 2016-07-15 16:10 UTC, Martin Kudlej


Links
Red Hat Product Errata RHEA-2016:1754 (priority: normal, status: SHIPPED_LIVE): New packages: Red Hat Storage Console 2.0. Last updated: 2017-04-18 19:09:06 UTC.

Description Martin Kudlej 2016-07-15 13:09:29 UTC
Created attachment 1180146 [details]
before reload

Description of problem:
As the first screenshot shows, the "Hosts" list on the left and the list of OSDs in the cluster on the right show different information. I waited more than 10 minutes to be sure that every graph and value had updated.
I set up a cluster with 2 OSDs, 100 GB each, and then created a pool with replica 2. The values in the Hosts list are correct, but the information in the OSD list is not updated.

The second screenshot shows the difference after a page reload, where the information is correct.
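
For comparison against the UI, the backend per-OSD usage can be read directly from the cluster. The snippet below is only an illustrative sketch, not part of the original report: it assumes client.admin access on the node it runs on, and the JSON field names ("kb_used", "kb_avail", "utilization") come from `ceph osd df --format json` and should be verified against the installed Ceph release.

#!/usr/bin/env python
# Illustrative sketch (not part of the original report): dump per-OSD usage straight
# from the cluster so it can be compared against what the console UI shows.
# Assumes client.admin access on this node; JSON field names may vary by Ceph release.
import json
import subprocess

def osd_utilization():
    out = subprocess.check_output(["ceph", "osd", "df", "--format", "json"])
    data = json.loads(out.decode("utf-8"))
    for node in data.get("nodes", []):
        yield (node["id"],
               node.get("kb_used", 0) / 1024.0,    # MB used
               node.get("kb_avail", 0) / 1024.0,   # MB available
               node.get("utilization", 0.0))       # percent used

if __name__ == "__main__":
    for osd_id, used_mb, avail_mb, util in osd_utilization():
        print("osd.%d  used: %.1f MB  avail: %.1f MB  util: %.2f%%"
              % (osd_id, used_mb, avail_mb, util))

If the values printed here match the Hosts list but not the OSDs tab, the staleness is on the UI side.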

Version-Release number of selected component (if applicable):
ceph-ansible-1.0.5-27.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
rhscon-ceph-0.0.33-1.el7scon.x86_64
rhscon-core-0.0.34-1.el7scon.x86_64
rhscon-core-selinux-0.0.34-1.el7scon.noarch
rhscon-ui-0.0.48-1.el7scon.noarch

How reproducible:
100%

Steps to Reproduce:
1. Open the cluster -> OSDs tab, and another tab with similar information for comparing values.
2. Start copying data into the pool.
3. Observe that the cluster -> OSDs tab does not update (one way to generate load and watch the backend values is sketched below).
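
For step 2, the sketch below generates write load and polls the backend utilization while the UI tab is being watched; it is not from the original report. It assumes `rados bench` and admin access are available on the node, and the pool name "testpool" is only a placeholder for the replica-2 pool created earlier.

#!/usr/bin/env python
# Sketch for steps 2-3: write into a pool with `rados bench` and poll the backend
# utilization, so the changing cluster values can be contrasted with a stale UI tab.
# "testpool" is a placeholder; field names may vary by Ceph release.
import json
import subprocess
import time

POOL = "testpool"       # placeholder; use the pool created during setup
DURATION = 120          # seconds of write load
INTERVAL = 15           # polling interval in seconds

def total_used_kb():
    out = subprocess.check_output(["ceph", "osd", "df", "--format", "json"])
    nodes = json.loads(out.decode("utf-8")).get("nodes", [])
    return sum(n.get("kb_used", 0) for n in nodes)

# Start the write load in the background; --no-cleanup keeps the benchmark objects
# so the utilization stays visibly higher after the run finishes.
bench = subprocess.Popen(
    ["rados", "bench", "-p", POOL, str(DURATION), "write", "--no-cleanup"])

start = time.time()
while time.time() - start < DURATION:
    print("backend kb_used total: %d" % total_used_kb())
    time.sleep(INTERVAL)

bench.wait()
print("final backend kb_used total: %d" % total_used_kb())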

Comment 2 Martin Kudlej 2016-07-15 13:10:18 UTC
Created attachment 1180147 [details]
after page reload

Comment 3 Martin Kudlej 2016-07-15 16:10:03 UTC
Created attachment 1180203 [details]
different host storage utilization and profiles and pool on right

It seems this is not limited to the OSDs tab; I see the same issue on the cluster dashboard.

Comment 4 Martin Kudlej 2016-07-29 13:33:03 UTC
Tested with 
ceph-ansible-1.0.5-31.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
rhscon-ceph-0.0.38-1.el7scon.x86_64
rhscon-core-0.0.38-1.el7scon.x86_64
rhscon-core-selinux-0.0.38-1.el7scon.noarch
rhscon-ui-0.0.51-1.el7scon.noarch
and the OSD information now updates as expected.

Comment 6 errata-xmlrpc 2016-08-23 19:57:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1754

