Bug 1593912
| Field | Value |
|---|---|
| Summary | IOPS chart from At Glance section of Host Dashboard reports different values compared to all other IOPS charts |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | web-admin-tendrl-monitoring-integration |
| Status | CLOSED ERRATA |
| Severity | unspecified |
| Priority | unspecified |
| Version | rhgs-3.4 |
| Target Milestone | --- |
| Target Release | RHGS 3.4.0 |
| Hardware | Unspecified |
| OS | Unspecified |
| Reporter | Martin Bukatovic <mbukatov> |
| Assignee | Shubhendu Tripathi <shtripat> |
| QA Contact | Martin Bukatovic <mbukatov> |
| Docs Contact | |
| CC | anbehl, julim, mbukatov, nthomas, rhs-bugs, sankarshan, shtripat |
| Whiteboard | |
| Fixed In Version | tendrl-node-agent-1.6.3-8.el7rhgs, tendrl-monitoring-integration-1.6.3-6.el7rhgs |
| Doc Type | If docs needed, set a value |
| Doc Text | |
| Story Points | --- |
| Clone Of | |
| Environment | |
| Last Closed | 2018-09-04 07:07:57 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| CRM | |
| Verified Versions | |
| Category | --- |
| oVirt Team | --- |
| RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- |
| Target Upstream Version | |
| Embargoed | |
| Bug Depends On | |
| Bug Blocks | 1503137, 1595013 |
| Attachments | |
Description (Martin Bukatovic, 2018-06-21 20:04:09 UTC)
Reported during testing of BZ 1581736.

Created attachment 1453602 [details]: screenshot 1

Created attachment 1453604 [details]: screenshot 2 (the problem highlighted, comparing the IOPS and Disk Load charts)

Adding a clearer screenshot of the problem, comparing the IOPS and Disk Load charts.
Another use case, found during testing of BZ 1581736: I extracted 10000 files with names based on the sha1 of their content, so that when uploaded into an arbiter 2 plus 1x2 volume, every brick would host some files. However, I see IOPS reported for only one of the 6 storage machines in the IOPS charts of the At a Glance section of the host dashboard.

There was a small issue where write values were being added to the reads only, and as a result the second part of the issue appears, where reads and writes look swapped. Sent a PR https://github.com/Tendrl/node-agent/pull/834 for the same (a hedged sketch of this kind of mix-up is included at the end of this report).

QE will verify that the problem is fully addressed by making sure WA behaves as described in the Expected Results section of this BZ.

Testing with
============

    [root@mbukatov-usm1-server ~]# rpm -qa | grep tendrl | sort
    tendrl-ansible-1.6.3-6.el7rhgs.noarch
    tendrl-api-1.6.3-5.el7rhgs.noarch
    tendrl-api-httpd-1.6.3-5.el7rhgs.noarch
    tendrl-commons-1.6.3-12.el7rhgs.noarch
    tendrl-grafana-plugins-1.6.3-10.el7rhgs.noarch
    tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
    tendrl-monitoring-integration-1.6.3-10.el7rhgs.noarch
    tendrl-node-agent-1.6.3-10.el7rhgs.noarch
    tendrl-notifier-1.6.3-4.el7rhgs.noarch
    tendrl-selinux-1.5.4-2.el7rhgs.noarch
    tendrl-ui-1.6.3-10.el7rhgs.noarch

    [root@mbukatov-usm1-gl1 ~]# rpm -qa | grep tendrl | sort
    tendrl-collectd-selinux-1.5.4-2.el7rhgs.noarch
    tendrl-commons-1.6.3-12.el7rhgs.noarch
    tendrl-gluster-integration-1.6.3-9.el7rhgs.noarch
    tendrl-node-agent-1.6.3-10.el7rhgs.noarch
    tendrl-selinux-1.5.4-2.el7rhgs.noarch

Results
=======

When I perform the steps to reproduce, the IOPS chart from the At Glance section of the Host Dashboard:

* was renamed to Brick IOPS (related to other BZ 1595013)
* reports a single value (combining both read and write operations), as the other IOPS charts do
* reports data immediately (data start at the same time as on the other charts)
* shows data on the charts of all 3 machines which are part of the replica set hosting the data

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.
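The behaviour described above (write values folded into the read total, with the resulting series appearing swapped) can be illustrated with a minimal sketch. This is hypothetical code for illustration only; it is not taken from the Tendrl node-agent or from PR 834, and the function and field names are invented.

```python
# Hypothetical sketch, not the actual Tendrl node-agent code or the change
# from https://github.com/Tendrl/node-agent/pull/834.

def aggregate_iops_buggy(bricks):
    """Mimics the reported behaviour: write ops are folded into the read
    total, and the two totals are emitted under swapped labels."""
    reads = sum(b["read_ops"] + b["write_ops"] for b in bricks)
    writes = 0
    return {"write": reads, "read": writes}  # labels swapped


def aggregate_iops_fixed(bricks):
    """Keeps read and write ops separate and labels them correctly."""
    return {
        "read": sum(b["read_ops"] for b in bricks),
        "write": sum(b["write_ops"] for b in bricks),
    }


if __name__ == "__main__":
    sample = [{"read_ops": 120, "write_ops": 30},
              {"read_ops": 80, "write_ops": 50}]
    print("buggy:", aggregate_iops_buggy(sample))  # {'write': 280, 'read': 0}
    print("fixed:", aggregate_iops_fixed(sample))  # {'read': 200, 'write': 80}
```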