Bug 1599691 - Volume data should not be sent to graphite from nodes with deleted bricks
Summary: Volume data should not be sent to graphite from nodes with deleted bricks
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: web-admin-tendrl-node-agent
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Timothy Asir
QA Contact: sds-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-07-10 11:53 UTC by Filip Balák
Modified: 2020-02-07 08:22 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-02-07 08:22:15 UTC
Embargoed:


Attachments
Graphite utilization data from node with removed bricks (94.46 KB, image/png)
2018-07-10 11:53 UTC, Filip Balák


Links
Red Hat Bugzilla 1559433 (CLOSED): Non participating nodes should not send rebalance data for a volume to graphite (last updated 2021-02-22 00:41:40 UTC)

Internal Links: 1559433

Description Filip Balák 2018-07-10 11:53:17 UTC
Created attachment 1457792
Graphite utilization data from node with removed bricks

Description of problem:
WA keeps collecting volume data related to a node and its bricks even after all bricks on that node have been removed.

Version-Release number of selected component (if applicable):
tendrl-ansible-1.6.3-5.el7rhgs.noarch
tendrl-api-1.6.3-4.el7rhgs.noarch
tendrl-api-httpd-1.6.3-4.el7rhgs.noarch
tendrl-commons-1.6.3-8.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-6.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-6.el7rhgs.noarch
tendrl-node-agent-1.6.3-8.el7rhgs.noarch
tendrl-notifier-1.6.3-4.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-6.el7rhgs.noarch

How reproducible:
100%

Steps to Reproduce:
1. Import a gluster cluster with a created volume into WA.
2. Remove bricks on several nodes:
gluster volume remove-brick <volname> <hostname>:<brickpath> ... <hostname>:<brickpath> start
gluster volume remove-brick <volname> <hostname>:<brickpath> ... <hostname>:<brickpath> commit
3. Wait 5 minutes.
4. Check http://<hostname>:10080/render/?target=tendrl.clusters.<cluster>.volumes.<volname>.nodes.<nodewithdeletedbricks>.rebalance_skipped&format=json&from=-5min
or some other relevant graphite target (see the query sketch below).
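
To make step 4 concrete, here is a minimal check script (a sketch only; the
Graphite host, cluster id, volume name and node FQDN are placeholders for
values from your setup):

import json
import urllib.request

GRAPHITE = "http://HOSTNAME:10080"  # WA/Graphite server (placeholder)
TARGET = ("tendrl.clusters.CLUSTER.volumes.VOLNAME."
          "nodes.NODE_WITH_DELETED_BRICKS.rebalance_skipped")

url = GRAPHITE + "/render/?target=" + TARGET + "&format=json&from=-5min"
with urllib.request.urlopen(url) as resp:
    records = json.load(resp)

# Fixed behavior: no records at all for the removed-brick node.
# Buggy behavior: records come back, typically with null datapoints.
if records:
    print("BUG: graphite still returns records:", records)
else:
    print("OK: no records returned for the node")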

Actual results:
Graphite returns empty records (null datapoints) for metrics that WA still tries to collect from the node.

Expected results:
No records should be returned for the node whose bricks were removed.
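
For illustration, the difference in Graphite's render JSON output (target
name and timestamps below are placeholders): the buggy behavior returns a
record for the node with null datapoints, e.g.

[{"target": "tendrl.clusters.<cluster>.volumes.<volname>.nodes.<node>.rebalance_skipped",
  "datapoints": [[null, 1531223580], [null, 1531223640]]}]

while the expected behavior is an empty response:

[]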

Additional info:

Comment 1 Filip Balák 2018-08-01 11:44:21 UTC
With the current version, I see that the data is not collected and the reproducer returns null values.

Tested with:
tendrl-ansible-1.6.3-5.el7rhgs.noarch
tendrl-api-1.6.3-4.el7rhgs.noarch
tendrl-api-httpd-1.6.3-4.el7rhgs.noarch
tendrl-commons-1.6.3-9.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-7.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-7.el7rhgs.noarch
tendrl-node-agent-1.6.3-9.el7rhgs.noarch
tendrl-notifier-1.6.3-4.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-8.el7rhgs.noarch

Comment 3 Shubhendu Tripathi 2018-11-19 08:20:00 UTC
@Gowtham, is this expected behavior? If so, what action is required here?

Comment 4 gowtham 2019-02-12 07:31:26 UTC
As per the current implementation, when a brick is removed we also remove the brick-related information under the node and volume trees, so this is already resolved. Please test this scenario with the latest build.
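
For clarity, a minimal sketch of the rule described above (hypothetical
code, not Tendrl's actual implementation): volume-level metrics should be
emitted only for nodes that still own at least one brick in the volume.

# Hypothetical sketch; bricks are 'host:/brick/path' strings as printed by
# 'gluster volume info'. Only nodes still owning a brick get reported.
def nodes_to_report(volume_bricks, all_nodes):
    owners = {brick.split(":", 1)[0] for brick in volume_bricks}
    return [node for node in all_nodes if node in owners]

# Example: node2's bricks were removed, so nothing is sent for it.
bricks = ["node1:/bricks/b1", "node3:/bricks/b2"]
print(nodes_to_report(bricks, ["node1", "node2", "node3"]))
# -> ['node1', 'node3']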

Comment 6 Timothy Asir 2019-08-26 05:13:41 UTC
I have verified with the latest build and it is already fixed.

