Bug 1723680 - RHV Reduce polling interval for gluster
Summary: RHV Reduce polling interval for gluster
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.6
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Target Release: RHHI-V 1.6.z Async Update
Assignee: Sahina Bose
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1530495
Blocks:
 
Reported: 2019-06-25 07:05 UTC by SATHEESARAN
Modified: 2019-09-06 05:23 UTC
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1530495
Environment:
Last Closed: 2019-09-06 05:23:04 UTC
Embargoed:



Comment 1 SATHEESARAN 2019-06-25 07:06:42 UTC
Description of problem:

To fix the locking issue, RHV's monitoring of Gluster needs to change to use the get-state command and aggregate the information collected from each node.
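
As a rough, hypothetical sketch only (not the actual VDSM change), the per-node query could look like the following, assuming 'gluster get-state' dumps glusterd state to a file and prints its path:

# Rough sketch, not VDSM code: query local glusterd state once via
# 'gluster get-state' instead of issuing per-volume 'volume status ... detail'
# commands that each take a cluster-wide lock.
import re
import subprocess

def local_gluster_state():
    # 'gluster get-state' writes a state dump and reports its location, e.g.
    # "glusterd state dumped to /var/run/gluster/glusterd_state_<timestamp>".
    out = subprocess.run(["gluster", "get-state"],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"dumped to (\S+)", out)
    if match is None:
        raise RuntimeError("unexpected get-state output: %r" % out)
    state = {}
    with open(match.group(1)) as dump:
        for line in dump:
            # The dump is mostly flat "Key: Value" lines; section headers
            # like "[Volumes]" are skipped in this simplified parser.
            if ":" in line and not line.startswith("["):
                key, value = line.split(":", 1)
                state[key.strip()] = value.strip()
    return state

The engine side would then aggregate these per-node results instead of having glusterd answer locked 'volume status' queries on every check.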

Version-Release number of selected component (if applicable):

NA


Additional info:

Currently, VDSM triggers many gluster calls per status check, which results in failed locking inside gluster. This prevents the user from running any manual gluster commands, since the locks are already held.

Comment 2 SATHEESARAN 2019-07-26 15:18:27 UTC
Tested with RHV 4.3.3

'gluster volume status detail' is queried every 15 minutes:

[2019-07-26 15:00:38.425519]  : system:: uuid get : SUCCESS
[2019-07-26 15:00:54.035630]  : system:: uuid get : SUCCESS
[2019-07-26 15:01:09.674883]  : system:: uuid get : SUCCESS
[2019-07-26 15:01:12.398454]  : volume status vmstore : SUCCESS
[2019-07-26 15:01:12.586130]  : volume status vmstore detail : SUCCESS
[2019-07-26 15:01:16.200499]  : volume status engine : SUCCESS
[2019-07-26 15:01:16.378735]  : volume status engine detail : SUCCESS
[2019-07-26 15:01:19.968367]  : volume status non_vdo : SUCCESS
[2019-07-26 15:01:20.152647]  : volume status non_vdo detail : SUCCESS
[2019-07-26 15:01:23.795427]  : volume status data : SUCCESS
[2019-07-26 15:01:23.984186]  : volume status data detail : SUCCESS
<lines-snipped>
[2019-07-26 15:16:27.651132]  : volume status vmstore : SUCCESS
[2019-07-26 15:16:27.835725]  : volume status vmstore detail : SUCCESS
[2019-07-26 15:16:31.528667]  : volume status engine : SUCCESS
[2019-07-26 15:16:31.708591]  : volume status engine detail : SUCCESS
[2019-07-26 15:16:35.358410]  : volume status non_vdo : SUCCESS
[2019-07-26 15:16:35.538591]  : volume status non_vdo detail : SUCCESS
[2019-07-26 15:16:36.997835]  : system:: uuid get : SUCCESS
[2019-07-26 15:16:39.815920]  : volume status data : SUCCESS
[2019-07-26 15:16:40.006440]  : volume status data detail : SUCCESS
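
For reference, a throwaway helper (not part of this bug or of VDSM) to compute the gap between consecutive 'volume status <vol> detail' entries; the log path and timestamp format are assumed from the snippet above:

# Throwaway helper to measure the polling interval from
# /var/log/glusterfs/cmd_history.log; assumes the
# "[YYYY-MM-DD HH:MM:SS.ffffff]" timestamp format shown above.
import re
from datetime import datetime

def status_detail_intervals(log_path, volume="vmstore"):
    pattern = re.compile(
        r"^\[([0-9: .-]+)\]\s+: volume status %s detail" % re.escape(volume))
    times = []
    with open(log_path) as log:
        for line in log:
            match = pattern.match(line)
            if match:
                times.append(
                    datetime.strptime(match.group(1), "%Y-%m-%d %H:%M:%S.%f"))
    # Minutes between consecutive 'volume status <volume> detail' calls;
    # for the snippet above these gaps come out at roughly 15.
    return [(later - earlier).total_seconds() / 60
            for earlier, later in zip(times, times[1:])]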

