Bug 1527304 - [RFE] Integrate with gluster eventing
Summary: [RFE] Integrate with gluster eventing
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhi-1.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: RHHI-V 1.5
Assignee: Sahina Bose
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1379309
Blocks: 1520833
Reported: 2017-12-19 07:25 UTC by Sahina Bose
Modified: 2018-11-08 05:38 UTC
CC: 6 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Red Hat Gluster Storage status in the Administration Portal is now reported using the gluster events framework instead of polling at regular intervals, improving performance and accuracy.
Clone Of: 1379309
Environment:
Last Closed: 2018-11-08 05:37:25 UTC
Embargoed:




Links
Red Hat Product Errata RHEA-2018:3523 (last updated 2018-11-08 05:38:50 UTC)

Description Sahina Bose 2017-12-19 07:25:50 UTC
+++ This bug was initially created as a clone of Bug #1379309 +++

Description of problem:

Integrate with gluster events to avoid polling the gluster CLI for status. The polling-based approach leads to stale data, because the polling interval is set to 5 minutes since some of the gluster queries are expensive.
With the introduction of gluster eventing in glusterfs 3.8, oVirt can use this feature to provide real-time updates on the state of the cluster.
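
For illustration, a minimal sketch of a webhook receiver for gluster events, in Python. It assumes the documented gluster event payload shape (a JSON object with "nodeid", "ts", "event" and "message" keys); the port and the print-based handling are illustrative and this is not the actual oVirt engine endpoint:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class GlusterEventHandler(BaseHTTPRequestHandler):
    # Accepts the JSON POSTs that the gluster events daemon sends to
    # registered webhooks.
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Each event carries the originating node id, a timestamp, the event
        # type (for example VOLUME_START) and event-specific data in "message".
        print(payload.get("event"), payload.get("nodeid"), payload.get("message"))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9090), GlusterEventHandler).serve_forever()

A receiver like this would be registered on the gluster nodes with gluster-eventsapi webhook-add http://<receiver-host>:9090/, after which glusterd pushes events to it instead of the consumer having to poll.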

Version-Release number of selected component (if applicable):


How reproducible:
NA



Additional info:

--- Additional comment from Sahina Bose on 2016-12-22 00:07:46 EST ---

This has a dependency on gluster >= 3.9; glusterfs 3.9 is only available in http://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.9/ and not yet in http://mirror.centos.org/centos/7/storage/x86_64/

--- Additional comment from Bronce McClain on 2017-01-04 09:53:15 EST ---

Yaniv, any chance for an exception here? Backport in a z?

--- Additional comment from Yaniv Lavi on 2017-01-16 07:51:37 EST ---

(In reply to Bronce McClain from comment #2)
> Yaniv, any chance for an exception here? Backport in a z?

Can we get some info on risk and testing scope?

--- Additional comment from Sahina Bose on 2017-01-23 03:33:28 EST ---

(In reply to Yaniv Dary from comment #3)
> (In reply to Bronce McClain from comment #2)
> > Yaniv, any chance for an exception here? Backport in a z?
> 
> Can we get some info on risk and testing scope?

This feature did not make it in, even though the patches were acked, as we are waiting for a gluster build with the eventing functionality (gluster 3.9 or gluster 3.10) in the CentOS Storage repo.

Risk - Patches have been added to listen to gluster events in the cluster via a gluster webhook endpoint in oVirt. The impact is limited to gluster events and state changes of gluster entities. The risk is minimal: we already poll to determine the state of gluster entities, and this feature makes state determination easier and will allow us to poll less frequently.

Testing scope - Ensure that events are received and state is reflected correctly. The flow when a new host is added to the cluster should be tested, to verify that the host is registered to send events to the engine webhook (see the sketch below).
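
As an illustration of that receive-path check (not the QE procedure used here), a synthetic event can be POSTed at the webhook endpoint. The URL below is a placeholder, and the event body only mimics the gluster payload shape:

import json
import urllib.request

# Placeholder URL: not the engine's real endpoint; substitute the webhook
# address the engine actually exposes.
WEBHOOK_URL = "http://engine.example.com:8080/gluster-events"

# Synthetic event mimicking the gluster payload shape (nodeid/ts/event/message);
# not a verbatim payload captured from glusterd.
event = {
    "nodeid": "node-1",
    "ts": 1500000000,
    "event": "VOLUME_START",
    "message": {"name": "testvol"},
}

req = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps(event).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print("webhook answered with HTTP", resp.status)

On the gluster side, whether each node is registered against the engine webhook can be checked with gluster-eventsapi status, which lists the configured webhooks.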

--- Additional comment from Red Hat Bugzilla Rules Engine on 2017-11-22 05:07:17 EST ---

The documentation text flag should only be set after 'doc text' field is provided. Please provide the documentation text and set the flag to '?' again.

Comment 3 SATHEESARAN 2018-05-09 11:43:01 UTC
Tested with RHV 4.2.3; glusterfs eventing works as expected.

However, at times the CLI-based polling mechanism also kicks in and takes longer to reflect changes in the RHV Manager UI. For example, killing a brick takes more than 5 minutes to reflect in the UI.

Comment 6 errata-xmlrpc 2018-11-08 05:37:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:3523

