Bug 1386516

Summary: [Eventing]: UUID is showing zeros in the event message for the peer probe operation.
Product: GlusterFS (Community)
Component: glusterd
Version: mainline
Hardware: x86_64
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: unspecified
Keywords: Triaged
Reporter: Atin Mukherjee <amukherj>
Assignee: Atin Mukherjee <amukherj>
CC: bsrirama, bugs, rhs-bugs, storage-qa-internal, vbellur
Type: Bug
Fixed In Version: glusterfs-3.10.0
Clone Of: Bug 1386172
Blocks: Bug 1386172, Bug 1387564
Last Closed: 2017-03-06 17:30:20 UTC

Description Atin Mukherjee 2016-10-19 07:24:50 UTC
+++ This bug was initially created as a clone of Bug #1386172 +++

Description of problem:
=======================
The UUID shows up as all zeros in the event message for the peer probe operation.

{u'message': {u'host': u'NODE_IP', u'uuid': u'00000000-0000-0000-0000-000000000000'}, u'event': u'PEER_CONNECT', u'ts': 1476787104, u'nodeid': u'50687898-88b3-4157-9f13-1c32266f83be'}

For the complete set of event messages, see the next comment.


Version-Release number of selected component (if applicable):
============================================================
glusterfs-3.8.4-2


How reproducible:
=================
Always


Steps to Reproduce:
===================
1. Have the gluster events framework set up on a two-node cluster
2. Run a peer probe operation
3. Check the event message generated by the peer probe command (example commands below)
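
For reference, the steps above map to commands along these lines (the node addresses follow the logs in the next comment; the webhook port is an assumption, and gluster-eventsapi ships with the glusterfs-events package):

    # On the cluster, register the webhook that will receive events
    # (a listener is assumed to be running at this URL)
    gluster-eventsapi webhook-add http://10.70.43.190:9000/listen
    gluster-eventsapi status

    # From one node, probe the other and watch the PEER_CONNECT
    # message delivered to the listener
    gluster peer probe 10.70.41.165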

Actual results:
===============
The UUID shows up as all zeros in the event message for the peer probe operation.



Expected results:
=================
The event message should carry the correct UUID of the probed peer.


Additional info:

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-10-18 06:46:51 EDT ---

This bug is automatically being proposed for the current release of Red Hat Gluster Storage 3 under active development, by setting the release flag 'rhgs-3.2.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Byreddy on 2016-10-18 06:53:53 EDT ---

Events messages for peer probe operation:
=========================================

{u'message': {u'host': u'10.70.41.165', u'uuid': u'00000000-0000-0000-0000-000000000000'}, u'event': u'PEER_CONNECT', u'ts': 1476787104, u'nodeid': u'50687898-88b3-4157-9f13-1c32266f83be'}

======================================================================================================================

10.70.43.190 - - [18/Oct/2016 16:03:44] "POST /listen HTTP/1.1" 200 -
{u'message': {u'host': u'dhcp43-190.lab.eng.blr.redhat.com', u'uuid': u'00000000-0000-0000-0000-000000000000'}, u'event': u'PEER_CONNECT', u'ts': 1476787104, u'nodeid': u'96a86504-3612-45db-8ba9-3c458881dcfe'}

======================================================================================================================

10.70.41.165 - - [18/Oct/2016 16:03:45] "POST /listen HTTP/1.1" 200 -
{u'message': {u'host': u'10.70.41.165'}, u'event': u'PEER_ATTACH', u'ts': 1476787105, u'nodeid': u'50687898-88b3-4157-9f13-1c32266f83be'}

======================================================================================================================

10.70.43.190 - - [18/Oct/2016 16:03:45] "POST /listen HTTP/1.1" 200 -

--- Additional comment from Atin Mukherjee on 2016-10-19 03:21:13 EDT ---

RCA:

When a new node is probed, peerctx->uuid is still NULL on the first RPC_CLNT_CONNECT, as it is yet to be populated. The subsequent (dis)connect events, however, carry the valid UUID.

The way to solve this is to populate the UUID conditionally: when EVENT_PEER_CONNECT is generated for the first RPC_CLNT_CONNECT resulting from a peer probe trigger, the event carries only the hostname, whereas in all other cases it carries the UUID along with the hostname. A sketch of this approach follows.
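
A minimal standalone sketch of that conditional approach, for illustration only: the type and function names here are assumptions, not glusterd's actual code (glusterd raises the event via gf_event() from its RPC notify handler):

    #include <stdio.h>
    #include <string.h>

    /* Illustrative stand-in for the 16-byte peer UUID glusterd tracks. */
    typedef unsigned char peer_uuid_t[16];

    /* Returns 1 while the UUID is still unpopulated (all zeros). */
    static int peer_uuid_is_null(const peer_uuid_t u)
    {
            static const peer_uuid_t null_uuid; /* zero-initialized */
            return memcmp(u, null_uuid, sizeof(peer_uuid_t)) == 0;
    }

    /* Attach the uuid field only once the peer's UUID is known; on the
     * first RPC_CLNT_CONNECT after a probe it is still all zeros. */
    static void emit_peer_connect(const char *host, const peer_uuid_t uuid)
    {
            if (peer_uuid_is_null(uuid))
                    printf("PEER_CONNECT host=%s\n", host);
            else
                    printf("PEER_CONNECT host=%s;uuid=%02x..\n", host,
                           (unsigned)uuid[0]);
    }

    int main(void)
    {
            peer_uuid_t first = {0};    /* probe-triggered connect */
            peer_uuid_t later = {0x50}; /* reconnect, UUID populated */

            emit_peer_connect("10.70.41.165", first);
            emit_peer_connect("10.70.41.165", later);
            return 0;
    }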

Comment 1 Worker Ant 2016-10-19 07:27:39 UTC
REVIEW: http://review.gluster.org/15678 (glusterd: conditionally pass uuid for EVENT_PEER_CONNECT) posted (#1) for review on master by Atin Mukherjee (amukherj)

Comment 2 Worker Ant 2016-10-19 07:46:30 UTC
REVIEW: http://review.gluster.org/15678 (glusterd: conditionally pass uuid for EVENT_PEER_CONNECT) posted (#2) for review on master by Atin Mukherjee (amukherj)

Comment 3 Atin Mukherjee 2016-10-19 07:47:23 UTC
A reviewer on the patch suggested not generating EVENT_PEER_CONNECT at all for a peer probe trigger, which sounds right to me.

Comment 4 Worker Ant 2016-10-19 08:37:16 UTC
REVIEW: http://review.gluster.org/15678 (glusterd: conditionally pass uuid for EVENT_PEER_CONNECT) posted (#3) for review on master by Atin Mukherjee (amukherj)

Comment 5 Worker Ant 2016-10-21 09:25:49 UTC
COMMIT: http://review.gluster.org/15678 committed in master by Atin Mukherjee (amukherj) 
------
commit 9565222c3bb17d124e3d62ec0ab987ce45999047
Author: Atin Mukherjee <amukherj>
Date:   Wed Oct 19 12:53:35 2016 +0530

    glusterd: conditionally pass uuid for EVENT_PEER_CONNECT
    
    When a new node is probed, on the first RPC_CLNT_CONNECT peerctx->uuid is set to
    NULL as the same is yet to be populated. However the subsequent (dis)connect
    events would be carrying the valid UUIDs.
    
    Solution is not to generate EVENT_PEER_CONNECT on a peer probe trigger as CLI is
    already going to take care of generating the same.
    
    Change-Id: I2f0de054ca09f12013a6afdd8ee158c0307796b9
    BUG: 1386516
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/15678
    Smoke: Gluster Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Samikshan Bairagya <samikshan>
    NetBSD-regression: NetBSD Build System <jenkins.org>
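
A rough standalone model of the committed behavior is below. The probe_in_progress flag is a hypothetical stand-in for however glusterd detects that the connect originated from a probe; the real change lives in glusterd's RPC notify path:

    #include <stdio.h>

    /* Model of the committed fix: when the connect notification results
     * directly from a peer probe, EVENT_PEER_CONNECT is skipped entirely,
     * since the CLI already generates an event for the probe. Reconnects
     * of an already-known peer still raise the event, and by then the
     * UUID is populated. */
    static void on_rpc_clnt_connect(const char *host, const char *uuid_str,
                                    int probe_in_progress)
    {
            if (probe_in_progress)
                    return; /* CLI covers this; avoids the all-zero UUID */

            printf("PEER_CONNECT host=%s;uuid=%s\n", host, uuid_str);
    }

    int main(void)
    {
            /* First connect triggered by 'gluster peer probe': no event. */
            on_rpc_clnt_connect("10.70.41.165",
                                "00000000-0000-0000-0000-000000000000", 1);

            /* Later reconnect: the UUID is known, the event carries it. */
            on_rpc_clnt_connect("10.70.41.165",
                                "50687898-88b3-4157-9f13-1c32266f83be", 0);
            return 0;
    }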

Comment 6 Shyamsundar 2017-03-06 17:30:20 UTC
This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed in glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/