Bug 1377288 - The GlusterFS Callback RPC-calls always use RPC/XID 42
Summary: The GlusterFS Callback RPC-calls always use RPC/XID 42
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: rpc
Version: 3.9
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Niels de Vos
Depends On: 1377097
 
Reported: 2016-09-19 11:39 UTC by Niels de Vos
Modified: 2016-12-06 06:00 UTC
CC: 3 users

Fixed In Version: glusterfs-3.9.0
Clone Of: 1377097
Last Closed: 2016-12-06 06:00:39 UTC
Regression: ---
Mount Type: ---
Documentation: ---


Description Niels de Vos 2016-09-19 11:39:54 UTC
+++ This bug was initially created as a clone of Bug #1377097 +++

Description of problem:
The RPC/XID for callbacks has been hardcoded to GF_UNIVERSAL_ANSWER. In
Wireshark these RPC-calls are marked as "RPC retransmissions" because of
the repeating RPC/XID. This is most confusing when verifying the
callbacks that the upcall framework sends. There is no way to see the
difference between real retransmissions and new callbacks.
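
Wireshark's ONC-RPC dissector flags a call whose XID it has already seen on the same conversation as a retransmission, so a constant XID makes every callback after the first look like a resend. The sketch below contrasts the two behaviours; it is illustrative only (the struct and function names are hypothetical, not the GlusterFS source), though GF_UNIVERSAL_ANSWER really is 42:

    /* xid_sketch.c -- illustrative only, not the GlusterFS implementation */
    #include <stdint.h>
    #include <stdio.h>

    #define GF_UNIVERSAL_ANSWER 42      /* the hardcoded XID named in this report */

    struct cbk_conn {
        uint32_t next_xid;              /* hypothetical per-connection counter */
    };

    /* Before the fix: every callback carries the same XID. */
    static uint32_t cbk_xid_buggy(struct cbk_conn *c)
    {
        (void)c;
        return GF_UNIVERSAL_ANSWER;
    }

    /* After the fix: the XID increases per connection, so each
     * (connection, XID) pair is unique and new callbacks no longer
     * match earlier ones. */
    static uint32_t cbk_xid_fixed(struct cbk_conn *c)
    {
        return ++c->next_xid;
    }

    int main(void)
    {
        struct cbk_conn c = { 0 };
        for (int i = 0; i < 3; i++)
            printf("buggy=%u fixed=%u\n",
                   (unsigned)cbk_xid_buggy(&c), (unsigned)cbk_xid_fixed(&c));
        return 0;
    }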


Version-Release number of selected component (if applicable):
all

How reproducible:
100%

Steps to Reproduce:
1. enable features.cache-invalidation on a volume
2. start a capture of network traffic (tcpdump -iany -s0 -w/tmp/out.pcap tcp)
3. create and delete some files on the mountpoint
4. inspect the .pcap file with Wireshark and filter on 'glusterfs.cbk'
5. notice the "RPC retransmission of #..." on all callback packets

Actual results:
Wireshark thinks all callback packets are retransmissions. This is not the case; the contents of the packets differ (except for the rpc.xid).

Expected results:
The rpc.xid should increase for each callback that gets sent (per client).

Additional info:

--- Additional comment from Worker Ant on 2016-09-18 14:15:34 CEST ---

REVIEW: http://review.gluster.org/15524 (rpc: increase RPC/XID with each callback) posted (#1) for review on master by Niels de Vos (ndevos)

--- Additional comment from Niels de Vos on 2016-09-18 14:16 CEST ---

tcpdump with the patch applied, showing the RPC/XID increasing per client (different ports, all on the same localhost)

--- Additional comment from Worker Ant on 2016-09-19 11:32:08 CEST ---

COMMIT: http://review.gluster.org/15524 committed in master by Raghavendra G (rgowdapp) 
------
commit e9b39527d5dcfba95c4c52a522c8ce1f4512ac21
Author: Niels de Vos <ndevos>
Date:   Fri Sep 16 17:29:21 2016 +0200

    rpc: increase RPC/XID with each callback
    
    The RPC/XID for callbacks has been hardcoded to GF_UNIVERSAL_ANSWER. In
    Wireshark these RPC-calls are marked as "RPC retransmissions" because of
    the repeating RPC/XID. This is most confusing when verifying the
    callbacks that the upcall framework sends. There is no way to see the
    difference between real retransmissions and new callbacks.
    
    This change was verified by create and removal of files through
    different Gluster clients. The RPC/XID is increased on a per connection
    (or client) base. The expectations of the RPC protocol are met this way.
    
    Change-Id: I2116bec0e294df4046d168d8bcbba011284cd0b2
    BUG: 1377097
    Signed-off-by: Niels de Vos <ndevos>
    Reviewed-on: http://review.gluster.org/15524
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Raghavendra G <rgowdapp>
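
The commit message states that the XID is increased per connection (or client). Callbacks can be generated concurrently, so a per-connection counter needs some form of synchronization; the sketch below shows one plausible way using C11 atomics. This is an assumption about a reasonable implementation, not a copy of the committed change:

    /* Hedged sketch: a thread-safe per-connection XID counter using
     * C11 atomics. The actual patch may synchronize differently. */
    #include <stdatomic.h>
    #include <stdint.h>

    struct cbk_conn {
        atomic_uint_fast32_t next_xid;  /* hypothetical field */
    };

    uint32_t cbk_next_xid(struct cbk_conn *c)
    {
        /* atomic_fetch_add returns the previous value; adding 1 makes
         * the first callback use XID 1 and wraps naturally at 2^32. */
        return (uint32_t)(atomic_fetch_add(&c->next_xid, 1) + 1);
    }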

Comment 1 Worker Ant 2016-09-19 11:42:03 UTC
REVIEW: http://review.gluster.org/15527 (rpc: increase RPC/XID with each callback) posted (#1) for review on release-3.9 by Niels de Vos (ndevos)

Comment 2 Worker Ant 2016-09-21 03:11:50 UTC
COMMIT: http://review.gluster.org/15527 committed in release-3.9 by Raghavendra G (rgowdapp) 
------
commit bf75a81c9732774d8ec3fbae34329481abb026f5
Author: Niels de Vos <ndevos>
Date:   Fri Sep 16 17:29:21 2016 +0200

    rpc: increase RPC/XID with each callback
    
    The RPC/XID for callbacks has been hardcoded to GF_UNIVERSAL_ANSWER. In
    Wireshark these RPC-calls are marked as "RPC retransmissions" because of
    the repeating RPC/XID. This is most confusing when verifying the
    callbacks that the upcall framework sends. There is no way to see the
    difference between real retransmissions and new callbacks.
    
    This change was verified by create and removal of files through
    different Gluster clients. The RPC/XID is increased on a per connection
    (or client) base. The expectations of the RPC protocol are met this way.
    
    > Change-Id: I2116bec0e294df4046d168d8bcbba011284cd0b2
    > BUG: 1377097
    > Signed-off-by: Niels de Vos <ndevos>
    > Reviewed-on: http://review.gluster.org/15524
    > Smoke: Gluster Build System <jenkins.org>
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    > CentOS-regression: Gluster Build System <jenkins.org>
    > Reviewed-by: Raghavendra G <rgowdapp>
    (cherry picked from commit e9b39527d5dcfba95c4c52a522c8ce1f4512ac21)
    
    Change-Id: I2116bec0e294df4046d168d8bcbba011284cd0b2
    BUG: 1377288
    Signed-off-by: Niels de Vos <ndevos>
    Reviewed-on: http://review.gluster.org/15527
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Raghavendra G <rgowdapp>

Comment 3 Aravinda VK 2016-10-27 05:22:08 UTC
glusterfs-3.9.0rc2 has been released[1] and packages are available for testing on different distributions[2].

[1] http://www.gluster.org/pipermail/maintainers/2016-October/001601.html
[2] http://www.gluster.org/pipermail/maintainers/2016-October/001605.html and http://www.gluster.org/pipermail/maintainers/2016-October/001606.html

Comment 4 Aravinda VK 2016-12-06 06:00:39 UTC
Gluster 3.9 GA has been released: http://blog.gluster.org/2016/11/announcing-gluster-3-9/

