Bug 1377097 - The GlusterFS Callback RPC-calls always use RPC/XID 42
Summary: The GlusterFS Callback RPC-calls always use RPC/XID 42
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: rpc
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Niels de Vos
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1377288 1377290 1377291
 
Reported: 2016-09-18 12:12 UTC by Niels de Vos
Modified: 2017-03-06 17:26 UTC (History)
3 users (show)

Fixed In Version: glusterfs-3.10.0
Clone Of:
: 1377288 1377290 1377291
Environment:
Last Closed: 2017-03-06 17:26:11 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
capture with NFS and Gluster traffic, showing the RPC/XID re-use (12.35 MB, application/octet-stream)
2016-09-18 12:12 UTC, Niels de Vos
RPC/XID increasing after a callback, own XID per client (7.91 KB, application/octet-stream)
2016-09-18 12:16 UTC, Niels de Vos

Description Niels de Vos 2016-09-18 12:12:21 UTC
Created attachment 1202185 [details]
capture with NFS and Gluster traffic, showing the RPC/XID re-use

Description of problem:
The RPC/XID for callbacks has been hardcoded to GF_UNIVERSAL_ANSWER. In
Wireshark these RPC-calls are marked as "RPC retransmissions" because of
the repeating RPC/XID. This is most confusing when verifying the
callbacks that the upcall framework sends. There is no way to see the
difference between real retransmissions and new callbacks.


Version-Release number of selected component (if applicable):
all

How reproducible:
100%

Steps to Reproduce:
1. enable features.cache-invalidation on a volume
2. start a capture of network traffic (tcpdump -i any -s 0 -w /tmp/out.pcap tcp)
3. create and delete some files on the mountpoint
4. inspect the .pcap file with Wireshark and filter on 'glusterfs.cbk'
5. notice the "RPC retransmission of #..." on all callback packets

Actual results:
Wireshark marks all callback packets as retransmissions. This is not the case; the contents of the packets differ (except for the rpc.xid).

Expected results:
The rpc.xid should increase for each callback that gets sent (per client).

Additional info:
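Below is a minimal sketch, in C, of the behaviour described above and of the fix expected: instead of stamping every callback with the same hardcoded XID, each client connection keeps its own counter and every callback draws a fresh XID from it. This is not the actual GlusterFS code; the struct and function names are hypothetical, and only the value of GF_UNIVERSAL_ANSWER (42) comes from the GlusterFS sources.

#include <stdint.h>
#include <stdio.h>

/* Value the callbacks were hardcoded to; defined as 42 in the GlusterFS
 * sources. */
#define GF_UNIVERSAL_ANSWER 42

/* Hypothetical per-connection state; GlusterFS would keep the real
 * counter in its RPC connection structures. */
struct cbk_conn {
        uint32_t next_xid;    /* next RPC/XID to hand out for callbacks */
};

/* Buggy behaviour: every callback reuses the same XID, so Wireshark
 * flags each one as a retransmission of the first. */
static uint32_t
cbk_xid_hardcoded(struct cbk_conn *conn)
{
        (void)conn;
        return GF_UNIVERSAL_ANSWER;
}

/* Expected behaviour: a per-connection counter, so each callback sent
 * on a given client connection gets a unique, increasing XID. */
static uint32_t
cbk_xid_next(struct cbk_conn *conn)
{
        return conn->next_xid++;
}

int
main(void)
{
        struct cbk_conn conn = { .next_xid = 1 };
        int i;

        for (i = 0; i < 3; i++)
                printf("hardcoded xid=%u   per-connection xid=%u\n",
                       (unsigned) cbk_xid_hardcoded(&conn),
                       (unsigned) cbk_xid_next(&conn));
        return 0;
}

In a multi-threaded RPC layer the counter increment would have to be atomic; the sketch leaves that out for brevity.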

Comment 1 Worker Ant 2016-09-18 12:15:34 UTC
REVIEW: http://review.gluster.org/15524 (rpc: increase RPC/XID with each callback) posted (#1) for review on master by Niels de Vos (ndevos)

Comment 2 Niels de Vos 2016-09-18 12:16:45 UTC
Created attachment 1202190 [details]
RPC/XID increasing after a callback, own XID per client

tcpdump capture with the patch applied, showing the RPC/XID increasing per client (different ports, all on localhost)

Comment 3 Worker Ant 2016-09-19 09:32:08 UTC
COMMIT: http://review.gluster.org/15524 committed in master by Raghavendra G (rgowdapp) 
------
commit e9b39527d5dcfba95c4c52a522c8ce1f4512ac21
Author: Niels de Vos <ndevos>
Date:   Fri Sep 16 17:29:21 2016 +0200

    rpc: increase RPC/XID with each callback
    
    The RPC/XID for callbacks has been hardcoded to GF_UNIVERSAL_ANSWER. In
    Wireshark these RPC-calls are marked as "RPC retransmissions" because of
    the repeating RPC/XID. This is most confusing when verifying the
    callbacks that the upcall framework sends. There is no way to see the
    difference between real retransmissions and new callbacks.
    
    This change was verified by create and removal of files through
    different Gluster clients. The RPC/XID is increased on a per connection
    (or client) base. The expectations of the RPC protocol are met this way.
    
    Change-Id: I2116bec0e294df4046d168d8bcbba011284cd0b2
    BUG: 1377097
    Signed-off-by: Niels de Vos <ndevos>
    Reviewed-on: http://review.gluster.org/15524
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Raghavendra G <rgowdapp>

Comment 4 Shyamsundar 2017-03-06 17:26:11 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/

