Bug 1582063 - rpc: The gluster auth version is always AUTH_GLUSTERFS_v2
Summary: rpc: The gluster auth version is always AUTH_GLUSTERFS_v2
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: rpc
Version: 4.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kotresh HR
QA Contact:
URL:
Whiteboard:
Depends On: 1579276
Blocks:
Reported: 2018-05-24 06:30 UTC by Kotresh HR
Modified: 2018-06-20 18:06 UTC (History)
1 user

Fixed In Version: glusterfs-v4.1.0
Clone Of: 1579276
Environment:
Last Closed: 2018-06-20 18:06:39 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Description Kotresh HR 2018-05-24 06:30:20 UTC
+++ This bug was initially created as a clone of Bug #1579276 +++

Description of problem:
New features such as ctime, which use auth version AUTH_GLUSTERFS_v3, are failing because the auth value is always AUTH_GLUSTERFS_v2.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Always reproducible on one machine, but not reproducible on others.

Steps to Reproduce:
1.  Create a gluster volume and start it.
2.  Enable the ctime and utime features on the volume.
3.  Mount the volume.
4.  Create a file and stat it to record the ctime.
5.  chmod the file and stat it again to record the ctime.
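The steps above can be sketched as a shell session. Steps 1–3 need a running Gluster server, so they are shown as comments (the volume name `testvol` and mount point are hypothetical); the ctime check in steps 4–5 works the same way on any POSIX filesystem, so it is demonstrated on a local temp file:

```shell
# Steps 1-3 (require a Gluster server; hypothetical volume/mount names):
#   gluster volume create testvol host:/bricks/b1
#   gluster volume start testvol
#   gluster volume set testvol features.ctime on
#   gluster volume set testvol features.utime on
#   mount -t glusterfs host:/testvol /mnt/testvol

# Steps 4-5: record ctime after create and after chmod.  On a gluster
# mount the file would live under /mnt/testvol; a local temp file shows
# the same observation technique.
f=$(mktemp)
before=$(stat -c %Z "$f")   # ctime at creation (GNU stat, seconds)
sleep 1.1                   # %Z has one-second resolution
chmod 600 "$f"              # a metadata change must bump ctime
after=$(stat -c %Z "$f")
echo "before=$before after=$after"
[ "$before" != "$after" ] && echo "ctime changed (expected)" \
                          || echo "ctime unchanged (the bug)"
rm -f "$f"
```

On an affected gluster mount, `before` and `after` come back equal even though chmod changed the file's metadata.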

Actual results:

The ctime is the same after create and after chmod.

Expected results:

The ctime should differ between create and chmod.

Additional info:

--- Additional comment from Worker Ant on 2018-05-17 05:45:06 EDT ---

REVIEW: https://review.gluster.org/20030 (rpc: Don't reset auth_value in disconnect) posted (#1) for review on master by Kotresh HR

--- Additional comment from Kotresh HR on 2018-05-17 05:48 EDT ---

Client logs where AUTH_GLUSTERFS_v2 is being chosen over AUTH_GLUSTERFS_v3

--- Additional comment from Kotresh HR on 2018-05-17 05:49 EDT ---

Brick logs

--- Additional comment from Worker Ant on 2018-05-24 01:26:43 EDT ---

COMMIT: https://review.gluster.org/20030 committed in master by "Raghavendra G" <rgowdapp> with a commit message- rpc: Don't reset auth_value in disconnect

The auth_value was being reset to AUTH_GLUSTERFS_v2
during rpc disconnect. It should not be reset. The
disconnect during the portmap request can race with
the handshake. If the handshake happens first and
the disconnect later, auth_value would be set to the
default value and never set back to the actual auth_value.

fixes: bz#1579276
Change-Id: Ib46c9e01a97f6defb3fd1e0423fdb4b899b4a361
Signed-off-by: Kotresh HR <khiremat>

Comment 1 Worker Ant 2018-05-24 06:50:16 UTC
REVISION POSTED: https://review.gluster.org/20076 (rpc: Don't reset auth_value in disconnect) posted (#2) for review on release-4.1 by Kotresh HR

Comment 2 Worker Ant 2018-05-24 06:50:19 UTC
REVIEW: https://review.gluster.org/20076 (rpc: Don't reset auth_value in disconnect) posted (#2) for review on release-4.1 by Kotresh HR

Comment 3 Worker Ant 2018-05-25 12:57:51 UTC
COMMIT: https://review.gluster.org/20076 committed in release-4.1 by "Shyamsundar Ranganathan" <srangana> with a commit message- rpc: Don't reset auth_value in disconnect

The auth_value was being reset to AUTH_GLUSTERFS_v2
during rpc disconnect. It should not be reset. The
disconnect during the portmap request can race with
the handshake. If the handshake happens first and
the disconnect later, auth_value would be set to the
default value and never set back to the actual auth_value.

Back port of
> BUG: 1579276
> Change-Id: Ib46c9e01a97f6defb3fd1e0423fdb4b899b4a361
> Signed-off-by: Kotresh HR <khiremat>
(cherry picked from commit 2d5b179d1a545f5b7ae8b1b2274769691dd3468f)

fixes: bz#1582063
Change-Id: Ib46c9e01a97f6defb3fd1e0423fdb4b899b4a361
Signed-off-by: Kotresh HR <khiremat>

Comment 4 Shyamsundar 2018-06-20 18:06:39 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/

