Bug 1501315

Summary: Gluster volume restart fails after exporting fuse sub-dir
Product: [Community] GlusterFS
Reporter: Amar Tumballi <atumball>
Component: protocol
Assignee: Amar Tumballi <atumball>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: urgent
Docs Contact:
Priority: urgent
Version: 3.12
CC: amukherj, bugs, msaini, rhs-bugs, storage-qa-internal
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.12.3
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1500720
Environment:
Last Closed: 2017-11-17 07:37:49 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Amar Tumballi 2017-10-12 11:10:00 UTC
+++ This bug was initially created as a clone of Bug #1500720 +++

Description of problem:
Gluster volume restart fails after exporting a fuse sub-directory.

How reproducible:


Steps to Reproduce:
1. Create a volume.
2. Mount the volume on a client via fuse.
3. Create a directory inside the volume.
4. Unmount the volume from the client.
5. Export the subdirectory created in step 3:
   # gluster volume set test auth.allow "/dir1(10.70.37.192),/(*)"
6. Mount the subdirectory on the client:
   # mount -t glusterfs dhcp42-125.lab.eng.blr.redhat.com:/test/dir1 /mnt/test_vol/
7. Stop the volume.
8. Start the volume.
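The steps above can be consolidated into one shell session. This is a sketch that assumes a working Gluster deployment; the volume name `test`, brick path `/gluster/brick4/3`, client IP `10.70.37.192`, and server hostname are taken from this report and would differ elsewhere:

```shell
# On the server: create and start a plain volume (brick path is illustrative)
gluster volume create test dhcp42-125.lab.eng.blr.redhat.com:/gluster/brick4/3
gluster volume start test

# On the client: mount via fuse, create the sub-directory, unmount
mount -t glusterfs dhcp42-125.lab.eng.blr.redhat.com:/test /mnt/test_vol/
mkdir /mnt/test_vol/dir1
umount /mnt/test_vol

# On the server: export the sub-directory to one client, keep / open to all
gluster volume set test auth.allow "/dir1(10.70.37.192),/(*)"

# On the client: mount the exported sub-directory
mount -t glusterfs dhcp42-125.lab.eng.blr.redhat.com:/test/dir1 /mnt/test_vol/

# On the server: restart the volume -- the second command fails with
# "Commit failed on localhost" on affected versions
gluster volume stop test
gluster volume start test
```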

Actual results:
Volume restart fails after exporting fuse sub-dir

# gluster v start test
volume start: test: failed: Commit failed on localhost. Please check log file for details.

Brick log:
less /var/log/glusterfs/bricks/gluster-brick4-3.log

[2017-10-11 11:00:05.854770] E [MSGID: 115035] [server.c:448:_check_for_auth_option] 0-/gluster/brick4/3: internet address '/dir1(10.70.37.192)' does not conform to standards.
[2017-10-11 11:00:05.854775] E [MSGID: 115001] [server.c:481:validate_auth_options] 0-test-server: volume '/gluster/brick4/3' defined as subvolume, but no authentication defined for the same
[2017-10-11 11:00:05.854782] E [MSGID: 101019] [xlator.c:486:xlator_init] 0-test-server: Initialization of volume 'test-server' failed, review your volfile again
[2017-10-11 11:00:05.854788] E [MSGID: 101066] [graph.c:324:glusterfs_graph_init] 0-test-server: initializing translator failed
[2017-10-11 11:00:05.854807] E [MSGID: 101176] [graph.c:680:glusterfs_graph_activate] 0-graph: init failed
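The first brick-log error points at the root cause: the brick's init-time check (`_check_for_auth_option`) validates the whole auth.allow entry `/dir1(10.70.37.192)` as if it were a plain internet address, rather than first splitting it into per-subdirectory address lists the way glusterd's option validation does. A minimal Python sketch of that split, for illustration only (the actual GlusterFS code is C, and this function name and structure are hypothetical):

```python
import re

def parse_subdir_auth(value):
    """Split an auth.allow value such as "/dir1(10.70.37.192),/(*)" into a
    {subdir: [addresses]} map.  Entries of the form "/subdir(addr1|addr2)"
    scope the addresses to that sub-directory; a bare address applies to
    the volume root "/".  Hypothetical re-implementation for illustration.
    """
    entries = {}
    for entry in value.split(","):
        entry = entry.strip()
        m = re.fullmatch(r"(/[^()]*)\(([^()]*)\)", entry)
        if m:
            # Sub-directory entry: validate each address separately,
            # not the whole "/subdir(addr)" string as one address.
            entries.setdefault(m.group(1), []).extend(m.group(2).split("|"))
        else:
            entries.setdefault("/", []).append(entry)
    return entries

# The value from step 5 of the reproducer:
print(parse_subdir_auth("/dir1(10.70.37.192),/(*)"))
# {'/dir1': ['10.70.37.192'], '/': ['*']}
```

With this split applied first, only `10.70.37.192` and `*` would reach the address check; the unsplit string `/dir1(10.70.37.192)` is what fails the "does not conform to standards" test in the log above.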


glusterd.log
tailf /var/log/glusterfs/glusterd.log

[2017-10-11 11:26:07.571280] I [glusterd-utils.c:5872:glusterd_brick_start] 0-management: starting a fresh brick process for brick /gluster/brick4/3
[2017-10-11 11:26:07.586304] E [MSGID: 106005] [glusterd-utils.c:5878:glusterd_brick_start] 0-management: Unable to start brick dhcp42-125.lab.eng.blr.redhat.com:/gluster/brick4/3
[2017-10-11 11:26:07.586402] E [MSGID: 106123] [glusterd-mgmt.c:317:gd_mgmt_v3_commit_fn] 0-management: Volume start commit failed.
[2017-10-11 11:26:07.586428] E [MSGID: 106123] [glusterd-mgmt.c:1456:glusterd_mgmt_v3_commit] 0-management: Commit failed for operation Start on local node
[2017-10-11 11:26:07.586445] E [MSGID: 106123] [glusterd-mgmt.c:2047:glusterd_mgmt_v3_initiate_all_phases] 0-management: Commit Op Failed


Expected results:
Volume restart should succeed after exporting fuse sub-dir

Comment 1 Worker Ant 2017-10-12 11:11:34 UTC
REVIEW: https://review.gluster.org/18509 (protocol-auth: use the proper validation method) posted (#2) for review on release-3.12 by Amar Tumballi (amarts)

Comment 2 Worker Ant 2017-10-12 11:26:37 UTC
REVIEW: https://review.gluster.org/18509 (protocol-auth: use the proper validation method) posted (#3) for review on release-3.12 by Amar Tumballi (amarts)

Comment 4 Worker Ant 2017-10-25 11:35:29 UTC
COMMIT: https://review.gluster.org/18509 committed in release-3.12 by jiffin tony Thottan (jthottan) 
------
commit d7006089177d4ff73674ebe84ace651a3457f358
Author: Amar Tumballi <amarts>
Date:   Wed Oct 11 17:33:20 2017 +0530

    protocol-auth: use the proper validation method
    
    Currently, server protocol's init and glusterd's option
    validation methods are different, causing an issue. They
    should be same for having consistent behavior
    
    Change-Id: Ibbf9a18c7192b2d77f9b7675ae7da9b8d2fe5de4
    BUG: 1501315
    Signed-off-by: Amar Tumballi <amarts>

Comment 5 Amar Tumballi 2017-11-17 07:37:49 UTC
Fixed in glusterfs release version 3.12.3.

Comment 6 Jiffin 2017-11-29 05:52:17 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.12.3, please open a new bug report.

glusterfs-3.12.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-devel/2017-November/053983.html
[2] https://www.gluster.org/pipermail/gluster-users/