Bug 1500720 - Gluster Volume restart fails after exporting fuse sub-dir
Summary: Gluster Volume restart fails after exporting fuse sub-dir
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: fuse
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.3.1
Assignee: Amar Tumballi
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On:
Blocks: 1475688
 
Reported: 2017-10-11 11:32 UTC by Manisha Saini
Modified: 2017-11-29 03:30 UTC
CC: 6 users

Fixed In Version: glusterfs-3.8.4-50
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1501315
Environment:
Last Closed: 2017-11-29 03:30:36 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:3276 0 normal SHIPPED_LIVE glusterfs bug fix update 2017-11-29 08:28:52 UTC

Description Manisha Saini 2017-10-11 11:32:57 UTC
Description of problem:
Gluster Volume restart fails after exporting a fuse sub-dir

Version-Release number of selected component (if applicable):
glusterfs-fuse-3.8.4-48.el7rhgs.x86_64

How reproducible:


Steps to Reproduce:
1. Create a volume.
2. Mount the volume on a client via fuse.
3. Create a directory inside the volume.
4. Unmount the volume from the client.
5. Export the subdirectory created in step 3:
# gluster volume set test auth.allow "/dir1(10.70.37.192),/(*)"
6. Mount the subdirectory on the client:
# mount -t glusterfs dhcp42-125.lab.eng.blr.redhat.com:/test/dir1 /mnt/test_vol/
7. Stop the volume.
8. Start the volume.
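The auth.allow value used in step 5 packs per-sub-dir access rules into one comma-separated string of `subdir(address-list)` tokens, with `|` separating multiple addresses inside a token. A minimal sketch of how such a value splits into (subdir, addresses) pairs, assuming that syntax; `parse_auth_allow` is a hypothetical helper for illustration, not the actual gluster parser:

```python
import re

def parse_auth_allow(value):
    # Split "/dir1(10.70.37.192),/(*)" into (subdir, [addresses]) pairs.
    # Assumed syntax: comma-separated "subdir(addr1|addr2)" tokens; a token
    # without a sub-dir qualifier is treated as applying to "/".
    entries = []
    for token in value.split(","):
        m = re.fullmatch(r"(?P<subdir>/[^(]*)\((?P<addrs>[^)]*)\)", token.strip())
        if m:
            entries.append((m.group("subdir"), m.group("addrs").split("|")))
        else:
            entries.append(("/", token.strip().split("|")))
    return entries

print(parse_auth_allow("/dir1(10.70.37.192),/(*)"))
# → [('/dir1', ['10.70.37.192']), ('/', ['*'])]
```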

Actual results:
Volume restart fails after exporting fuse sub-dir

# gluster v start test
volume start: test: failed: Commit failed on localhost. Please check log file for details.

Brick log-
less /var/log/glusterfs/bricks/gluster-brick4-3.log

[2017-10-11 11:00:05.854770] E [MSGID: 115035] [server.c:448:_check_for_auth_option] 0-/gluster/brick4/3: internet address '/dir1(10.70.37.192)' does not conform to standards.
[2017-10-11 11:00:05.854775] E [MSGID: 115001] [server.c:481:validate_auth_options] 0-test-server: volume '/gluster/brick4/3' defined as subvolume, but no authentication defined for the same
[2017-10-11 11:00:05.854782] E [MSGID: 101019] [xlator.c:486:xlator_init] 0-test-server: Initialization of volume 'test-server' failed, review your volfile again
[2017-10-11 11:00:05.854788] E [MSGID: 101066] [graph.c:324:glusterfs_graph_init] 0-test-server: initializing translator failed
[2017-10-11 11:00:05.854807] E [MSGID: 101176] [graph.c:680:glusterfs_graph_activate] 0-graph: init failed
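The first brick-log error shows the server translator rejecting the whole token '/dir1(10.70.37.192)' as an internet address, i.e. the address check ran before the sub-dir qualifier was stripped, so init failed and the brick never came up. A sketch of that distinction, using a simplified wildcard/IPv4 check; `looks_like_address` and `strip_subdir` are hypothetical illustrations, not the actual `_check_for_auth_option` code or the upstream fix:

```python
import re

def looks_like_address(s):
    # Simplified check: accept "*" or a dotted IPv4 address only.
    return s == "*" or re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", s) is not None

def strip_subdir(token):
    # "/dir1(10.70.37.192)" -> "10.70.37.192"; plain tokens pass through.
    m = re.fullmatch(r"/[^(]*\(([^)]*)\)", token)
    return m.group(1) if m else token

raw = "/dir1(10.70.37.192)"
print(looks_like_address(raw))                # → False, the failure in the log
print(looks_like_address(strip_subdir(raw)))  # → True, once the sub-dir is stripped
```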


glusterd.log
tailf /var/log/glusterfs/glusterd.log

[2017-10-11 11:26:07.571280] I [glusterd-utils.c:5872:glusterd_brick_start] 0-management: starting a fresh brick process for brick /gluster/brick4/3
[2017-10-11 11:26:07.586304] E [MSGID: 106005] [glusterd-utils.c:5878:glusterd_brick_start] 0-management: Unable to start brick dhcp42-125.lab.eng.blr.redhat.com:/gluster/brick4/3
[2017-10-11 11:26:07.586402] E [MSGID: 106123] [glusterd-mgmt.c:317:gd_mgmt_v3_commit_fn] 0-management: Volume start commit failed.
[2017-10-11 11:26:07.586428] E [MSGID: 106123] [glusterd-mgmt.c:1456:glusterd_mgmt_v3_commit] 0-management: Commit failed for operation Start on local node
[2017-10-11 11:26:07.586445] E [MSGID: 106123] [glusterd-mgmt.c:2047:glusterd_mgmt_v3_initiate_all_phases] 0-management: Commit Op Failed


Expected results:
Volume restart should succeed after exporting a fuse sub-dir.

Additional info:

Comment 2 Amar Tumballi 2017-10-11 12:21:37 UTC
Upstream patch to fix the issue: https://review.gluster.org/#/c/18489/

This is a blocker for the Subdir mount feature, hence requesting Acks!

Comment 3 Amar Tumballi 2017-10-16 08:57:07 UTC
 https://code.engineering.redhat.com/gerrit/120558

Comment 5 Manisha Saini 2017-10-17 04:54:34 UTC
Verified this bug with glusterfs-3.8.4-50.el7rhgs.x86_64.

Steps:
1. Create a 4*3 Distributed-Replicate volume.
2. Mount the volume on the client via fuse:
# mount -t glusterfs dhcp42-125.lab.eng.blr.redhat.com:ganeshavol2/ /mnt/volume_mount1

3. Create a sub-dir inside the mount point:
[root@dhcp37-192 volume_mount1]# ls
[root@dhcp37-192 volume_mount1]# mkdir dir1

4. Set auth.allow on the volume for the sub-dir:
# gluster v set ganeshavol2 auth.allow "/dir1(10.70.37.192),/(*)"
volume set: success

5. Mount the sub-dir on the client:
[root@dhcp37-192 mnt]# mount -t glusterfs dhcp42-125.lab.eng.blr.redhat.com:ganeshavol2/dir1 /mnt/sub-dir2
[root@dhcp37-192 mnt]# cd /mnt/sub-dir2
[root@dhcp37-192 sub-dir2]# ls
[root@dhcp37-192 sub-dir2]# touch f1
[root@dhcp37-192 sub-dir2]# touch f2
[root@dhcp37-192 sub-dir2]# touch f3
[root@dhcp37-192 sub-dir2]# mkdir dir1
[root@dhcp37-192 sub-dir2]# mkdir dir2
[root@dhcp37-192 sub-dir2]# mkdir dir3

6. Stop and start the volume, then check volume status:

 
[root@dhcp42-125 brick4]# gluster v stop ganeshavol2
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: ganeshavol2: success
[root@dhcp42-125 brick4]# gluster v start ganeshavol2
volume start: ganeshavol2: success


[root@dhcp42-125 brick4]# gluster v status ganeshavol2
Status of volume: ganeshavol2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick dhcp42-125.lab.eng.blr.redhat.com:/gluster/brick4/2   49155     0          Y       10875
Brick dhcp42-127.lab.eng.blr.redhat.com:/gluster/brick4/2   49155     0          Y       27736
Brick dhcp42-129.lab.eng.blr.redhat.com:/gluster/brick4/2   49155     0          Y       29969
Brick dhcp42-119.lab.eng.blr.redhat.com:/gluster/brick4/2   49155     0          Y       28521
Brick dhcp42-125.lab.eng.blr.redhat.com:/gluster/brick5/2   49156     0          Y       10894
Brick dhcp42-127.lab.eng.blr.redhat.com:/gluster/brick5/2   49156     0          Y       27755
Brick dhcp42-129.lab.eng.blr.redhat.com:/gluster/brick5/2   49156     0          Y       29988
Brick dhcp42-119.lab.eng.blr.redhat.com:/gluster/brick5/2   49156     0          Y       28540
Brick dhcp42-125.lab.eng.blr.redhat.com:/gluster/brick6/2   49157     0          Y       10913
Brick dhcp42-127.lab.eng.blr.redhat.com:/gluster/brick6/2   49157     0          Y       27774
Brick dhcp42-129.lab.eng.blr.redhat.com:/gluster/brick6/2   49157     0          Y       30007
Brick dhcp42-119.lab.eng.blr.redhat.com:/gluster/brick6/2   49157     0          Y       28559
Self-heal Daemon on localhost                               N/A       N/A        Y       10933
Self-heal Daemon on dhcp42-127.lab.eng.blr.redhat.com       N/A       N/A        Y       27794
Self-heal Daemon on dhcp42-129.lab.eng.blr.redhat.com       N/A       N/A        Y       30027
Self-heal Daemon on dhcp42-119.lab.eng.blr.redhat.com       N/A       N/A        Y       28579
 
Task Status of Volume ganeshavol2
------------------------------------------------------------------------------
There are no active volume tasks

Comment 8 errata-xmlrpc 2017-11-29 03:30:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3276

