Bug 1375617 - [SSL] Mount path becomes stale, when fuse mounting the volume from the unauthenticated host
Summary: [SSL] Mount path becomes stale, when fuse mounting the volume from the unauthenticated host
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: core
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Vijay Bellur
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-09-13 14:18 UTC by SATHEESARAN
Modified: 2018-02-06 06:14 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-02-06 06:14:43 UTC
Target Upstream Version:



Description SATHEESARAN 2016-09-13 14:18:41 UTC
Description of problem:
-----------------------
Configure SSL on both the management path and the I/O path.
Mount the volume from a host that does not have proper authentication.

In this case, FUSE mounting the volume fails but leaves the mount path behind in a stale state.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHEL 7.2 & RHGS 3.1.3 ( glusterfs-3.7.9-12.el7rhgs )

How reproducible:
-----------------
Always

Steps to Reproduce:
--------------------
1. Configure SSL on the mgmt path and the I/O path
2. Mount the volume from an unauthenticated host (one that does not have glusterfs.pem, glusterfs.ca, or glusterfs.key); a sketch of both steps follows
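
A minimal sketch of the two steps above, assuming the volume name arbol and the mount point from the report, and the standard /etc/ssl certificate locations. The CN value, the /tmp/certs staging directory, and the exact option sequence are illustrative, not the only way to set this up:

# On every server (and each trusted client): key + self-signed cert
openssl genrsa -out /etc/ssl/glusterfs.key 2048
openssl req -new -x509 -key /etc/ssl/glusterfs.key \
        -subj "/CN=$(hostname)" -days 365 -out /etc/ssl/glusterfs.pem

# glusterfs.ca must contain the concatenated glusterfs.pem of all peers
# (copy the peers' certs into /tmp/certs first)
cat /tmp/certs/*.pem > /etc/ssl/glusterfs.ca

# I/O-path SSL (restart the volume for the options to take effect)
gluster volume set arbol client.ssl on
gluster volume set arbol server.ssl on

# mgmt-path SSL: keyed off the presence of this file on every node
touch /var/lib/glusterd/secure-access

# Step 2, on the unauthenticated client: remove all certs, then mount
rm -f /etc/ssl/glusterfs.*
mount -t glusterfs dhcp37-104.lab.eng.blr.redhat.com:/arbol /mnt/arbvol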

Actual results:
----------------
Mount fails and the mount path becomes stale

Expected results:
-----------------
The mount path should remain usable after the mount fails; the failed mount should be cleaned up rather than left stale.
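
Until this is fixed, a stale path left behind this way can be detected and cleared by hand; a small sketch, using the /mnt/arbvol mount point from the report:

# A path left behind by a failed FUSE mount reports ENOTCONN on access
stat /mnt/arbvol 2>&1 | grep -q 'Transport endpoint is not connected' &&
    umount /mnt/arbvol     # fall back to 'umount -l' if this one hangs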

Additional info:
----------------
Refer to comment 1.

Comment 1 SATHEESARAN 2016-09-13 14:20:45 UTC
The following logs are from the client (RHEL 7.2), which does not possess any credentials to mount the volume:

[root@ ~]# rm -rf /etc/ssl/glusterfs.*

[root@ ~]# df -Th
Filesystem                          Type      Size  Used Avail Use% Mounted on
/dev/mapper/rhel_rhs--client15-root xfs        50G  1.2G   49G   3% /
devtmpfs                            devtmpfs  7.8G     0  7.8G   0% /dev
tmpfs                               tmpfs     7.8G     0  7.8G   0% /dev/shm
tmpfs                               tmpfs     7.8G  8.9M  7.8G   1% /run
tmpfs                               tmpfs     7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/rhel_rhs--client15-home xfs       1.8T   33M  1.8T   1% /home
/dev/sda1                           xfs      1014M  139M  876M  14% /boot
tmpfs                               tmpfs     1.6G     0  1.6G   0% /run/user/0

[root@ ~]# mount.glusterfs dhcp37-104.lab.eng.blr.redhat.com:/arbol /mnt/arbvol
Mount failed. Please check the log file for more details.

[root@ ~]# df -Th
df: ‘/mnt/arbvol’: Transport endpoint is not connected
Filesystem                          Type      Size  Used Avail Use% Mounted on
/dev/mapper/rhel_rhs--client15-root xfs        50G  1.2G   49G   3% /
devtmpfs                            devtmpfs  7.8G     0  7.8G   0% /dev
tmpfs                               tmpfs     7.8G     0  7.8G   0% /dev/shm
tmpfs                               tmpfs     7.8G  8.9M  7.8G   1% /run
tmpfs                               tmpfs     7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/rhel_rhs--client15-home xfs       1.8T   33M  1.8T   1% /home
/dev/sda1                           xfs      1014M  139M  876M  14% /boot
tmpfs                               tmpfs     1.6G     0  1.6G   0% /run/user/0

[root@rhs-client15 ~]# ps aux | grep arbol
root     24036  0.0  0.0 112648   956 pts/0    S+   18:41   0:00 grep --color=auto arbol

[root@rhs-client15 ~]# ls /mnt
ls: cannot access /mnt/arbvol: Transport endpoint is not connected
arbvol

[root@rhs-client15 ~]# umount /mnt/arbvol

Comment 2 SATHEESARAN 2016-09-13 14:24:25 UTC
Logs from fuse mount
<snip>
[2016-09-13 13:11:13.229537] I [MSGID: 100030] [glusterfsd.c:2338:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.9 (args: /usr/sbin/glusterfs --volfile-server=dhcp37-104.lab.eng.blr.redhat.com --volfile-id=/arbol /mnt/arbvol)
[2016-09-13 13:11:13.235618] I [socket.c:4057:socket_init] 0-glusterfs: SSL support for glusterd is ENABLED
[2016-09-13 13:11:13.235888] E [socket.c:4135:socket_init] 0-glusterfs: failed to open /etc/ssl/dhparam.pem, DH ciphers are disabled
[2016-09-13 13:11:13.236027] E [socket.c:4205:socket_init] 0-glusterfs: could not load our cert
[2016-09-13 13:11:13.236037] W [rpc-transport.c:359:rpc_transport_load] 0-rpc-transport: 'socket' initialization failed
[2016-09-13 13:11:13.236156] W [rpc-clnt.c:1008:rpc_clnt_connection_init] 0-glusterfs: loading of new rpc-transport failed
[2016-09-13 13:11:13.236173] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-glusterfs: size=588 max=0 total=0
[2016-09-13 13:11:13.236187] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-glusterfs: size=124 max=0 total=0
[2016-09-13 13:11:13.236212] W [glusterfsd-mgmt.c:2136:glusterfs_mgmt_init] 0-glusterfs: failed to create rpc clnt
</snip>
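
Two different errors are visible here: the dhparam.pem message only disables DH ciphers and appears harmless, while "could not load our cert" is the failure that aborts transport setup. If desired, the first message can presumably be silenced by generating DH parameters (a sketch; unrelated to the stale-mount bug itself):

openssl dhparam -out /etc/ssl/dhparam.pem 2048   # generation can take a while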

Comment 5 Mohit Agrawal 2016-09-20 01:57:54 UTC
Hi,

  I don't think the issue will be reproducible with the latest gluster packages on 3.7 or 3.8.

Regards
Mohit Agrawal

Comment 6 SATHEESARAN 2016-10-19 08:39:22 UTC
Tested with an RHGS 3.2.0 interim build (glusterfs-3.8.4-2.el7rhgs) on RHEL 7.2.

I am not seeing this issue any more. See also comment 5.

It looks like the issue was fixed in upstream glusterfs 3.8. If that's the case, I would suggest targeting this bug for RHGS 3.2.0.

@Atin, what do you suggest?

Comment 7 SATHEESARAN 2016-10-19 08:59:23 UTC
(In reply to SATHEESARAN from comment #6)
> Tested with an RHGS 3.2.0 interim build (glusterfs-3.8.4-2.el7rhgs) on
> RHEL 7.2.
> 
> I am not seeing this issue any more. See also comment 5.
> 
> It looks like the issue was fixed in upstream glusterfs 3.8. If that's the
> case, I would suggest targeting this bug for RHGS 3.2.0.
> 
> @Atin, what do you suggest?

Apologies for the confusion here.
In the test above, I had not enabled SSL on the client by creating /var/lib/glusterd/secure-access.

With that file in place, I am hitting this problem again.

It is better to fix this in a future release, as the issue is not severe and this is a negative test case.
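
For clarity, mgmt-path SSL on a client is toggled purely by the presence of that file; a minimal sketch of what was missing in the comment 6 setup:

# Enable SSL for the client's connection to glusterd (mgmt path)
touch /var/lib/glusterd/secure-access
# Deleting the file disables mgmt-path SSL again for subsequent mounts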

