Bug 1283563 - libgfapi to support set_volfile-server-transport type "unix"
Summary: libgfapi to support set_volfile-server-transport type "unix"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs-devel
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.1.2
Assignee: Mohamed Ashiq
QA Contact: Byreddy
URL:
Whiteboard:
Depends On: 1279739
Blocks: 1260783 1283038 1283040
 
Reported: 2015-11-19 10:06 UTC by Mohamed Ashiq
Modified: 2016-03-01 05:56 UTC
CC List: 10 users

Fixed In Version: glusterfs-3.7.5-9
Doc Type: Bug Fix
Doc Text:
Previously, when glusterd was bound to a specific IP address, libgfapi clients failed to fetch the volfile because the client hard-coded localhost as the volfile server. With this fix, libgfapi uses rpc_transport_unix_options_build so that it can fetch the volfile over a Unix domain socket. Fetching the volfile with the set_volfile-server transport type "unix" now works even when glusterd is bound to a specific IP address.
Clone Of: 1279739
Environment:
Last Closed: 2016-03-01 05:56:08 UTC
Embargoed:


Attachments
test to reproduce the issue (1.36 KB, text/x-csrc)
2015-11-19 10:06 UTC, Mohamed Ashiq


Links
Red Hat Product Errata RHBA-2016:0193 (normal, SHIPPED_LIVE): Red Hat Gluster Storage 3.1 update 2, last updated 2016-03-01 10:20:36 UTC

Description Mohamed Ashiq 2015-11-19 10:06:58 UTC
Created attachment 1096576 [details]
test to reproduce the issue

+++ This bug was initially created as a clone of Bug #1279739 +++

Description of problem:
libgfapi does not support Unix domain sockets. If glusterd is bound to a specific IP address, libgfapi clients (such as heal) fail because the volfile server IP is hard-coded to localhost. Using a Unix domain socket lets libgfapi avoid this failure.


Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Download the attached program.
2. Find the readme within the file.
3. Follow the instructions. (A minimal sketch of the reproducer's libgfapi calls appears below.)
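
For reference, here is a minimal sketch of what such a reproducer looks like, assuming the standard libgfapi calls. This is not the attached program; the volume name "falcon" (taken from the QA run below) and the glusterd socket path /var/run/glusterd.socket are illustrative assumptions.

/* Minimal libgfapi init over a Unix domain socket (illustrative sketch).
 * Build flags may vary; typically:
 *   gcc glfs_unix_test.c -o glfs_unix_test $(pkg-config --cflags --libs glusterfs-api)
 */
#include <stdio.h>
#include <glusterfs/api/glfs.h>

int
main (void)
{
        /* Volume name "falcon" matches the QA run below; substitute yours. */
        glfs_t *fs = glfs_new ("falcon");
        if (!fs) {
                fprintf (stderr, "glfs_new failed\n");
                return 1;
        }

        /* With transport "unix", the "host" argument is the glusterd
         * socket path and the port is ignored (pass 0). The path below
         * is a conventional default; adjust to your installation. */
        if (glfs_set_volfile_server (fs, "unix",
                                     "/var/run/glusterd.socket", 0) < 0) {
                fprintf (stderr, "glfs_set_volfile_server failed\n");
                glfs_fini (fs);
                return 1;
        }

        /* Before the fix, this fails when glusterd is bound to a
         * specific IP; with the fix it succeeds over the socket. */
        if (glfs_init (fs) < 0) {
                fprintf (stderr, "Failed to initialize volume (falcon)\n");
                glfs_fini (fs);
                return 1;
        }

        printf ("Init successfully done\n");
        glfs_fini (fs);
        return 0;
}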

--- Additional comment from Mohamed Ashiq on 2015-11-10 07:44:52 EST ---

patch:

http://review.gluster.org/12563

--- Additional comment from Vijay Bellur on 2015-11-16 01:26:09 EST ---

REVIEW: http://review.gluster.org/12563 (libgfapi: To support set_volfile-server-transport type "unix") posted (#6) for review on master by Mohamed Ashiq Liyazudeen (mliyazud)

--- Additional comment from Vijay Bellur on 2015-11-17 10:46:51 EST ---

COMMIT: http://review.gluster.org/12563 committed in master by Shyamsundar Ranganathan (srangana) 
------
commit f71c08b8d592fa6125fee57fb73f774ce522756c
Author: Mohamed Ashiq <mliyazud>
Date:   Tue Nov 10 13:18:41 2015 +0530

    libgfapi: To support set_volfile-server-transport type "unix"
    
    This patch helps libgfapi get the volfile over a Unix domain socket.
    Run the attachment in this bug to test.
    The patch checks whether the glfs_set_volfile_server transport is of
    type "unix"; if it is, it uses rpc_transport_unix_options_build to
    fetch the volfile.
    
    Change-Id: Ifd5d1e7c0d8cc9a906c3c3355b8977141e892a2f
    BUG: 1279739
    Signed-off-by: Mohamed Ashiq <mliyazud>
    Signed-off-by: Humble Devassy Chirammal <hchiramm>
    Reviewed-on: http://review.gluster.org/12563
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Niels de Vos <ndevos>
    Reviewed-by: Poornima G <pgurusid>
    Reviewed-by: Raghavendra Talur <rtalur>
    Reviewed-by: Shyamsundar Ranganathan <srangana>
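
In outline, the check the commit message describes looks roughly like the fragment below. This is a hedged sketch, not the actual patch: the surrounding variables and the exact rpc_transport_*_options_build signatures are assumptions based on the commit message.

/* Outline of the transport check described above (fragment, not the
 * actual patch; names and signatures are assumptions). */
dict_t *options = NULL;
int     ret     = -1;

if (cmd_args->volfile_server_transport &&
    !strcmp (cmd_args->volfile_server_transport, "unix")) {
        /* "unix": the configured server string is a socket path, so no
         * hostname/IP resolution is involved and a glusterd bound to a
         * specific IP address is no longer a problem. */
        ret = rpc_transport_unix_options_build (&options,
                                                cmd_args->volfile_server, 0);
} else {
        /* Default path: build inet (IP/port) transport options. */
        ret = rpc_transport_inet_options_build (&options,
                                                cmd_args->volfile_server,
                                                cmd_args->volfile_server_port);
}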

Comment 4 Byreddy 2015-12-15 05:14:38 UTC
Verified this bug; details below.

Without fix:
============
[root@ ~]# gluster volume status
Status of volume: falcon
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick IP:/bricks/brick0/a00        49154     0          Y       3333 
Brick IP:/bricks/brick1/a11        49155     0          Y       3351 
NFS Server on localhost                     2049      0          Y       3372 
 
Task Status of Volume falcon
------------------------------------------------------------------------------
There are no active volume tasks
 

[root@ ~]# ./glfs_bvfs 
Failed to initialize volume (falcon)          <<<<<<<<<<<Failure
[root@ ~]# rpm -qa |grep gluster
nfs-ganesha-gluster-2.2.0-9.el7rhgs.x86_64
glusterfs-rdma-3.7.5-6.el7rhgs.x86_64
gluster-nagios-common-0.2.2-1.el7rhgs.noarch
glusterfs-3.7.5-6.el7rhgs.x86_64
glusterfs-api-devel-3.7.5-6.el7rhgs.x86_64
python-gluster-3.7.1-16.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-6.el7rhgs.x86_64
glusterfs-cli-3.7.5-6.el7rhgs.x86_64
glusterfs-ganesha-3.7.5-6.el7rhgs.x86_64
glusterfs-api-3.7.5-6.el7rhgs.x86_64
glusterfs-server-3.7.5-6.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-6.el7rhgs.x86_64
glusterfs-libs-3.7.5-6.el7rhgs.x86_64
glusterfs-devel-3.7.5-6.el7rhgs.x86_64
gluster-nagios-addons-0.2.5-1.el7rhgs.x86_64
glusterfs-fuse-3.7.5-6.el7rhgs.x86_64
[root@dhcp ~]# 



With Fix:
=========
[root@dhcp ~]# 
[root@dhcp ~]# gluster volume status
Status of volume: falcon
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick IP:/bricks/brick0/x00        49194     0          Y       12967
Brick IP:/bricks/brick1/x11        49195     0          Y       13000
NFS Server on localhost                     2049      0          Y       13038
 
Task Status of Volume falcon
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@dhcp ~]# 
[root@dhcp ~]# ./glfs_bvfs 
Init successfully done                              <<<<<<<<<<Init Passed
[root@dhcp ~]# 
[root@dhcp ~]# rpm -qa |grep gluster
glusterfs-devel-3.7.5-11.el7rhgs.x86_64
glusterfs-fuse-3.7.5-11.el7rhgs.x86_64
glusterfs-debuginfo-3.7.5-11.el7rhgs.x86_64
glusterfs-3.7.5-11.el7rhgs.x86_64
glusterfs-cli-3.7.5-11.el7rhgs.x86_64
glusterfs-ganesha-3.7.5-11.el7rhgs.x86_64
vdsm-gluster-4.16.20-1.3.el7rhgs.noarch
glusterfs-client-xlators-3.7.5-11.el7rhgs.x86_64
glusterfs-server-3.7.5-11.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-11.el7rhgs.x86_64
nfs-ganesha-gluster-2.2.0-9.el7rhgs.x86_64
glusterfs-api-3.7.5-11.el7rhgs.x86_64
glusterfs-rdma-3.7.5-11.el7rhgs.x86_64
glusterfs-libs-3.7.5-11.el7rhgs.x86_64
glusterfs-api-devel-3.7.5-11.el7rhgs.x86_64
gluster-nagios-addons-0.2.5-1.el7rhgs.x86_64
gluster-nagios-common-0.2.2-1.el7rhgs.noarch
python-gluster-3.7.1-16.el7rhgs.x86_64
[root@dhcp ~]# 

With the above info, moving to the VERIFIED state.

Comment 6 errata-xmlrpc 2016-03-01 05:56:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html

