Bug 1049726 - Rebalance process will fail to start for longer volume names.
Summary: Rebalance process will fail to start for longer volume names.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kaushal
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-01-08 05:19 UTC by Kaushal
Modified: 2014-11-11 08:26 UTC
CC List: 2 users

Fixed In Version: glusterfs-3.6.0beta1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-11-11 08:26:41 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kaushal 2014-01-08 05:19:30 UTC
The gluster rebalance process creates a socket file which glusterd uses to communicate with it.
The socket file currently lives at "<GLUSTER_WORKDIR>/vols/<volname>/rebalance/<peer-id>.sock". With the commonly used workdir of /var/lib/glusterd, this path exceeds the UNIX_PATH_MAX limit of 108 bytes once the volume name is longer than 32 characters. In such cases the rebalance process fails to start, because a Unix domain socket cannot be created with a path longer than UNIX_PATH_MAX.
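
To illustrate the limit (a sketch, not glusterd code; the volume name and peer UUID below are hypothetical values chosen to match the report):

/* On Linux, struct sockaddr_un.sun_path is 108 bytes (UNIX_PATH_MAX),
 * including the terminating NUL, so in practice a usable path is at
 * most 107 characters. */
#include <stdio.h>
#include <string.h>
#include <sys/un.h>

int main(void)
{
    const char *workdir = "/var/lib/glusterd";
    const char *volname = "a-volume-name-that-is-33-chars-xx";   /* 33 chars */
    const char *peer_id = "123e4567-e89b-12d3-a456-426655440000"; /* 36-char UUID */

    char path[256];
    snprintf(path, sizeof(path), "%s/vols/%s/rebalance/%s.sock",
             workdir, volname, peer_id);

    struct sockaddr_un addr;
    printf("path length: %zu, sun_path capacity: %zu\n",
           strlen(path), sizeof(addr.sun_path));
    /* Prints 108 and 108: with the terminating NUL the path needs
     * 109 bytes, so bind() on this address cannot succeed. */
    return 0;
}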

Comment 1 Anand Avati 2014-01-08 05:21:22 UTC
REVIEW: http://review.gluster.org/6616 (glusterd: Relocate rebalance sockfile) posted (#3) for review on master by Kaushal M (kaushal)

Comment 2 Anand Avati 2014-01-09 07:01:43 UTC
REVIEW: http://review.gluster.org/6616 (glusterd: Relocate rebalance sockfile) posted (#4) for review on master by Kaushal M (kaushal)

Comment 3 Anand Avati 2014-01-10 10:08:48 UTC
COMMIT: http://review.gluster.org/6616 committed in master by Vijay Bellur (vbellur) 
------
commit 2edf1ec797e6f56515d0208be152d18ca6e71456
Author: Kaushal M <kaushal>
Date:   Mon Dec 30 09:59:18 2013 +0530

    glusterd: Relocate rebalance sockfile
    
    The defrag sockfile was moved from priv->workdir to
    DEFAULT_VAR_RUN_DIRECTORY. The format for the new path of the defrag
    sockfile is 'DEFAULT_VAR_RUN_DIRECTORY/gluster-rebalance-<vol-id>.sock'.
    
    This was needed because the earlier location didn't have a fixed length
    and could exceed UNIX_PATH_MAX characters. This could lead to the
    rebalance process failing to start as the socket file could not be
    created.
    
    Also, for keeping backward compatibility, glusterd_rebalance_rpc_create
    will try both the new and old sockfile locations when attempting
    reconnection.
    
    Change-Id: I6740ea665de84ebce1ef7199c412f426de54e3d0
    BUG: 1049726
    Signed-off-by: Kaushal M <kaushal>
    Reviewed-on: http://review.gluster.org/6616
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
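
A minimal sketch of the relocation described in the commit message, assuming the path format it quotes; DEFAULT_VAR_RUN_DIRECTORY typically resolves to /var/run/gluster, and the helper names here are illustrative, not the actual glusterd functions:

#include <stdio.h>

#define DEFAULT_VAR_RUN_DIRECTORY "/var/run/gluster" /* typical value; a sketch */

/* New location: a fixed-length prefix plus the 36-character volume UUID,
 * roughly 76 characters in total, so the path length no longer depends
 * on the volume name. */
static void build_new_sockpath(char *buf, size_t len, const char *vol_id)
{
    snprintf(buf, len, "%s/gluster-rebalance-%s.sock",
             DEFAULT_VAR_RUN_DIRECTORY, vol_id);
}

/* Old location: kept only so glusterd can reconnect to a rebalance
 * process that was started before the upgrade. */
static void build_old_sockpath(char *buf, size_t len, const char *workdir,
                               const char *volname, const char *peer_id)
{
    snprintf(buf, len, "%s/vols/%s/rebalance/%s.sock",
             workdir, volname, peer_id);
}

Reconnection then amounts to trying the new path first and falling back to the old one if that connection fails, which is the backward compatibility the commit message describes.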

Comment 4 Niels de Vos 2014-09-22 12:34:46 UTC
A beta release for GlusterFS 3.6.0 has been released [1]. Please verify whether the release solves this bug report for you. If the glusterfs-3.6.0beta1 release does not resolve this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/

Comment 5 Niels de Vos 2014-11-11 08:26:41 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users

