Bug 1423412 - Mount of older client fails
Summary: Mount of older client fails
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: read-ahead
Version: 3.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1423410
Blocks: glusterfs-3.10.0
 
Reported: 2017-02-17 09:32 UTC by Poornima G
Modified: 2017-03-06 17:46 UTC
2 users

Fixed In Version: glusterfs-3.10.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1423410
Environment:
Last Closed: 2017-02-27 15:29:48 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Poornima G 2017-02-17 09:32:01 UTC
+++ This bug was initially created as a clone of Bug #1423410 +++

Description of problem:

Post upgrading the brick nodes from 3.8.8 to master, I tried to mount from a 3.8.8 client (as I had not upgraded the client yet). The mount failed with the following errors in the client log:

The cause: I used rpm -U to upgrade, so all volfiles were regenerated (the spec file runs glusterd --xlator-option *.upgrade=on -N). This regenerated the client volfiles as well, and the old client does not understand the new option value:

0-testvolssd-readdir-ahead: invalid number format "128KB" in option "rda-request-size"
0-testvolssd-readdir-ahead: validate of rda-request-size returned -1
0-testvolssd-readdir-ahead: validation failed: invalid number format "128KB" in option "rda-request-size"

- Shyam
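The failure mode can be modeled in a few lines: an INT-typed option parser rejects a unit-suffixed value that a SIZET-typed parser accepts. This is a simplified Python sketch of the two behaviors, not GlusterFS's actual C option-validation code; the function names are hypothetical.

```python
def parse_int(value: str) -> int:
    # Hypothetical model of an old (INT-typed) client's parser:
    # it accepts only plain integers, so "128KB" raises ValueError.
    return int(value)

def parse_size(value: str) -> int:
    # Hypothetical model of a new (SIZET-typed) client's parser:
    # it also accepts unit suffixes such as KB/MB/GB.
    units = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}
    for suffix, mult in units.items():
        if value.upper().endswith(suffix):
            return int(value[: -len(suffix)]) * mult
    return int(value)

assert parse_size("128KB") == 131072  # new client: value is understood

failed = False
try:
    parse_int("128KB")                # old client: validation fails,
except ValueError:                    # so the mount is aborted
    failed = True
assert failed
```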

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

--- Additional comment from Worker Ant on 2017-02-17 04:31:12 EST ---

REVIEW: https://review.gluster.org/16657 (glusterd, readdir-ahead: Fix backward incompatibility) posted (#1) for review on master by Poornima G (pgurusid)

Comment 1 Worker Ant 2017-02-17 10:01:08 UTC
REVIEW: https://review.gluster.org/16658 (glusterd, readdir-ahead: Fix backward incompatibility) posted (#1) for review on release-3.10 by Poornima G (pgurusid)

Comment 2 Worker Ant 2017-02-19 05:07:12 UTC
REVIEW: https://review.gluster.org/16658 (glusterd, readdir-ahead: Fix backward incompatibility) posted (#2) for review on release-3.10 by Poornima G (pgurusid)

Comment 3 Worker Ant 2017-02-19 14:18:11 UTC
COMMIT: https://review.gluster.org/16658 committed in release-3.10 by Shyamsundar Ranganathan (srangana) 
------
commit c76e6397f544a4f08c8762ad0455d7f52e95f94f
Author: Poornima G <pgurusid>
Date:   Fri Feb 17 14:05:25 2017 +0530

    glusterd, readdir-ahead: Fix backward incompatibility
    
    Backport of https://review.gluster.org/#/c/16657/
    
    Issue:
    Any option is specified in two places: in the options[] table of the
    xlator itself and in glusterd-volume-set.c. The default value of an
    option can be specified in both places. If it is specified only in
    the xlator, the generated volfile will not contain the option; the
    default value is assigned during graph initialization.
    With patch [1] the option rda-request-size was changed from INT to
    SIZET type, and the default was changed from 131072 to 128KB, but it
    was specified only in readdir-ahead.c. Thus, with that patch alone,
    the volfile entry for readdir-ahead looks like:
    volume patchy-readdir-ahead
        type performance/readdir-ahead
        subvolumes patchy-read-ahead
    end-volume
    
    With patch [2], the default of option rda-request-size was specified
    in glusterd-volume-set.c as well (as it was necessary for parallel
    readdir). With this patch the readdir-ahead entry in the volfile
    will look like:
    volume patchy-readdir-ahead
        type performance/readdir-ahead
        option rda-cache-limit 10MB
        option rda-request-size 128KB
        option parallel-readdir off
        subvolumes patchy-read-ahead
    end-volume
    
    
    Now consider a server that has both these patches while the client
    has neither. The server will generate a volfile with the entry
    shown above, including "option rda-request-size 128KB".
    
    Old clients, which expect rda-request-size to be of type INT, will
    now receive the value 128KB, which they cannot parse, and hence the
    mount fails.
    
    The issue is seen only with the combination of [1] and [2].
    
    Solution:
    Instead of specifying 128KB as the default in glusterd, we specify
    131072, so that old clients interpret the value as an INT and new
    clients as the size 128KB.
    
    Credits: Raghavendra G
    
    > Reviewed-on: https://review.gluster.org/16657
    > Smoke: Gluster Build System <jenkins.org>
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    > CentOS-regression: Gluster Build System <jenkins.org>
    > Reviewed-by: Shyamsundar Ranganathan <srangana>
    > Reviewed-by: Raghavendra G <rgowdapp>
    > Reviewed-by: Atin Mukherjee <amukherj>
    
    Change-Id: I0c269a5890957fd8a38e9a05bdec088645a6688a
    BUG: 1423412
    Signed-off-by: Poornima G <pgurusid>
    Reviewed-on: https://review.gluster.org/16658
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>
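The fix in the commit above relies on the plain string "131072" being valid under both typings, with both generations of clients arriving at the same byte count. A small Python sketch of that property, using a hypothetical size parser as a stand-in for the new client's SIZET handling:

```python
def parse_size(value: str) -> int:
    # Hypothetical size parser (models a SIZET-typed new client);
    # plain integers are still accepted as byte counts.
    units = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}
    for suffix, mult in units.items():
        if value.upper().endswith(suffix):
            return int(value[: -len(suffix)]) * mult
    return int(value)

# Writing 131072 (rather than 128KB) into the volfile keeps both
# client generations working, and they agree on the value:
assert int("131072") == 131072         # old client, INT type
assert parse_size("131072") == 131072  # new client, SIZET type
```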

Comment 4 Shyamsundar 2017-02-27 15:29:48 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-devel/2017-February/052173.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 5 Shyamsundar 2017-03-06 17:46:38 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/

