Bug 1337811
| Field | Value |
| --- | --- |
| Summary | [GSS] - enabling glusternfs with nfs.rpc-auth-allow to many hosts failed |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | gluster-nfs |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | high |
| Version | rhgs-3.1 |
| Target Release | RHGS 3.2.0 |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | glusterfs-3.8.4-2 |
| Reporter | Prashant Dhange <pdhange> |
| Assignee | Bipin Kunal <bkunal> |
| QA Contact | Manisha Saini <msaini> |
| CC | amukherj, asrivast, bkunal, olim, rhinduja, rhs-bugs, rnalakka, skoduri, storage-qa-internal |
| Keywords | Patch, ZStream |
| Doc Type | Bug Fix |
| Doc Text | Previously, when 'showmount' was run, the structure of data passed from the mount protocol meant that the groupnodes defined in the nfs.rpc-auth-allow volume option were handled as a single string, which caused errors when the string of groupnodes was longer than 255 characters. This single string is now handled as a list of strings so that 'showmount' receives the correct number of hostnames. (A sketch of this change follows the table.) |
| Clones | 1343286 (view as bug list) |
| Last Closed | 2017-03-23 05:32:13 UTC |
| Type | Bug |
| Bug Depends On | 1343286 |
| Bug Blocks | 1351522, 1351530 |
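The Doc Text above describes the fix at the protocol level. Below is a minimal sketch of the idea, using the MOUNT v3 export-list structures from RFC 1813; the function `build_group_list()` and its internals are hypothetical illustrations, not the actual glusterfs patch (that is http://review.gluster.org/14700).

```c
/*
 * Sketch of the fix described in the Doc Text (assumed shape, not the
 * real glusterfs change).  Uses the MOUNT v3 structures from RFC 1813;
 * build_group_list() is a hypothetical name.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MNTNAMLEN 255   /* RFC 1813: maximum bytes in a name */

/* XDR-generated list node: one NUL-terminated name per entry. */
struct groupnode {
        char             *gr_name;
        struct groupnode *gr_next;
};

/*
 * Before the fix, the whole comma-separated nfs.rpc-auth-allow value
 * went into a single gr_name; once that string grew past MNTNAMLEN,
 * encoding the showmount reply failed.  The fix splits the option
 * value so each host gets its own node and no name exceeds the limit.
 */
static struct groupnode *
build_group_list(const char *auth_allow)
{
        struct groupnode *head = NULL;
        char *copy = strdup(auth_allow);

        if (!copy)
                return NULL;

        for (char *tok = strtok(copy, ","); tok != NULL;
             tok = strtok(NULL, ",")) {
                struct groupnode *node = calloc(1, sizeof(*node));

                if (!node)
                        break;
                node->gr_name = strndup(tok, MNTNAMLEN);
                node->gr_next = head;      /* prepend; order is not
                                              significant to showmount */
                head = node;
        }

        free(copy);
        return head;
}

int main(void)
{
        struct groupnode *g = build_group_list("192.168.10.1,192.168.10.2");

        /* Demo only; the list is deliberately not freed. */
        for (; g != NULL; g = g->gr_next)
                printf("group: %s\n", g->gr_name);
        return 0;
}
```

Each host in the comma-separated option value becomes its own list node, so no single `gr_name` can exceed the 255-byte XDR limit and `showmount` receives one entry per host.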
Description (Prashant Dhange, 2016-05-20 06:45:46 UTC)
More information from the customer: the issue is completely reproducible on a test cluster (installed from the RHGS 3.1.2 ISO). Queries time out when nfs.rpc-auth-allow exceeds 256 characters. Steps to reproduce will follow in a private comment.

Niels, may I know the reason for moving the component back to gluster-nfs? We have realigned the downstream components with upstream to keep parity, and upstream has an nfs component, hence the change.

(In reply to Atin Mukherjee from comment #9)
> Niels, may I know the reason for moving the component back to gluster-nfs?
> We have realigned the downstream components with upstream to keep parity,
> and upstream has an nfs component, hence the change.

This is a Gluster/NFS (gNFS) bug; we use the "nfs" component for changes to GlusterFS in relation to NFS-Ganesha.

A patch for this has been included in RHGS 3.2.0, since it contains a rebase of GlusterFS 3.8 (http://review.gluster.org/14700).

Rahul, can this BZ be tested with the latest build?

Verified this bug on glusterfs-3.8.4-5.el7rhgs.x86_64.

Steps:

```
1. HOSTS=$(echo 192.168.10.{1..40} | tr ' ' ',')

2. [root@dhcp47-159 ganesha]# gluster volume set Vol1 nfs.rpc-auth-allow ${HOSTS}
volume set: success

3. [root@dhcp47-159 ganesha]# showmount -e localhost
Export list for localhost:
/Vol1 192.168.10.1,192.168.10.2,192.168.10.3,192.168.10.4,192.168.10.5,192.168.10.6,192.168.10.7,192.168.10.8,192.168.10.9,192.168.10.10,192.168.10.11,192.168.10.12,192.168.10.13,192.168.10.14,192.168.10.15,192.168.10.16,192.168.10.17,192.168.10.18,192.168.10.19,192.168.10.20,192.168.10.21,192.168.10.22,192.168.10.23,192.168.10.24,192.168.10.25,192.168.10.26,192.168.10.27,192.168.10.28,192.168.10.29,192.168.10.30,192.168.10.31,192.168.10.32,192.168.10.33,192.168.10.34,192.168.10.35,192.168.10.36,192.168.10.37,192.168.10.38,192.168.10.39,192.168.10.40
```

```
[root@dhcp46-241 ganesha]# gluster v info

Volume Name: Vol1
Type: Distributed-Replicate
Volume ID: 9678475b-3ecb-4f22-995b-346c5bcdecca
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.47.3:/mnt/data1/b1
Brick2: 10.70.47.159:/mnt/data1/b1
Brick3: 10.70.46.241:/mnt/data1/b1
Brick4: 10.70.46.219:/mnt/data1/b1
Brick5: 10.70.47.3:/mnt/data2/b2
Brick6: 10.70.47.159:/mnt/data2/b2
Brick7: 10.70.46.241:/mnt/data2/b2
Brick8: 10.70.46.219:/mnt/data2/b2
Brick9: 10.70.47.3:/mnt/data3/b3
Brick10: 10.70.47.159:/mnt/data3/b3
Brick11: 10.70.46.241:/mnt/data3/b3
Brick12: 10.70.46.219:/mnt/data3/b3
Options Reconfigured:
nfs.rpc-auth-allow: 192.168.10.1,192.168.10.2,192.168.10.3,192.168.10.4,192.168.10.5,192.168.10.6,192.168.10.7,192.168.10.8,192.168.10.9,192.168.10.10,192.168.10.11,192.168.10.12,192.168.10.13,192.168.10.14,192.168.10.15,192.168.10.16,192.168.10.17,192.168.10.18,192.168.10.19,192.168.10.20,192.168.10.21,192.168.10.22,192.168.10.23,192.168.10.24,192.168.10.25,192.168.10.26,192.168.10.27,192.168.10.28,192.168.10.29,192.168.10.30,192.168.10.31,192.168.10.32,192.168.10.33,192.168.10.34,192.168.10.35,192.168.10.36,192.168.10.37,192.168.10.38,192.168.10.39,192.168.10.40
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: off
nfs-ganesha: disable
cluster.enable-shared-storage: disable
```

As the reported issue is no longer observed with this build, marking this bug as Verified.

Doc text looks good to me.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html
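A note on the reported symptom: a client-side timeout rather than an explicit error is what one would expect if the server fails to XDR-encode its reply, since `xdr_string()` rejects strings longer than their declared maximum and no well-formed EXPORT reply ever goes out. The standalone demonstration below reproduces that encoding failure in isolation; it is an assumed illustration of the mechanism, not gluster's actual code path. It uses the classic Sun RPC XDR API from `<rpc/xdr.h>` (link with `-ltirpc` on current Linux).

```c
/*
 * Demonstrates why a >255-byte group name breaks the MOUNT reply:
 * xdr_string() refuses to encode a string longer than its declared
 * maximum, so the reply cannot be serialized.
 */
#include <rpc/xdr.h>
#include <stdio.h>
#include <string.h>

#define MNTNAMLEN 255   /* RFC 1813: maximum bytes in a name */

int main(void)
{
        char buf[1024];
        XDR  xdrs;

        /* Stand-in for many comma-separated IPs packed into one string. */
        char long_name[301];
        memset(long_name, 'a', 300);
        long_name[300] = '\0';
        char *p = long_name;

        xdrmem_create(&xdrs, buf, sizeof(buf), XDR_ENCODE);
        if (!xdr_string(&xdrs, &p, MNTNAMLEN))
                printf("encode failed: %zu bytes > MNTNAMLEN\n",
                       strlen(long_name));
        xdr_destroy(&xdrs);
        return 0;
}
```

For reference, the 40-address list used in the verification steps above is roughly 550 bytes, comfortably over the 255-byte limit, so a correct fix must emit one name per host rather than one concatenated string.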