Bug 868715
| Summary: | Gluster does not honour /proc/sys/net/ipv4/ip_local_reserved_ports | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Brad Hubbard <bhubbard> |
| Component: | glusterfs | Assignee: | Raghavendra Bhat <rabhat> |
| Status: | CLOSED ERRATA | QA Contact: | Lalatendu Mohanty <lmohanty> |
| Severity: | low | Docs Contact: | |
| Priority: | low | | |
| Version: | 2.0 | CC: | josh, lmohanty, psriniva, rabhat, rfortier, rhs-bugs, vagarwal, vbellur, vraman |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 2.1.2 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.4.0qa5-1 | Doc Type: | Bug Fix |
| Doc Text: | Previously, glusterFS did not check the /proc/sys/net/ipv4/ip_local_reserved_ports file before connecting to a port. With this update, the glusterd service does not connect to port numbers listed in the /proc/sys/net/ipv4/ip_local_reserved_ports file. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-02-25 07:22:51 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
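The fix referenced below (http://review.gluster.org/4131) makes gluster's port-selection code skip ports listed in /proc/sys/net/ipv4/ip_local_reserved_ports. As a rough illustration of the membership test involved — a minimal standalone shell sketch, not the actual glusterfs C code from that review — the file holds a comma-separated list of single ports and inclusive low-high ranges:

```sh
#!/bin/sh
# is_reserved.sh -- report whether a port appears in the kernel's
# reserved-port list. Illustrative only; the real fix lives in the
# glusterfs sources (see review 4131).
port="$1"
reserved=$(cat /proc/sys/net/ipv4/ip_local_reserved_ports)

# Entries are comma-separated; each is a single port ("999")
# or an inclusive range ("1000-1024").
IFS=','
for entry in $reserved; do
    case "$entry" in
        *-*) low=${entry%-*}; high=${entry#*-} ;;
        *)   low=$entry;      high=$entry      ;;
    esac
    if [ "$port" -ge "$low" ] && [ "$port" -le "$high" ]; then
        echo "port $port is reserved"
        exit 0
    fi
done
echo "port $port is not reserved"
exit 1
```

Against the QA configuration shown in the verification below, `sh is_reserved.sh 1010` would report the port as reserved; this is the same test a fixed client must pass before binding a local port.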
Description
Brad Hubbard 2012-10-21 22:35:15 UTC

*** Bug 852819 has been marked as a duplicate of this bug. ***

This is related to 723228, which blocks 103401, which I believe may be the real issue here. Upstream has rejected that bug, though, so I'm not sure where it is going at the moment. http://review.gluster.org/4131 should work as a fix.

Posted to review.

http://review.gluster.org/4131 fixes the issue.

Mounted the volume on the client with ports 1000-1024 configured as reserved ports; as expected, the client did not use the reserved ports and used other ports instead. Hence marking the defect as verified.

[root@unused lalatendu]# rpm -qa gluster*
glusterfs-3.4.0qa5-1.el6rhs.x86_64
glusterfs-fuse-3.4.0qa5-1.el6rhs.x86_64
[root@unused lalatendu]# glusterfs --version
glusterfs 3.4.0qa5 built on Dec 17 2012 04:36:15
[root@unused lalatendu]# cat /proc/sys/net/ipv4/ip_local_reserved_ports
1000-1024
[root@unused lalatendu]# mount -t glusterfs 10.70.35.63:/test-volume /mnt/gluster_test_volume/
[root@unused lalatendu]# netstat -neopa|grep gluster
tcp 0 0 10.70.35.65:995 10.70.35.80:49152 ESTABLISHED 0 212998 20079/glusterfs keepalive (13.38/0/0)
tcp 0 0 10.70.35.65:999 10.70.35.63:24007 ESTABLISHED 0 212382 20079/glusterfs keepalive (13.38/0/0)
tcp 0 0 10.70.35.65:996 10.70.35.63:49167 ESTABLISHED 0 212996 20079/glusterfs keepalive (13.38/0/0)

############# Server side commands to create the volume

[root@rhsTestNode-1 ~]# uname -a
Linux rhsTestNode-1 2.6.32-220.30.1.el6.x86_64 #1 SMP Sun Nov 18 15:00:27 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
[root@rhsTestNode-1 ~]# gluster --version
glusterfs 3.4.0qa5 built on Dec 17 2012 04:36:17
[root@rhsTestNode-2 brick2]# uname -a
Linux rhsTestNode-2 2.6.32-279.19.1.el6.x86_64 #1 SMP Sat Nov 24 14:35:28 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
[root@rhsTestNode-2 brick2]# gluster --version
glusterfs 3.4.0qa5 built on Dec 17 2012 04:36:17
[root@rhsTestNode-1 ~]# gluster volume create test-volume replica 2 10.70.35.63:/brick1 10.70.35.80:/brick2
volume create: test-volume: success: please start the volume to access data
[root@rhsTestNode-1 ~]# gluster volume start test-volume
volume start: test-volume: success
[root@rhsTestNode-1 ~]# gluster volume list
test-volume

Retargeting for 2.1.z U2 (Corbett) release.

Raghavendra, can you please review the doc text for technical accuracy?

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.
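For reference, the verification above can be reproduced on a test client along these lines — a sketch reusing the volume name and addresses from the QA run, so adjust them for your environment; note that the sysctl setting is runtime-only unless persisted in /etc/sysctl.conf:

```sh
# Reserve ports 1000-1024 for the current boot (runtime-only change)
sysctl -w net.ipv4.ip_local_reserved_ports=1000-1024

# Mount the volume and confirm the client's local ports
# fall outside the 1000-1024 range
mount -t glusterfs 10.70.35.63:/test-volume /mnt/gluster_test_volume/
netstat -neopa | grep gluster
```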