Description of problem:
Gluster does not honour /proc/sys/net/ipv4/ip_local_reserved_ports.

Version-Release number of selected component (if applicable):

How reproducible:
100%

Steps to Reproduce:
1. echo 1000-1024 > /proc/sys/net/ipv4/ip_local_reserved_ports
2. Mount a native gluster share on a client.
3. Use "netstat -neopa | grep gluster" to verify that gluster (on the client) has taken port 1023 etc.

Actual results:
Gluster uses a port that has been identified as reserved.

Expected results:
Gluster should not use a port that has been identified as reserved.
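For reference, the sysctl accepts a comma-separated list of single ports and dash-separated ranges (e.g. "1000-1024" above, or "1000-1024,8080"). A minimal sketch of parsing that syntax into a port set, which any process wanting to honour the file would need to do (illustrative helper, not gluster's actual code):

```python
def parse_reserved_ports(text):
    """Parse the /proc/sys/net/ipv4/ip_local_reserved_ports syntax:
    comma-separated ports and dash-separated ranges, e.g. "1000-1024,8080"."""
    reserved = set()
    for field in text.strip().split(","):
        if not field:
            continue
        if "-" in field:
            lo, hi = field.split("-")
            # Kernel ranges are inclusive on both ends.
            reserved.update(range(int(lo), int(hi) + 1))
        else:
            reserved.add(int(field))
    return reserved
```

With the reproduction setting above, `parse_reserved_ports("1000-1024")` contains 1023, so a client honouring the file must not pick that port.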
*** Bug 852819 has been marked as a duplicate of this bug. ***
This is related to bug 723228, which blocks bug 103401 — I believe that may be the real issue here. Upstream has rejected that bug, though, so I'm not sure where it's going at the moment.
http://review.gluster.org/4131 should work as a fix; posted for review.
http://review.gluster.org/4131 fixes the issue.
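The idea behind a fix like this is that the client, when walking the privileged port range downward looking for a local port to bind, consults the reserved set before attempting each port. A minimal sketch of that selection logic (hypothetical names; the actual patch is C inside gluster's RPC transport, and the range bounds here are assumptions):

```python
def pick_client_port(reserved, in_use, high=1023, low=512):
    """Walk the privileged port range downward and return the first port
    that is neither in the kernel's reserved set nor already bound.
    Returns -1 if the range is exhausted (illustrative sketch only)."""
    for port in range(high, low - 1, -1):
        if port in reserved or port in in_use:
            continue  # skip reserved or busy ports instead of binding
        return port
    return -1
```

With ports 1000-1024 reserved, the first acceptable port walking down from 1023 is 999 — consistent with the verified behaviour below, where the client ends up on 999, 996, and 995.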
Mounted the volume on a client with ports 1000-1024 reserved; as expected, the client did not use the reserved ports and used other ports instead. Hence marking the defect as verified.

[root@unused lalatendu]# rpm -qa gluster*
glusterfs-3.4.0qa5-1.el6rhs.x86_64
glusterfs-fuse-3.4.0qa5-1.el6rhs.x86_64
[root@unused lalatendu]# glusterfs --version
glusterfs 3.4.0qa5 built on Dec 17 2012 04:36:15
[root@unused lalatendu]# cat /proc/sys/net/ipv4/ip_local_reserved_ports
1000-1024
[root@unused lalatendu]# mount -t glusterfs 10.70.35.63:/test-volume /mnt/gluster_test_volume/
[root@unused lalatendu]# netstat -neopa | grep gluster
tcp 0 0 10.70.35.65:995 10.70.35.80:49152 ESTABLISHED 0 212998 20079/glusterfs keepalive (13.38/0/0)
tcp 0 0 10.70.35.65:999 10.70.35.63:24007 ESTABLISHED 0 212382 20079/glusterfs keepalive (13.38/0/0)
tcp 0 0 10.70.35.65:996 10.70.35.63:49167 ESTABLISHED 0 212996 20079/glusterfs keepalive (13.38/0/0)

############# Server side commands to create the volume

[root@rhsTestNode-1 ~]# uname -a
Linux rhsTestNode-1 2.6.32-220.30.1.el6.x86_64 #1 SMP Sun Nov 18 15:00:27 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
[root@rhsTestNode-1 ~]# gluster --version
glusterfs 3.4.0qa5 built on Dec 17 2012 04:36:17
[root@rhsTestNode-2 brick2]# uname -a
Linux rhsTestNode-2 2.6.32-279.19.1.el6.x86_64 #1 SMP Sat Nov 24 14:35:28 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
[root@rhsTestNode-2 brick2]# gluster --version
glusterfs 3.4.0qa5 built on Dec 17 2012 04:36:17
[root@rhsTestNode-1 ~]# gluster volume create test-volume replica 2 10.70.35.63:/brick1 10.70.35.80:/brick2
volume create: test-volume: success: please start the volume to access data
[root@rhsTestNode-1 ~]# gluster volume start test-volume
volume start: test-volume: success
[root@rhsTestNode-1 ~]# gluster volume list
test-volume
Retargeting for 2.1.z U2 (Corbett) release.
Raghavendra, Can you please review the doc text for technical accuracy?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2014-0208.html
The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 1000 days.