Bug 868715 - Gluster does not honour /proc/sys/net/ipv4/ip_local_reserved_ports
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Release: RHGS 2.1.2
Assigned To: Raghavendra Bhat
QA Contact: Lalatendu Mohanty
Keywords: ZStream
Duplicates: 852819
Reported: 2012-10-21 18:35 EDT by Brad Hubbard
Modified: 2015-05-15 14:16 EDT
Fixed In Version: glusterfs-3.4.0qa5-1
Doc Type: Bug Fix
Doc Text:
Previously, glusterFS did not check the /proc/sys/net/ipv4/ip_local_reserved_ports file before choosing a port. With this update, glusterFS does not use the port numbers listed in /proc/sys/net/ipv4/ip_local_reserved_ports.
Last Closed: 2014-02-25 02:22:51 EST
Type: Bug


Description Brad Hubbard 2012-10-21 18:35:15 EDT
Description of problem:

Gluster does not honour /proc/sys/net/ipv4/ip_local_reserved_ports.


Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. echo 1000-1024 > /proc/sys/net/ipv4/ip_local_reserved_ports
2. Mount a native gluster share on a client.
3. Use "netstat -neopa|grep gluster" to verify that gluster (on the client) has taken port 1023 etc.
  
Actual results:
Gluster uses a port that has been identified as reserved

Expected results:
Gluster should not use a port that has been identified as reserved
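
For background, the kernel exposes the reserved list as a comma-separated set of single ports and "low-high" ranges (for example "1000-1024" or "8080,9000-9100"); an empty file means nothing is reserved. The following is a minimal, illustrative sketch (not the GlusterFS code) of how a client could read that file and test whether a candidate port is reserved; the helper name port_is_reserved() is hypothetical:

/* Illustrative sketch only -- not the GlusterFS implementation.
 * Parse /proc/sys/net/ipv4/ip_local_reserved_ports (a comma-separated
 * list of single ports and "low-high" ranges) and report whether a
 * given port is reserved. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int
port_is_reserved (int port)
{
        char  buf[4096] = {0};      /* large enough for typical lists */
        char *entry = NULL, *saveptr = NULL;
        int   reserved = 0;
        FILE *fp = fopen ("/proc/sys/net/ipv4/ip_local_reserved_ports", "r");

        if (!fp)
                return 0;                  /* no file => nothing reserved */
        if (!fgets (buf, sizeof (buf), fp)) {
                fclose (fp);
                return 0;                  /* empty file => nothing reserved */
        }
        fclose (fp);

        /* walk the comma-separated entries */
        for (entry = strtok_r (buf, ",\n", &saveptr); entry && !reserved;
             entry = strtok_r (NULL, ",\n", &saveptr)) {
                int low = 0, high = 0;

                if (sscanf (entry, "%d-%d", &low, &high) == 2)
                        reserved = (port >= low && port <= high);
                else
                        reserved = (atoi (entry) == port);
        }
        return reserved;
}

int
main (void)
{
        /* with "1000-1024" in the file, 1023 is reserved and 999 is not */
        printf ("1023 reserved? %d\n", port_is_reserved (1023));
        printf (" 999 reserved? %d\n", port_is_reserved (999));
        return 0;
}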
Comment 1 Amar Tumballi 2012-10-21 22:38:55 EDT
*** Bug 852819 has been marked as a duplicate of this bug. ***
Comment 2 Brad Hubbard 2012-10-22 18:12:49 EDT
This is related to bug 723228, which blocks bug 103401, which I believe may be the real issue here. Upstream has rejected that bug, though, so I'm not sure where it is going at the moment.
Comment 4 Amar Tumballi 2012-10-26 00:10:07 EDT
http://review.gluster.org/4131 should work as a fix; it has been posted for review.
Comment 5 Raghavendra Bhat 2012-12-03 06:06:34 EST
http://review.gluster.org/4131 fixes the issue.
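
The details of the actual change are in the patch linked above; as one plausible illustration of the expected behaviour (not necessarily how the patch implements it), a client that wants a privileged source port can walk downward from 1023 and skip every port listed in ip_local_reserved_ports before calling bind(). The names bind_to_unreserved_privileged_port() and port_is_reserved() (the hypothetical helper sketched under the description) are assumptions for illustration, not functions from the patch:

/* Illustrative sketch only -- the real fix is the patch at
 * http://review.gluster.org/4131.  Idea: when a privileged (< 1024)
 * local port is wanted, walk downward from 1023, skip anything the
 * admin has reserved via ip_local_reserved_ports, and bind to the
 * first free port.  port_is_reserved() is the hypothetical helper
 * from the earlier sketch. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int port_is_reserved (int port);            /* see earlier sketch */

static int
bind_to_unreserved_privileged_port (int sockfd)
{
        struct sockaddr_in addr;
        int                port;

        memset (&addr, 0, sizeof (addr));
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl (INADDR_ANY);

        for (port = 1023; port > 0; port--) {
                if (port_is_reserved (port))
                        continue;            /* honour the sysctl */

                addr.sin_port = htons (port);
                if (bind (sockfd, (struct sockaddr *) &addr,
                          sizeof (addr)) == 0)
                        return port;         /* bound; caller may connect() */

                if (errno != EADDRINUSE && errno != EACCES)
                        break;               /* unexpected error, give up */
        }
        return -1;                           /* no usable privileged port */
}

With ports 1000-1024 reserved, such a loop skips the top of the range and ends up binding at or below 999, which is consistent with the local ports 995, 996 and 999 seen in the verification below.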
Comment 7 Lalatendu Mohanty 2013-01-03 07:07:55 EST
Mounted the volume on a client with ports 1000-1024 configured as reserved. As expected, the client did not use the reserved ports and bound to ports outside the reserved range instead.

Hence, marking the defect as verified.

[root@unused lalatendu]# rpm -qa gluster*
glusterfs-3.4.0qa5-1.el6rhs.x86_64
glusterfs-fuse-3.4.0qa5-1.el6rhs.x86_64

[root@unused lalatendu]# glusterfs --version
glusterfs 3.4.0qa5 built on Dec 17 2012 04:36:15

[root@unused lalatendu]# cat /proc/sys/net/ipv4/ip_local_reserved_ports
1000-1024


[root@unused lalatendu]# mount -t glusterfs 10.70.35.63:/test-volume /mnt/gluster_test_volume/
[root@unused lalatendu]# netstat -neopa|grep gluster
tcp        0      0 10.70.35.65:995         10.70.35.80:49152       ESTABLISHED 0          212998     20079/glusterfs      keepalive (13.38/0/0)
tcp        0      0 10.70.35.65:999         10.70.35.63:24007       ESTABLISHED 0          212382     20079/glusterfs      keepalive (13.38/0/0)
tcp        0      0 10.70.35.65:996         10.70.35.63:49167       ESTABLISHED 0          212996     20079/glusterfs      keepalive (13.38/0/0)

#############

Server-side commands used to create the volume:

[root@rhsTestNode-1 ~]# uname -a
Linux rhsTestNode-1 2.6.32-220.30.1.el6.x86_64 #1 SMP Sun Nov 18 15:00:27 EST 2012 x86_64 x86_64 x86_64 GNU/Linux

[root@rhsTestNode-1 ~]# gluster --version
glusterfs 3.4.0qa5 built on Dec 17 2012 04:36:17

[root@rhsTestNode-2 brick2]# uname -a
Linux rhsTestNode-2 2.6.32-279.19.1.el6.x86_64 #1 SMP Sat Nov 24 14:35:28 EST 2012 x86_64 x86_64 x86_64 GNU/Linux

[root@rhsTestNode-2 brick2]# gluster --version
glusterfs 3.4.0qa5 built on Dec 17 2012 04:36:17


[root@rhsTestNode-1 ~]# gluster volume create test-volume replica 2 10.70.35.63:/brick1 10.70.35.80:/brick2
volume create: test-volume: success: please start the volume to access data
[root@rhsTestNode-1 ~]# gluster volume start test-volume
volume start: test-volume: success
[root@rhsTestNode-1 ~]# gluster volume list
test-volume
Comment 8 Scott Haines 2013-09-23 19:18:50 EDT
Retargeting for 2.1.z U2 (Corbett) release.
Comment 9 Pavithra 2014-01-08 04:24:43 EST
Raghavendra,

Can you please review the doc text for technical accuracy?
Comment 11 errata-xmlrpc 2014-02-25 02:22:51 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
