Bug 1460597 - [RFE][GSS]Gluster process should use ports above reserved ports (above 1023)
Summary: [RFE][GSS]Gluster process should use ports above reserved ports (above 1023)
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: core
Version: rhgs-3.1
Hardware: All
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Duplicates: 1294237
Depends On:
Blocks: 1408949 RHGS-usability-bug-GSS RHGS-3.4-GSS-proposed-tracker
 
Reported: 2017-06-12 07:09 UTC by Abhishek Kumar
Modified: 2021-12-10 15:05 UTC
CC List: 13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-04-24 17:27:19 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 762989 0 high CLOSED Possibility of GlusterFS port clashes with reserved ports 2021-02-22 00:41:40 UTC
Red Hat Knowledge Base (Solution) 917263 0 None None None 2018-04-02 11:19:03 UTC

Description Abhishek Kumar 2017-06-12 07:09:09 UTC
Description of problem:

Glusterd should use ports above the reserved range (1024 and above)

Version-Release number of selected component (if applicable):

RHGS 3.2

Actual results:

Many instances have been found where gluster allocates ports below 1024, which creates problems for other applications, such as NFS or OpenShift, that need those reserved ports to function normally.

Expected results:

Gluster should always allocate ports outside the reserved range (1024 and above).

Additional info:

~~~
tcp        0      0 10.100.10.76:925        10.100.30.109:49154     ESTABLISHED 12971/glusterfs
tcp        0      0 10.100.10.76:883        10.100.30.109:49160     ESTABLISHED 12971/glusterfs
~~~
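
For reference, a sketch of how connections like these can be enumerated (assuming net-tools netstat, run as root so process names resolve):

~~~
# Sketch: list established Gluster connections whose local port falls in the
# reserved range (below 1024). The last colon-separated field of the local
# address column is the port.
netstat -ntp 2>/dev/null | awk '/gluster/ { n = split($4, a, ":"); if (a[n] + 0 < 1024) print }'
~~~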

Comment 3 Raghavendra Talur 2017-06-12 08:16:49 UTC
The problem mentioned in the original bug description, openshift and gluster trying to use the same ports, cannot be fully solved: it is fundamentally a case of limited resources (a finite port range) and growing demand.

To fix the problem mentioned in comment 2 about gluster using a port that rquotad needs, you can use the ip_local_reserved_ports feature of the Linux kernel, as described in https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt . We follow the same model the Linux kernel uses for port allocation: we do not bind to a port that is listed in the reserved ports list. Specifying port 875 in this list ensures that gluster client processes do not use it.
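
A minimal sketch of that workaround, assuming port 875 (rquotad) is the port to protect; the file name under /etc/sysctl.d/ is illustrative:

~~~
# Add port 875 to the kernel's reserved-ports list; per the model described
# above, Gluster skips any port on this list when picking a source port.
sysctl -w net.ipv4.ip_local_reserved_ports=875

# Persist the setting across reboots (file name is illustrative):
echo 'net.ipv4.ip_local_reserved_ports = 875' > /etc/sysctl.d/98-gluster-reserved.conf
sysctl --system
~~~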

There is a certain trust logic in Gluster that requires binding to a port under 1024 (only root can bind to such ports, so a low source port marks the peer as privileged), and it will take a few more releases before we can completely get rid of it.
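
As a hypothetical illustration of that convention (not Gluster code), binding below 1024 fails for unprivileged users, which is exactly what makes a low source port usable as a trust signal:

~~~
# Hypothetical illustration: only root may bind ports below 1024.
nc -l 873     # as a non-root user this fails with a permission error
nc -l 8730    # an unprivileged port binds fine for any user
~~~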

Comment 4 Atin Mukherjee 2017-06-12 12:03:40 UTC
This issue is not anything specific to GlusterD. Moving it to core.

Comment 7 Raghavendra Talur 2017-06-16 10:30:26 UTC
I spoke to Atin and others too. There is no way to do this in our current version.

I still stand by my original answer: it is up to the admin to use the ip_local_reserved_ports option to handle these errors, and no code change is needed.
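
To verify the reservation took effect, a quick check (assuming the sysctl was set as sketched under comment 3):

~~~
# Sketch: confirm the reservation is active before (re)starting Gluster services
cat /proc/sys/net/ipv4/ip_local_reserved_ports    # expect: 875
~~~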

Comment 26 Amar Tumballi 2018-04-02 07:40:05 UTC
Looks like we have the solution, as per #c25. Should we do anything else, or close this as FIXED?

Comment 27 Amar Tumballi 2018-04-02 11:19:03 UTC
*** Bug 1294237 has been marked as a duplicate of this bug. ***

Comment 32 Amar Tumballi 2018-10-11 09:56:46 UTC
> Do we have this behaviour clearly documented in our gluster docs? And if not, do we have a documentation BZ raised to document this behaviour?

Bipin, adding a needinfo for you! If you think it is documented enough, feel free to close this!

Note that all the customer cases are in Closed state at present!

Comment 34 Amar Tumballi 2019-04-24 17:27:19 UTC
Bipin, while I understand that many customer issues have been raised about this, the fix would involve major changes at the network layer at present. Hence we are inclined to close this as DEFERRED (i.e., we will pick it up only if it gets scoped for a future release: not 3.5, but beyond).

Please check with PM on scoping and re-open if it gets prioritized from GSS. 

For now, our statement is the same as comment #29, and hence we will close this as DEFERRED.

