Description of problem:
Glusterd should use ports above the reserved range (above 1024).

Version-Release number of selected component (if applicable):
RHGS 3.2

Actual results:
Many instances have been found where gluster allocates ports below 1024, which causes problems for other applications such as nfs or openshift that need those reserved ports to function normally.

Expected results:
Gluster should always allocate ports above 1024.

Additional info:
~~~
tcp        0      0 10.100.10.76:925      10.100.30.109:49154     ESTABLISHED 12971/glusterfs
tcp        0      0 10.100.10.76:883      10.100.30.109:49160     ESTABLISHED 12971/glusterfs
~~~
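For reference, output like the above can be collected on an affected node with something along these lines (a minimal sketch; the command and the awk filter are illustrative, not taken from this report):

~~~
# List established TCP connections owned by gluster processes and keep only
# those whose local (source) port is below 1024.
netstat -tnp 2>/dev/null | awk '$7 ~ /gluster/ {
    n = split($4, addr, ":");   # local address is field 4; port is the last ":"-separated piece
    if (addr[n] + 0 < 1024) print
}'
~~~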
The problem mentioned in the original bug description, that openshift and gluster try to use the same ports, cannot be fully solved; it is a matter of limited resources and competing demand. To fix the problem mentioned in comment 2, where gluster uses a port that rquotad needs, you can use the ip_local_reserved_ports feature of the Linux kernel, as described in https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt . We follow the same model the Linux kernel uses for port allocation: we do not bind to any port listed in the reserved-ports list. Adding port 875 to that list ensures that gluster client processes do not use it. There is certain trust logic in Gluster that still requires binding to ports below 1024, and it will take a few more releases before we can get rid of it completely.
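As a concrete illustration of the workaround described above (a sketch only; the port value 875 comes from the rquotad case mentioned here, while the drop-in file name is an assumption for this example):

~~~
# Reserve port 875 (rquotad) so it is excluded from dynamic port selection;
# per the comment above, gluster honours this list when choosing a bind port.
sysctl -w net.ipv4.ip_local_reserved_ports=875

# Persist the setting across reboots (file name is illustrative).
echo 'net.ipv4.ip_local_reserved_ports = 875' > /etc/sysctl.d/97-reserved-ports.conf
sysctl -p /etc/sysctl.d/97-reserved-ports.conf
~~~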
This issue is not specific to GlusterD. Moving it to core.
I spoke to Atin and others too. There is no way to do this in our current version. I still stand by my original answer: it is up to the admin to use the ip_local_reserved_ports option to handle these errors, and no code change is needed.
Looks like we have the solution as per #c25 here. Should we do anything else? Or close this as FIXED?
*** Bug 1294237 has been marked as a duplicate of this bug. ***
> Do we have this behaviour clearly documented in our gluster docs? And if not, do we have a documentation BZ raised to document this behaviour? Bipin, adding a needinfo for you! If you think it is documented enough, feel free to close this! Note that all the customer cases are in Closed state at present!
Bipin, while I understand that many customer issues have been raised about this, the fix would involve major changes at the network layer at present. Hence we are inclined to close this as DEFERRED (i.e., we will pick it up only if it gets scoped for a future release, not 3.5 but beyond). Please check with PM on scoping and re-open if it gets prioritized from GSS. For now, our statement is the same as comment #29, and hence we will close this as DEFERRED.