Red Hat Bugzilla – Bug 1012863
Gluster fuse client checks old firewall ports
Last modified: 2015-10-07 08:17:17 EDT
Description of problem:
A client mounts a gluster volume.
The firewall has so far whitelisted the exact ports needed.
An add-brick command is run on the server, but the extra ports the client will need open on the servers to reach the new bricks aren't yet open.
The client starts contacting the server but gets blocked on those ports by the firewall.
A remove-brick operation on those same bricks is run.
The bricks remove successfully, however the client is still seen trying to contact the server on the old ports (which were never actually opened).
Not sure if this happens when the client isn't blocked.
I solved the issue by remounting the volume on the client.
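For reference, the remount workaround amounts to something like the following on the client, which forces the client to fetch a fresh volume file with the current brick list (the server name, volume name, and mount point here are placeholders, not from this report):

```shell
# Unmount and remount the gluster volume on the client so it
# picks up the current brick/port list.
# "server1", "gv0" and "/mnt/gv0" are hypothetical example names.
umount /mnt/gv0
mount -t glusterfs server1:/gv0 /mnt/gv0
```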
Version-Release number of selected component (if applicable):
Tested on gluster 3.4
Steps to Reproduce:
1. See description above.
Client still tries to connect on the old ports, even though nothing is listening on them on the gluster hosts any more.
Once brick(s) are removed, client should stop trying to contact those old, no longer used ports.
Hope this helps.
AFAIK Gluster doesn't open firewall ports when you add a new brick... that would be up to the sysadmin of the server/clients in question...
See this for more info on what ports you need to open:
(In reply to Richard from comment #1)
> AFAIK Gluster doesn't open firewall ports when you add a new brick... that
> would be up to the sysadmin of the server/clients in question...
I think you misunderstood the issue. I'll try to explain it better. I don't expect gluster to open/close ports of course! I've whitelisted the exact ports gluster should need for a client to mount. However, when this list of ports that the client needs changes (because of a brick removed), I then change the firewall settings. I still see the client attempting to connect on *old* ports that shouldn't be used anymore. I see this because I have my firewall report any incorrect attempts at using closed ports. The client continues to try until a remount.
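For context, the whitelisting described above looks roughly like the following (an illustrative iptables sketch, not the actual puppet-gluster rules; the brick port numbers are examples, and the exact range depends on the gluster version — 3.4 allocates brick ports from 49152 upward):

```shell
# Illustrative firewall rules on a gluster server:
# 24007 is the glusterd management port; 49152-49153 stand in for
# two brick ports on a 3.4 volume (example values).
iptables -A INPUT -p tcp --dport 24007 -j ACCEPT
iptables -A INPUT -p tcp --dport 49152:49153 -j ACCEPT

# After remove-brick, the now-unused brick port is closed again;
# the bug is that the client keeps hitting it until a remount.
iptables -D INPUT -p tcp --dport 49153 -j ACCEPT
```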
> See this for more info on what ports you need to open:
I know this of course ;) I'm the puppet-gluster guy :P https://github.com/purpleidea/puppet-gluster
lol, how spooky is that, sorry to link you back to your own pages ;-)
When you remove a brick, how do you do it? Do you just switch the brick off, or run the applicable "gluster volume remove-brick" command to nicely remove the brick from the volume?
(In reply to Richard from comment #3)
> lol, how spooky is that, sorry to link you back to your own pages ;-)
haha, actually I'm not jamescoyle that's someone else.
I'm pointing out that I've got the detailed firewall rules in my puppet module, so chances are I've already figured out this newbie problem ;)
> When you remove a brick, how do you do it? Do you just switch the brick off,
> or run the applicable "gluster volume remove-brick" command to nicely remove
> the brick from the volume?
The above results were obtained from running the remove-brick command...
After they were run, I subsequently updated the firewall to close the unneeded extra ports. I then saw logs of the firewall blocking requests on those ports from the client.
Obviously the client didn't get the message and was still uselessly connecting to old ports for a now missing brick.
Remounting the client fixed the issue.
Just pointing out the bug that the client should stop using bricks once they are no longer around of course.
oops, my mistake ;-)
After running the remove brick command did you detach the node from the pool?
"gluster peer detach hostname/ip"?
If not, the client node may still be checking whether that node is connected to the pool?
(In reply to Richard from comment #5)
> oops, my mistake ;-)
> After running the remove brick command did you detach the node from the pool?
> "gluster peer detach hostname/ip"?
You don't necessarily want to detach the host, if there are remaining bricks on there that you're still using. So no.
> if not, the client node may check if that node is still connected to the
It should still connect to the remaining bricks if any.
so are there any remaining bricks on the host?
(In reply to Richard from comment #7)
> so are there any remaining bricks on the host?
There were, yes.
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained, at the moment these are 3.6 and 3.5.
This bug has been filed against the 3.4 release, and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "email@example.com".
If there is no response by the end of the month, this bug will get automatically closed.
GlusterFS 3.4.x has reached end-of-life.
If this bug still exists in a later release please reopen this and change the version or open a new bug.