Bug 1012863 - Gluster fuse client checks old firewall ports
Summary: Gluster fuse client checks old firewall ports
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: 3.4.0
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-09-27 09:31 UTC by purpleidea
Modified: 2015-10-07 12:17 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-10-07 12:17:17 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description purpleidea 2013-09-27 09:31:07 UTC
Description of problem:

A client mounts a gluster volume.
The firewall has so far whitelisted exactly the ports needed.
An add-brick command is run on the server, but the extra ports that the client will need open on the servers to reach those bricks aren't open yet.
The client starts contacting the server but gets blocked on those ports by the firewall.
A remove-brick operation on those same bricks is then run.
The bricks are removed successfully; however, the client can still be seen trying to contact the server on the old ports (which were never actually opened).
Not sure if this also happens when the client isn't blocked.
I solved the issue by remounting the volume on the client.


Version-Release number of selected component (if applicable):

Tested on gluster 3.4

How reproducible:

100%

Steps to Reproduce:
1. See description above; a rough command-line sketch of the sequence follows below.
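
For illustration, the sequence was roughly as follows (volume, brick, host, and mount names here are hypothetical; ports assume the 3.4 defaults of 24007/tcp for glusterd and 49152/tcp upwards for bricks):

  # on the client: mount the volume
  mount -t glusterfs server1:/myvol /mnt/myvol

  # on a server: add a brick; its port (e.g. 49153) is not yet
  # whitelisted, so the client's connections to it get blocked
  gluster volume add-brick myvol server1:/export/brick2

  # remove the same brick again
  gluster volume remove-brick myvol server1:/export/brick2 start
  gluster volume remove-brick myvol server1:/export/brick2 commit

  # the client keeps retrying the now-unused brick port until the
  # volume is remounted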

Actual results:
The client still tries to connect on the old ports, even though nothing is listening on them on the gluster hosts any more.

Expected results:
Once the brick(s) are removed, the client should stop trying to contact those old, no-longer-used ports.


Additional info:
Hope this helps.

Cheers,
James

Comment 1 Richard 2013-10-07 12:04:58 UTC
AFAIK Gluster doesn't open firewall ports when you add a new brick... that would be up to the sysadmin of the server/clients in question...

See this for more info on what ports you need to open:
http://www.jamescoyle.net/how-to/457-glusterfs-firewall-rules
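
For example, something along these lines on the servers (iptables, assuming the 3.4 default port ranges; widen the brick range to match your brick count):

  # glusterd management port
  iptables -A INPUT -p tcp --dport 24007 -j ACCEPT
  # brick ports: 3.4 allocates these from 49152 upwards, one per brick
  iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT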

Richard.

Comment 2 purpleidea 2013-10-07 18:30:55 UTC
(In reply to Richard from comment #1)
Hi,


> AFAIK Gluster doesn't open firewall ports when you add a new brick... that
> would be up to the sysadmin of the server/clients in question...
I think you misunderstood the issue, so I'll try to explain it better. I don't expect gluster to open/close ports, of course! I've whitelisted exactly the ports gluster should need for a client to mount. However, when the list of ports that the client needs changes (because a brick was removed), I change the firewall settings accordingly. I still see the client attempting to connect on *old* ports that shouldn't be used any more. I see this because I have my firewall report any attempts at using closed ports. The client continues to try until a remount.


> 
> See this for more info on what ports you need to open:
> http://www.jamescoyle.net/how-to/457-glusterfs-firewall-rules
I know this of course ;) I'm the puppet-gluster guy :P https://github.com/purpleidea/puppet-gluster

> 
> Richard.
James

Comment 3 Richard 2013-10-08 20:31:10 UTC
lol, how spooky is that, sorry to link you back to your own pages ;-)

When you remove a brick, how do you do it? Do you just switch the brick off, or run the applicable "gluster volume remove-brick" command to nicely remove the brick from the volume?
Thanks,
Rich

Comment 4 purpleidea 2013-10-08 20:39:05 UTC
(In reply to Richard from comment #3)
> lol, how spooky is that, sorry to link you back to your own pages ;-)
haha, actually I'm not jamescoyle; that's someone else.
I was pointing out that I've got the detailed firewall rules in my puppet module, so chances are I've already figured out this newbie problem ;)

> 
> When you remove a brick, how do you do it? Do you just switch the brick off,
> or run the applicable "gluster volume remove-brick" command to nicely remove
> the brick from the volume?
The above results were obtained by running the remove-brick command...
After it ran, I updated the firewall to close the now-unneeded extra ports. I then saw the firewall logging blocked requests from the client on those ports.
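
Concretely, something like this (volume and brick names hypothetical):

  gluster volume remove-brick myvol server1:/export/brick2 start
  # wait for the data migration to finish
  gluster volume remove-brick myvol server1:/export/brick2 status
  gluster volume remove-brick myvol server1:/export/brick2 commit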

Obviously the client didn't get the message and was still uselessly trying to connect to the old ports for a now-missing brick.

Remounting the client fixed the issue.
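
i.e. something like (mount point and volume hypothetical):

  umount /mnt/myvol
  mount -t glusterfs server1:/myvol /mnt/myvol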

Just pointing out the bug, of course: the client should stop using bricks once they are no longer around.

> Thanks,
> Rich

Cheers,
James

Comment 5 Richard 2013-10-08 20:46:53 UTC
oops, my mistake ;-)

After running the remove-brick command, did you detach the node from the pool
("gluster peer detach hostname/ip")?

If not, the client may still be checking whether that node is connected to the pool.
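
One way to see which ports the remaining bricks are actually listening on (volume name hypothetical) is:

  gluster volume status myvol
  # the "Port" column shows the TCP port of each brick process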

Comment 6 purpleidea 2013-10-08 20:49:10 UTC
(In reply to Richard from comment #5)
> oops, my mistake ;-)
> 
> After running the remove brick command did you detach the node from the pool?
> "gluster peer detach hostname/ip"?
You don't necessarily want to detach the host if there are remaining bricks on it that you're still using. So no.

> 
> if not, the client node may check if that node is still connected to the
> pool?

The client should still connect to the remaining bricks, if any.

Comment 7 Richard 2013-10-08 21:19:00 UTC
so are there any remaining bricks on the host?

Comment 8 purpleidea 2013-10-08 21:52:46 UTC
(In reply to Richard from comment #7)
> so are there any remaining bricks on the host?

There were, yes.

Comment 9 Niels de Vos 2015-05-17 21:59:22 UTC
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained, at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release, and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. If updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs".

If there is no response by the end of the month, this bug will get automatically closed.

Comment 10 Kaleb KEITHLEY 2015-10-07 12:17:17 UTC
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release, please reopen this bug and change the version, or open a new bug.

