Bug 1012863
| Summary: | Gluster fuse client checks old firewall ports | | |
| --- | --- | --- | --- |
| Product: | [Community] GlusterFS | Reporter: | purpleidea |
| Component: | core | Assignee: | bugs <bugs> |
| Status: | CLOSED EOL | QA Contact: | |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.4.0 | CC: | bugs, gluster-bugs, purpleidea, redhat.bugs |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-10-07 12:17:17 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description (purpleidea, 2013-09-27 09:31:07 UTC)
Comment 1 (Richard):

AFAIK Gluster doesn't open firewall ports when you add a new brick... that would be up to the sysadmin of the server/clients in question... See this for more info on what ports you need to open: http://www.jamescoyle.net/how-to/457-glusterfs-firewall-rules

Richard.

Comment 2 (James / purpleidea):

(In reply to Richard from comment #1)

Hi,

> AFAIK Gluster doesn't open firewall ports when you add a new brick... that
> would be up to the sysadmin of the server/clients in question...

I think you misunderstood the issue. I'll try to explain it better. I don't expect gluster to open/close ports of course! I've whitelisted the exact ports gluster should need for a client to mount. However, when this list of ports that the client needs changes (because of a brick being removed), I then change the firewall settings. I still see the client attempting to connect on *old* ports that shouldn't be used anymore. I see this because I have my firewall report any incorrect attempts at using closed ports. The client continues to try until a remount.

> See this for more info on what ports you need to open:
> http://www.jamescoyle.net/how-to/457-glusterfs-firewall-rules

I know this of course ;) I'm the puppet-gluster guy :P https://github.com/purpleidea/puppet-gluster

> Richard.

James

Comment 3 (Richard):

lol, how spooky is that, sorry to link you back to your own pages ;-)

When you remove a brick, how do you do it? Do you just switch the brick off, or run the applicable "gluster volume remove-brick" command to nicely remove the brick from the volume?

Thanks,
Rich

Comment 4 (James):

(In reply to Richard from comment #3)

> lol, how spooky is that, sorry to link you back to your own pages ;-)

haha, actually I'm not jamescoyle, that's someone else. I'm pointing out that I've got the detailed firewall rules in my puppet module, so chances are I've already figured out this newbie problem ;)

> When you remove a brick, how do you do it? Do you just switch the brick off,
> or run the applicable "gluster volume remove-brick" command to nicely remove
> the brick from the volume?

The above results were obtained from running the remove-brick command... After it was run, I subsequently updated the firewall to close the unneeded extra ports. I then saw logs of the firewall blocking requests on those ports from the client. Obviously the client didn't get the message and was still uselessly connecting to old ports for a now missing brick. Remounting the client fixed the issue. Just pointing out the bug: the client should stop using bricks once they are no longer around, of course.

> Thanks,
> Rich

Cheers,
James

Comment 5 (Richard):

oops, my mistake ;-)

After running the remove-brick command, did you detach the node from the pool? "gluster peer detach hostname/ip"? If not, the client node may check if that node is still connected to the pool?

Comment 6 (James):

(In reply to Richard from comment #5)

> After running the remove-brick command, did you detach the node from the pool?
> "gluster peer detach hostname/ip"?

You don't necessarily want to detach the host if there are remaining bricks on there that you're still using. So no.

> If not, the client node may check if that node is still connected to the
> pool?

It should still connect to the remaining bricks, if any.

Comment 7 (Richard):

So are there any remaining bricks on the host?

Comment 8 (James):

(In reply to Richard from comment #7)

> so are there any remaining bricks on the host?

There were, yes.
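For context on the firewall whitelist being discussed, here is a minimal iptables sketch of what a server-side port whitelist for a GlusterFS 3.4-era deployment typically looks like; the brick-port range and rule placement are assumptions based on Gluster defaults, not something recorded in this report:

```bash
# Hypothetical server-side rules for a GlusterFS 3.4 setup (adjust as needed).
# 24007 is the glusterd management port; each brick listens on its own port,
# starting at 49152 on 3.4+ (24009+ on older releases).
iptables -A INPUT -p tcp --dport 24007 -j ACCEPT           # glusterd management
iptables -A INPUT -p tcp --dport 49152:49155 -j ACCEPT     # one port per brick
# When a brick is removed from the volume, its port is usually closed again;
# that is the step that exposed the stale client connections in this report.
```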
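And a sketch of the sequence James describes, with an assumed volume name, hostnames, brick path, and port number; the gluster, iptables, and mount commands are the standard 3.4-era CLI, but the exact invocations used in the report are not recorded here:

```bash
# 1. Remove a brick cleanly (assumed distribute volume "myvol").
gluster volume remove-brick myvol server2:/export/brick1 start
gluster volume remove-brick myvol server2:/export/brick1 status
gluster volume remove-brick myvol server2:/export/brick1 commit

# 2. Close the firewall port that only the removed brick used (port assumed).
iptables -D INPUT -p tcp --dport 49153 -j ACCEPT

# 3. Symptom: an already-mounted FUSE client keeps connecting to the old
#    brick port, so the firewall logs dropped packets for 49153.

# 4. Workaround noted in the report: remount the client.
umount /mnt/gluster
mount -t glusterfs server1:/myvol /mnt/gluster
```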
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained; at the moment these are 3.6 and 3.5. This bug has been filed against the 3.4 release and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs". If there is no response by the end of the month, this bug will get closed automatically.

GlusterFS 3.4.x has reached end-of-life. If this bug still exists in a later release, please reopen it and change the version, or open a new bug.