Bug 1564600 - Client can create denial of service (DOS) conditions on server
Summary: Client can create denial of service (DOS) conditions on server
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: rpc
Version: mainline
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Milind Changire
QA Contact:
URL:
Whiteboard:
Duplicates: 1416327
Depends On:
Blocks: 1563804
 
Reported: 2018-04-06 17:43 UTC by Milind Changire
Modified: 2018-10-25 08:29 UTC
CC List: 9 users

Fixed In Version: glusterfs-v4.1.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1563804
Environment:
Last Closed: 2018-06-20 18:03:42 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Milind Changire 2018-04-06 17:47:59 UTC
Description:
Gluster setups with a large number of bricks or a large number of client connections can flood the glusterd process with connection requests, causing connections to be dropped and SYN flood messages to be logged to the system logs.
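
On Linux, these drops typically surface in the kernel log; a representative message (24007 being glusterd's default port) looks something like:

  TCP: request_sock_TCP: Possible SYN flooding on port 24007. Sending cookies. Check SNMP counters.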

Comment 2 Worker Ant 2018-04-06 17:54:11 UTC
REVIEW: https://review.gluster.org/19833 (rpc: rearm listener socket early) posted (#1) for review on master by Milind Changire

Comment 3 Worker Ant 2018-04-07 03:01:41 UTC
COMMIT: https://review.gluster.org/19833 committed in master by "Raghavendra G" <rgowdapp> with a commit message- rpc: rearm listener socket early

Problem:
On node reboot, when glusterd starts volumes, a setup with a large
number of bricks might cause SYN Flooding and connections to be dropped
if the connections are not accepted quickly enough.

Solution:
accept() the connection and rearm the listener socket early to receive
more connection requests as soon as possible.

Change-Id: Ibed421e50284c3f7a8fcdb4de7ac86cf53d4b74e
fixes: bz#1564600
Signed-off-by: Milind Changire <mchangir>
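
A minimal sketch of the "rearm early" pattern described above, assuming an epoll-based one-shot event loop (GlusterFS's epoll handler uses EPOLLONESHOT); on_listener_ready() and handle_new_connection() are hypothetical names, not GlusterFS symbols:

#include <stddef.h>
#include <sys/epoll.h>
#include <sys/socket.h>

extern void handle_new_connection (int fd); /* hypothetical per-connection setup */

static void
on_listener_ready (int epfd, int listen_fd)
{
    struct epoll_event ev;
    int conn_fd = accept (listen_fd, NULL, NULL);

    /* Rearm the one-shot listener *before* any per-connection setup so
     * the next pending connections are accepted as soon as possible,
     * instead of queueing (and possibly overflowing the SYN backlog)
     * behind that work. */
    ev.events = EPOLLIN | EPOLLONESHOT;
    ev.data.fd = listen_fd;
    epoll_ctl (epfd, EPOLL_CTL_MOD, listen_fd, &ev);

    if (conn_fd >= 0)
        handle_new_connection (conn_fd);
}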

Comment 4 Worker Ant 2018-04-08 13:43:34 UTC
REVIEW: https://review.gluster.org/19834 (rpc: handle poll_err after rearming listener socket early) posted (#1) for review on master by Milind Changire

Comment 5 Worker Ant 2018-04-09 06:04:12 UTC
REVIEW: https://review.gluster.org/19836 (rpc: set listen-backlog to high value) posted (#1) for review on master by Milind Changire

Comment 6 Worker Ant 2018-04-13 03:24:14 UTC
COMMIT: https://review.gluster.org/19836 committed in master by "Raghavendra G" <rgowdapp> with a commit message- rpc: set listen-backlog to high value

Problem:
On node reboot, when glusterd starts volumes rapidly, there's a flood of
connections from the bricks to glusterd and from the self-heal daemons
to the bricks. This causes SYN Flooding and dropped connections when the
listen-backlog is too small to hold the pending connections, given the
rate at which the RPC layer accepts them.

Solution:
Increase the listen-backlog value to 1024. This is a partial solution.
Part of the solution is to rearm the listener socket early for quicker
accept() of connections.
See commit 6964640a977cb10c0c95a94e03c229918fa6eca8 (change 19833)

Change-Id: I62283d1f4990dd43839f9a6932cf8a36effd632c
fixes: bz#1564600
Signed-off-by: Milind Changire <mchangir>
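
A minimal sketch of what the backlog change amounts to at the socket layer; GLUSTER_SOCKET_LISTEN_BACKLOG and start_listening() are illustrative names, not the exact GlusterFS identifiers:

#include <stdio.h>
#include <sys/socket.h>

#define GLUSTER_SOCKET_LISTEN_BACKLOG 1024 /* value raised by this change */

static int
start_listening (int sock_fd)
{
    /* The backlog bounds the kernel's queue of fully established
     * connections waiting for accept(); the kernel silently caps this
     * value at net.core.somaxconn, so that sysctl may need to be raised
     * as well for the larger backlog to take effect. */
    if (listen (sock_fd, GLUSTER_SOCKET_LISTEN_BACKLOG) != 0) {
        perror ("listen");
        return -1;
    }
    return 0;
}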

Comment 7 Worker Ant 2018-04-16 05:58:18 UTC
REVIEW: https://review.gluster.org/19874 (glusterd: update listen-backlog value to 1024) posted (#1) for review on master by Milind Changire

Comment 8 Worker Ant 2018-04-18 14:43:10 UTC
COMMIT: https://review.gluster.org/19874 committed in master by "Atin Mukherjee" <amukherj> with a commit message- glusterd: update listen-backlog value to 1024

Update the default value of listen-backlog to 1024 to reflect the change
in socket.c.

This keeps the actual implementation in socket.c and the help text in
glusterd-volume-set.c consistent.

Change-Id: If04c9e0bb5afb55edcc7ca57bbc10922b85b7075
fixes: bz#1564600
Signed-off-by: Milind Changire <mchangir>
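
For reference, the option whose help text this change updates can be inspected and tuned per volume from the CLI (the volume name below is a placeholder, and the option key is assumed to be the transport.listen-backlog entry from glusterd-volume-set.c):

  gluster volume get <volname> transport.listen-backlog
  gluster volume set <volname> transport.listen-backlog 1024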

Comment 9 Shyamsundar 2018-06-20 18:03:42 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 10 Xavi Hernandez 2018-10-25 08:29:43 UTC
*** Bug 1416327 has been marked as a duplicate of this bug. ***

