Bug 1309551 - glusterd stopped working when peer probe & detach of a node and volume status were executed in parallel loops.
Summary: glusterd stopped working when peer probe & detach of a node an...
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Severity: low
Priority: low
Target Milestone: ---
Target Release: ---
Assignee: Satish Mohan
QA Contact: Byreddy
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-02-18 05:46 UTC by Byreddy
Modified: 2016-09-17 16:47 UTC
5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-07-19 05:49:32 UTC
Target Upstream Version:


Attachments
Node-1-sosreport (10.50 MB, application/x-tar)
2016-02-19 04:35 UTC, Byreddy
Node-2-sosreport (9.93 MB, application/x-tar)
2016-02-19 04:37 UTC, Byreddy
Node-3-sosreport (9.74 MB, application/x-tar)
2016-02-19 04:39 UTC, Byreddy

Description Byreddy 2016-02-18 05:46:48 UTC
Description of problem:
=======================
GlusterD stopped working when I ran two gluster commands in parallel on a two-node cluster: on one node, I probed and detached a third node in a loop, and on the other node I ran gluster volume status in a loop. After some time, glusterd stopped running on the node doing the probe/detach.

The probe commands failed with the following error message:

peer probe: success. 
peer detach: success
peer probe: failed: Error through RPC layer, retry again later



Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.5-19


How reproducible:
=================
Every time

Steps to Reproduce:
===================
1. Set up a two-node cluster (node-1 and node-2) with any type of volume.
2. Execute 2a and 2b on the two nodes in parallel:
   2a)On Node-1 execute the below command:
>> for i in `seq 1 150`; do gluster peer probe <node-3>;gluster peer detach  <node-3>;done

   2b)On node-2: execute the below command:
>> for i in `seq 1 200`;do gluster volume status; done

3. Observe the results of the two commands.
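
The two loops from steps 2a and 2b can be sketched as one script (for illustration only; in the bug they ran on two separate nodes). `<node-3>` is the placeholder hostname from the steps above, and the `GLUSTER` variable is an assumption added here so the sketch can be dry-run without a cluster:

```shell
#!/bin/sh
# Sketch of the reproduction in steps 2a/2b. Defaults to a dry run
# (GLUSTER=echo); set GLUSTER=gluster on a real cluster.
GLUSTER=${GLUSTER:-echo}

# Node-1 loop: repeatedly probe and detach the third node.
probe_loop() {
    for i in $(seq 1 150); do
        "$GLUSTER" peer probe "$1"
        "$GLUSTER" peer detach "$1"
    done
}

# Node-2 loop: repeatedly query volume status.
status_loop() {
    for i in $(seq 1 200); do
        "$GLUSTER" volume status
    done
}

# In the bug the loops ran concurrently on two different nodes;
# backgrounding both here approximates that concurrency.
probe_loop "<node-3>" >/dev/null &
status_loop >/dev/null &
wait
```

Running the loops from a single shell does not reproduce the cross-node RPC traffic exactly, but it shows the shape of the workload that triggered the crash.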

Actual results:
===============
GlusterD stopped running while the two commands were executing in parallel.


Expected results:
=================
GlusterD should not stop running when the commands are executed in parallel.

Additional info:

Comment 2 Atin Mukherjee 2016-02-18 05:50:02 UTC
The steps look like negative testing, and this would never be done in production. Gaurav has already started looking into it; hence I am lowering the severity and priority.

Byreddy,

Please attach the sosreport along with the core file from both the nodes.

~Atin

Comment 3 Byreddy 2016-02-18 07:17:56 UTC
I will attach the sosreports.

Comment 4 Byreddy 2016-02-19 04:35:26 UTC
Created attachment 1128427 [details]
Node-1-sosreport

Comment 5 Byreddy 2016-02-19 04:37:00 UTC
Created attachment 1128428 [details]
Node-2-sosreport

Comment 6 Byreddy 2016-02-19 04:39:10 UTC
Created attachment 1128430 [details]
Node-3-sosreport

Comment 8 Atin Mukherjee 2016-07-19 05:49:32 UTC
We do not have any plan to work on this in the near future, as the use case mentioned here will not be hit in any production setup. I am closing this bug right away. If you think otherwise, reopen it with a justification.

