Bug 1599702

Summary: stale glusterfs processes when quota is being enabled/disabled with glusterd being restarted
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Nag Pavan Chilakam <nchilaka>
Component: quota
Assignee: hari gowtham <hgowtham>
Status: CLOSED WONTFIX
QA Contact: Rahul Hinduja <rhinduja>
Severity: high
Docs Contact:
Priority: low
Version: rhgs-3.3
CC: amukherj, nchilaka, rhs-bugs, sankarshan, storage-qa-internal
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-04-22 12:23:08 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Nag Pavan Chilakam 2018-07-10 12:07:24 UTC
Description of problem:
======================
Stale glusterfs processes are created when glusterd is restarted while quota is being enabled and disabled in a loop on a volume.


Version-Release number of selected component (if applicable):
---------------
3.8.4-54.14

How reproducible:
---------------
always

Steps to Reproduce:
1. Have a 1x3 volume created and started.
2. On node n1, open a terminal (t1) and restart glusterd in a loop, with a 10-second gap between iterations:
(# while true; do service glusterd restart; sleep 10; done)
3. From a second terminal (t2), log in to n1 and enable/disable quota in a loop (the volume here is named ntap):
(# while true; do gluster v quota ntap enable; sleep 0.5; gluster v quota ntap disable --mode=script; done)


Actual results:
---------------
Many stale glusterfs processes accumulate, and only on node n1 (the node where glusterd is being restarted).
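One way to watch the leak is to count glusterfs processes while the two loops run; a growing count means stale processes. This is a sketch: the `count_glusterfs` helper name is my own, and the `[g]` bracket in the pattern is just the usual trick to stop grep from matching its own command line when used in a pipeline.

```shell
#!/bin/sh
# Count lines mentioning glusterfs in ps output read from stdin.
# The [g]lusterfs pattern prevents grep from matching its own invocation.
count_glusterfs() {
    grep -c '[g]lusterfs'
}

# On the affected node (n1), run repeatedly while the repro loops are active:
#   ps -ef | count_glusterfs
```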

Expected results:
----------------
No stale glusterfs processes should remain after quota enable/disable completes.


Additional info: