Bug 848342

Summary: Dbench exits with "Bad file descriptor" if a graph change happens
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Vidya Sakar <vinaraya>
Component: fuse
Assignee: Raghavendra G <rgowdapp>
Status: CLOSED ERRATA
QA Contact: Vijaykumar Koppad <vkoppad>
Severity: high
Priority: medium
Version: 2.0
CC: aavati, amarts, bbandari, csaba, gluster-bugs, rfortier, vkoppad
Hardware: x86_64
OS: Linux
Doc Type: Bug Fix
Clone Of: 827405
Bug Depends On: 827405
Type: Bug
Last Closed: 2013-09-23 22:36:19 UTC

Description Vidya Sakar 2012-08-15 09:58:41 UTC
+++ This bug was initially created as a clone of Bug #827405 +++

Description of problem:
While dbench was running on a geo-replication setup with both the master and the slave volumes configured as distributed-replicate, quota was being enabled and disabled in a loop in parallel. Dbench exited with "Bad file descriptor".

100       686     4.19 MB/sec  execute 174 sec  latency 9912.135 ms
 100       686     4.16 MB/sec  execute 175 sec  latency 10343.618 ms
[656] write failed on handle 10011 (Bad file descriptor)
Child failed with status 1


Version-Release number of selected component (if applicable): 3.3.0qa45


How reproducible: Frequently


Steps to Reproduce:
1. Create a geo-replication setup with both the master and the slave as distributed-replicate volumes.
2. Run dbench on the mount point.
3. Enable and disable quota in a loop, in parallel with step 2 (see the sketch below).
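
A rough sketch of how steps 2 and 3 might be driven in parallel from the master mount point (the mount path /mnt/master, the dbench client count and run time, and the 5-second toggle interval are assumptions for illustration, not values taken from this report):

# run dbench on the FUSE mount of the master volume (10 clients, ~300 s)
cd /mnt/master
dbench -t 300 10 &
DBENCH_PID=$!

# in parallel, toggle quota on the master volume until dbench exits
while kill -0 "$DBENCH_PID" 2>/dev/null; do
    gluster volume quota master enable
    sleep 5
    gluster volume quota master disable
    sleep 5
done
wait "$DBENCH_PID"

Each quota enable/disable changes the client-side volume graph, which appears to be what triggers the "Bad file descriptor" failure on dbench's open file handles.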
  
Actual results: dbench exits with "Bad file descriptor".


Expected results: dbench should complete successfully.


Additional info:

Volume info of the master volume:


Volume Name: master
Type: Distributed-Replicate
Volume ID: be06889e-66e2-4398-98be-a904652c7a42
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 172.17.251.151:/exportdir/d1
Brick2: 172.17.251.151:/exportdir/d2
Brick3: 172.17.251.151:/exportdir/d3
Brick4: 172.17.251.151:/exportdir/d4
Options Reconfigured:
features.quota: on
geo-replication.indexing: on
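
For reference, a minimal sketch of how a comparable setup might be created with the gluster CLI. The brick layout mirrors the volume info above, but the slave host, slave volume name, and slave URL format are assumptions, and placing all bricks on one host may prompt for confirmation:

# create and start the 2 x 2 distributed-replicate master volume
gluster volume create master replica 2 \
    172.17.251.151:/exportdir/d1 172.17.251.151:/exportdir/d2 \
    172.17.251.151:/exportdir/d3 172.17.251.151:/exportdir/d4
gluster volume start master

# enable quota and start geo-replication to the slave (slave URL is assumed);
# starting geo-replication turns on geo-replication.indexing as shown above
gluster volume quota master enable
gluster volume geo-replication master slavehost::slavevol start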

--- Additional comment from vkoppad on 2012-06-01 07:37:49 EDT ---

Created attachment 588419 [details]
Client log file.

--- Additional comment from vkoppad on 2012-06-01 07:38:54 EDT ---

In the steps to reproduce, steps 2 and 3 should be performed in parallel.

--- Additional comment from amarts on 2012-07-11 07:02:11 EDT ---

Reducing the priority to medium, as running 'quota set/unset' in a loop is not a practical use case.

Comment 2 Amar Tumballi 2012-08-23 06:45:05 UTC
This bug is not seen on the current master branch (which will be branched as RHS 2.1.0 soon). Before considering it for a fix, we want to confirm that the bug still exists on RHS servers; if it cannot be reproduced, we would like to close it.

Comment 3 Amar Tumballi 2012-10-11 10:34:40 UTC
With proper graph-change handling this should be fixed; upstream already has support for it, and it needs a round of verification.

Comment 5 Scott Haines 2013-09-23 22:36:19 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html