Bug 848342 - Dbench exit with "Bad file descriptor" if the graph-change happens
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: fuse
Version: 2.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Assigned To: Raghavendra G
QA Contact: Vijaykumar Koppad
Depends On: 827405
Blocks:
Reported: 2012-08-15 05:58 EDT by Vidya Sakar
Modified: 2014-08-24 20:49 EDT (History)
CC: 7 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 827405
Environment:
Last Closed: 2013-09-23 18:36:19 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Vidya Sakar 2012-08-15 05:58:41 EDT
+++ This bug was initially created as a clone of Bug #827405 +++

Description of problem:
Ran dbench on a geo-rep setup with the master configured as distributed-replicate and the slave configured as distributed-replicate, while quota was being set and unset in a loop in parallel. Dbench exited with "Bad file descriptor".

100       686     4.19 MB/sec  execute 174 sec  latency 9912.135 ms
 100       686     4.16 MB/sec  execute 175 sec  latency 10343.618 ms
[656] write failed on handle 10011 (Bad file descriptor)
Child failed with status 1


Version-Release number of selected component (if applicable): 3.3.0qa45


How reproducible: Frequently


Steps to Reproduce:
1. Create a geo-rep setup with the master as distributed-replicate and the slave as distributed-replicate.
2. Run dbench on the mount point.
3. Enable and disable quota in a loop (in parallel with step 2).
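The steps above could be driven by a script along the following lines; this is a hedged sketch, not taken from the report: the volume name, mount point, dbench client count, run time, and sleep intervals are all illustrative assumptions.

```shell
#!/bin/sh
# Reproduction sketch: run dbench against the FUSE mount while quota is
# toggled in a loop on the master volume. Values below are assumptions.
VOL=master
MNT=/mnt/master

# Step 2: run dbench in the background (100 clients, 180-second run).
dbench -D "$MNT" -t 180 100 &
DBENCH_PID=$!

# Step 3: toggle quota while dbench is still running (per the comment
# below, steps 2 and 3 must be done in parallel).
while kill -0 "$DBENCH_PID" 2>/dev/null; do
    gluster volume quota "$VOL" enable
    sleep 5
    gluster volume quota "$VOL" disable
    sleep 5
done

# A non-zero exit status here corresponds to dbench aborting with
# "Bad file descriptor".
wait "$DBENCH_PID"
```

Each quota enable/disable triggers a graph change on the client, which is what the write in flight races against.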
  
Actual results: dbench exits with "Bad file descriptor".


Expected results: dbench should complete successfully.


Additional info:

volume info of the master 


Volume Name: master
Type: Distributed-Replicate
Volume ID: be06889e-66e2-4398-98be-a904652c7a42
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 172.17.251.151:/exportdir/d1
Brick2: 172.17.251.151:/exportdir/d2
Brick3: 172.17.251.151:/exportdir/d3
Brick4: 172.17.251.151:/exportdir/d4
Options Reconfigured:
features.quota: on
geo-replication.indexing: on
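For reference, commands along these lines would produce a volume matching the info above. The brick paths and host are taken from the report; the slave volume specification is a hypothetical placeholder.

```shell
# Recreate the 2x2 distributed-replicate master volume (sketch).
gluster volume create master replica 2 transport tcp \
    172.17.251.151:/exportdir/d1 172.17.251.151:/exportdir/d2 \
    172.17.251.151:/exportdir/d3 172.17.251.151:/exportdir/d4
gluster volume start master

# Sets the "features.quota: on" option shown above.
gluster volume quota master enable

# "geo-replication.indexing: on" is set when geo-replication is started,
# e.g. (slave specification is a placeholder):
# gluster volume geo-replication master <slave-host>::<slave-vol> start
```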

--- Additional comment from vkoppad@redhat.com on 2012-06-01 07:37:49 EDT ---

Created attachment 588419 [details]
Client log file.

--- Additional comment from vkoppad@redhat.com on 2012-06-01 07:38:54 EDT ---

In the steps to reproduce,
step 2 and step 3 should be done in parallel.

--- Additional comment from amarts@redhat.com on 2012-07-11 07:02:11 EDT ---

Reducing the priority to medium, as running 'quota set/unset' in a loop is not a practical use case.
Comment 2 Amar Tumballi 2012-08-23 02:45:05 EDT
This bug is not seen in the current master branch (which will be branched as RHS 2.1.0 soon). To consider it for fixing, we want to make sure this bug still exists on RHS servers. If it cannot be reproduced, we would like to close this.
Comment 3 Amar Tumballi 2012-10-11 06:34:40 EDT
With proper graph-change handling this should be fixed; upstream already has support for it, and it needs a round of verification.
Comment 5 Scott Haines 2013-09-23 18:36:19 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
