Bug 827405 - Dbench exit with "Bad file descriptor" if the graph-change happens
Status: CLOSED DUPLICATE of bug 804592
Product: GlusterFS
Classification: Community
Component: fuse
Version: mainline
Hardware: x86_64  OS: Linux
Priority: medium  Severity: high
Assigned To: Raghavendra G
Keywords: Triaged
Depends On:
Blocks: 848342
Reported: 2012-06-01 07:34 EDT by Vijaykumar Koppad
Modified: 2014-08-24 20:49 EDT
CC List: 3 users

Doc Type: Bug Fix
Clones: 848342
Last Closed: 2012-08-29 05:17:15 EDT
Type: Bug


Attachments:
Client log file. (1005.78 KB, text/x-log)
2012-06-01 07:37 EDT, Vijaykumar Koppad

Description Vijaykumar Koppad 2012-06-01 07:34:29 EDT
Description of problem:
While running dbench on a geo-rep setup with both the master and the slave configured as distributed-replicate, and with quota being set and unset in a loop in parallel, dbench exited with "Bad file descriptor".

100       686     4.19 MB/sec  execute 174 sec  latency 9912.135 ms
 100       686     4.16 MB/sec  execute 175 sec  latency 10343.618 ms
[656] write failed on handle 10011 (Bad file descriptor)
Child failed with status 1


Version-Release number of selected component (if applicable): 3.3.0qa45


How reproducible: Frequently


Steps to Reproduce:
1. Create a geo-rep setup with the master as distributed-replicate and the slave as distributed-replicate.
2. Run dbench on the mount point.
3. Enable and disable quota in a loop (see the sketch below).
  
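A minimal reproduction sketch, assuming the master volume is FUSE-mounted at /mnt/master (the mount path, client count, run time and sleep intervals are illustrative, not taken from the report). Per comment 2 below, steps 2 and 3 run in parallel:

# shell 1: run dbench with 100 clients on the fuse mount of the master volume
dbench -D /mnt/master -t 600 100

# shell 2: toggle quota on the master volume in a loop,
# which triggers a client-side graph change on every switch
while true; do
    gluster volume quota master enable
    sleep 5
    gluster volume quota master disable
    sleep 5
done
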
Actual results: dbench exits with "Bad file descriptor".


Expected results: dbench should complete successfully.


Additional info:

Volume info of the master:


Volume Name: master
Type: Distributed-Replicate
Volume ID: be06889e-66e2-4398-98be-a904652c7a42
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 172.17.251.151:/exportdir/d1
Brick2: 172.17.251.151:/exportdir/d2
Brick3: 172.17.251.151:/exportdir/d3
Brick4: 172.17.251.151:/exportdir/d4
Options Reconfigured:
features.quota: on
geo-replication.indexing: on
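
For context, a hedged sketch of the gluster CLI commands that would produce a master volume like the one above and attach a geo-rep session to it (the slave host and slave volume name are placeholders, and the exact geo-replication start syntax for 3.3.0qa45 is assumed):

# create and start the 2 x 2 distributed-replicate master volume
gluster volume create master replica 2 transport tcp \
    172.17.251.151:/exportdir/d1 172.17.251.151:/exportdir/d2 \
    172.17.251.151:/exportdir/d3 172.17.251.151:/exportdir/d4
gluster volume start master

# start geo-replication to the distributed-replicate slave volume
# (SLAVEHOST and "slave" are placeholder names)
gluster volume geo-replication master SLAVEHOST::slave start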
Comment 1 Vijaykumar Koppad 2012-06-01 07:37:49 EDT
Created attachment 588419 [details]
Client log file.
Comment 2 Vijaykumar Koppad 2012-06-01 07:38:54 EDT
In the steps to reproduce,
step 2 and step 3 should be done in parallel.
Comment 3 Amar Tumballi 2012-07-11 07:02:11 EDT
Reducing the priority to medium, as 'quota set/unset' in a loop is not a practical use case.
Comment 4 Raghavendra G 2012-08-29 05:17:15 EDT

*** This bug has been marked as a duplicate of bug 804592 ***
