Bug 1177167 - ctdb's ping_pong lock tester fails with input/output error on disperse volume mounted with glusterfs
Summary: ctdb's ping_pong lock tester fails with input/output error on disperse volume mounted with glusterfs
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: 3.6.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1221145 1221906
 
Reported: 2014-12-24 17:16 UTC by dberger.dev
Modified: 2015-09-22 05:05 UTC (History)
8 users

Fixed In Version: v3.7.4
Clone Of:
: 1221145 (view as bug list)
Environment:
Last Closed: 2015-09-22 05:05:22 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments (Terms of Use)
screenshot with the test working (187.95 KB, image/png)
2015-09-22 05:04 UTC, Pranith Kumar K

Description dberger.dev 2014-12-24 17:16:12 UTC
Description of problem:
ctdb's ping_pong lock tester fails with input/output error on disperse volume mounted with glusterfs.

It apparently works when the ping_pong instances are launched on different hosts where the volume is mounted. As soon as more than one ping_pong is launched on the same host, the tool reports input/output errors.

The problem doesn't appear with replica volumes.
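For reference, ping_pong stresses POSIX byte-range locking via fcntl; the errors above come from those lock/unlock calls failing with EIO. A minimal sketch of the same lock/unlock operation (illustrative file path, not the actual ping_pong source; run it against a file on the mounted volume to check whether the calls succeed):

```python
import fcntl
import os
import tempfile

# Illustrative path; on a real test you would point this at a file
# on the glusterfs mount, e.g. /mnt/test/test.
path = os.path.join(tempfile.gettempdir(), "ping_pong_demo")
fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)

def lock_byte(fd, offset):
    # Exclusive (write) lock on a single byte, blocking until granted.
    # On the affected disperse volumes this is where EIO was returned.
    fcntl.lockf(fd, fcntl.LOCK_EX, 1, offset)

def unlock_byte(fd, offset):
    fcntl.lockf(fd, fcntl.LOCK_UN, 1, offset)

lock_byte(fd, 0)
unlock_byte(fd, 0)
os.close(fd)
print("lock/unlock at offset 0 succeeded")
```

ping_pong itself cycles such locks across adjacent byte offsets from multiple processes, which is why two instances on the same mount are needed to trigger the failure.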

Version-Release number of selected component (if applicable):
3.6.1

How reproducible:
Always

Steps to Reproduce:
1. Create a disperse volume (I used 2+1) and mount it with glusterfs.
   The problem also shows up when all bricks are on a single host, so a
   single host is enough to reproduce it.
2. cd to the mount point and launch two simultaneous instances of
   "ping_pong test 1".

Actual results:
$ ping_pong test 1
unlock at 0 failed! - Input/output error
lock at 0 failed! - Input/output error
unlock at 0 failed! - Input/output error
lock at 0 failed! - Input/output error
unlock at 0 failed! - Input/output error


Expected results:
$ ping_pong test 1
nnnnn locks/sec

Additional info:
$ gluster volume info test
 
Volume Name: test
Type: Disperse
Volume ID: c41b2c0b-a876-487f-9bf0-01e83027f9da
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.114.177:/gluster/cluster/brick.test
Brick2: 192.168.114.13:/gluster/cluster/brick.test
Brick3: 192.168.114.171:/gluster/cluster/brick.test
Options Reconfigured:
nfs.disable: off

Comment 1 Pranith Kumar K 2015-09-22 05:04:32 UTC
Created attachment 1075638 [details]
screenshot with the test working

The test seems to work fine on 3.7.4. I will close the bug.

