Bug 961673

Summary: file creation fails on the mount point while adding a brick to the volume
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: shylesh <shmohan>
Component: glusterfs
Assignee: shishir gowda <sgowda>
Status: CLOSED ERRATA
QA Contact: shylesh <shmohan>
Severity: high
Priority: medium
Version: unspecified
CC: amarts, nsathyan, rhs-bugs, shmohan, vbellur
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-3.4.0.9rhs
Doc Type: Bug Fix
Last Closed: 2013-09-23 22:35:30 UTC
Type: Bug

Description shylesh 2013-05-10 08:59:53 UTC
Description of problem:
While files are being created on the mount point, adding bricks to the volume causes file creation to fail.

Version-Release number of selected component (if applicable):

[root@rhs3-alpha ~]# rpm -qa | grep gluster
glusterfs-server-3.4.0.5rhs-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.5rhs-1.el6rhs.x86_64
glusterfs-debuginfo-3.4.0.5rhs-1.el6rhs.x86_64
glusterfs-3.4.0.5rhs-1.el6rhs.x86_64

How reproducible:
always

Steps to Reproduce:
1. Created a 2-brick distributed volume.
2. Ran dd in a loop on the fuse mount:
for i in {1..100}
do
    dd if=/dev/urandom of=$i bs=1M count=10
done
3. Kept adding a pair of bricks at a time while the loop was running (see the reproduction sketch below).
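
For reference, a minimal end-to-end sketch of the above steps in one place. The volume name, brick paths and host addresses are taken from this report; the client mount path (/mnt/dist) and the exact timing of the add-brick are assumptions, not part of the original test run.

# On an RHS node: create and start a 2-brick pure distribute volume.
gluster volume create dist 10.70.35.64:/brick1/d1 10.70.35.62:/brick1/d2
gluster volume start dist

# On the client (10.70.35.203): fuse-mount the volume (mount path assumed).
mount -t glusterfs 10.70.35.64:/dist /mnt/dist
cd /mnt/dist

# Step 2 workload, run in the background.
for i in {1..100}
do
    dd if=/dev/urandom of=$i bs=1M count=10
done &

# Step 3, on the server: while the loop runs, keep adding bricks a pair at a time, e.g.
gluster volume add-brick dist 10.70.35.64:/brick1/d3 10.70.35.64:/brick1/d4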

 
  
Actual results:
File creation fails on the mount point:
dd: opening `62': No such file or directory
dd: opening `63': No such file or directory
dd: opening `64': No such file or directory
dd: opening `65': No such file or directory
dd: opening `66': No such file or directory
dd: opening `67': No such file or directory
dd: opening `68': No such file or directory
dd: opening `69': No such file or directory
dd: opening `70': No such file or directory
dd: opening `71': No such file or directory
dd: opening `72': No such file or directory
dd: opening `73': No such file or directory
dd: opening `74': No such file or directory
dd: opening `75': No such file or directory
dd: opening `76': No such file or directory
dd: opening `77': No such file or directory
dd: opening `78': No such file or directory
dd: opening `79': No such file or directory
dd: opening `80': No such file or directory
dd: opening `81': No such file or directory
dd: opening `82': No such file or directory
dd: opening `83': No such file or directory
dd: opening `84': No such file or directory
dd: opening `85': No such file or directory
dd: opening `86': No such file or directory
dd: opening `87': No such file or directory
dd: opening `88': No such file or directory
dd: opening `89': No such file or directory
dd: opening `90': No such file or directory
dd: opening `91': No such file or directory
dd: opening `92': No such file or directory
dd: opening `93': No such file or directory
dd: opening `94': No such file or directory
dd: opening `95': No such file or directory
dd: opening `96': No such file or directory
dd: opening `97': No such file or directory
dd: opening `98': No such file or directory
dd: opening `99': No such file or directory
dd: opening `100': No such file or directory

 

Additional info:

RHS nodes
=========
10.70.35.64
10.70.35.62

mount 
======
10.70.35.203



[root@rhs3-alpha ~]# gluster v info dist
 
Volume Name: dist
Type: Distribute
Volume ID: a334a38f-b34d-465e-8352-aee7b4c4c53b
Status: Started
Number of Bricks: 21
Transport-type: tcp
Bricks:
Brick1: 10.70.35.64:/brick1/d1
Brick2: 10.70.35.62:/brick1/d2
Brick3: 10.70.35.64:/brick1/d3
Brick4: 10.70.35.64:/brick1/d4
Brick5: 10.70.35.64:/brick1/d5
Brick6: 10.70.35.64:/brick1/d6
Brick7: 10.70.35.64:/brick1/d7
Brick8: 10.70.35.64:/brick1/d8
Brick9: 10.70.35.64:/brick1/d9
Brick10: 10.70.35.64:/brick1/d10
Brick11: 10.70.35.64:/brick1/d11
Brick12: 10.70.35.64:/brick1/d12
Brick13: 10.70.35.64:/brick1/d13
Brick14: 10.70.35.64:/brick1/d14
Brick15: 10.70.35.64:/brick1/d15
Brick16: 10.70.35.64:/brick1/d16
Brick17: 10.70.35.64:/brick1/d17
Brick18: 10.70.35.64:/brick1/d18
Brick19: 10.70.35.64:/brick1/d19
Brick20: 10.70.35.64:/brick1/d21
Brick21: 10.70.35.64:/brick1/d22



Attached the sosreport. Relevant snippet from the client log:
[2013-05-10 08:58:25.111077] I [client-handshake.c:1468:client_setvolume_cbk] 12-dist-client-8: Server and Client lk-version numbers are not same, reopening the fds
[2013-05-10 08:58:25.114155] I [client-handshake.c:450:client_set_lk_version_cbk] 12-dist-client-8: Server lk version = 1
[2013-05-10 08:58:25.114187] I [client-handshake.c:450:client_set_lk_version_cbk] 12-dist-client-7: Server lk version = 1
[2013-05-10 08:58:25.114201] I [rpc-clnt.c:1648:rpc_clnt_reconfig] 12-dist-client-9: changing port to 49165 (from 0)
[2013-05-10 08:58:25.114439] I [rpc-clnt.c:1648:rpc_clnt_reconfig] 12-dist-client-10: changing port to 49166 (from 0)
[2013-05-10 08:58:25.114526] I [rpc-clnt.c:1648:rpc_clnt_reconfig] 12-dist-client-11: changing port to 49167 (from 0)
[2013-05-10 08:58:25.114774] W [socket.c:515:__socket_rwv] 12-dist-client-9: readv on 10.70.35.64:24007 failed (No data available)
[2013-05-10 08:58:25.122362] I [rpc-clnt.c:1648:rpc_clnt_reconfig] 12-dist-client-12: changing port to 49168 (from 0)
[2013-05-10 08:58:25.122439] W [socket.c:515:__socket_rwv] 12-dist-client-10: readv on 10.70.35.64:24007 failed (No data available)
[2013-05-10 08:58:25.128757] W [socket.c:515:__socket_rwv] 12-dist-client-11: readv on 10.70.35.64:24007 failed (No data available)
[2013-05-10 08:58:25.135019] W [socket.c:515:__socket_rwv] 12-dist-client-12: readv on 10.70.35.64:24007 failed (No data available)
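
Not part of the original report, but a possible way to gather more data if this reproduces again: on a pure distribute volume, creates that fail with ENOENT right after an add-brick can point at the parent directory's DHT layout not yet covering the new bricks, which can be checked from the brick side. The brick path below is one of the newly added bricks from the volume info above; trusted.glusterfs.dht is the standard DHT layout xattr.

# On the brick server: inspect the layout xattr of the directory on a newly added brick.
getfattr -n trusted.glusterfs.dht -e hex /brick1/d3

# A directory whose layout has not been regenerated yet typically shows the xattr
# missing or an all-zero range; fix-layout is the usual way to rebuild it after add-brick.
gluster volume rebalance dist fix-layout start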

Comment 5 shishir gowda 2013-06-04 10:38:40 UTC
Not able to reproduce the issue on glusterfs-3.4.0.8rhs. Can you please check whether the issue is still hit on the latest release?
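
For completeness, a minimal way to confirm which build is actually under test before re-running the loop from the steps above; the package query mirrors the one in the original description, and nothing here is specific to this bug.

# Confirm the installed glusterfs build on server and client.
rpm -qa | grep gluster
gluster --version | head -1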

Comment 6 shylesh 2013-06-10 06:34:43 UTC
Not able to reproduce on 3.4.0.9rhs-1.el6rhs.x86_64; it looks like another change fixed the bug.

Comment 7 Scott Haines 2013-09-23 22:35:30 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html