Description of problem:
When I/O is running on an smb mount point (the attached script is used to drive the I/O), an add-brick operation makes the I/O fail with a bad file descriptor error:

Creating directory at Z:\file2\TestDir0\TestDir0\TestDir0
Creating files in Z:\file2\TestDir0\TestDir0\TestDir0......
Cannot write to a1 - Bad file descriptor

Not every add-brick hits this issue; it happens inconsistently.

Version-Release number of selected component (if applicable):
[root@dhcp159-197 glusterfs]# rpm -qa | grep glusterfs
glusterfs-devel-3.5qa2-0.369.git500a656.el6rhs.x86_64
glusterfs-geo-replication-3.5qa2-0.369.git500a656.el6rhs.x86_64
glusterfs-libs-3.5qa2-0.369.git500a656.el6rhs.x86_64
glusterfs-fuse-3.5qa2-0.369.git500a656.el6rhs.x86_64
glusterfs-api-3.5qa2-0.369.git500a656.el6rhs.x86_64
glusterfs-server-3.5qa2-0.369.git500a656.el6rhs.x86_64
glusterfs-debuginfo-3.5qa2-0.340.gitc193996.el6rhs.x86_64
samba-glusterfs-3.6.9-168.1.el6rhs.x86_64
glusterfs-rdma-3.5qa2-0.369.git500a656.el6rhs.x86_64
glusterfs-3.5qa2-0.369.git500a656.el6rhs.x86_64
glusterfs-cli-3.5qa2-0.369.git500a656.el6rhs.x86_64
glusterfs-api-devel-3.5qa2-0.369.git500a656.el6rhs.x86_64

How reproducible:
Not every time. Inconsistent.

Steps to Reproduce:
1. Create a 6x2 distributed-replicate volume.
2. Mount it via smb on a Windows client.
3. Run the attached script to start the I/O (it creates deep directories to the given level and files inside those dirs).
4. While the I/O is running, perform an add-brick operation on the volume (a command-line sketch of these steps is included at the end of this comment).

Actual results:
The I/O fails with a bad file descriptor error:

Creating directory at Z:\file2\TestDir0\TestDir0\TestDir0
Creating files in Z:\file2\TestDir0\TestDir0\TestDir0......
Cannot write to a1 - Bad file descriptor

Client log excerpt (the first log line is truncated):
... to glusterd until brick's port is available
[2014-04-29 11:03:09.015571] E [afr-common.c:3965:afr_notify] 4-test-vol-replicate-5: All subvolumes are down. Going offline until atleast one of them comes back up.
[2014-04-29 11:03:12.806885] W [client-rpc-fops.c:1170:client3_3_fgetxattr_cbk] 5-test-vol-client-2: remote operation failed: No data available
[2014-04-29 11:03:12.807559] W [client-rpc-fops.c:1170:client3_3_fgetxattr_cbk] 5-test-vol-client-3: remote operation failed: No data available
[2014-04-29 11:03:15.435681] W [client-rpc-fops.c:1170:client3_3_fgetxattr_cbk] 5-test-vol-client-3: remote operation failed: No data available
[2014-04-29 11:03:15.436261] W [client-rpc-fops.c:1170:client3_3_fgetxattr_cbk] 5-test-vol-client-2: remote operation failed: No data available
[2014-04-29 11:03:15.908158] W [client-rpc-fops.c:866:client3_3_writev_cbk] 5-test-vol-client-2: remote operation failed: Bad file descriptor
[2014-04-29 11:03:15.908360] W [client-rpc-fops.c:866:client3_3_writev_cbk] 5-test-vol-client-3: remote operation failed: Bad file descriptor
[2014-04-29 11:03:15.908834] W [client-rpc-fops.c:1811:client3_3_fxattrop_cbk] 5-test-vol-client-2: remote operation failed: Bad file descriptor
[2014-04-29 11:03:15.908881] W [client-rpc-fops.c:1811:client3_3_fxattrop_cbk] 5-test-vol-client-3: remote operation failed: Bad file descriptor
[2014-04-29 11:03:15.909281] W [client-rpc-fops.c:1579:client3_3_finodelk_cbk] 5-test-vol-client-2: remote operation failed: Bad file descriptor
[2014-04-29 11:03:15.909356] I [afr-lk-common.c:676:afr_unlock_inodelk_cbk] 5-test-vol-replicate-1: (null): unlock failed on subvolume test-vol-client-2 with lock owner 3457d4b0097f0000
[2014-04-29 11:03:15.909396] W [client-rpc-fops.c:1579:client3_3_finodelk_cbk] 5-test-vol-client-3: remote operation failed: Bad file descriptor
[2014-04-29 11:03:15.909418] I [afr-lk-common.c:676:afr_unlock_inodelk_cbk] 5-test-vol-replicate-1: (null): unlock failed on subvolume test-vol-client-3 with lock owner 3457d4b0097f0000
[2014-04-29 11:03:16.072740] I [afr-self-heal-common.c:2811:afr_log_self_heal_completion_status] 5-test-vol-replicate-1: backgroung data self heal is successfully completed, data self heal from test-vol-client-3 to sinks test-vol-client-2, with 3432448 bytes on test-vol-client-2, 3432448 bytes on test-vol-client-3, data - Pending matrix: [ [ 1 1 ] [ 1 1 ] ] on /rhsdata01/file2/TestDir0/TestDir0/TestDir0/a1

Expected results:
The I/O should not fail.

Additional info:
Trying the test on a fuse mount.
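For reference, a minimal command-line sketch of the reproduction steps above. The host names (node1/node2), brick paths under /bricks, and the share name are placeholders for illustration only, not taken from the reported setup:

# 1. Create and start a 6x2 distributed-replicate volume (12 bricks, replica pairs across two nodes)
gluster volume create test-vol replica 2 \
    node1:/bricks/b1 node2:/bricks/b1 \
    node1:/bricks/b2 node2:/bricks/b2 \
    node1:/bricks/b3 node2:/bricks/b3 \
    node1:/bricks/b4 node2:/bricks/b4 \
    node1:/bricks/b5 node2:/bricks/b5 \
    node1:/bricks/b6 node2:/bricks/b6
gluster volume start test-vol

# 2. Export the volume over smb, e.g. a vfs_glusterfs share in /etc/samba/smb.conf:
#      [gluster-test-vol]
#      path = /
#      vfs objects = glusterfs
#      glusterfs:volume = test-vol
#      read only = no
#    then map it on the Windows client:
#      net use Z: \\node1\gluster-test-vol

# 3. Start the attached I/O script against Z:

# 4. While the I/O is running, add a replica pair of bricks
gluster volume add-brick test-vol node1:/bricks/b7 node2:/bricks/b7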
Created attachment 890785 [details]
Script used to run I/O
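The attachment itself is not reproduced here. As an illustration only, a rough shell sketch of the kind of I/O the script drives: the real script runs on the Windows client against Z:, whereas this equivalent assumes a CIFS mount at /mnt/smb, a nesting depth of 3, and 5 files per directory, all of which are made-up values:

#!/bin/bash
# Create a chain of nested TestDir0 directories and write files at each level.
MOUNT=/mnt/smb/file2      # assumed mount point of the same share
DEPTH=3                   # assumed nesting level
FILES=5                   # assumed number of files per directory

dir="$MOUNT"
for ((level = 0; level < DEPTH; level++)); do
    dir="$dir/TestDir0"
    echo "Creating directory at $dir"
    mkdir -p "$dir"
    echo "Creating files in $dir......"
    for ((i = 1; i <= FILES; i++)); do
        # The reported "Cannot write to a1 - Bad file descriptor" is hit on writes like this one
        # while the add-brick is in progress.
        dd if=/dev/zero of="$dir/a$i" bs=1M count=10 || echo "Cannot write to a$i"
    done
done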
Dev ack to 3.0 RHS BZs
This bug was recently fixed in DHT and has already been verified under another BZ similar to this one: https://bugzilla.redhat.com/show_bug.cgi?id=1279830. Closing this as a duplicate of 1279830.

*** This bug has been marked as a duplicate of bug 1279830 ***