Description of problem:
=======================
On a distribute-replicate volume, running dbench on a CIFS mount continuously reports:

  [2137] datasync directory "./clients/client7/~dmtmp/PARADOX" failed: Invalid argument

Version-Release number of selected component (if applicable):
=============================================================
glusterfs 3.4.0.31rhs built on Sep 5 2013 08:23:16

How reproducible:
=================
Often

Steps to Reproduce:
===================
1. Create a distribute-replicate volume (2 x 2). Start the volume.
2. Create a CIFS mount of the volume.
3. From the CIFS mount, execute: "dbench -s -F -S --one-byte-write-fix --stat-check 10"

Actual results:
===============
  10       789     0.11 MB/sec  execute  27 sec  latency 1339.201 ms
  10       794     0.12 MB/sec  execute  28 sec  latency 1624.620 ms
  10       794     0.11 MB/sec  execute  29 sec  latency 1414.908 ms
[811] datasync directory "./clients/client3/~dmtmp/PWRPNT" failed: Invalid argument
[811] datasync directory "./clients/client0/~dmtmp/PWRPNT" failed: Invalid argument
[811] datasync directory "./clients/client6/~dmtmp/PWRPNT" failed: Invalid argument
[811] datasync directory "./clients/client2/~dmtmp/PWRPNT" failed: Invalid argument
  10       805     0.11 MB/sec  execute  30 sec  latency 1689.299 ms
  10       807     0.11 MB/sec  execute  31 sec  latency 875.535 ms
[811] datasync directory "./clients/client5/~dmtmp/PWRPNT" failed: Invalid argument
[811] datasync directory "./clients/client9/~dmtmp/PWRPNT" failed: Invalid argument
  10       812     0.11 MB/sec  execute  32 sec  latency 749.606 ms
  10       815     0.10 MB/sec  execute  33 sec  latency 999.102 ms
[811] datasync directory "./clients/client4/~dmtmp/PWRPNT" failed: Invalid argument
[811] datasync directory "./clients/client8/~dmtmp/PWRPNT" failed: Invalid argument
[811] datasync directory "./clients/client1/~dmtmp/PWRPNT" failed: Invalid argument

Expected results:
=================
dbench shouldn't fail.

Additional info:
================
The same case works fine on FUSE and NFS mounts.
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release against which this issue was reported is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/. If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.