Description of problem:
=======================
Running the "rpc" sanity test fails on a FUSE mount for replicate and distribute-replicate volumes.

Version-Release number of selected component (if applicable):
============================================================
[11/28/12 - 09:11:22 root@rhs-gp-srv12 system_light]# glusterfs --version
glusterfs 3.3.0rhsvirt1 built on Nov 7 2012 10:11:13

[11/28/12 - 09:11:31 root@rhs-gp-srv12 system_light]# rpm -qa | grep gluster
glusterfs-fuse-3.3.0rhsvirt1-8.el6.x86_64
glusterfs-3.3.0rhsvirt1-8.el6.x86_64

How reproducible:
=================
Often

Steps to Reproduce:
===================
1. Create a replicate volume (1x2) with 2 servers and 1 brick on each server. This is the storage for the VMs.
2. Set the volume option "group" to "virt".
3. Set storage.owner-uid and storage.owner-gid to 36.
4. Start the volume.
5. Create a host from RHEVM.
6. Create a storage domain from RHEVM for the volume created above.
7. On the host's mount point for the volume, run the "rpc" sanity test:
   a. Create an NFS mount to the QA tools (mount -t nfs 10.70.34.114:/opt /opt)
   b. cd /opt/qa/tools/system_light
   c. ./run.sh -w <mount_point> -l <log_file_name> -t "rpc"

Actual results:
===============
Changing to the specified mountpoint
/rhev/data-center/mnt/rhs-client1.lab.eng.blr.redhat.com:_replicate/run19621
executing rpc
start: 08:57:51

real    0m0.036s
user    0m0.004s
sys     0m0.011s
end: 08:57:51
rpc failed

0

Total 0 tests were successful
Switching over to the previous working directory
Removing /rhev/data-center/mnt/rhs-client1.lab.eng.blr.redhat.com:_replicate//run19621/
rmdir: failed to remove `/rhev/data-center/mnt/rhs-client1.lab.eng.blr.redhat.com:_replicate//run19621/': Directory not empty
rmdir failed: Directory not empty

Expected results:
=================
The "rpc" sanity test should pass.
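For reference, steps 1-4 above correspond roughly to the following gluster CLI sequence. This is a hypothetical reconstruction, not taken from the report itself; the volume name and brick paths are assumed from the "Additional info" volume output below, and the exact commands used by the reporter may have differed.

```shell
# Step 1: create a 1x2 replicate volume (brick names taken from the
# volume info in this report; assumed, not confirmed by the reporter).
gluster volume create replicate replica 2 \
    rhs-client1:/disk1 rhs-client16:/disk1

# Step 2: apply the "virt" option group, which sets the eager-lock,
# linux-aio, and performance options listed under "Options Reconfigured".
gluster volume set replicate group virt

# Step 3: make brick contents owned by uid/gid 36 (vdsm:kvm) for RHEV.
gluster volume set replicate storage.owner-uid 36
gluster volume set replicate storage.owner-gid 36

# Step 4: start the volume.
gluster volume start replicate
```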
Additional info:
================
Volume Name: replicate
Type: Replicate
Volume ID: d93217ad-aa06-49df-80bf-b0539e5eba72
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rhs-client1:/disk1
Brick2: rhs-client16:/disk1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
cluster.eager-lock: enable
storage.linux-aio: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off

Note: The same test passes on a regular file system
===================================================
[11/28/12 - 09:06:56 root@rhs-gp-srv12 system_light]# mkdir /rpc_testdir
[11/28/12 - 09:09:21 root@rhs-gp-srv12 system_light]# ./run.sh -w /rpc_testdir/ -l /fs_sanity_logs/fs_sanity_replicate_3.3.0rhsvirt1-8.el6.x86_64_`hostname`_`date '+%Y'`"_"`date '+%m'`"_"`date '+%d'`"_"`date '+%H'`"_"`date '+%M'`"_"`date '+%S'`_rpc.log -t "rpc"
/opt/qa/tools/system_light
Tests available:
arequal bonnie compile_kernel dbench dd ffsb fileop fs_mark fsx glusterfs_build iozone locks ltp multiple_files openssl posix_compliance postmark read_large rpc syscallbench tiobench
===========================TESTS RUNNING===========================
Changing to the specified mountpoint /rpc_testdir/run20297
executing rpc
start: 09:09:44

real    0m6.245s
user    0m0.033s
sys     0m0.088s
end: 09:09:50

1

Total 1 tests were successful
Switching over to the previous working directory
Removing /rpc_testdir//run20297/
The test also fails for a DHT (distribute) volume on a FUSE mount.
*** This bug has been marked as a duplicate of bug 856467 ***