Description of problem:

Slow performance on the root of the mount directory after setting up geo-replication for a disperse volume.

Mount dir -> /mnt/v2

# time ll /mnt/v2/
total 0
-rw-r--r--. 1 root root 0 Feb 10 14:16 t1
-rw-r--r--. 1 root root 0 Feb 10 14:16 t2
-rw-r--r--. 1 root root 0 Feb 10 14:16 t3
-rw-r--r--. 1 root root 0 Feb 10 14:16 t4
-rw-r--r--. 1 root root 0 Feb 10 14:16 t5
-rw-r--r--. 1 root root 0 Feb 10 14:13 test
-rw-r--r--. 1 root root 0 Feb 10 14:17 test2

real    0m5.026s
user    0m0.001s
sys     0m0.006s

# time ll /mnt/v2/temp
total 0
-rw-r--r--. 1 root root 0 Feb 12 15:16 1
-rw-r--r--. 1 root root 0 Feb 12 15:16 10
-rw-r--r--. 1 root root 0 Feb 12 15:16 2
-rw-r--r--. 1 root root 0 Feb 12 15:16 3
-rw-r--r--. 1 root root 0 Feb 12 15:16 4
-rw-r--r--. 1 root root 0 Feb 12 15:16 5
-rw-r--r--. 1 root root 0 Feb 12 15:16 6
-rw-r--r--. 1 root root 0 Feb 12 15:16 7
-rw-r--r--. 1 root root 0 Feb 12 15:16 8
-rw-r--r--. 1 root root 0 Feb 12 15:16 9
-rw-r--r--. 1 root root 0 Feb 12 15:22 t2

real    0m0.018s
user    0m0.000s
sys     0m0.007s

All operations on the root of the mount point take longer. The issue is not seen with a distributed-replicate [2 x 2] volume.

Version-Release number of selected component (if applicable):
3.7.1-16

How reproducible:
Always

Steps to Reproduce:
1. Create a (4 + 2) disperse volume and mount it [FUSE] on a client
2. Set up and start a geo-replication session with the slave cluster
3. List / create files on the mount point

Actual results:
Operations are slow, taking longer than expected [4-6 sec].

Expected results:
Operations should not take noticeably longer.

Additional info:
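The reproduction steps above can be sketched as the following command sequence. This is a sketch only: the brick hosts/paths, the slave host, and the slave volume name are placeholders, not taken from the report (the report only gives the mount point /mnt/v2 and the 4 + 2 disperse geometry).

```shell
# 1. Create and start a (4 + 2) disperse volume, then FUSE-mount it.
#    Hostnames and brick paths below are hypothetical.
gluster volume create v2 disperse 6 redundancy 2 \
    server{1..6}:/bricks/v2
gluster volume start v2
mount -t glusterfs server1:/v2 /mnt/v2

# 2. Create and start a geo-replication session with the slave cluster
#    (slavehost and slavevol are placeholders).
gluster volume geo-replication v2 slavehost::slavevol create push-pem
gluster volume geo-replication v2 slavehost::slavevol start

# 3. Time list/create operations on the root of the mount point.
time ls -l /mnt/v2/
time touch /mnt/v2/newfile
```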
The issue is reproducible on the reported version and later. Observations on glusterfs 3.8.4:

1. Before starting geo-replication on an EC volume:

[root@rhs-cli-16 test]# time ls

real    0m0.005s
user    0m0.000s
sys     0m0.002s
[root@rhs-cli-16 test]# gluster volume get ec-src all | grep eager
cluster.eager-lock                      on
disperse.eager-lock                     on
[root@rhs-cli-16 test]#

2. After starting geo-replication on the EC volume:

[root@rhs-cli-16 test]# time ls

real    0m6.739s
user    0m0.001s
sys     0m0.001s
[root@rhs-cli-16 test]# gluster volume set ec-src disperse.eager-lock off
volume set: success
[root@rhs-cli-16 test]# gluster volume get ec-src all | grep eager
cluster.eager-lock                      on
disperse.eager-lock                     off
[root@rhs-cli-16 test]# time ls

real    0m0.006s
user    0m0.000s
sys     0m0.003s
[root@rhs-cli-16 test]#

Turning disperse.eager-lock off fixes the issue. The option to enable/disable eager lock is available in v3.7.9-2 and later.
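The workaround observed above can be applied and verified as follows. This is a hedged sketch: the volume name ec-src is taken from the comment, but the mount path /mnt/ec is a placeholder, and the option requires glusterfs v3.7.9-2 or later.

```shell
# Check the current eager-lock settings on the EC (disperse) volume.
gluster volume get ec-src all | grep eager

# Workaround: disable eager lock at the disperse layer only.
# Note this trades the latency fix for eager-lock's write batching,
# so measure your workload before leaving it off permanently.
gluster volume set ec-src disperse.eager-lock off

# Re-check the listing latency on the FUSE mount
# (/mnt/ec is a placeholder mount point).
time ls /mnt/ec
```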