Description of problem:
=======================
On a virtual longevity setup, stat was performed on each file from a fuse client. For ~18k files it took more than 7 minutes. The pattern is as follows:

<1> stat succeeds on a few files
<2> It pauses for 30 sec - 1 min
<3> It resumes, and steps 1 and 2 repeat until completion

Once ctr is disabled, the lookup on each file completed in ~3 minutes.

default ctr-enabled:
====================
[root@mia fuse]# find . | wc -l
18733
[root@mia fuse]#

[root@dj fuse]# time find . | xargs stat
: : : : : : : : : : : : : : : : : : : : :
Modify: 2015-10-19 02:53:53.228000000 +0530
Change: 2015-10-20 00:32:21.784000000 +0530
 Birth: -
  File: ‘./s.644’
  Size: 1048576    Blocks: 2048    IO Block: 131072 regular file
Device: 24h/36d    Inode: 12947151012847994339    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2015-10-19 02:54:03.722000000 +0530
Modify: 2015-10-19 02:54:04.446000000 +0530
Change: 2015-10-20 00:32:46.345000000 +0530
 Birth: -
  File: ‘./dd.972’
  Size: 1048576    Blocks: 2048    IO Block: 131072 regular file
Device: 24h/36d    Inode: 9723623769459433783    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2015-10-15 04:45:57.534000000 +0530
Modify: 2015-10-15 04:45:57.570000000 +0530
Change: 2015-10-20 00:33:14.208000000 +0530
 Birth: -

real    7m41.900s
user    0m1.510s
sys     0m3.217s
[root@dj fuse]#

Disabled ctr: "ctr-enabled off"
===============================
[root@dj fuse]#
[root@dj fuse]# time find . | xargs stat
: : : : : : : : : : : : : : : : : : : : :
Modify: 2015-10-19 03:21:01.914000000 +0530
Change: 2015-10-20 01:54:02.067000000 +0530
 Birth: -
  File: ‘./dd.259’
  Size: 1048576    Blocks: 2048    IO Block: 131072 regular file
Device: 24h/36d    Inode: 12399192322619829042    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2015-10-15 02:38:20.231000000 +0530
Modify: 2015-10-15 02:38:20.281000000 +0530
Change: 2015-10-20 01:54:05.228000000 +0530
 Birth: -
  File: ‘./s.246’
  Size: 1048576    Blocks: 2048    IO Block: 131072 regular file
Device: 24h/36d    Inode: 12784199456817109142    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2015-10-19 01:39:07.175000000 +0530
Modify: 2015-10-19 01:39:07.848000000 +0530
Change: 2015-10-20 01:54:12.182000000 +0530
 Birth: -

real    3m11.742s
user    0m1.244s
sys     0m2.482s
[root@dj fuse]#

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.5-0.19.git0f5c3e8.el7.centos.x86_64

How reproducible:
=================
Always

Steps Carried:
==============
1. Created a 12-node cluster
2. Created a tiered volume with the hot tier as (6 x 2) and the cold tier as (2 x (6 + 2) = 16)
3. Fuse-mounted the volume on 3 clients: RHEL 7.2, RHEL 7.1 and RHEL 6.7
4. Started creating data from each client:

Client 1:
=========
[root@dj ~]# crefi --multi -n 10 -b 10 -d 10 --max=1024k --min=5k --random -T 5 -t text -I 5 --fop=create /mnt/fuse/

Client 2:
=========
[root@mia ~]# cd /mnt/fuse/
[root@mia fuse]# for i in {1..10}; do cp -rf /etc etc.$i ; sleep 100 ; done

Client 3:
=========
[root@wingo fuse]# for i in {1..999}; do dd if=/dev/zero of=dd.$i bs=1M count=1 ; sleep 10 ; done

5. After a while, the data creation from client 1 and client 2 completes, while the data creation from client 3 is still in progress.
6. At this point only 1 file is being created, from client 3, every 10 seconds.
7. Once data creation has completed, perform "find . | xargs stat" from any one of the clients. Note the time it takes to complete.
8. Disable ctr ("ctr-enabled off").
9. Perform "find . | xargs stat" again from any one of the clients. Note the time it takes to complete.

Actual results:
===============
With the default ctr-enabled setting, lookups from the clients are too slow.
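For reference, steps 7-9 can be scripted from a single client as in the minimal sketch below. The volume name "tiervol" and the "root@server" host are placeholders, and it is assumed that the "ctr-enabled" toggle is exposed as the features.ctr-enabled volume option on this 3.7 tiered volume:

# step 7: baseline crawl with ctr enabled (default on tiered volumes)
[root@dj fuse]# time find . | xargs stat > /dev/null

# step 8: run on a node in the trusted pool; option name assumed to be features.ctr-enabled
[root@server ~]# gluster volume set tiervol features.ctr-enabled off

# step 9: repeat the crawl with ctr disabled
[root@dj fuse]# time find . | xargs stat > /dev/null

Discarding the stat output keeps the terminal quiet, so the "real" value reported by time is the number to compare between the two runs.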
This bug is being closed because GlusterFS 3.7 has reached its end of life.

Note: This bug is being closed using a script. No verification has been performed to check whether it still exists on newer releases of GlusterFS. If this bug still exists in a newer GlusterFS release, please reopen it against that release.