Description of problem:
=======================
Volume has lots of data; when a fuse client writes a 10MB file every 10 secs, CPU usage of the glusterfsd process jumps to 200-300%:

Tasks: 184 total,   2 running, 182 sleeping,   0 stopped,   0 zombie
%Cpu(s): 23.6 us, 47.8 sy,  0.0 ni, 25.4 id,  1.2 wa,  0.0 hi,  1.1 si,  1.0 st
KiB Mem :  3882184 total,  1610416 free,   870436 used,  1401332 buff/cache
KiB Swap:  2097148 total,  2097148 free,        0 used.  2681116 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
14248 root      20   0 2111708  57584   4664 S 229.9  1.5  85:12.25 glusterfsd
14230 root      20   0 2185444  60196   4696 S  63.5  1.6 130:21.93 glusterfsd
14359 root      20   0 2498816  62376   4672 S   5.3  1.6  12:43.21 glusterfsd
14445 root      20   0 1065268 126640   4728 S   0.3  3.3   3:51.64 glusterfs
15170 root      20   0  146144   2052   1420 R   0.3  0.1   0:00.12 top
    1 root      20   0  191352   6524   3892 S   0.0  0.2   0:02.98 systemd

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.5-0.19.git0f5c3e8.el7.centos.x86_64

How reproducible:
=================
Always

Steps Carried:
==============
1. Created a 12-node cluster
2. Created a tiered volume with the hot tier as 6 x 2 and the cold tier as 2 x (6 + 2) = 16
3. Fuse-mounted the volume on 3 clients: RHEL 7.2, RHEL 7.1, and RHEL 6.7
4. Started creating data from each client:

Client 1:
=========
[root@dj ~]# crefi --multi -n 10 -b 10 -d 10 --max=1024k --min=5k --random -T 5 -t text -I 5 --fop=create /mnt/fuse/

Client 2:
=========
[root@mia ~]# cd /mnt/fuse/
[root@mia fuse]# for i in {1..10}; do cp -rf /etc etc.$i ; sleep 100 ; done

Client 3:
=========
[root@wingo fuse]# for i in {1..999}; do dd if=/dev/zero of=dd.$i bs=1M count=1 ; sleep 10 ; done

5. After a while, the data creation from clients 1 and 2 completes while the data creation from client 3 is still in progress
6. At this point, only client 3 is creating data: 1 file every 10 sec
7.
Monitor the CPU usage using top

Actual results:
===============
CPU usage jumps to >200% for a while, drops to 20-30%, and then jumps back to >200%
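The intermittent spikes described above can be captured without watching top interactively by running it in batch mode and filtering for the brick process. A minimal sketch (the field positions assume the procps top layout shown in the description; the function name parse_cpu is illustrative):

```shell
# Print "PID %CPU" for every glusterfsd process line in top output.
# In the layout shown above, field 9 is %CPU and field 12 is COMMAND;
# the exact-match comparison skips the glusterfs client process.
parse_cpu() {
  awk '$12 == "glusterfsd" { printf "%s %s\n", $1, $9 }'
}

# Demonstrate the parser on a captured snapshot of the top output:
parse_cpu <<'EOF'
14248 root      20   0 2111708  57584   4664 S 229.9  1.5  85:12.25 glusterfsd
14230 root      20   0 2185444  60196   4696 S  63.5  1.6 130:21.93 glusterfsd
14445 root      20   0 1065268 126640   4728 S   0.3  3.3   3:51.64 glusterfs
EOF
```

For live sampling, something like `top -b -d 10 -n 60 | parse_cpu` emits one line per glusterfsd per 10-second interval, which makes the spike/idle cycle easy to log alongside the dd loop on client 3.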
Can you check if this happens with quota disabled? Watching top on your configuration, I have noticed that the brick processes hit high CPU, but the tier migration process does not.
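To rule quota out, the feature can be disabled on the volume and the dd workload from client 3 re-run; a sketch using the gluster CLI (the volume name "tiervol" is illustrative, run on any server node):

[root@node ~]# gluster volume quota tiervol disable
[root@node ~]# gluster volume info tiervol | grep -i quota

If the brick CPU stays in the 20-30% range with quota off, that points at the quota/marker translators rather than tiering itself.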
This bug is being closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed by a script. No verification has been performed to check whether it still exists on newer releases of GlusterFS. If this bug still exists in a newer GlusterFS release, please reopen it against that release.