Bug 1197707 - nfs: 5 GB file creation taking long time(seems some performance impact)
Summary: nfs: 5 GB file creation taking long time(seems some performance impact)
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-nfs
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Niels de Vos
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-03-02 13:00 UTC by Saurabh
Modified: 2018-04-16 17:59 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-16 17:59:36 UTC
Embargoed:


Attachments
sosreport of RHS VM in consideration (8.84 MB, application/x-xz)
2015-03-02 13:05 UTC, Saurabh

Description Saurabh 2015-03-02 13:00:34 UTC
Description of problem:
Started creating a 5 GB file over NFS and the file creation has not finished.
The top command shows the gluster NFS server process (glusterfs) consuming 22% of memory:
[root@vm1 ~]# top
 
top - 10:55:19 up 7 days,  3:26,  3 users,  load average: 1.00, 1.00, 1.02
Tasks: 183 total,   1 running, 182 sleeping,   0 stopped,   0 zombie
Cpu(s): 25.0%us,  0.1%sy,  0.0%ni, 74.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   8191480k total,  7939284k used,   252196k free,   109196k buffers
Swap:  4882428k total,      780k used,  4881648k free,  4231908k cached
 
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                
28851 root      20   0 2609m 1.7g 3748 S  0.3 22.0  34:45.59 glusterfs                                                                                
28824 root      20   0 2322m 373m 3248 S 100.1  4.7 220:30.74 glusterfsd                                                                              
 1120 root      20   0  860m 208m 3228 S  0.0  2.6   1:53.80 glusterfs                                                                                
28858 root      20   0  957m  80m 2572 S  0.0  1.0   0:26.40 glusterfs                                                                                
28867 root      20   0  754m  74m 2640 S  0.0  0.9   0:37.93 glusterfs                                                                                
28808 root      20   0 2201m  65m 3188 S  0.0  0.8   8:48.33 glusterfsd                                                                              
28792 root      20   0 2206m  65m 3168 S  0.0  0.8   9:51.47 glusterfsd                                                                              
28645 root      20   0  631m  20m 3240 S  0.0  0.3   0:21.46 glusterd     


The brick process (glusterfsd) hosting the file being created is also consuming 100% CPU.
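For reference, the busy glusterfsd PID can be mapped to a specific brick with gluster volume status; the status table lists each brick with its port and PID, which can be matched against the glusterfsd entries in the top output above. This is a general sketch (the volume name vol0 is taken from the volume info below), not a command captured in the original report:

[root@vm1 ~]# gluster volume status vol0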

Volume info:
Volume Name: vol0
Type: Distributed-Replicate
Volume ID: 6dfb9bfc-2682-4596-80eb-8149f0e681dd
Status: Started
Snap Volume: no
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.187:/rhs/brick1/d1r1
Brick2: 10.70.37.207:/rhs/brick1/d1r2
Brick3: 10.70.37.179:/rhs/brick1/d2r1
Brick4: 10.70.37.71:/rhs/brick1/d2r2
Brick5: 10.70.37.187:/rhs/brick1/d3r1
Brick6: 10.70.37.207:/rhs/brick1/d3r2
Brick7: 10.70.37.179:/rhs/brick1/d4r1
Brick8: 10.70.37.71:/rhs/brick1/d4r2
Brick9: 10.70.37.187:/rhs/brick1/d5r1
Brick10: 10.70.37.207:/rhs/brick1/d5r2
Brick11: 10.70.37.179:/rhs/brick1/d6r1
Brick12: 10.70.37.71:/rhs/brick1/d6r2
Options Reconfigured:
features.quota-deem-statfs: on
client.event-threads: 5
server.event-threads: 5
features.quota: on
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

Version-Release number of selected component (if applicable):
glusterfs-3.6.0.47-1.el6rhs.x86_64

How reproducible:
Seen on this build.

Steps to Reproduce:
1. Create a 6x2 distributed-replicate volume and start it.
2. Set the epoll-related options (client.event-threads and server.event-threads) to 5.
3. Enable quota and set a limit on the directory.
4. Try to create a 5 GB file over the NFS mount (see the command sketch below).
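A rough command sequence for the steps above (the quota directory, limit value, mount point, and file name are illustrative and not taken from the report; the brick list is the one shown in the volume info above):

[root@vm1 ~]# gluster volume create vol0 replica 2 \
                  10.70.37.187:/rhs/brick1/d1r1 10.70.37.207:/rhs/brick1/d1r2 \
                  <remaining ten bricks as listed in the volume info above>
[root@vm1 ~]# gluster volume start vol0
[root@vm1 ~]# gluster volume set vol0 client.event-threads 5
[root@vm1 ~]# gluster volume set vol0 server.event-threads 5
[root@vm1 ~]# gluster volume quota vol0 enable
[root@vm1 ~]# gluster volume quota vol0 limit-usage /dir1 10GB
[root@rhsauto014 ~]# mount -t nfs -o vers=3 10.70.37.187:/vol0 /mnt/nfs-test
[root@rhsauto014 ~]# dd if=/dev/zero of=/mnt/nfs-test/dir1/file5g bs=1M count=5120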

Actual results:
The 5 GB file creation did not finish even after two hours.
The related top output is shown above.

Attaching strace to the dd process shows no further system-call activity, i.e. it is not making progress:
[root@rhsauto014 ~]# strace -p 16832
Process 16832 attached - interrupt to quit


Expected results:
Creating the 5 GB file should not take this long.

Additional info:

Memory info from the VM in consideration:
[root@vm1 ~]# free -tg
             total       used       free     shared    buffers     cached
Mem:             7          7          0          0          0          4
-/+ buffers/cache:          3          4
Swap:            4          0          4
Total:          12          7          4
[root@vm1 ~]# echo 3>/proc/sys/vm/drop_caches 

[root@vm1 ~]# free -tg
             total       used       free     shared    buffers     cached
Mem:             7          7          0          0          0          4
-/+ buffers/cache:          3          4
Swap:            4          0          4
Total:          12          7          4
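Note: as typed above, "echo 3>/proc/sys/vm/drop_caches" redirects file descriptor 3 rather than writing the value 3 to the file, so the caches are not actually dropped, which matches the unchanged free output. The intended form would be:

[root@vm1 ~]# sync; echo 3 > /proc/sys/vm/drop_caches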

Comment 1 Saurabh 2015-03-02 13:05:59 UTC
Created attachment 997076 [details]
sosreport of RHS VM in consideration

