Bug 990513 - NFS: nfs performance is very slow with glusterfs-3.4.0.13rhs-1.el6_4.x86_64
Status: CLOSED CURRENTRELEASE
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware: x86_64 Linux
Priority: medium
Severity: urgent
Assigned To: Vivek Agarwal
QA Contact: Sudhir D
Depends On:
Blocks:
 
Reported: 2013-07-31 06:54 EDT by Rahul Hinduja
Modified: 2016-02-17 19:03 EST (History)
6 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-09-03 09:57:42 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
nfs-profile-info (5.20 KB, text/plain)
2013-07-31 07:41 EDT, Saurabh

Description Rahul Hinduja 2013-07-31 06:54:52 EDT
Description of problem:
=======================

NFS performance is very slow with glusterfs-3.4.0.13rhs-1.el6_4.x86_64.

Version-Release number of selected component (if applicable):
==============================================================

glusterfs-3.4.0.13rhs-1.el6_4.x86_64

Steps Carried Out:
==================
1. Created and started a 6x2 (distributed-replicate) volume.
2. Mounted the volume on a client over both FUSE and NFS.
3. Created directories named f and n from the FUSE mount.
4. cd into directory f on the FUSE mount.
5. cd into directory n on the NFS mount.
6. From both mounted directories (f and n), ran the following command to copy /etc in a loop:

for i in {1..100}; do cp -rf /etc etc.$i; done

The FUSE mount completed the above loop in ~15 minutes, while the NFS mount took about 1 hour 10 minutes.
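The reproducer above can be sketched as a small timing script. The paths and iteration count here are stand-ins (the report used /etc as the source, the f and n mount directories as targets, and 100 iterations); temporary directories are used so the sketch runs anywhere:

```shell
#!/bin/sh
# Minimal sketch of the reproducer: time copying a source tree N times
# into a target directory. SRC and TARGET are hypothetical stand-ins;
# in the report, SRC was /etc and TARGET was the FUSE mount dir (f)
# or the NFS mount dir (n), with 100 iterations.
SRC=$(mktemp -d)
TARGET=$(mktemp -d)
echo sample > "$SRC/file"

start=$(date +%s)
for i in 1 2 3; do
    cp -rf "$SRC" "$TARGET/etc.$i"
done
end=$(date +%s)

echo "copied $(ls "$TARGET" | wc -l) trees in $((end - start))s"
```

Running the same loop against the FUSE mount and the NFS mount and comparing the elapsed times yields the ~15-minute vs. ~70-minute figures reported above.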

Actual results:
===============

NFS was significantly slower than FUSE: ~70 minutes vs. ~15 minutes for the same workload.

Additional info:
================
root@darrel [Jul-31-2013-16:10:33] >pwd
/mnt/nfs/n
root@darrel [Jul-31-2013-16:10:38] >ls
etc.1    etc.13  etc.18  etc.23  etc.28  etc.32  etc.37  etc.41  etc.46  etc.50  etc.55  etc.6   etc.64  etc.69  etc.73  etc.78  etc.82  etc.87  etc.91  etc.96
etc.10   etc.14  etc.19  etc.24  etc.29  etc.33  etc.38  etc.42  etc.47  etc.51  etc.56  etc.60  etc.65  etc.7   etc.74  etc.79  etc.83  etc.88  etc.92  etc.97
etc.100  etc.15  etc.20  etc.25  etc.3   etc.34  etc.39  etc.43  etc.48  etc.52  etc.57  etc.61  etc.66  etc.70  etc.75  etc.8   etc.84  etc.89  etc.93  etc.98
etc.11   etc.16  etc.21  etc.26  etc.30  etc.35  etc.4   etc.44  etc.49  etc.53  etc.58  etc.62  etc.67  etc.71  etc.76  etc.80  etc.85  etc.9   etc.94  etc.99
etc.12   etc.17  etc.22  etc.27  etc.31  etc.36  etc.40  etc.45  etc.5   etc.54  etc.59  etc.63  etc.68  etc.72  etc.77  etc.81  etc.86  etc.90  etc.95
root@darrel [Jul-31-2013-16:10:39] >cd ..
root@darrel [Jul-31-2013-16:10:41] >du -sh n 
2.2G	n
root@darrel [Jul-31-2013-16:13:48] >
root@darrel [Jul-31-2013-16:11:29] >ls -R * | wc
 239876  217503 2786851
root@darrel [Jul-31-2013-16:15:05] >


Server: 

 >cat /proc/meminfo 
MemTotal:       16436140 kB
MemFree:        11132280 kB
Buffers:          125584 kB
Cached:          2663660 kB
SwapCached:            0 kB
Active:           650920 kB
Inactive:        2656200 kB
Active(anon):     495648 kB
Inactive(anon):    26908 kB
Active(file):     155272 kB
Inactive(file):  2629292 kB
Unevictable:       43872 kB
Mlocked:           17300 kB
SwapTotal:       8224760 kB
SwapFree:        8224760 kB
Dirty:                24 kB
Writeback:             0 kB
AnonPages:        561828 kB
Mapped:            20040 kB
Shmem:               248 kB
Slab:            1422848 kB
SReclaimable:    1086680 kB
SUnreclaim:       336168 kB
KernelStack:        6464 kB
PageTables:         7740 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    16442828 kB
Committed_AS:    1219728 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      322312 kB
VmallocChunk:   34349451280 kB
HardwareCorrupted:     0 kB
AnonHugePages:    413696 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        4108 kB
DirectMap2M:     2064384 kB
DirectMap1G:    14680064 kB


> CPU count: 24 (last /proc/cpuinfo entry shown below)

processor	: 23
vendor_id	: GenuineIntel
cpu family	: 6
model		: 45
model name	: Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
stepping	: 7
cpu MHz		: 1200.000
cache size	: 15360 KB
physical id	: 1
siblings	: 12
core id		: 5
cpu cores	: 6
apicid		: 43
initial apicid	: 43
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
bogomips	: 3999.44
clflush size	: 64
cache_alignment	: 64
address sizes	: 46 bits physical, 48 bits virtual
power management:




Client:
======

>cat /proc/cpuinfo 
processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 13
model name	: QEMU Virtual CPU version (cpu64-rhel6)
stepping	: 3
cpu MHz		: 3065.794
cache size	: 4096 KB
fpu		: yes
fpu_exception	: yes
cpuid level	: 4
wp		: yes
flags		: fpu de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm unfair_spinlock pni cx16 hypervisor lahf_lm
bogomips	: 6131.58
clflush size	: 64
cache_alignment	: 64
address sizes	: 40 bits physical, 48 bits virtual
power management:

processor	: 1
vendor_id	: GenuineIntel
cpu family	: 6
model		: 13
model name	: QEMU Virtual CPU version (cpu64-rhel6)
stepping	: 3
cpu MHz		: 3065.794
cache size	: 4096 KB
fpu		: yes
fpu_exception	: yes
cpuid level	: 4
wp		: yes
flags		: fpu de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm unfair_spinlock pni cx16 hypervisor lahf_lm
bogomips	: 6131.58
clflush size	: 64
cache_alignment	: 64
address sizes	: 40 bits physical, 48 bits virtual
power management:

root@darrel [Jul-31-2013-16:23:34] >cat /proc/meminfo 
MemTotal:        3922928 kB
MemFree:          154400 kB
Buffers:            9024 kB
Cached:          1380804 kB
SwapCached:        14452 kB
Active:          1449220 kB
Inactive:        1600740 kB
Active(anon):    1369164 kB
Inactive(anon):  1339720 kB
Active(file):      80056 kB
Inactive(file):   261020 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       4063224 kB
SwapFree:        3948556 kB
Dirty:                12 kB
Writeback:             0 kB
AnonPages:       1646484 kB
Mapped:             8404 kB
Shmem:           1048752 kB
Slab:             664352 kB
SReclaimable:     349084 kB
SUnreclaim:       315268 kB
KernelStack:        1776 kB
PageTables:         9652 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     6024688 kB
Committed_AS:    2050240 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       19688 kB
VmallocChunk:   34359715060 kB
HardwareCorrupted:     0 kB
AnonHugePages:    882688 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        8180 kB
DirectMap2M:     4186112 kB
Comment 5 Saurabh 2013-07-31 07:41:48 EDT
Created attachment 781046 [details]
nfs-profile-info
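A profile like the attachment above is typically gathered with gluster's volume profiling commands. A minimal sketch, assuming a running gluster cluster (`<volname>` is a placeholder, run on a gluster server node):

```shell
# Sketch: collecting per-fop latency/throughput stats for a volume.
# <volname> is a placeholder for the volume under test.
gluster volume profile <volname> start   # begin collecting statistics
# ... run the copy workload on the NFS mount ...
gluster volume profile <volname> info    # dump per-brick fop counts and latencies
gluster volume profile <volname> stop    # stop collecting
```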
Comment 6 Vivek Agarwal 2013-08-01 02:16:38 EDT
Do we have a benchmark against the older release to confirm whether the new release has become slower?
Comment 7 rjoseph 2013-09-02 03:30:35 EDT
As per Rahul the performance issue is seen only in the glusterfs-3.4.0.13rhs-1.el6_4.x86_64 build.
Comment 8 Vivek Agarwal 2013-09-03 09:57:42 EDT
Based on the previous comment, I am closing this; let us reopen if we see the degradation again.
