Bug 983581 - [cifs] Compile kernel over cifs is taking 6 hours to complete.
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: samba
Priority: medium
Severity: medium
Assigned To: Poornima G
QA Contact: Ben Turner
Keywords: ZStream
Reported: 2013-07-11 09:58 EDT by Ben Turner
Modified: 2015-12-03 12:13 EST

Doc Type: Bug Fix
Last Closed: 2015-12-03 12:13:19 EST
Type: Bug

Description Ben Turner 2013-07-11 09:58:38 EDT
Description of problem:

I was certifying a RHEL 6.4 client against glusterfs- and my kernel-compile tests against the following 6x2 volume took about 6 hours:

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 8550d873-05f9-4f0d-a0aa-c7c2ca00cca1
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Brick1: storage-qe03.lab.eng.rdu2.redhat.com:/bricks/testvol_brick0
Brick2: storage-qe04.lab.eng.rdu2.redhat.com:/bricks/testvol_brick1
Brick3: storage-qe05.lab.eng.rdu2.redhat.com:/bricks/testvol_brick2
Brick4: storage-qe06.lab.eng.rdu2.redhat.com:/bricks/testvol_brick3
Brick5: storage-qe03.lab.eng.rdu2.redhat.com:/bricks/testvol_brick4
Brick6: storage-qe04.lab.eng.rdu2.redhat.com:/bricks/testvol_brick5
Brick7: storage-qe05.lab.eng.rdu2.redhat.com:/bricks/testvol_brick6
Brick8: storage-qe06.lab.eng.rdu2.redhat.com:/bricks/testvol_brick7
Brick9: storage-qe03.lab.eng.rdu2.redhat.com:/bricks/testvol_brick8
Brick10: storage-qe04.lab.eng.rdu2.redhat.com:/bricks/testvol_brick9
Brick11: storage-qe05.lab.eng.rdu2.redhat.com:/bricks/testvol_brick10
Brick12: storage-qe06.lab.eng.rdu2.redhat.com:/bricks/testvol_brick11
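For reference, a 6x2 distributed-replicate layout like the one above is produced by listing the bricks in pairs: with replica 2, each consecutive pair of bricks forms one replica set. A sketch of the create command (the actual option set used for this volume is not recorded in the bug, so treat this as an assumption):

```shell
# Sketch: create a 6x2 (6 distribute subvolumes x replica 2) volume.
# Brick order matters; bricks 0/1 form the first replica pair, 2/3 the
# second, and so on. Only the first two pairs are shown; the remaining
# bricks follow the same host/path pattern from the volume info above.
gluster volume create testvol replica 2 \
  storage-qe03.lab.eng.rdu2.redhat.com:/bricks/testvol_brick0 \
  storage-qe04.lab.eng.rdu2.redhat.com:/bricks/testvol_brick1 \
  storage-qe05.lab.eng.rdu2.redhat.com:/bricks/testvol_brick2 \
  storage-qe06.lab.eng.rdu2.redhat.com:/bricks/testvol_brick3 \
  # ... bricks 4 through 11 ...
gluster volume start testvol
```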

Version-Release number of selected component (if applicable):


How reproducible:

I ran this twice against a 6x2 volume on RHEL 6.4 and hit it both times.  I ran the same test on a 2x1 volume mounted from RHEL 5.9 and the kernel compile finished in a normal amount of time:

executing compile_kernel
real	32m26.945s
user	7m54.876s
sys	3m59.259s
removed kernel

Steps to Reproduce:
1.  Mount a 6x2 volume over cifs.
2.  Compile a kernel on the cifs mount
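The two steps above look roughly like the following on a client. The server name, credentials, and kernel version here are placeholders, not values from this bug:

```shell
# Hypothetical reproduction sketch: SERVER, USER, and PASS are placeholders.
mkdir -p /mnt/cifs
mount -t cifs //SERVER/testvol /mnt/cifs -o user=USER,pass=PASS

# Untar and build a kernel tree on the cifs mount, timing the build.
cd /mnt/cifs
tar -xJf /tmp/linux-3.11.8.tar.xz
cd linux-3.11.8
make defconfig
time make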

Actual results:

6 hours to complete.

Expected results:

Less than an hour to complete.

Additional info:

I hit this in automated testing twice; I am going to reproduce it manually and collect more data.  I'll update the BZ shortly with the info.
Comment 2 Christopher R. Hertel 2013-07-23 04:33:43 EDT
This is basically a performance issue, and the test is a bit of a shotgun approach; there are several variables involved.  We should compare against the same configuration over XFS, and also against FUSE without Samba (direct access to GlusterFS).

Finally, this problem may have been fixed by recent work done as a result of PE testing.
Comment 3 Christopher R. Hertel 2013-07-31 14:56:42 EDT
Reporter was going to reproduce manually and collect more data.
Comment 5 Christopher R. Hertel 2013-08-12 19:12:07 EDT
Ben: The ISOs being created for testing purposes have different features added and removed based upon what is currently considered working and what is not.  In addition, there were some regressions introduced (since fixed) that also impacted performance.

Overall, however, we have seen a major improvement over RHS 2.0, so I want to make sure that you are testing on the latest and greatest.  Please coordinate via email so that we can make sure that you are using the latest ISO images.
Comment 6 Christopher R. Hertel 2013-09-12 01:39:19 EDT
Please provide an update based on the latest release.
Comment 7 Ben Turner 2013-09-20 17:18:58 EDT
Hi Chris, sorry for the delay.  At release time we got the kernel compile down to:

real	86m36.565s
user	8m41.510s
sys	3m44.155s
removed kernel

As a point of reference, here is the time for the same script on a glusterfs (FUSE) mount of the GA bits:

real	51m51.441s
user	10m9.124s
sys	3m13.455s

And on NFS:

real	46m19.364s
user	8m31.956s
sys	2m18.645s

This script extracts the tarball and compiles it, so this is the aggregate of both the untar and the compile.

So as of now cifs is taking about 30 minutes longer than NFS and glusterfs to untar and compile the kernel.  Is there any profile information or anything else you would like me to collect?
Comment 8 Scott Haines 2013-09-23 19:34:55 EDT
Targeting for 2.1.z U2 (Corbett) release.
Comment 9 Poornima G 2013-11-14 01:40:48 EST
Below are the results of running kernel compile on Gluster Fuse mount and CIFS mount:

# gluster --version
glusterfs built on Nov 11 2013 12:29:52

# smbd --version
Version 3.6.9-160.7.el6rhs

On Gluster FUSE mount:
# time tar -xJf linux-3.11.8.tar.xz 

real	2m17.243s
user	0m8.685s
sys	0m13.351s

# time make
real	192m19.758s
user	46m29.837s
sys	11m5.763s

On CIFS mount:
# time tar -xJf linux-3.11.8.tar.xz

real	5m18.711s
user	0m9.574s
sys	0m21.515s

# time make
real	157m4.420s
user	45m43.176s
sys	12m9.022s

I collected the gluster volume profile info as well; the results for both mounts are comparable.
Can you please try with the latest patches to see if CIFS takes longer than FUSE?
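For anyone re-running this, the profile data mentioned above can be gathered with the standard gluster profiling commands. A sketch, using the volume name from this bug; the output filename is a placeholder:

```shell
# Enable per-brick I/O profiling on the volume under test.
gluster volume profile testvol start

# ... run the kernel compile on the cifs or FUSE mount ...

# Capture the accumulated stats, then turn profiling back off.
gluster volume profile testvol info > testvol_profile.txt
gluster volume profile testvol stop
```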
Comment 10 surabhi 2014-01-24 06:36:54 EST
Compile kernel on cifs mount results as follows:

executing compile_kernel

real    179m17.164s
user    13m13.796s
sys     5m58.333s

** The test was executed on a CTDB setup.

Running on a FUSE mount next; will update with the results.
Comment 11 Raghavendra Talur 2014-01-31 06:43:16 EST

Were you able to run the same test on FUSE mount?
It would be good to compare the times.
Comment 12 surabhi 2014-02-07 07:04:40 EST
Compile kernel on fuse mount results are as follows:

executing compile_kernel

real    65m23.856s
user    13m10.453s
sys     5m50.246s

# glusterfs --version
glusterfs built on Feb  4 2014 08:44:11

# smbd --version
Version 3.6.9-167.10.el6rhs
Comment 13 Alok 2014-03-05 05:27:01 EST
Recommendation by Ira: QE to test a kernel compile on an XFS share over pure Samba (no Gluster) and post the results.
Comment 14 Vivek Agarwal 2015-12-03 12:13:19 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage.  The release you requested us to review is now End of Life.  Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
