Description of problem:
I was certifying a RHEL 6.4 client against glusterfs-3.3.0.11rhs-1.el6rhs.x86_64, running my kernel compile tests against a 6x2 volume:

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 8550d873-05f9-4f0d-a0aa-c7c2ca00cca1
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: storage-qe03.lab.eng.rdu2.redhat.com:/bricks/testvol_brick0
Brick2: storage-qe04.lab.eng.rdu2.redhat.com:/bricks/testvol_brick1
Brick3: storage-qe05.lab.eng.rdu2.redhat.com:/bricks/testvol_brick2
Brick4: storage-qe06.lab.eng.rdu2.redhat.com:/bricks/testvol_brick3
Brick5: storage-qe03.lab.eng.rdu2.redhat.com:/bricks/testvol_brick4
Brick6: storage-qe04.lab.eng.rdu2.redhat.com:/bricks/testvol_brick5
Brick7: storage-qe05.lab.eng.rdu2.redhat.com:/bricks/testvol_brick6
Brick8: storage-qe06.lab.eng.rdu2.redhat.com:/bricks/testvol_brick7
Brick9: storage-qe03.lab.eng.rdu2.redhat.com:/bricks/testvol_brick8
Brick10: storage-qe04.lab.eng.rdu2.redhat.com:/bricks/testvol_brick9
Brick11: storage-qe05.lab.eng.rdu2.redhat.com:/bricks/testvol_brick10
Brick12: storage-qe06.lab.eng.rdu2.redhat.com:/bricks/testvol_brick11

Version-Release number of selected component (if applicable):
glusterfs-3.3.0.11rhs-1.el6rhs.x86_64

How reproducible:
I ran this twice against a 6x2 volume on RHEL 6.4 and hit it both times. I ran the same test on a RHEL 5.9 client against a 2x1 volume, and there the kernel compile finished in a normal amount of time:

executing compile_kernel
real 32m26.945s
user 7m54.876s
sys 3m59.259s
removed kernel

Steps to Reproduce:
1. Mount a 6x2 volume over CIFS.
2. Compile a kernel on the CIFS mount.

Actual results:
About 6 hours to complete.

Expected results:
Less than an hour to complete.

Additional info:
I hit this in automated testing twice. I am going to reproduce manually and collect more data, and will update the BZ shortly with the info.
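For reference, the reproduction boils down to the following sketch. The server name comes from the brick list above, but the share name, credentials, kernel version, and mount options are illustrative assumptions and will vary by environment:

```shell
# Mount the Samba share that exports the gluster volume over CIFS.
# "gluster-testvol", testuser/testpass are placeholder values.
mount -t cifs //storage-qe03.lab.eng.rdu2.redhat.com/gluster-testvol /mnt/cifs \
    -o user=testuser,pass=testpass

# Untar and compile a kernel on the CIFS mount, timing the build.
cd /mnt/cifs
tar -xjf linux-2.6.32.tar.bz2
cd linux-2.6.32
make defconfig
time make -j4
```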
This is basically a performance issue, and the test is a bit of a shotgun approach with several variables involved. We should compare against the same configuration over XFS, and also against a FUSE mount without Samba (direct access to GlusterFS). Finally, this problem may already have been fixed by recent work done as a result of PE testing.
Reporter was going to reproduce manually and collect more data.
Ben: The ISOs being created for testing purposes have different features added and removed based upon what is currently considered working and what is not. In addition, there were some regressions introduced (since fixed) that also impacted performance. Overall, however, we have seen a major improvement over RHS 2.0, so I want to make sure that you are testing on the latest and greatest. Please coordinate via e-mail so that we can make sure that you are using the latest ISO images.
Please provide an update based on the latest release.
Hi Chris, sorry for the delay. At release time we got the kernel compile down to:

real 86m36.565s
user 8m41.510s
sys 3m44.155s
end: 14:29:41
removed kernel

As a point of reference, here is the time for the same script on a glusterfs (FUSE) mount on the GA bits:

real 51m51.441s
user 10m9.124s
sys 3m13.455s
end: 13:04:31

And on NFS:

real 46m19.364s
user 8m31.956s
sys 2m18.645s
end: 14:01:27

This script extracts the tarball and compiles it, so these times are the aggregate of both the untar and the compile. As of now, CIFS is taking about 35-40 minutes longer than glusterfs and NFS to untar and compile the kernel. Is there any profile information or anything else you would like me to collect?
Targeting for 2.1.z U2 (Corbett) release.
Below are the results of running a kernel compile on a Gluster FUSE mount and a CIFS mount:

# gluster --version
glusterfs 3.4.0.43rhs built on Nov 11 2013 12:29:52
# smbd --version
Version 3.6.9-160.7.el6rhs

On the Gluster FUSE mount:

# time tar -xJf linux-3.11.8.tar.xz
real 2m17.243s
user 0m8.685s
sys 0m13.351s

# time make
...
real 192m19.758s
user 46m29.837s
sys 11m5.763s

On the CIFS mount:

# time tar -xJf linux-3.11.8.tar.xz
real 5m18.711s
user 0m9.574s
sys 0m21.515s

# time make
...
real 157m4.420s
user 45m43.176s
sys 12m9.022s

I collected the gluster volume profile info as well; the results from both runs are comparable. Can you please retry with the latest patches to see if CIFS still takes longer than FUSE?
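For completeness, the volume profile data mentioned above can be gathered with the gluster CLI (volume name taken from this report):

```shell
# Enable server-side I/O profiling on the volume
gluster volume profile testvol start

# ... run the kernel compile on the mount ...

# Dump per-brick FOP latency and throughput statistics
gluster volume profile testvol info

# Disable profiling when done to avoid overhead
gluster volume profile testvol stop
```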
Compile kernel on cifs mount results are as follows:

executing compile_kernel
real 179m17.164s
user 13m13.796s
sys 5m58.333s

** The test is executed on a CTDB setup.

Running on FUSE mount: will update with the results.
Surabhi, Were you able to run the same test on FUSE mount? It would be good to compare the times.
Compile kernel on fuse mount results are as follows:

executing compile_kernel
real 65m23.856s
user 13m10.453s
sys 5m50.246s

# glusterfs --version
glusterfs 3.4.0.59rhs built on Feb 4 2014 08:44:11
# smbd --version
Version 3.6.9-167.10.el6rhs
Recommendation from Ira: QE to run the kernel compile against an XFS share exported over pure Samba (no GlusterFS) and post the results.
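A minimal smb.conf share for that pure-Samba baseline might look like the fragment below. The share name and path are illustrative; the point is that it exports a plain XFS directory through smbd with no gluster VFS module involved:

```
# /etc/samba/smb.conf (fragment) -- XFS-backed share, no gluster in the path
[xfsbaseline]
    path = /bricks/xfs_baseline
    read only = no
    guest ok = no
```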
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release against which you requested a review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/ If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.