Red Hat Bugzilla – Bug 929388
serious CPU time regressions in the glibc math library
Last modified: 2016-11-24 11:00:37 EST
Created attachment 718234 [details]
test case used for getting the timings
Description of problem: When upgrading from RHEL 6.3 to 6.4 we noticed that our code ran noticeably longer, sometimes by up to a factor of two. We tracked this down to serious CPU time regressions in the glibc math library. We investigated 5 routines in the math library: exp(), pow(), sin(), and cos() all showed CPU time regressions to varying degrees, while expf() is clearly faster (though we rarely use it in our code). We have not investigated other math routines, but we suspect that many more will show similar regressions.
These are the timings (in seconds) we got for the attached program:
         RHEL6.3  RHEL6.4    AMD
exp():     12.17    47.03   9.38

Using a slightly modified version of the program we found:

pow():     28.94    63.57  24.05   [x += pow(x,-0.9)]
sin():    158.91   192.35  14.43   [x += sin(x)]
cos():    157.27   191.47  14.38   [x += cos(x)]
expf():    93.92    10.58   8.35   [x += expf(-x)]
Though the speedup of the single-precision routine expf() is certainly welcome, it looks like it has been achieved at the expense of serious regressions in the double-precision routines (exp() by almost a factor of 4, pow() by more than a factor of 2). Since use of the double-precision variants will likely be dominant (it certainly is in our code), this needs to be fixed urgently. It is also obvious that the sin() and cos() functions (and likely other trigonometric functions) are embarrassingly slower than the AMD versions...
Going over the CPU times, I noticed that the absolute difference in run time for the double-precision test cases is nearly constant: exp(): +34.86 sec, pow(): +34.63 sec, sin(): +33.44 sec, cos(): +34.20 sec. This suggests that common code has been added to all double-precision routines that gobbles up a large amount of CPU time.
How reproducible: always
Steps to Reproduce: Compile the attached program with "g++ -O2 -ffast-math", then run with "time ./a.out". The timings were done using the glibc math libraries from RHEL 6.3 (glibc-2.12-1.80.el6_3.5.x86_64) and 6.4 (glibc-2.12-1.107.el6.x86_64), as well as the AMD math library v3.0.2 available here: http://developer.amd.com/tools/cpu-development/libm/
Timings were done on an otherwise empty system (with an Intel Xeon E5-2687W processor) and were repeated 3 times to check reproducibility; the median "user" run time is reported. The math library was selected by setting the LD_LIBRARY_PATH variable; the executable itself was not modified in the process.
Actual & Expected results: the timings are shown above. Ideally the glibc math library should be as fast as the AMD version or faster, but at the very least it should not be slower than the RHEL 6.3 version.
Could you please try the packages here and see if they solve your problem:
Those are just for testing, so please don't deploy them on your production systems.
With package glibc-2.12-1.107.el6.1.bz892635.x86_64.rpm I get the following timings:
So the CPU regression is largely solved in this version, though exp() and pow() are still somewhat slower than their RHEL6.3 counterparts.
OK, thanks for confirming that.
*** Bug 892635 has been marked as a duplicate of this bug. ***
The requested information was already supplied in Comment 2.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.