Created attachment 596238 [details]
Simple program to reproduce the problem

Description of problem:
The attached program prints correct values when running without valgrind and incorrect ones when running with valgrind.

Version-Release number of selected component (if applicable):
valgrind-3.7.0-4.fc17.i686
gcc-c++-4.7.0-5.fc17.i686
boost-devel-1.48.0-11.fc17.i686

How reproducible:
100%

Steps to Reproduce:
1. Compile the attached program: g++ -std=c++11 -o lexical_cast lexical_cast.cpp
2. Run it without valgrind: ./lexical_cast
3. Run it with valgrind: valgrind ./lexical_cast

Actual results:
The 2nd value printed when running with valgrind is 8589.9345900000117 instead of 8589.9345900000008.

Expected results:
All the printed values are the same.

Additional info:
The problem doesn't happen if the -std=c++11 option is not passed to g++.

The problem also happens with:
valgrind-3.6.1-4.fc15.i686
gcc-c++-4.6.3-2.fc15.i686
boost-devel-1.48.0-13.fc15.i686 (rebuilt from SRPM on Fedora 15)

The problem doesn't happen with:
valgrind-3.6.1-4.fc15.i686
gcc-c++-4.6.3-2.fc15.i686
boost-devel-1.46.0-3.fc15.i686
Btw, this could be a bug in Boost rather than in valgrind.
This can also be replicated on x86_64 with the latest valgrind.

When running under gdb you see:

(gdb) start
Temporary breakpoint 1 at 0x401535: file lexical_cast.cpp, line 9.
Starting program: /tmp/lexical_cast

Temporary breakpoint 1, main () at lexical_cast.cpp:9
9           const std::string foo( "8589.9345900000008" );
(gdb) n
16          const double tmp1( boost::lexical_cast< double >( foo ) );
(gdb)
17          std::cout << std::setprecision( 17 ) << tmp1 << std::endl;
(gdb) print tmp1
$1 = 8589.9345900000008
(gdb) info registers xmm0
xmm0  {v4_float = {0x0, 0x6, 0x0, 0x0}, v2_double = {0x218d, 0x0},
  v16_int8 = {0x96, 0x26, 0xa5, 0xa0, 0xf7, 0xc6, 0xc0, 0x40, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0},
  v8_int16 = {0x2696, 0xa0a5, 0xc6f7, 0x40c0, 0x0, 0x0, 0x0, 0x0},
  v4_int32 = {0xa0a52696, 0x40c0c6f7, 0x0, 0x0},
  v2_int64 = {0x40c0c6f7a0a52696, 0x0},
  uint128 = 0x000000000000000040c0c6f7a0a52696}

When running under valgrind under gdb you see:

0x0000000004001530 in _start () from /lib64/ld-linux-x86-64.so.2
(gdb) break main
Breakpoint 1 at 0x401535: file lexical_cast.cpp, line 9.
(gdb) c
Continuing.

Breakpoint 1, main () at lexical_cast.cpp:9
9           const std::string foo( "8589.9345900000008" );
(gdb) n
16          const double tmp1( boost::lexical_cast< double >( foo ) );
(gdb) n
17          std::cout << std::setprecision( 17 ) << tmp1 << std::endl;
(gdb) print tmp1
$1 = 8589.9345900000117
(gdb) info registers xmm0
xmm0  {v4_float = {0x0, 0x6, 0x0, 0x0}, v2_double = {0x218d, 0x0},
  v16_int8 = {0x9c, 0x26, 0xa5, 0xa0, 0xf7, 0xc6, 0xc0, 0x40, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0},
  v8_int16 = {0x269c, 0xa0a5, 0xc6f7, 0x40c0, 0x0, 0x0, 0x0, 0x0},
  v4_int32 = {0xa0a5269c, 0x40c0c6f7, 0x0, 0x0},
  v2_int64 = {0x40c0c6f7a0a5269c, 0x0},
  uint128 = 0x000000000000000040c0c6f7a0a5269c}

Hopefully relevant disassembly:

   0x0000000000401569 <+61>:  callq  0x401c0d <boost::lexical_cast<double, std::string>(std::string const&)>
   0x000000000040156e <+66>:  movsd  %xmm0,-0x38(%rbp)
   0x0000000000401573 <+71>:  mov    -0x38(%rbp),%rax
   0x0000000000401577 <+75>:  mov    %rax,-0x18(%rbp)
=> 0x000000000040157b <+79>:  mov    $0x11,%edi
   0x0000000000401580 <+84>:  callq  0x401aca <std::setprecision(int)>
   0x0000000000401585 <+89>:  mov    %eax,%esi
   0x0000000000401587 <+91>:  mov    $0x6048e0,%edi
   0x000000000040158c <+96>:  callq  0x4011f0 <_ZStlsIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_ES6_St13_Setprecision@plt>
This looks like the upstream bug "80-bit floats are not supported on x86 and x86-64":
https://bugs.kde.org/show_bug.cgi?id=197915

Also note the following section from the Valgrind manual
(http://www.valgrind.org/docs/manual/manual-core.html):

"Precision: There is no support for 80 bit arithmetic. Internally, Valgrind represents all such "long double" numbers in 64 bits, and so there may be some differences in results. Whether or not this is critical remains to be seen."

And the above upstream valgrind bug report notes:

"adding support for 80-bit floats is low priority, because (1) AIUI the majority of floating point code is portable and restricts itself to 64-bit values, and (2) doing 80-bit support will soak up a considerable amount of engineering effort. So it's not an easy case to make, and we are already extremely resource-constrained w.r.t. development effort."
Based on the references in comment #3, it is unlikely this will be fixed in Fedora directly unless it is fixed or worked on upstream. Please follow the upstream bug https://bugs.kde.org/show_bug.cgi?id=197915 to track progress.