I'm unable to use valgrind to debug an application because of the VG_N_SEGNAMES limit of 1000, and others have hit this too. Can it please be bumped to 4000 or similar? At the moment I'm using a custom-built valgrind on RHEL 6 to avoid it simply aborting.
This is already tracked in the upstream bugzilla.
This request was not resolved in time for the current release. Red Hat invites you to ask your support representative to propose this request, if still desired, for consideration in the next release of Red Hat Enterprise Linux.
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development. This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.
$ valgrind -q stap -v -l 'module("*").function("*_exit")'
Pass 1: parsed user script and 99 library script(s) using 525708virt/314040res/4068shr/338080data kb, in 8820usr/180sys/9060real ms.
--12587:0:aspacem Valgrind: FATAL: VG_N_SEGNAMES is too low.
--12587:0:aspacem Increase it and rebuild. Exiting now.
The upstream settings of VG_N_SEGMENTS and VG_N_SEGNAMES have been kept low so that devices with just 512MB of memory can still run large C++ applications under valgrind. With the current upstream values of 5000 for VG_N_SEGMENTS and 1000 for VG_N_SEGNAMES, the static segnames and nsegments arrays take up ~1.2MB. For server and workstation machines it should be fine to increase these (suggested values: VG_N_SEGMENTS 50000 and VG_N_SEGNAMES 25000), letting them use up to ~26MB.
Note: the reproducer above takes a couple of minutes to complete under valgrind (with a patch that increases VG_N_SEGMENTS and VG_N_SEGNAMES).
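The patch the reporter's custom build amounts to is a bump of the two compile-time constants. On Linux they live in coregrind/m_aspacemgr/aspacemgr-linux.c; the exact context varies by valgrind version, but the change is roughly:

```c
/* coregrind/m_aspacemgr/aspacemgr-linux.c (approximate; exact lines
   and surrounding context differ between valgrind versions) */
#define VG_N_SEGMENTS 50000   /* was 5000 upstream */
#define VG_N_SEGNAMES 25000   /* was 1000 upstream */
```

After editing, valgrind must be rebuilt, since these arrays are sized statically at compile time.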
scratch build available
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.