This bug was created as a clone of an upstream ticket:
When a large search filter is applied to every entry in a search result set, the filter is normalized anew each time a new entry is tested. For substring filters, a regular expression must additionally be created, compiled, and freed each time the substring filter is tested, on top of normalizing the values. For example, if the search filter contains 1000 substring sub-filters, testing each entry requires 1000 filter normalizations followed by 1000 regex creations, compilations, and cleanups. With 1000 entries in the search result set, that adds up to a million such operations.
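The cost described above can be sketched in a small, hypothetical example (not the actual server code, which is C): an LDAP substring pattern such as `ab*cd*ef` is translated into an anchored regex, and the difference between the buggy and fixed behavior is whether that regex is compiled once per filter or once per entry tested. The function names below are illustrative only.

```python
import re

def substring_filter_to_regex(pattern):
    """Translate an LDAP substring pattern like 'ab*cd*ef' into an
    anchored regular expression, escaping regex metacharacters in
    each literal piece between the wildcards."""
    parts = pattern.split("*")
    return re.compile("^" + ".*".join(re.escape(p) for p in parts) + "$")

def match_naive(pattern, values):
    """Buggy behavior: the regex is rebuilt and recompiled for every
    value tested, so N values cost N compilations."""
    return [v for v in values if substring_filter_to_regex(pattern).match(v)]

def match_precompiled(pattern, values):
    """Fixed behavior: compile the regex once, then reuse it for
    every value tested."""
    rx = substring_filter_to_regex(pattern)
    return [v for v in values if rx.match(v)]

values = ["abXcdYef", "abcdef", "zzz"]
print(match_precompiled("ab*cd*ef", values))  # prints ['abXcdYef', 'abcdef']
```

Both functions return the same matches; only the amount of repeated compilation work differs, which is exactly the waste this bug describes.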
Please add steps to reproduce/verify this issue.
svn ci -m "added tests for Bug 772777 - pre compile and normalize search filter" data/DS/6.0/filter/en/bigdneq.filt data/DS/6.0/filter/en/bigdnsub.filt data/DS/6.0/filter/en/filters.ldif testcases/DS/6.0/filter/tet_scen.sh testcases/DS/6.0/filter/filter.sh
Transmitting file data .....
Committed revision 6471.
Filter test suite is passing 100%, hence marking this bug as VERIFIED.
Technical note added. If any revisions are required, please edit the "Technical Notes" field
accordingly. All revisions will be proofread by the Engineering Content Services team.
Cause: Using search filters with many substring filters and attributes that require a lot of normalization (such as DN-syntax values).
Consequence: Poor performance due to excessive normalization and regex compilation.
Change: The code will now pre-compile and pre-normalize such search filters.
Result: Better performance for search filters with many substring filters and attributes that require a lot of normalization.
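The pre-normalization side of the change can likewise be sketched: instead of normalizing the filter's assertion value on every entry comparison, it is normalized once when the filter is parsed. This is a toy illustration, not the server's actual filter code; `normalize_dn` and `EqualityFilter` are hypothetical names, and the normalization shown is deliberately simplistic.

```python
def normalize_dn(dn):
    # Toy DN normalization: lowercase and strip whitespace around
    # the RDN separators. Real DN normalization is far more involved.
    return ",".join(part.strip().lower() for part in dn.split(","))

class EqualityFilter:
    """Sketch of a filter node that normalizes its assertion value
    once at parse time, instead of on every entry it is tested
    against."""
    def __init__(self, attr, value):
        self.attr = attr
        self.normalized = normalize_dn(value)  # done once, up front

    def matches(self, entry):
        # Entry values are still normalized here; the saving is that
        # the filter side is no longer re-normalized per entry.
        return any(normalize_dn(v) == self.normalized
                   for v in entry.get(self.attr, []))

f = EqualityFilter("member", "CN=Alice, DC=Example, DC=Com")
entry = {"member": ["cn=alice,dc=example,dc=com"]}
print(f.matches(entry))  # prints True
```

With 1000 such sub-filters and 1000 entries, moving normalization to parse time turns a million normalizations of the filter values into a thousand, which is the performance result the technical note claims.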
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.