Bug 1779325 - when NSS_SDB_USE_CACHE not set, after curl access https, dentry increase but never released - consider alternative algorithm for benchmarking ACCESS call in sdb_measureAccess
Summary: when NSS_SDB_USE_CACHE not set, after curl access https, dentry increase but ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: nss
Version: 7.7
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
: ---
Assignee: nss-nspr-maint
QA Contact: Hubert Kario
URL:
Whiteboard:
Depends On: 1571183 1804262
Blocks: 1879249
 
Reported: 2019-12-03 18:46 UTC by Dave Wysochanski
Modified: 2022-11-14 09:40 UTC (History)
22 users

Fixed In Version: nss-3.53.1-3.el7_9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1571183
: 1879249
Environment:
Last Closed: 2020-09-29 21:18:42 UTC
Target Upstream Version:


Attachments (Terms of Use)
patch to remove negative dcache entries post-test (3.40 KB, patch)
2019-12-03 21:24 UTC, Eric Sandeen
no flags Details | Diff


Links
System ID Private Priority Status Summary Last Updated
Mozilla Foundation 1603801 0 P1 RESOLVED [patch] Avoid dcache pollution from sdb_measureAccess() 2021-01-29 18:15:18 UTC
Red Hat Issue Tracker CRYPTO-7870 0 None None None 2022-07-09 13:51:07 UTC
Red Hat Product Errata RHSA-2020:4076 0 None None None 2020-09-29 21:19:20 UTC

Description Dave Wysochanski 2019-12-03 18:46:05 UTC
+++ This bug was initially created as a clone of Bug #1571183 +++

Description of problem:
When using curl to access an https web site, the dentry count shown in slabtop continually increases and is never released; but after I set export NSS_SDB_USE_CACHE=no or export NSS_SDB_USE_CACHE=yes, accessing https does not increase it any more

Version-Release number of selected component (if applicable):
nss-3.28.4-4.el6_9
curl-7.19.7-53.el6_9
libcurl-7.19.7-53.el6_9

How reproducible:
Open slabtop in one terminal, then run "curl https://bugzilla.redhat.com/show_bug.cgi?id=1044666" in another terminal; the dentry count increases and is never released

Steps to Reproduce:
Scenario 1
1.run "unset NSS_SDB_USE_CACHE"
2.run "curl https://bugzilla.redhat.com/show_bug.cgi?id=1044666"
Scenario 2
1.run "export NSS_SDB_USE_CACHE=yes"
2.run "curl https://bugzilla.redhat.com/show_bug.cgi?id=1044666"
Scenario 3
1.run "export NSS_SDB_USE_CACHE=no"
2.run "curl https://bugzilla.redhat.com/show_bug.cgi?id=1044666"


Actual results:
for Scenario 1, the dentry count increases
for Scenario 2, the dentry count does not increase
for Scenario 3, the dentry count does not increase
If you try Scenario 1 several times, the dentry count will grow very high.

Expected results:
The dentry count does not increase in any of the scenarios.

Additional info:
There is a related bug: https://bugzilla.redhat.com/show_bug.cgi?id=1044666
but it does not seem to be exactly the same issue.
And one curl bug: https://curl.haxx.se/mail/tracker-2013-06/0114.html

...
--- Additional comment from Robert Krawitz on 2019-05-06 16:20:28 UTC ---

This is a serious problem that will cause memory-accounting problems (and possibly unnecessary eviction) for OpenShift pods that use curl repeatedly.  This really does need to get fixed.

--- Additional comment from Hubert Kario on 2019-05-22 20:06:10 UTC ---
...


--- Additional comment from Hubert Kario on 2019-11-19 19:30:26 UTC ---

> Should we clone this issue to RHEL7 or a later release?

on RHEL-8 curl is using openssl so it shouldn't be a problem there

RHEL-7 is on Maintenance Support 1, and this is at most a medium-severity issue, so it technically does not qualify there either. That being said, we will be rebasing NSS for the next Firefox ESR, so if a fix for this is merged upstream before that, it could be delivered in RHEL-7. It's unlikely we would work on it ourselves, though.

--- Additional comment from Bob Relyea on 2019-11-21 19:31:34 UTC ---

So this is no longer the default behavior of NSS either upstream or on RHEL 8. In both cases we default to no-cache behavior unless we detect that we are on a networked file system. The change of behavior generated a lot of upstream noise from people who were negatively impacted by the change. Because of this we retained the old behavior in RHEL6 and RHEL7 on purpose. RHEL6 is basically done with updates, but we could update RHEL7 on the next NSS rebase to drop the patch if it's determined that the number of service requests for people who are on slow filesystems (and who have already-working systems) could be negatively impacted by this change versus the number of people who are making new deployments and may run into this issue.

In all cases applications can select their preference with an environment variable and get the behavior they prefer. I think this is really up to the support team on which they prefer (break existing customers or have friction with new ones).

--- Additional comment from Dave Wysochanski on 2019-11-21 20:27:55 UTC ---

(In reply to Bob Relyea from comment #15)
> So this is no longer the default behavior of NSS either upstream or on RHEL
> 8. In both cases we default to no cache behavior unless we detect that we
> are on a networked file system. The change of behavior generated a lot of
> upstream noise from people who were negatively impacted by the change.

Can you elaborate on the specific problem this caused?

> Because of this we retained the old behavior in RHEL6 and RHEL7 on purpose.
> RHEL6 is basically done with updates, but we could update RHEL7 on the next
> NSS rebase to drop the patch if it's determined that the number of service
> requests for people who are on slow filesystems (and who have already-
> working systems) could be negatively impacted by this change versus the
> number of people who are making new deployments and may run into this issue.
> 
> In all cases applications can select their preference with an environment
> variable and get the behavior they prefer. I think this is really up to the
> support team on which they prefer (break existing customers or have friction
> with new ones).


There must be some other way to implement the performance benchmark other than effectively stat()ing a ton of non-existent paths.  Please understand that the current algorithm causes a side-effect performance problem, because you're creating dentries no one is going to use.

Has someone considered an alternative algorithm?

Earlier this claim was made:
2. dcache entries can be freed asynchronously, before umount is attempted

As far as I know there is no mechanism to do this other than dumping the dentry cache completely, which could have adverse effects elsewhere.  I.e., there is no kernel interface to "unbloat" negative dentries, and there are no plans for one.  There are plans to limit the number of negative dentries, but no finer-grained interface to dump just the negative ones.

--- Additional comment from Dave Wysochanski on 2019-11-21 21:10:17 UTC ---

Ok, so here's a possibility for an alternative algorithm, based on my understanding of why NSS_SDB_USE_CACHE was added in:
https://hg.mozilla.org/projects/nss/rev/9934c8faef29

The original function was added in:
https://hg.mozilla.org/projects/nss/rev/198b92c76b08e85ca9a2a4dd1bbb2e572a7fc228

But the assumption was that we needed to measure access time because that was what the profile was based on:
https://bugzilla.mozilla.org/show_bug.cgi?id=391294
"1) Creation of a cached copy of the database using sqlite's temporary table functionality. This cache is only used for read operations, and only in a shared filesystem. Whether or not the sqlite database is in a "shared filesystem" is determined by the relative performance between getting the access status of a non-existant file between the filesystem where the sqlite database resides and the local temp filesystem. Profile determined the majority of the time spent on shared filesystems was trying to determine if a rollback file existed."

The flaw in the patch was assuming we needed to measure access time in the way the benchmark does.  Essentially, we need an alternative / replacement for sdb_measureAccess():
https://hg.mozilla.org/projects/nss/rev/198b92c76b08e85ca9a2a4dd1bbb2e572a7fc228#l1.273

I would argue we really do not need to measure access time in the manner it was originally done.  Instead we could write is_shared_filesystem(path) that would:
a) look for a more direct way to determine whether the filesystem is local or shared - this could be a lot of things, including a new API (we have a new mount API coming upstream)
b) look for an alternative performance metric to measure the relative performance between the local temp filesystem and the filesystem where the sqlite database resides

--- Additional comment from Hubert Kario on 2019-11-22 13:00:46 UTC ---

(In reply to Dave Wysochanski from comment #16)
> (In reply to Bob Relyea from comment #15)
> > So this is no longer the default behavior of NSS either upstream or on RHEL
> > 8. In both cases we default to no cache behavior unless we detect that we
> > are on a networked file system. The change of behavior generated a lot of
> > upstream noise from people who were negatively impacted by the change.
> 
> Can you elaborate on the specific problem this caused?
> 
> > Because of this we retained the old behavior in RHEL6 and RHEL7 on purpose.
> > RHEL6 is basically done with updates, but we could update RHEL7 on the next
> > NSS rebase to drop the patch if it's determined that the number of service
> > requests for people who are on slow filesystems (and who have already-
> > working systems) could be negatively impacted by this change versus the
> > number of people who are making new deployments and may run into this issue.
> > 
> > In all cases applications can select their preference with an environment
> > variable and get the behavior they prefer. I think this is really up to the
> > support team on which they prefer (break existing customers or have friction
> > with new ones).
> 
> 
> There must be some other way to achieve the performance benchmark logic
> other than to effectively 'stat' a ton of non-existent paths?  Please
> understand the current algorithm will cause a side-effect performance
> problem because you're creating dentries no one is going to use.

entries which will be freed by the kernel as soon as it feels any kind of memory pressure...

> Has someone considered an alternative algorithm?

there was a proposal to create a directory and do the tests there, then delete the directory; that should clear up the negative entries
 
> Earlier this claim was made:
> 2. dcache entries can be freed asynchronously, before umount is attempted
> 
> As far as I know there is no mechanism to do this other than to dump the
> dentry cache completely, which could have adverse effects elsewhere. 

less adverse than making the umount hang...


> a) look for more direct way to determine whether it is local or shared - i.e. could be a lot of things include a new API - we have a new mount API coming upstream

between overlayfs, mount --bind, nbd, lvm and iscsi, there is no reliable way to get that information

--- Additional comment from Dave Wysochanski on 2019-11-22 13:17:19 UTC ---

(In reply to Hubert Kario from comment #18)
> (In reply to Dave Wysochanski from comment #16)
> > (In reply to Bob Relyea from comment #15)
> > > So this is no longer the default behavior of NSS either upstream or on RHEL
> > > 8. In both cases we default to no cache behavior unless we detect that we
> > > are on a networked file system. The change of behavior generated a lot of
> > > upstream noise from people who were negatively impacted by the change.
> > 
> > Can you elaborate on the specific problem this caused?
> > 
> > > Because of this we retained the old behavior in RHEL6 and RHEL7 on purpose.
> > > RHEL6 is basically done with updates, but we could update RHEL7 on the next
> > > NSS rebase to drop the patch if it's determined that the number of service
> > > requests for people who are on slow filesystems (and who have already-
> > > working systems) could be negatively impacted by this change versus the
> > > number of people who are making new deployments and may run into this issue.
> > > 
> > > In all cases applications can select their preference with an environment
> > > variable and get the behavior they prefer. I think this is really up to the
> > > support team on which they prefer (break existing customers or have friction
> > > with new ones).
> > 
> > 
> > There must be some other way to achieve the performance benchmark logic
> > other than to effectively 'stat' a ton of non-existent paths?  Please
> > understand the current algorithm will cause a side-effect performance
> > problem because you're creating dentries no one is going to use.
> 
> entries which will be freed by the kernel as soon as it feels any kind of
> memory pressure...
> 
creating a ton of garbage dentries in the kernel for benchmarking purposes should not be viewed as optimal userspace behavior

> > Has someone considered an alternative algorithm?
> 
> there was a proposal to create a directory and do the tests there, then
> delete the directory, that should clear-up the negative entries
>  
that sounds like a good proposal


> > Earlier this claim was made:
> > 2. dcache entries can be freed asynchronously, before umount is attempted
> > 
> > As far as I know there is no mechanism to do this other than to dump the
> > dentry cache completely, which could have adverse effects elsewhere. 
> 
> less adverse than making the umount hang...
> 
that is your opinion

> 
> > a) look for more direct way to determine whether it is local or shared - i.e. could be a lot of things include a new API - we have a new mount API coming upstream
> 
> between overlayfs, mount --bind, nbd, lvm and iscsi, there is no reliable
> way to get that information

True, it is probably not possible due to all the layers - some alternative performance-evaluation method seems like the best approach.

Comment 1 Eric Sandeen 2019-12-03 18:51:51 UTC
The tmpdir proposal should be trivial to implement as a way for the benchmark to clean up after itself:

# cat /proc/sys/fs/dentry-state
610245	584728	45	0	24190	0
# mkdir tmpdir
# for I in `seq 1 10000`; do stat tmpdir/file$I &>/dev/null; done
# cat /proc/sys/fs/dentry-state
620095	594578	45	0	34197	0
# rmdir tmpdir
# cat /proc/sys/fs/dentry-state
610097	584580	45	0	24200	0

(negative dcache drops from ~34000 to ~24000 when the tmp dir in their path gets removed)

-Eric

Comment 2 Eric Sandeen 2019-12-03 21:24:49 UTC
Created attachment 1641806 [details]
patch to remove negative dcache entries post-test

Here's a patch that works for me; it could use review.  Happy to send it to an upstream list if that's appropriate.

Comment 3 Hubert Kario 2019-12-04 11:48:48 UTC
Explaining in the code comments why the temp files are created in a subdirectory would likely make the code easier to understand in the future.

Yes, filing a bug upstream and attaching the patch there is the best course of action (feel free to CC Robert Relya, Daiki Ueno and me on it, or just reference it from this bug and I'll fix up the CC's).

Comment 4 Hubert Kario 2019-12-04 11:52:08 UTC
sorry, I meant Robert Relyea (missed 'e')

Comment 5 Eric Sandeen 2019-12-11 18:26:12 UTC
I don't know where the upstream bugtracker is...

Comment 6 Hubert Kario 2019-12-12 11:31:14 UTC
https://bugzilla.mozilla.org/home

Comment 7 Eric Sandeen 2019-12-13 16:09:14 UTC
Thanks, I guess that should have been obvious but I wasn't sure - not my area.  :)

Comment 8 Eric Sandeen 2019-12-13 17:51:49 UTC
Ok, if you'd like to add relevant Red Hatters to the CC list, the bug is here: https://bugzilla.mozilla.org/show_bug.cgi?id=1603801

Comment 13 Eric Sandeen 2020-04-22 13:26:45 UTC
This patch is now merged upstream

https://hg.mozilla.org/projects/nss/rev/928721f7016463b6aefef8239ba204bd809416c5

Comment 20 Bob Relyea 2020-09-11 21:41:59 UTC
Fixed in nss-softokn-3.53.1-6.el7_9

Comment 21 Bob Relyea 2020-09-11 21:43:02 UTC
Builds attached to errata

Comment 30 errata-xmlrpc 2020-09-29 21:18:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: nss and nspr security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:4076

Comment 31 Eric Sandeen 2021-01-12 17:59:22 UTC
Did the bug noted in comment #19 ever get reported and resolved upstream?  It doesn't seem like it.

