Bug 668702 - [RHEL 6] dbm_* calls should call the underlying gdbm_* calls using the GDBM_NOLOCK flag
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: gdbm
Version: 6.0
Hardware: Unspecified
OS: Linux
Priority: urgent
Severity: high
Target Milestone: rc
Target Release: ---
Assigned To: Marek Skalický
QA Contact: Robin Hack
Docs Contact: Maxim Svistunov
Keywords: Patch
Duplicates: 1178101
Depends On: 663932 1178101 1208437
Blocks: 836160 1172231 668689 840699 1269927
 
Reported: 2011-01-11 05:29 EST by Sachin Prabhu
Modified: 2016-05-10 19:59 EDT
CC List: 10 users

See Also:
Fixed In Version: gdbm-1.8.0-39.el6
Doc Type: Bug Fix
Doc Text:
Applications no longer access database files on an NFS share inefficiently. Prior to this update, some applications performed poorly when operating on database files hosted on an NFS share. This was caused by frequent invalidation of the cache on the NFS client. This update introduces a new environment variable, `NDBM_LOCK`, which can be used to disable locking and thereby prevent the cache invalidation. As a result, the affected applications no longer perform poorly in the described scenario.
Story Points: ---
Clone Of: 663932
Environment:
Last Closed: 2016-05-10 19:59:18 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments
the same patch as used in RHEL-5 (2.85 KB, patch)
2011-07-21 04:27 EDT, Honza Horak
Easy reproducer (638 bytes, text/x-csrc)
2015-12-15 08:09 EST, Marek Skalický

Description Sachin Prabhu 2011-01-11 05:29:49 EST
Cloning RHEL 5 bz 663932 for RHEL 6.

+++ This bug was initially created as a clone of Bug #663932 +++

We have a user who reported poor performance of their application when using the dbm_* calls to perform operations on database files hosted on an NFS share.

The problem was traced to flock calls made by the gdbm_* functions, which in turn are called by the dbm_*() functions used in the application. We observed thousands of flock calls being made. These are inefficient over an NFS share: each one results in a call over the wire and also invalidates the cache on the NFS client. Locking calls act as a cache-coherency point on NFS clients, and the cached data is invalidated every time a lock call is made. This results in a large number of READ calls over the NFS share, which causes severe performance problems.

These flock calls are unnecessary for this application because it first obtains a lock on a gatekeeper file in the directory before performing any data operation on those files. The flock calls therefore only cause performance issues for this application without providing any additional protection against data corruption.

The user has confirmed that calling the gdbm_*() functions directly with the GDBM_NOLOCK flag, instead of using the dbm_*() calls, shows great improvements in performance.
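
As an illustration of that workaround (a sketch, not the user's actual application; the database name and record contents below are made up), a program can open the database directly through the gdbm interface with GDBM_NOLOCK, so that gdbm takes no locks of its own and the application's external gatekeeper lock remains the only synchronization:

#include <gdbm.h>
#include <stdio.h>

int main(void)
{
    /* GDBM_NOLOCK tells gdbm not to lock the database file itself;
     * the application is expected to serialize access on its own. */
    GDBM_FILE db = gdbm_open("data.gdbm", 0,
                             GDBM_WRCREAT | GDBM_NOLOCK, 0644, NULL);
    if (db == NULL) {
        fprintf(stderr, "gdbm_open: %s\n", gdbm_strerror(gdbm_errno));
        return 1;
    }

    datum key = { "key", 3 };
    datum val = { "value", 5 };
    gdbm_store(db, key, val, GDBM_REPLACE);   /* no lock calls issued */

    gdbm_close(db);
    return 0;
}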

Note that the man page for the dbm_* calls does mention that the user should not expect these functions to provide any locking when working on the data files, and that users should use other locking mechanisms themselves.
From man dbm_open:
--
they do not protect against multi-user access (in other words they do not lock records or files),
--


This request is to have either

1) the dbm_* calls always use the GDBM_NOLOCK flag when calling the corresponding gdbm_* functions, OR

2) an option which, when set, makes the dbm_*() calls pass the GDBM_NOLOCK flag to the corresponding gdbm_*() calls. A suggested way of passing this option is to check for a 'GDBM_NOLOCK' environment variable in the dbm_* calls and, if it is present, set the GDBM_NOLOCK flag for the corresponding gdbm_*() calls.

--- Additional comment from kklic@redhat.com on 2011-01-03 14:14:54 EST ---

Created attachment 471542 [details]
Proposed patch

The dbm_open function reads the environment variable NDBM_LOCK; if this variable is set to a false value (NDBM_LOCK=false/no/off/0), dbm_open does not lock the database.

The NDBM prefix has been chosen because the variable affects only the NDBM interface. Neither the older DBM interface (dbminit) nor the GDBM interface (gdbm_open) is affected.
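
As an illustration only (this is not the actual patch, and the helper function name is hypothetical), the check inside the NDBM compatibility layer could look roughly like this:

#include <stdlib.h>
#include <string.h>
#include <strings.h>

/* Hypothetical helper: returns non-zero when NDBM_LOCK is set to a
 * "false" value, i.e. when the caller wants database locking disabled. */
static int ndbm_lock_disabled(void)
{
    const char *val = getenv("NDBM_LOCK");

    if (val == NULL)
        return 0;                 /* unset: keep the default locking */

    return strcasecmp(val, "false") == 0 ||
           strcasecmp(val, "no") == 0 ||
           strcasecmp(val, "off") == 0 ||
           strcmp(val, "0") == 0;
}

/* ... and inside dbm_open(), before the underlying gdbm_open() call:
 *
 *     if (ndbm_lock_disabled())
 *         open_flags |= GDBM_NOLOCK;
 */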

The gdbm(3) man page has been extended to document the environment variable.

The patch can easily be tested with strace and the testndbm program from the gdbm upstream archive.

--- Additional comment from kklic@redhat.com on 2011-01-03 14:16:09 EST ---

Test build: http://brewweb.devel.redhat.com/brew/taskinfo?taskID=3004545

The patch has been applied to Fedora Rawhide.

--- Additional comment from sprabhu@redhat.com on 2011-01-04 10:11:50 EST ---

I have confirmed that the patch works as expected using a user-provided test program.

Test 1: Run with the environment variable $NDBM_LOCK not set.

[root@vm22 test]# echo $NDBM_LOCK

# So $NDBM_LOCK is not set.

# Run the test program under strace to track all syscalls.

[root@vm22 test]# strace -fxvto out ./test
..

[root@vm22 test]# grep flock out |wc -l
1300064

Test 2: Set NDBM_LOCK=no and run the same test.

# Now run the test after getting dbm_* to pass the GDBM_NOLOCK option to the gdbm calls.

[root@vm22 test]# export NDBM_LOCK=no
[root@vm22 test]# echo $NDBM_LOCK
no
[root@vm22 test]# strace -fxvto out ./test
..
[root@vm22 test]# grep flock out |wc -l
0


The test program makes a large number of calls to dbm_open and dbm_read. The gdbm_*() functions invoked by those dbm_*() calls result in a significant number of flock calls (1300064), as counted in the strace output.
By setting NDBM_LOCK=no, the number of flock calls seen in the strace output drops to 0.

--- Additional comment from hhorak@redhat.com on 2011-01-11 04:23:24 EST ---

Committed to CVS, moving to modified.
http://post-office.corp.redhat.com/archives/cvs-commits-list/2011-January/msg01325.html

--- Additional comment from pm-rhel@redhat.com on 2011-01-11 04:26:18 EST ---

This bug has been copied as the 5.6 z-stream (EUS) bug #668689 and must now be
resolved in the current update release; the blocker flag has been set.
Comment 2 RHEL Product and Program Management 2011-07-05 21:31:15 EDT
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated
in the current release, Red Hat is unfortunately unable to
address this request at this time. Red Hat invites you to
ask your support representative to propose this request, if
appropriate and relevant, in the next release of Red Hat
Enterprise Linux. If you would like it considered as an
exception in the current release, please ask your support
representative.
Comment 5 Suzanne Yeghiayan 2012-02-14 18:06:06 EST
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated
in the current release, Red Hat is unfortunately unable to
address this request at this time. Red Hat invites you to
ask your support representative to propose this request, if
appropriate and relevant, in the next release of Red Hat
Enterprise Linux. If you would like it considered as an
exception in the current release, please ask your support
representative.
Comment 6 RHEL Product and Program Management 2012-09-18 14:30:44 EDT
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated
in the current release, Red Hat is unable to address this
request at this time.

Red Hat invites you to ask your support representative to
propose this request, if appropriate, in the next release of
Red Hat Enterprise Linux.
Comment 7 RHEL Product and Program Management 2013-10-13 21:12:03 EDT
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated
in the current release, Red Hat is unable to address this
request at this time.

Red Hat invites you to ask your support representative to
propose this request, if appropriate, in the next release of
Red Hat Enterprise Linux.
Comment 8 Honza Horak 2015-01-05 10:46:41 EST
*** Bug 1178101 has been marked as a duplicate of this bug. ***
Comment 10 Marek Skalický 2015-12-15 08:09 EST
Created attachment 1106020 [details]
Easy reproducer
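
The attachment itself is not reproduced here; a minimal reproducer along these lines (a sketch assuming the ndbm.h header shipped with gdbm-devel; the database name and record values are made up) exercises the dbm_* interface so the locking done by the underlying gdbm calls can be observed under strace:

#include <ndbm.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    /* Open (creating if needed) an ndbm database through the dbm_*
     * compatibility interface; these calls go through gdbm and thus
     * through gdbm's locking code, which strace can observe. */
    DBM *db = dbm_open("dbm_test_db", O_RDWR | O_CREAT, 0644);
    if (db == NULL) {
        perror("dbm_open");
        return 1;
    }

    datum key = { "key", 3 };
    datum val = { "value", 5 };

    if (dbm_store(db, key, val, DBM_REPLACE) != 0)
        fprintf(stderr, "dbm_store failed\n");

    datum out = dbm_fetch(db, key);
    if (out.dptr != NULL)
        printf("fetched: %.*s\n", out.dsize, out.dptr);

    dbm_close(db);
    return 0;
}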
Comment 11 Marek Skalický 2015-12-15 08:15:29 EST
gdbm in RHEL 6 uses fcntl instead of flock (see bug #581524).

So the check that verifies the problem is fixed differs slightly from the one in bug #663932.

Compile the "Easy reproducer" program with the following command:
gcc -o dbm_test dbm_test.c -lgdbm

First, test with the old library or with NDBM_LOCK unset:
# echo $NDBM_LOCK
# strace ./dbm_test 2>&1|grep fcntl
fcntl(3, F_SETLK, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}) = 0
fcntl(3, F_SETLK, {type=F_UNLCK, whence=SEEK_SET, start=0, len=0}) = 0

This shows that 2 fcntl calls were made. They come from the gdbm library calls invoked by the dbm calls.

Install the new package and set NDBM_LOCK:
# export NDBM_LOCK=no
# strace ./dbm_test 2>&1|grep flock

We do not see any flock calls being made.
Comment 13 Marek Skalický 2015-12-15 08:28:50 EST
Sorry, I did not adjust this testing procedure properly. This is the correct one:

Compile the "Easy reproducer" program with the following command:
gcc -o dbm_test dbm_test.c -lgdbm

First, test with the old library or with NDBM_LOCK unset:
# echo $NDBM_LOCK
# strace ./dbm_test 2>&1|grep fcntl
fcntl(3, F_SETLK, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}) = 0
fcntl(3, F_SETLK, {type=F_UNLCK, whence=SEEK_SET, start=0, len=0}) = 0

This shows that 2 fcntl calls were made. They come from the gdbm library calls invoked by the dbm calls.

Install the new package and set NDBM_LOCK:
# export NDBM_LOCK=no
# strace ./dbm_test 2>&1|grep fcntl

We do not see any fcntl calls being made.
Comment 17 errata-xmlrpc 2016-05-10 19:59:18 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0863.html
