Bug 663932 - dbm_* calls should call the underlying gdbm_* calls using the GDBM_NOLOCK flag
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: gdbm
Version: 5.5
Hardware: Unspecified
OS: Linux
Priority: urgent
Severity: high
Target Milestone: rc
Target Release: ---
Assigned To: Honza Horak
QA Contact: qe-baseos-daemons
Keywords: ZStream
Depends On:
Blocks: 668689 668702 1178101 1208437
Reported: 2010-12-17 07:32 EST by Sachin Prabhu
Modified: 2015-04-02 05:19 EDT (History)
5 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 668702 1178101 1208437 (view as bug list)
Environment:
Last Closed: 2013-09-23 07:02:09 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments (Terms of Use)
Proposed patch (2.85 KB, patch)
2011-01-03 14:14 EST, Karel Klíč
Easy reproducer (638 bytes, text/plain)
2011-01-17 08:28 EST, Sachin Prabhu

Description Sachin Prabhu 2010-12-17 07:32:48 EST
We have a user who reported poor performance in their application when using the dbm_* calls to operate on database files hosted on an NFS share.

The problem was traced to flock calls made by the gdbm_* functions, which in turn are called by the dbm_*() functions used in the application. We noticed thousands of flock calls being made. These are inefficient over an NFS share, since each one results in a call over the wire and also invalidates the cache on the NFS client. Locking calls act as a cache-coherency point on NFS clients, and cached data is invalidated every time a lock call is made. This results in a large number of READ calls over the NFS share, which causes severe performance problems.

These flock calls were unnecessary for this application, since it first obtains a lock on a gatekeeper file in the directory before performing any data operation on the database files. The flock calls therefore only hurt performance without providing any additional protection against data corruption.

The user has confirmed that using gdbm_*() calls directly with the GDBM_NOLOCK option instead of dbm_*() calls shows great improvements in performance.

Note that the man page for the dbm_* calls does state that users should not expect these functions to provide any locking on the data files, and that they should use other locking mechanisms themselves.
From man dbm_open:
--
they do not protect against multi-user access (in other words they do not lock records or files),
--


This request is for either of the following:

1) The dbm_* calls should use the GDBM_NOLOCK option when calling the corresponding gdbm_* calls, OR

2) Provide an option so that, when used, the dbm_*() calls pass the GDBM_NOLOCK flag to the corresponding gdbm_*() calls. A suggested way of passing this option is to check for a 'GDBM_NOLOCK' environment variable in the dbm_* call and, if present, set the GDBM_NOLOCK flag for the corresponding gdbm_*() calls.
Comment 2 Karel Klíč 2011-01-03 14:14:54 EST
Created attachment 471542 [details]
Proposed patch

The dbm_open function reads an environment variable NDBM_LOCK; if this variable is set to false (NDBM_LOCK=false/no/off/0), dbm_open does not lock the database.

The NDBM prefix has been chosen because the variable affects only the NDBM interface. Neither the oldest DBM (dbminit) nor the GDBM (gdbm_open) interface is affected.

The gdbm(3) man page has been extended to document the environment variable.

The patch can be easily tested via strace and the testndbm program from the gdbm upstream archive.
Comment 4 Sachin Prabhu 2011-01-04 10:11:50 EST
I have confirmed that the patch works as expected using a user provided test program.

Test1: Run with env variable $NDBM_LOCK not set.

[root@vm22 test]# echo $NDBM_LOCK

# So $NDBM_LOCK is not set.

# Run the test program under strace to track all syscalls.

[root@vm22 test]# strace -fxvto out ./test
..

[root@vm22 test]# grep flock out |wc -l
1300064

Test2: Set NDBM_LOCK=no and run the same test

# Now test again after getting dbm_* to pass the GDBM_NOLOCK option to the gdbm calls.

[root@vm22 test]# export NDBM_LOCK=no
[root@vm22 test]# echo $NDBM_LOCK
no
[root@vm22 test]# strace -fxvto out ./test
..
[root@vm22 test]# grep flock out |wc -l
0


The test program makes a large number of calls to dbm_open and dbm_read. The gdbm_*() functions invoked by these dbm_*() calls result in a significant number of flock calls (1300064) as counted in the strace output.
By setting NDBM_LOCK=no, the number of flock calls seen in the strace output drops to 0.
Comment 11 Sachin Prabhu 2011-01-17 08:28:13 EST
Created attachment 473838 [details]
Easy reproducer

Simple reproducer to test the changes.

Compile the test program with the following command
gcc -o dbm_test dbm_test.c -lgdbm

First, test on the old library or with NDBM_LOCK unset.
# echo $NDBM_LOCK
# strace ./dbm_test 2>&1|grep flock
flock(3, LOCK_EX|LOCK_NB)               = 0
flock(3, LOCK_UN)                       = 0

This shows that 2 flock calls were made. These are issued by the gdbm library functions invoked by the dbm_* calls.

Install the new package and set NDBM_LOCK:
# export NDBM_LOCK=no
# strace ./dbm_test 2>&1|grep flock

We do not see any flock calls being made.

Since the flock calls over the NFS share are what cause the performance problems, being able to switch them off eliminates the unnecessary performance hit when using gdbm database files over NFS.
