Bug 663932

Summary: dbm_* calls should call the underlying gdbm_* calls using the GDBM_NOLOCK flag
Product: Red Hat Enterprise Linux 5
Reporter: Sachin Prabhu <sprabhu>
Component: gdbm
Assignee: Honza Horak <hhorak>
Status: CLOSED CURRENTRELEASE
QA Contact: qe-baseos-daemons
Severity: high
Priority: urgent
Version: 5.5
CC: azelinka, jwest, pmuller, rvokal, tsmetana
Target Milestone: rc
Keywords: ZStream
Hardware: Unspecified
OS: Linux
Doc Type: Bug Fix
Clones: 668702, 1178101, 1208437 (view as bug list)
Last Closed: 2013-09-23 11:02:09 UTC
Bug Blocks: 668689, 668702, 1178101, 1208437
Attachments:
  Proposed patch (flags: none)
  Easy reproducer (flags: none)

Description Sachin Prabhu 2010-12-17 12:32:48 UTC
We have a user who reported poor performance of their application when using the dbm_* calls to perform operations on database files hosted on an NFS share.

The problem was traced to flock calls made by the gdbm_* functions, which are in turn called by the dbm_*() functions used in the application. We noticed thousands of flock calls being made. These are inefficient over an NFS share: each one results in a call over the wire and invalidates the cache on the NFS client. Locking calls are used as a cache coherency point on NFS clients, and the cached data is invalidated every time a lock call is made. This results in a large number of READ calls over the NFS share, which causes severe performance problems.

These flock calls were unnecessary for this application because it first obtains a lock on a gatekeeper file in the directory before performing any data operation on the database files. The flock calls therefore only cause performance problems for this application without being needed to protect against data corruption.

The user has confirmed that using the gdbm_*() calls directly with the GDBM_NOLOCK flag instead of the dbm_*() calls gives a large improvement in performance.
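For reference, a minimal sketch of that workaround (the database path and record below are placeholders, not taken from the user's application):

/* Open the database through the GDBM interface directly, passing
 * GDBM_NOLOCK so that gdbm issues no flock() calls of its own.
 * The caller is then responsible for its own locking. */
#include <stdio.h>
#include <gdbm.h>

int main(void)
{
    GDBM_FILE dbf = gdbm_open("/mnt/nfs/data.db", 0,
                              GDBM_WRCREAT | GDBM_NOLOCK, 0644, NULL);
    if (dbf == NULL) {
        fprintf(stderr, "gdbm_open failed: %s\n",
                gdbm_strerror(gdbm_errno));
        return 1;
    }

    datum key = { "key", 3 };
    datum val = { "value", 5 };
    gdbm_store(dbf, key, val, GDBM_REPLACE);   /* no flock() issued */

    gdbm_close(dbf);
    return 0;
}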

Note that the man page for the dbm_* calls does mention that users should not expect these functions to provide any locking on the data files and that they should use other locking mechanisms themselves.
From man dbm_open:
--
they do not protect against multi-user access (in other words they do not lock records or files),
--


This request is for one of the following:

1) dbm_* calls should use the GDBM_NOLOCK flag when calling the corresponding gdbm_* calls, OR

2) An option which, when set, makes the dbm_*() calls pass the GDBM_NOLOCK flag to the corresponding gdbm_*() calls. A suggested way of passing this option is to check for a 'GDBM_NOLOCK' environment variable in the dbm_* calls and, if it is present, set the GDBM_NOLOCK flag for the corresponding gdbm_*() calls.

Comment 2 Karel Klíč 2011-01-03 19:14:54 UTC
Created attachment 471542 [details]
Proposed patch

The dbm_open function reads the environment variable NDBM_LOCK; if this variable is set to a false value (NDBM_LOCK=false/no/off/0), then dbm_open does not lock the database.

The NDBM prefix was chosen because the variable affects only the NDBM interface. Neither the oldest DBM interface (dbminit) nor the GDBM interface (gdbm_open) is affected.
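
A minimal sketch of the check the patch introduces (the helper name and exact flag plumbing below are illustrative; the authoritative change is in the attached patch):

#include <stdlib.h>
#include <string.h>
#include <strings.h>

/* Illustrative helper: returns nonzero when NDBM_LOCK is set to a
 * "false" value (false/no/off/0). */
static int ndbm_locking_disabled(void)
{
    const char *val = getenv("NDBM_LOCK");

    if (val == NULL)
        return 0;
    return strcasecmp(val, "false") == 0 ||
           strcasecmp(val, "no") == 0 ||
           strcasecmp(val, "off") == 0 ||
           strcmp(val, "0") == 0;
}

/* Inside dbm_open(), the flags handed to gdbm_open() would then gain
 * GDBM_NOLOCK when locking is disabled, e.g.:
 *
 *     if (ndbm_locking_disabled())
 *         gdbm_flags |= GDBM_NOLOCK;
 */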

The gdbm(3) man page has been extended to document the environment variable.

The patch can easily be tested via strace and the testndbm program from the gdbm upstream archive.

Comment 4 Sachin Prabhu 2011-01-04 15:11:50 UTC
I have confirmed that the patch works as expected using a user-provided test program.

Test1: Run with env variable $NDBM_LOCK not set.

[root@vm22 test]# echo $NDBM_LOCK

# So $NDBM_LOCK is not set.

# Run the test program under strace to track all syscalls.

[root@vm22 test]# strace -fxvto out ./test
..

[root@vm22 test]# grep flock out |wc -l
1300064

Test2: Set NDBM_LOCK=no and run the same test

# Now run the same test after getting dbm_* to pass the GDBM_NOLOCK flag to the gdbm calls.

[root@vm22 test]# export NDBM_LOCK=no
[root@vm22 test]# echo $NDBM_LOCK
no
[root@vm22 test]# strace -fxvto out ./test
..
[root@vm22 test]# grep flock out |wc -l
0


The test program makes a number of calls to dbm_open and dbm_read. The calls these dbm_*() functions make to the gdbm_*() functions result in a significant number of flock calls (1300064), as counted in the strace output.
By setting NDBM_LOCK=no, the number of flock calls seen in the strace output drops to 0.

Comment 11 Sachin Prabhu 2011-01-17 13:28:13 UTC
Created attachment 473838 [details]
Easy reproducer

Simple reproducer to test the changes.

Compile the test program with the following command:
gcc -o dbm_test dbm_test.c -lgdbm
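
The attached reproducer is not reproduced here; a stand-in along the following lines (database name and record are placeholders) exercises the NDBM interface enough to trigger, or with NDBM_LOCK=no to suppress, the flock calls:

/* Hypothetical stand-in for dbm_test.c: open an NDBM database, store
 * one record and fetch it back.  With the old library (or NDBM_LOCK
 * unset) the underlying gdbm code issues flock() calls; with
 * NDBM_LOCK=no it does not. */
#include <stdio.h>
#include <fcntl.h>
#include <ndbm.h>          /* provided by gdbm; on some systems <gdbm-ndbm.h> */

int main(void)
{
    DBM *db = dbm_open("dbm_test_db", O_RDWR | O_CREAT, 0644);
    if (db == NULL) {
        perror("dbm_open");
        return 1;
    }

    datum key = { "key", 3 };
    datum val = { "value", 5 };
    dbm_store(db, key, val, DBM_REPLACE);

    datum out = dbm_fetch(db, key);
    if (out.dptr != NULL)
        printf("fetched: %.*s\n", (int)out.dsize, (char *)out.dptr);

    dbm_close(db);
    return 0;
}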

First, test on the old library or with NDBM_LOCK unset.
# echo $NDBM_LOCK
# strace ./dbm_test 2>&1|grep flock
flock(3, LOCK_EX|LOCK_NB)               = 0
flock(3, LOCK_UN)                       = 0

This shows that 2 flock calls were made. They come from the gdbm library functions invoked by the dbm calls.

Install the new package and set NDBM_LOCK:
# export NDBM_LOCK=no
# strace ./dbm_test 2>&1|grep flock

We do not see any flock calls being made.

Since the flock calls on the NFS share are the ones responsible for the performance problems, being able to switch them off eliminates the unnecessary performance hit when using gdbm database files over NFS.