+++ This bug was initially created as a clone of Bug #765094 +++

I was running the sanity suite on a distributed-replicate volume. Only 5 of the 20 tests passed. The failing tests were:

- kernel compile
- dd
- large file reading
- dbench
- glusterfs build
- openssl
- postmark
- multiple files
- fsx
- arequal
- syscallbench
- tiobench
- locktests

Along with sanity, I was running the script below in parallel to trigger graph changes:

VOLNAME=vol
while [ 1 ]
do
    gluster volume set $VOLNAME stat-prefetch off
    sleep 300;
    gluster volume set $VOLNAME read-ahead off
    sleep 300;
    gluster volume set $VOLNAME quick-read off
    sleep 300;
    gluster volume set $VOLNAME io-cache off
    sleep 300;
    gluster volume set $VOLNAME write-behind off
    sleep 300;
    gluster volume set $VOLNAME read-ahead off
    sleep 600;
    echo 3 > /proc/sys/vm/drop_caches;
    gluster volume reset $VOLNAME
    sleep 1200;
done;

The client log file is attached.

--- Additional comment from amarts on 2012-05-28 06:39:44 EDT ---

Pranith's recent fixes to locks should fix this. Can anyone confirm?
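For reproducing the failures, any sustained I/O against the FUSE mount run alongside the option-toggle loop above should suffice; a minimal sketch using dd (one of the failing tests), assuming a hypothetical mount point /mnt/vol:

#!/bin/bash
# Continuous dd writes against the FUSE mount while the toggle loop
# above forces client graph changes. /mnt/vol is a placeholder; use
# the actual mount point of the volume under test.
MOUNT=/mnt/vol
while true
do
    dd if=/dev/zero of=$MOUNT/ddtest bs=1M count=1024 conv=fsync
    rm -f $MOUNT/ddtest
done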
Ran FS sanity on a 2*2 (distributed-replicate) volume over a FUSE mount, in parallel with the graph-change script mentioned in the bug. FS sanity completed successfully.

Verified with build:
====================
[12/07/12 - 08:45:47 root@dhcp159-57 ~]# gluster --version
glusterfs 3.3.0.5rhs built on Nov 8 2012 22:30:35
(glusterfs-3.3.0.5rhs-37.el6rhs.x86_64)

Moving the bug to VERIFIED.
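To confirm during such a run that the option toggles are actually taking effect (and hence that client graph changes are being exercised), the volume's reconfigured options can be inspected between iterations; a minimal sketch, assuming the volume name vol from the script in the description:

VOLNAME=vol
# 'gluster volume info' lists non-default settings under
# "Options Reconfigured"; after the 'gluster volume reset' step
# in the loop, that list should be empty again.
gluster volume info $VOLNAME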
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html