Bug 854622 - [c92f45c7420ced52bdfdadbfb15f296ac6c9e109]: Frame bailing out at FINODELK
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.0
Hardware: x86_64 Linux
Priority: medium  Severity: medium
Assigned To: Amar Tumballi
QA Contact: Rahul Hinduja
Depends On: 765094
Blocks:
Reported: 2012-09-05 09:09 EDT by Vidya Sakar
Modified: 2013-12-18 19:08 EST
CC List: 6 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 765094
Environment:
Last Closed: 2013-09-23 18:33:18 EDT
Type: ---
Regression: ---
Mount Type: fuse
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Vidya Sakar 2012-09-05 09:09:15 EDT
+++ This bug was initially created as a clone of Bug #765094 +++

I was running the sanity test suite on a distributed-replicate volume. Only 5 of the 20 tests passed.

The failing tests:
kernel compile failed
dd failed
Large file reading failed
dbench failed
glusterfs build failed
openssl failed
postmark failed
multiple files failed
fsx failed
arequal failed
syscallbench failed
tiobench failed
locktests failed

Along with sanity, I ran the script below in parallel to trigger graph changes.

#!/bin/bash
# Toggle the performance translators every few minutes so that the client
# keeps switching to a new graph while the sanity tests are running.

VOLNAME=vol

while true
do
  gluster volume set $VOLNAME stat-prefetch off
  sleep 300
  gluster volume set $VOLNAME read-ahead off
  sleep 300
  gluster volume set $VOLNAME quick-read off
  sleep 300
  gluster volume set $VOLNAME io-cache off
  sleep 300
  gluster volume set $VOLNAME write-behind off
  sleep 300
  gluster volume set $VOLNAME read-ahead off
  sleep 600
  echo 3 > /proc/sys/vm/drop_caches  # drop page cache, dentries, and inodes
  gluster volume reset $VOLNAME      # restore defaults, forcing another graph change
  sleep 1200
done

The client log file is attached.
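[Editor's note] For anyone triaging a similar failure, a minimal sketch of scanning the FUSE client log for the symptom in this bug's title. The log path is an assumption (glusterfs names the client log after the mount point by default), and the exact message text may vary between releases:

# Assumed client log path for a volume mounted at /mnt/vol.
LOG=/var/log/glusterfs/mnt-vol.log

# "bailing out frame ... op(FINODELK...)" is the rpc-layer message behind this
# bug's title; "switched to graph" marks the graph changes the script provokes.
grep -E 'bailing out frame|FINODELK|switched to graph' "$LOG"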

--- Additional comment from amarts@redhat.com on 2012-05-28 06:39:44 EDT ---

Pranith's recent fixes to locks should address this. Can anyone confirm?
Comment 2 Rahul Hinduja 2012-12-07 04:33:17 EST
Ran FS sanity on a 2x2 distributed-replicate volume (FUSE mount) in parallel with the graph-change script mentioned in the bug.

FS sanity completed successfully.

Verified with build:
====================

[12/07/12 - 08:45:47 root@dhcp159-57 ~]# gluster --version 
glusterfs 3.3.0.5rhs built on Nov  8 2012 22:30:35

(glusterfs-3.3.0.5rhs-37.el6rhs.x86_64)
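[Editor's note] For reference, a minimal sketch of creating the kind of 2x2 distributed-replicate volume used for this verification; the hostnames and brick paths are hypothetical:

# Four bricks with replica count 2 give a 2x2 (distribute 2, replicate 2) volume.
gluster volume create vol replica 2 \
  server1:/bricks/b1 server2:/bricks/b1 \
  server1:/bricks/b2 server2:/bricks/b2
gluster volume start vol

# FUSE mount, matching the "Mount Type: fuse" field above.
mount -t glusterfs server1:/vol /mnt/vol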

Moving the bug to the verified state.
Comment 5 Scott Haines 2013-09-23 18:33:18 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
