Bug 982532 - nfs: dbench failed on NFS mount when the bricks were offlined -> onlined
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gluster-nfs
Version: 2.1
Hardware: x86_64
OS: Unspecified
Priority: medium
Severity: high
Assigned To: Niels de Vos
QA Contact: storage-qa-internal@redhat.com
Depends On:
Blocks:
 
Reported: 2013-07-09 05:03 EDT by Rahul Hinduja
Modified: 2015-12-03 12:20 EST
CC: 5 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 12:20:44 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Rahul Hinduja 2013-07-09 05:03:11 EDT
Description of problem:
=======================

While dbench was in progress on both the FUSE and NFS mounts, the bricks from each replica pair in a 6*2 setup were brought down and then brought back online after some time. dbench failed on the NFS mount with the following error:

[19027] unlink ./clients/client6/~dmtmp/WORD/~WRL1146.TMP failed (No such file or directory) - expected NT_STATUS_OK
ERROR: child 6 failed at line 19027
Child failed with status 1
[root@darrel n]# [19196] unlink ./clients/client8/~dmtmp/PARADOX/__S31.DB failed (No such file or directory) - expected NT_STATUS_OK
ERROR: child 8 failed at line 19196

[root@darrel n]# 



Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.4.0.12rhs.beta3-1.el6.x86_64
glusterfs-fuse-3.4.0.12rhs.beta3-1.el6.x86_64
glusterfs-rdma-3.4.0.12rhs.beta3-1.el6.x86_64
glusterfs-debuginfo-3.4.0.12rhs.beta3-1.el6.x86_64
glusterfs-devel-3.4.0.12rhs.beta3-1.el6.x86_64
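(A package list like this is typically gathered with rpm -qa | grep glusterfs on the affected nodes.)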


Steps Carried:
==============
1. Created and started a 6*2 volume from 4 servers (server1-4)
2. Mounted the volume on the client (FUSE and NFS)
3. Created directories f and n from the FUSE mount
4. cd'd into f on the FUSE mount and into n on the NFS mount
5. Turned metadata, data, and entry self-heal off using:

for i in metadata data entry ; do gluster volume set <volume_name> $i-self-heal off ; done

6. Started dbench from both the FUSE (f) and NFS (n) directories using:

dbench -s -F -S --stat-check 10

7. While dbench was in progress, brought down the bricks on server1 using kill -9 and powered down server3
8. After a few seconds (approx. 30), brought server3 back up
9. After 2 minutes, force-started the volume (gluster volume start vol-dr force); see the sketch after this list
10. dbench failed on NFS
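
A minimal shell sketch of steps 5, 7, and 9 above, assuming the volume name vol-dr from step 9 (the pkill pattern and the verification commands are illustrative additions, not taken from the report):

    # step 5: disable the three self-heal types, then verify the options show as 'off'
    for i in metadata data entry ; do gluster volume set vol-dr $i-self-heal off ; done
    gluster volume info vol-dr

    # step 7, on server1: kill -9 the brick daemons (glusterfsd)
    pkill -9 glusterfsd

    # step 9: restart any bricks that are still offline, then confirm they are back online
    gluster volume start vol-dr force
    gluster volume status vol-dr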

Actual results:
===============

On the FUSE mount, dbench ran successfully.

On the NFS mount, it failed with the following error:

  10     19579     5.05 MB/sec  execute 167 sec  latency 4854.036 ms
  10     19579     5.02 MB/sec  execute 168 sec  latency 5854.099 ms
[19027] unlink ./clients/client6/~dmtmp/WORD/~WRL1146.TMP failed (No such file or directory) - expected NT_STATUS_OK
ERROR: child 6 failed at line 19027
Child failed with status 1
[root@darrel n]# [19196] unlink ./clients/client8/~dmtmp/PARADOX/__S31.DB failed (No such file or directory) - expected NT_STATUS_OK
ERROR: child 8 failed at line 19196

Expected results:
=================

dbench should not fail. It may hang for a few seconds while the brick processes are restarted (via start force), but it should then continue; it should not fail.
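
Whether the replicas still had entries pending self-heal when the unlinks failed could be checked with the heal status command (volume name vol-dr assumed, as above); this is a suggested diagnostic, not something captured in the original report:

    gluster volume heal vol-dr info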
Comment 3 rjoseph 2013-08-26 06:11:53 EDT
Can you please mention from which server NFS was mounted?
Comment 4 Rahul Hinduja 2013-09-02 03:22:22 EDT
NFS was mounted from the rhs-client11 server.
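
For context, a Gluster NFS mount of this kind would typically look like the following (the mount point and volume name are illustrative; Gluster's built-in NFS server serves NFSv3 only):

    mount -t nfs -o vers=3,tcp rhs-client11:/vol-dr /mnt/n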
Comment 5 Vivek Agarwal 2015-12-03 12:20:44 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you asked us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
