Bug 1197682 - nfs: E [rpc-clnt.c:201:call_bail] 0-vol0-client-8: bailing out frame type(GlusterFS 3.3) op(FXATTROP(34))
Summary: nfs: E [rpc-clnt.c:201:call_bail] 0-vol0-client-8: bailing out frame type(GlusterFS 3.3) op(FXATTROP(34))
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-nfs
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Niels de Vos
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-03-02 11:49 UTC by Saurabh
Modified: 2018-04-16 18:03 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:


Attachments

Description Saurabh 2015-03-02 11:49:35 UTC
Description of problem:
I see log messages of the following type in nfs.log:
[2015-03-02 04:33:51.838193] E [rpc-clnt.c:201:call_bail] 0-vol0-client-8: bailing out frame type(GlusterFS 3.3) op(FXATTROP(34)) xid = 0xc79da sent = 2015-03-02 04:03:41.129922. timeout = 1800 for 10.70.37.187:49154
[2015-03-02 04:33:51.838208] W [client-rpc-fops.c:1804:client3_3_fxattrop_cbk] 0-vol0-client-8: remote operation failed: Transport endpoint is not connected
[2015-03-02 04:33:51.838324] E [rpc-clnt.c:201:call_bail] 0-vol0-client-8: bailing out frame type(GlusterFS 3.3) op(FXATTROP(34)) xid = 0xc79d9 sent = 2015-03-02 04:03:41.054829. timeout = 1800 for 10.70.37.187:49154
[2015-03-02 04:33:51.838341] W [client-rpc-fops.c:1804:client3_3_fxattrop_cbk] 0-vol0-client-8: remote operation failed: Transport endpoint is not connected
[2015-03-02 04:33:51.838436] E [rpc-clnt.c:201:call_bail] 0-vol0-client-8: bailing out frame type(GlusterFS 3.3) op(FXATTROP(34)) xid = 0xc79d8 sent = 2015-03-02 04:03:41.022197. timeout = 1800 for 10.70.37.187:49154
[2015-03-02 04:33:51.838452] W [client-rpc-fops.c:1804:client3_3_fxattrop_cbk] 0-vol0-client-8: remote operation failed: Transport endpoint is not connected
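
The call_bail messages mean these RPC frames had been pending longer than the frame timeout: the frames above were sent at 04:03:41 and bailed at 04:33:51, i.e. just past the 1800-second (30-minute) timeout printed in the message. A minimal sketch of how to inspect the volume and, if needed, raise that timeout, assuming the standard network.frame-timeout volume option; the 3600-second value is only an example:

# Show the volume's reconfigured options; network.frame-timeout appears
# here only if it has been changed from its 1800-second default.
gluster volume info vol0

# Example value only: raise the RPC frame timeout to 60 minutes.
gluster volume set vol0 network.frame-timeout 3600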

A similar BZ was filed earlier:
https://bugzilla.redhat.com/show_bug.cgi?id=905415

Version-Release number of selected component (if applicable):
glusterfs-3.6.0.47-1.el6rhs.x86_64

How reproducible:
Seen on this build.

Steps to Reproduce:
1. Create a 6x2 distributed-replicate volume and start it.
2. Enable quota and set limits on directories.
3. Set the client and server event threads to 5.
4. Start creating a large file, e.g. a 5GB one (see the sketch after this list).
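
A condensed shell sketch of the steps above, using the brick layout from Additional info; the quota directory /dir1, the 10GB limit, and the mount point are illustrative assumptions:

# 1. Create and start the 6x2 distributed-replicate volume
#    (consecutive brick pairs form the replica sets).
gluster volume create vol0 replica 2 \
    10.70.37.187:/rhs/brick1/d1r1 10.70.37.207:/rhs/brick1/d1r2 \
    10.70.37.179:/rhs/brick1/d2r1 10.70.37.71:/rhs/brick1/d2r2 \
    10.70.37.187:/rhs/brick1/d3r1 10.70.37.207:/rhs/brick1/d3r2 \
    10.70.37.179:/rhs/brick1/d4r1 10.70.37.71:/rhs/brick1/d4r2 \
    10.70.37.187:/rhs/brick1/d5r1 10.70.37.207:/rhs/brick1/d5r2 \
    10.70.37.179:/rhs/brick1/d6r1 10.70.37.71:/rhs/brick1/d6r2
gluster volume start vol0

# 2. Enable quota; the directory limit is applied after the directory
#    is created on the mount in step 4.
gluster volume quota vol0 enable

# 3. Set client and server event threads to 5.
gluster volume set vol0 client.event-threads 5
gluster volume set vol0 server.event-threads 5

# 4. Mount over gluster-nfs (NFSv3), create the quota-limited directory
#    (path and 10GB limit assumed), and write a 5GB file.
mount -t nfs -o vers=3 10.70.37.187:/vol0 /mnt/vol0
mkdir /mnt/vol0/dir1
gluster volume quota vol0 limit-usage /dir1 10GB
dd if=/dev/zero of=/mnt/vol0/dir1/bigfile bs=1M count=5120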

Actual results:
call_bail errors and "remote operation failed: Transport endpoint is not connected" warnings appear in nfs.log, as shown above.

Expected results:
These log messages appear to be spurious; they should not be emitted, and at present they make it hard to tell whether a real problem exists while the operations are in progress.

Additional info:

Volume Name: vol0
Type: Distributed-Replicate
Volume ID: 6dfb9bfc-2682-4596-80eb-8149f0e681dd
Status: Started
Snap Volume: no
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.187:/rhs/brick1/d1r1
Brick2: 10.70.37.207:/rhs/brick1/d1r2
Brick3: 10.70.37.179:/rhs/brick1/d2r1
Brick4: 10.70.37.71:/rhs/brick1/d2r2
Brick5: 10.70.37.187:/rhs/brick1/d3r1
Brick6: 10.70.37.207:/rhs/brick1/d3r2
Brick7: 10.70.37.179:/rhs/brick1/d4r1
Brick8: 10.70.37.71:/rhs/brick1/d4r2
Brick9: 10.70.37.187:/rhs/brick1/d5r1
Brick10: 10.70.37.207:/rhs/brick1/d5r2
Brick11: 10.70.37.179:/rhs/brick1/d6r1
Brick12: 10.70.37.71:/rhs/brick1/d6r2
Options Reconfigured:
features.quota-deem-statfs: on
client.event-threads: 5
server.event-threads: 5
features.quota: on
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
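
Since the bailed frames were all destined for 10.70.37.187:49154, i.e. one of the three bricks hosted on that node (d1r1, d3r1, or d5r1 above), a quick sanity check, assuming the standard status command, is to confirm that brick was online and listening on that port:

# The Port and Online columns identify which brick owns 49154 and
# whether it was up when the frames bailed out.
gluster volume status vol0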

