Bug 763997 - (GLUSTER-2265) fuse mount hangs
fuse mount hangs
Status: CLOSED NOTABUG
Product: GlusterFS
Classification: Community
Component: rdma
Version: 3.1.1
Hardware: All  OS: Linux
Priority: urgent  Severity: high
Assigned To: Raghavendra G
Depends On:
Blocks:
Reported: 2010-12-31 05:27 EST by Saurabh
Modified: 2015-12-01 11:45 EST
2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: ---
Regression: ---
Mount Type: fuse
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments: None
Description Saurabh 2010-12-31 05:27:31 EST
The volume mounted over FUSE via RDMA does not work properly; even a cd into the mount point hangs.

I have tried this from different clients, and the problem persists.

Log result (the cd hangs and never returns):
[saurabh@client10 ~]$ cd /mnt/saurabh/glusterfs-test

From another terminal session:

[saurabh@client10 sbin]$ ./glusterfs --version
glusterfs 3.1.2qa3 built on Dec 30 2010 04:20:42
Repository revision: v3.1.1-52-gcbba1c3
Copyright (c) 2006-2010 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU Affero General Public License.
[saurabh@client10 sbin]$

[saurabh@client10 ~]$ ps -eaf | grep gluster
root     16222     1  1 02:17 ?        00:00:00 /home/saurabh/glusterfs/3.1.2qa3/inst/sbin/glusterfs --log-level=NORMAL --volfile-id=/repdist --volfile-server=10.1.10.31 /mnt/saurabh/glusterfs-test/
saurabh  16231 16113  0 02:18 pts/12   00:00:00 grep gluster
[saurabh@client10 ~]$



Statedump from the glusterfs client:
[mallinfo]
mallinfo_arena=67686400
mallinfo_ordblks=8187
mallinfo_smblks=0
mallinfo_hblks=21
mallinfo_hblkhd=55668736
mallinfo_usmblks=0
mallinfo_fsmblks=0
mallinfo_uordblks=38578480
mallinfo_fordblks=29107920
mallinfo_keepcost=272880

[iobuf.global]
iobuf.global.iobuf_pool=0x6181b20
iobuf.global.iobuf_pool.page_size=131072
iobuf.global.iobuf_pool.arena_size=8388608
iobuf.global.iobuf_pool.arena_cnt=1

[iobuf.global.iobuf_pool.arena.1]
iobuf.global.iobuf_pool.arena.1.mem_base=0x2b40b71c8000
iobuf.global.iobuf_pool.arena.1.active_cnt=1
iobuf.global.iobuf_pool.arena.1.passive_cnt=63

[iobuf.global.iobuf_pool.arena.1.active_iobuf.1]
iobuf.global.iobuf_pool.arena.1.active_iobuf.1.ref=1
iobuf.global.iobuf_pool.arena.1.active_iobuf.1.ptr=0x2b40b79a8000

[global.callpool]
global.callpool=0x6182710
global.callpool.cnt=1

[global.callpool.stack.1]
global.callpool.stack.1.uid=0
global.callpool.stack.1.gid=0
global.callpool.stack.1.pid=0
global.callpool.stack.1.unique=0
global.callpool.stack.1.type=0
global.callpool.stack.1.cnt=1

[global.callpool.stack.1.frame.1]
global.callpool.stack.1.frame.1.ref_count=0
global.callpool.stack.1.frame.1.translator=glusterfs
global.callpool.stack.1.frame.1.complete=0

[xlator.mount.fuse.priv]
xlator.mount.fuse.priv.fd=5
xlator.mount.fuse.priv.proto_minor=0
xlator.mount.fuse.priv.volfile=None
xlator.mount.fuse.volfile_size=0
xlator.mount.fuse.mount_point=/mnt/saurabh/glusterfs-test/
xlator.mount.fuse.iobuf=0
xlator.mount.fuse.fuse_thread_started=0
xlator.mount.fuse.direct_io_mode=2
xlator.mount.fuse.entry_timeout=1.000000
xlator.mount.fuse.attribute_timeout=1.000000
xlator.mount.fuse.init_recvd=0
xlator.mount.fuse.strict_volfile_check=0
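For reference, a statedump like the one above can be requested from a running glusterfs client by sending it SIGUSR1; this is a sketch, and the dump location varies by version (3.1-era builds write to /tmp/glusterdump.<pid>, newer releases use /var/run/gluster/). The pgrep pattern in the usage comment is illustrative, not from the original report.

```shell
# Request a statedump from a glusterfs client process.
# SIGUSR1 is the documented trigger; the process writes its state
# (mallinfo, iobuf pools, call pool, xlator state) to a dump file.
request_statedump() {
    kill -USR1 "$1"   # $1 = PID of the glusterfs client process
}

# Example usage on a host with a hung mount (pattern is hypothetical):
#   request_statedump "$(pgrep -f 'glusterfs.*glusterfs-test')"
```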


Please let me know if more information is needed.
Comment 1 Raghavendra G 2011-01-03 20:34:37 EST
The IB fabric was not up and running when glusterfs was started; hence marking this bug as invalid.
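Given that root cause, a pre-mount sanity check can catch this class of failure. A minimal sketch, assuming the libibverbs utilities (ibv_devinfo) are installed; the helper below only parses ibv_devinfo's per-port output, whose exact formatting can vary:

```shell
# Succeeds if the ibv_devinfo output on stdin reports at least one
# InfiniBand port in the PORT_ACTIVE state; fails otherwise.
port_is_active() {
    grep -q 'state:[[:space:]]*PORT_ACTIVE'
}

# Usage on a real host, before mounting a volume over RDMA:
#   ibv_devinfo | port_is_active || echo "IB fabric is down; bring it up before mounting"
```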
