Bug 765375 - (GLUSTER-3643) Core dump on mkdir
Status: CLOSED WORKSFORME
Product: GlusterFS
Classification: Community
Component: access-control
Version: 3.2.3
Platform: i386 Linux
Priority: low  Severity: high
Assigned To: shishir gowda
Reported: 2011-09-27 03:17 EDT by jrroca
Modified: 2015-12-01 11:45 EST
CC: 4 users

Doc Type: Bug Fix
Last Closed: 2012-01-25 05:33:06 EST


Attachments: None
Description jrroca 2011-09-27 03:17:21 EDT
When you try mkdir, a core dump results. Here are the logs and the config.


Given volfile:
+------------------------------------------------------------------------------+
  1: volume desafs-client-0
  2:     type protocol/client
  3:     option remote-host bzfsdesa
  4:     option remote-subvolume /almacen.gluster
  5:     option transport-type tcp
  6: end-volume
  7:
  8: volume desafs-write-behind
  9:     type performance/write-behind
 10:     subvolumes desafs-client-0
 11: end-volume
 12:
 13: volume desafs-read-ahead
 14:     type performance/read-ahead
 15:     subvolumes desafs-write-behind
 16: end-volume
 17:
 18: volume desafs-io-cache
 19:     type performance/io-cache
 20:     subvolumes desafs-read-ahead
 21: end-volume
 22:
 23: volume desafs-quick-read
 24:     type performance/quick-read
 25:     subvolumes desafs-io-cache
 26: end-volume
 27:
 28: volume desafs-stat-prefetch
 29:     type performance/stat-prefetch
 30:     subvolumes desafs-quick-read
 31: end-volume
 32:
 33: volume desafs
 34:     type debug/io-stats
 35:     option latency-measurement off
 36:     option count-fop-hits off
 37:     subvolumes desafs-stat-prefetch
 38: end-volume

+------------------------------------------------------------------------------+

pending frames:
frame : type(1) op(MKDIR)
frame : type(1) op(MKDIR)

patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2011-09-26 18:26:00
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.2.3
[0xb788a400]
/usr/local/lib/glusterfs/3.2.3/xlator/system/posix-acl.so(posix_acl_inherit+0x97)[0xb5746bd7]
/usr/local/lib/glusterfs/3.2.3/xlator/system/posix-acl.so(posix_acl_inherit_dir+0x3a)[0xb57474ca]
/usr/local/lib/glusterfs/3.2.3/xlator/system/posix-acl.so(posix_acl_mkdir+0x145)[0xb5747615]
/usr/local/lib/glusterfs/3.2.3/xlator/mount/fuse.so(fuse_mkdir_resume+0x18d)[0xb6a9fa6d]
/usr/local/lib/glusterfs/3.2.3/xlator/mount/fuse.so(fuse_resolve_and_resume+0x5c)[0xb6a8d88c]
/usr/local/lib/glusterfs/3.2.3/xlator/mount/fuse.so(+0xb151)[0xb6a94151]
/usr/local/lib/glusterfs/3.2.3/xlator/mount/fuse.so(+0xd304)[0xb6a96304]
/lib/i686/cmov/libpthread.so.0(+0x5955)[0xb77e9955]
/lib/i686/cmov/libc.so.6(clone+0x5e)[0xb7769e7e]

[ 6666.710357] glusterfs[18047]: segfault at 0 ip b7725393 sp bfab8c80 error 4 in libglusterfs.so.0.0.0[b7710000+51000]


Mount line:

bzfsdesa:/desafs        /almacen        glusterfs       defaults,acl,_netdev,log-level=WARNING,log-file=/var/log/gluster.log 0  0
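Since every frame in the backtrace above runs through posix-acl.so, one quick check (assuming the crash really is tied to the acl mount option) would be to remount without that option and retry the mkdir. The fstab line would then read:

```
bzfsdesa:/desafs  /almacen  glusterfs  defaults,_netdev,log-level=WARNING,log-file=/var/log/gluster.log  0  0
```

If mkdir succeeds with this line, the ACL path is confirmed as the trigger.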

Command line:


host:/almacen/homes# mkdir p
mkdir: cannot create directory «p»: Software caused connection abort
host:/almacen/homes#
host:/almacen/homes# ls
ls: cannot open directory .: Transport endpoint is not connected
Comment 1 shishir gowda 2011-09-27 23:50:29 EDT
Can you please provide more info on your setup?
Is the issue reproducible on every call to mkdir?
If yes, ways to reproduce the issue.
Also please attach the client logs so that we can look into the issue.
Comment 2 shishir gowda 2012-01-09 07:39:19 EST
Please provide more details.
Without any further information we are not able to triage the issue, and we may have to close the bug.
Comment 3 shishir gowda 2012-01-25 05:33:06 EST
We have not received any more information from the bug reporter, and we are not able to reproduce the bug locally. Please reopen the bug if it is reproduced, and please provide logs.
