Bug 1286039 - glusterd process crashed while setting the option "cluster.extra-hash-regex"
Status: CLOSED DUPLICATE of bug 1286038
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
3.1
x86_64 Linux
unspecified Severity high
Assigned To: Bug Updates Notification Mailing List
storage-qa-internal@redhat.com
glusterd
Depends On: 1067455
Blocks:
Reported: 2015-11-27 05:08 EST by Susant Kumar Palai
Modified: 2015-12-03 11:37 EST (History)
9 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1067455
Environment:
Last Closed: 2015-12-03 11:37:35 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Susant Kumar Palai 2015-11-27 05:08:59 EST
+++ This bug was initially created as a clone of Bug #1067455 +++

Description of problem: glusterd process crashed while setting the option "cluster.extra-hash-regex".

Backtrace in the glusterd log:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

[2014-02-20 13:29:34.036067] I [glusterd-ping.c:297:glusterd_ping_cbk] 0-management: defaulting ping-timeout to 10s
[2014-02-20 13:29:34.084337] I [glusterd-ping.c:297:glusterd_ping_cbk] 0-management: defaulting ping-timeout to 10s
[2014-02-20 13:29:34.098620] I [glusterd-ping.c:297:glusterd_ping_cbk] 0-management: defaulting ping-timeout to 10s
pending frames:
frame : type(0) op(0)

patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2014-02-20 13:29:43
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.4.0.59rhs
/lib64/libc.so.6[0x328d632960]
/lib64/libc.so.6(_IO_vfprintf+0x3e5c)[0x328d6480ec]
/lib64/libc.so.6(__vasprintf_chk+0xcc)[0x328d701eac]
/usr/lib64/libglusterfs.so.0(_gf_log+0x4e1)[0x7fb52382da81]
/usr/lib64/libglusterfs.so.0(yyerror+0xbe)[0x7fb523873f7e]
/usr/lib64/libglusterfs.so.0(yyparse+0x3a3)[0x7fb5238745c3]
/usr/lib64/libglusterfs.so.0(glusterfs_graph_construct+0x417)[0x7fb5238752e7]
/usr/lib64/glusterfs/3.4.0.59rhs/xlator/mgmt/glusterd.so(glusterd_check_topology_identical+0xa9)[0x7fb51fa181c9]
/usr/lib64/glusterfs/3.4.0.59rhs/xlator/mgmt/glusterd.so(glusterd_check_nfs_topology_identical+0x14a)[0x7fb51fa3d7ca]
/usr/lib64/glusterfs/3.4.0.59rhs/xlator/mgmt/glusterd.so(glusterd_reconfigure_nfs+0x40)[0x7fb51fa19ba0]
/usr/lib64/glusterfs/3.4.0.59rhs/xlator/mgmt/glusterd.so(glusterd_nodesvcs_batch_op+0x51)[0x7fb51fa153c1]
/usr/lib64/glusterfs/3.4.0.59rhs/xlator/mgmt/glusterd.so(glusterd_op_commit_perform+0x15f7)[0x7fb51fa0cd47]
/usr/lib64/glusterfs/3.4.0.59rhs/xlator/mgmt/glusterd.so(gd_commit_op_phase+0xbe)[0x7fb51fa685fe]
/usr/lib64/glusterfs/3.4.0.59rhs/xlator/mgmt/glusterd.so(gd_sync_task_begin+0x2c2)[0x7fb51fa6a262]
/usr/lib64/glusterfs/3.4.0.59rhs/xlator/mgmt/glusterd.so(glusterd_op_begin_synctask+0x3b)[0x7fb51fa6a39b]
/usr/lib64/glusterfs/3.4.0.59rhs/xlator/mgmt/glusterd.so(__glusterd_handle_set_volume+0x496)[0x7fb51f9f5d36]
/usr/lib64/glusterfs/3.4.0.59rhs/xlator/mgmt/glusterd.so(glusterd_big_locked_handler+0x3f)[0x7fb51f9f378f]
/usr/lib64/libglusterfs.so.0(synctask_wrap+0x12)[0x7fb52385c192]
/lib64/libc.so.6[0x328d643bb0]

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>


Version-Release number of selected component (if applicable): glusterfs-3.4.0.59rhs-1


How reproducible: Does not happen every time.


Steps to Reproduce:
1. Create and start a volume (6x2, across 4 nodes).
2. Run the command: gluster volume set <VOL> cluster.extra-hash-regex '(.*)\.swp*'
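The regex value in step 2 contains shell metacharacters, so it must be single-quoted to reach glusterd verbatim. A minimal sketch of step 2 (the volume name "testvol" is an assumption; the gluster line is commented out since it needs a live cluster):

```shell
# Single quotes keep the shell from globbing '*' or eating the backslash,
# so the pattern below is handed to the CLI exactly as written.
PATTERN='(.*)\.swp*'
printf '%s\n' "$PATTERN"   # shows the exact value that would be passed
# gluster volume set testvol cluster.extra-hash-regex "$PATTERN"
```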


Actual results: glusterd crashed.


Expected results: glusterd should not crash.



--- Additional comment from Atin Mukherjee on 2015-11-17 14:23:51 MVT ---

Since the option is a DHT tunable, moving this bug to zteam.

--- Additional comment from Susant Kumar Palai on 2015-11-27 15:07:21 MVT ---

Cloning this bug for 3.1. To be fixed in a future release.
Comment 2 SATHEESARAN 2015-12-03 11:37:35 EST

*** This bug has been marked as a duplicate of bug 1286038 ***
