Description of problem:
=======================
glusterd crashed on a few nodes while creating 2 different CGs simultaneously on different volumes.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs 3.4.0.snap.dec03.2013git

How reproducible:

Steps to Reproduce:
===================
1. Create 4 distribute-replicate volumes and start them.
2. Mount the volumes and create files.
3. Create a CG with the first 2 volumes from the first node and, simultaneously, create another CG with the 3rd and 4th volumes from a different session.

first node:
----------
gluster snapshot create volume3 volume4 -n CG1 -d "this is new CG"
snapshot create: CG1: consistency group created successfully

[first node : different session]
--------------------------------
gluster snapshot create volume5 volume6 -n CG2 -d "this is new CG"
snapshot create: CG2: consistency group created successfully

Both CGs were created successfully, but glusterd crashed on a few nodes.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(gdb) bt
#0  0x0000003f7944812c in vfprintf () from /lib64/libc.so.6
#1  0x0000003f795000eb in __fprintf_chk () from /lib64/libc.so.6
#2  0x0000003ad20488e0 in fprintf (fd=60, key=0x7f2bd65b0134 "username", value=0x6c6f760032656d61 <Address 0x6c6f760032656d61 out of bounds>) at /usr/include/bits/stdio2.h:98
#3  gf_store_save_value (fd=60, key=0x7f2bd65b0134 "username", value=0x6c6f760032656d61 <Address 0x6c6f760032656d61 out of bounds>) at store.c:325
#4  0x00007f2bd655cf9c in glusterd_volume_exclude_options_write (fd=60, volinfo=0x7f2bcc1ba720) at glusterd-store.c:666
#5  0x00007f2bd655d1aa in glusterd_store_volinfo_write (fd=60, volinfo=0x7f2bcc1ba720) at glusterd-store.c:840
#6  0x00007f2bd655f797 in glusterd_store_perform_snap_volume_store (volinfo=0x7f2bcc00d3e0, snap_volinfo=0x7f2bcc1ba720) at glusterd-store.c:1356
#7  0x00007f2bd655f83f in glusterd_store_snap_volume (volinfo=0x7f2bcc00d3e0, snap=0x7f2bcc1bb770) at glusterd-store.c:1401
#8  0x00007f2bd655fb13 in glusterd_store_perform_snap_store (volinfo=0x7f2bcc00d3e0) at glusterd-store.c:1502
#9  0x00007f2bd659dc19 in glusterd_do_snap (volinfo=0x7f2bcc00d3e0, snapname=0x7f2bc00304e0 "CG3_volume5_snap", dict=0x7f2bd87cd650, cg=<value optimized out>, cg_id=0x7f2bc000b340, volcount=1, snap_volid=0x7f2bc000b2c0 "$B\277\311\033GI\275\201\346\243QX'\376\257 ") at glusterd-snapshot.c:3132
#10 0x00007f2bd659f5a5 in glusterd_snapshot_create_commit (dict=<value optimized out>, op_errstr=0x1d94b70, rsp_dict=<value optimized out>) at glusterd-snapshot.c:3370
#11 0x00007f2bd659f92e in glusterd_snapshot (dict=0x7f2bd87cd650, op_errstr=0x1d94b70, rsp_dict=0x7f2bd87cdce0) at glusterd-snapshot.c:3588
#12 0x00007f2bd65a358e in gd_mgmt_v3_commit_fn (op=GD_OP_SNAP, dict=0x7f2bd87cd650, op_errstr=0x1d94b70, rsp_dict=0x7f2bd87cdce0) at glusterd-mgmt.c:174
#13 0x00007f2bd65a0813 in glusterd_handle_commit_fn (req=0x7f2bd6492818) at glusterd-mgmt-handler.c:546
#14 0x00007f2bd6519d4f in glusterd_big_locked_handler (req=0x7f2bd6492818, actor_fn=0x7f2bd65a05d0 <glusterd_handle_commit_fn>) at glusterd-handler.c:78
#15 0x0000003ad204cdd2 in synctask_wrap (old_task=<value optimized out>) at syncop.c:293
#16 0x0000003f79443bf0 in ?? () from /lib64/libc.so.6
#17 0x0000000000000000 in ?? ()
(gdb) f 3
#3  gf_store_save_value (fd=60, key=0x7f2bd65b0134 "username", value=0x6c6f760032656d61 <Address 0x6c6f760032656d61 out of bounds>) at store.c:325
325         ret = fprintf (fp, "%s=%s\n", key, value);
(gdb) p key
$1 = 0x7f2bd65b0134 "username"
(gdb) p value
$2 = 0x6c6f760032656d61 <Address 0x6c6f760032656d61 out of bounds>
(gdb) quit
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Actual results:
===============
glusterd crashed while creating 2 different CGs simultaneously on different volumes.

Expected results:

Additional info:
http://rhsqe-repo.lab.eng.blr.redhat.com/bugs_necessary_info/snapshots/1042795/
Marking snapshot BZs to RHS 3.0.
Fixing RHS 3.0 flags.
The CG code has been removed from the snapshot code-base, so this bug is no longer applicable. This bug needs to be closed.