Description of problem:
=======================
After starting the geo-rep session between the master and slave, the master cluster hits a segfault and generates a large number of core files.

dmesg output:
=============
python[6122]: segfault at 308 ip 0000003f11c23128 sp 00007f76cbffd530 error 4 in libglusterfs.so.0.0.1[3f11c00000+a3000]
python[6103]: segfault at 308 ip 0000003f11c23128 sp 00007f9352d1e530 error 4 in libglusterfs.so.0.0.1[3f11c00000+a3000]
SELinux: initialized (dev fuse, type fuse), uses genfs_contexts
SELinux: initialized (dev fuse, type fuse), uses genfs_contexts
python[6281]: segfault at 308 ip 0000003f11c23128 sp 00007fddbadfb530 error 4 in libglusterfs.so.0.0.1[3f11c00000+a3000]
python[6268]: segfault at 308 ip 0000003f11c23128 sp 00007fcbbb459530 error 4 in libglusterfs.so.0.0.1[3f11c00000+a3000]
SELinux: initialized (dev fuse, type fuse), uses genfs_contexts
SELinux: initialized (dev fuse, type fuse), uses genfs_contexts
python[6384]: segfault at 308 ip 0000003f11c23128 sp 00007fc76fffd530 error 4 in libglusterfs.so.0.0.1[3f11c00000+a3000]
python[6405]: segfault at 308 ip 0000003f11c23128 sp 00007fbd11cdf530 error 4 in libglusterfs.so.0.0.1[3f11c00000+a3000]

1441 cores are generated:
=========================
[root@georep1 ~]# ls /core* | wc
   1441    1441   16862
[root@georep1 ~]#

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-server-3.7dev-0.810.gitbf8a5c9.el6.x86_64

How reproducible:
=================
1/1

Steps to Reproduce:
===================
1. Create the master cluster
2. Create the slave cluster
3. Create a geo-rep session between the master and slave volumes
4. Start the session between the master and slave volumes

Actual results:
===============
Status is shown as N/A and a segmentation fault is observed.

Expected results:
=================
Status should be Active/Passive, and no segmentation fault should occur.
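For reference, the session in steps 3–4 above would typically be created and started with the geo-replication CLI. The volume names (mastervol, slavevol) and slave hostname (slavehost) below are placeholders, not the names from this report:

```shell
# Create the geo-rep session; push-pem distributes the pem keys to the slave
gluster volume geo-replication mastervol slavehost::slavevol create push-pem

# Start the session
gluster volume geo-replication mastervol slavehost::slavevol start

# Check worker status (expected Active/Passive; this bug shows N/A instead)
gluster volume geo-replication mastervol slavehost::slavevol status
```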
REVIEW: http://review.gluster.org/10074 (libgfchangelog: Pass correct 'this' pointer to gf_history_consume) posted (#1) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10074 (libgfchangelog: Use correct 'this' pointer on new thread creation) posted (#2) for review on master by Kotresh HR (khiremat)
The two patches that handle the 'this' pointer correctly in libgfchangelog fix this:

http://review.gluster.org/#/c/9993/ (BUG: 1170075: core seen with geo-rep start)
http://review.gluster.org/10074 (the core seen with history crawl)
REVIEW: http://review.gluster.org/10074 (libgfchangelog: Use correct 'this' pointer on new thread creation) posted (#3) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10074 (libgfchangelog: Use correct 'this' pointer on new thread creation) posted (#4) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10074 (libgfchangelog: Use correct 'this' pointer on new thread creation) posted (#5) for review on master by Kotresh HR (khiremat)
COMMIT: http://review.gluster.org/10074 committed in master by Venky Shankar (vshankar)
------
commit 00d4125a5cb7102efeb23873cbaf155a71faa9dd
Author: Kotresh HR <khiremat>
Date:   Tue Mar 31 20:13:59 2015 +0530

    libgfchangelog: Use correct 'this' pointer on new thread creation

    When libgfchangelog is linked with a non-xlator application, it
    should point to the 'master' xlator, which is initialized
    separately. Whenever a new thread is created, 'THIS' points to
    the global xlator; it should point to the corresponding xlator
    even then. This patch adjusts the pointer accordingly.

    Change-Id: I2a199bb3c73146a0329540aedcbae697a00f6f0a
    BUG: 1207643
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/10074
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Venky Shankar <vshankar>
    Tested-by: Venky Shankar <vshankar>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report. glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user