REVIEW: http://review.gluster.org/9305 (uss/snapd: Handle readlink fops on snap view server) posted (#1) for review on release-3.6 by Sachin Pandit (spandit)
REVIEW: http://review.gluster.org/9305 (uss/snapd: Handle readlink fops on snap view server.) posted (#2) for review on release-3.6 by Sachin Pandit (spandit)
REVIEW: http://review.gluster.org/9305 (uss/snapd: Handle readlink fops on snap view server.) posted (#3) for review on release-3.6 by Sachin Pandit (spandit)
REVIEW: http://review.gluster.org/9305 (uss/snapd: Handle readlink fops on snap view server) posted (#4) for review on release-3.6 by Sachin Pandit (spandit)
COMMIT: http://review.gluster.org/9305 committed in release-3.6 by Raghavendra Bhat (raghavendra)
------
commit e1c977a9f9eb98c104d2e28fccaf7813be15eaf9
Author: Avra Sengupta <asengupt>
Date:   Wed Nov 12 12:02:44 2014 +0000

    uss/snapd: Handle readlink fops on snap view server

    Handle readlink fops in case of symlinks on snap view server

    BUG: 1175756
    Change-Id: Ia08e9e9c1c61e06132732aa580c5a9fd5e7c449b
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: http://review.gluster.org/9102
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
    Signed-off-by: Sachin Pandit <spandit>
    Reviewed-on: http://review.gluster.org/9305
    Reviewed-by: Raghavendra Bhat <raghavendra>
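For context, the operation this commit implements is conceptually simple: when a client issues a readlink fop on a symlink inside a snapshot, the snap view server has to resolve the link target and return it instead of letting the request fall through to the generic default path. The sketch below is not the actual snapview-server code; it is a minimal standalone C illustration, under that assumption, of the readlink(2) semantics such a handler has to get right (buffer sizing, explicit null-termination, error propagation). The path /tmp/example-link and the helper resolve_link are made up for illustration.

```c
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Resolve a symlink target the way a readlink handler must:
 * readlink(2) does not null-terminate, and errors have to be
 * propagated to the caller rather than ignored. */
static int
resolve_link (const char *path, char *buf, size_t bufsize)
{
        ssize_t len = readlink (path, buf, bufsize - 1);

        if (len < 0) {
                int saved = errno;
                fprintf (stderr, "readlink(%s) failed: %s\n",
                         path, strerror (saved));
                return -saved;
        }

        buf[len] = '\0';   /* readlink(2) never null-terminates */
        return 0;
}

int
main (void)
{
        char target[PATH_MAX];

        /* /tmp/example-link is a hypothetical symlink used only
         * for this illustration. */
        if (resolve_link ("/tmp/example-link", target, sizeof (target)) == 0)
                printf ("link target: %s\n", target);

        return 0;
}
```

In the actual translator, the resolved target would be handed back to the client through the readlink callback rather than printed.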
Description of problem:
======================
Snapshot daemon crashed while trying to access a snapshot directory under the .snaps directory.

Version-Release number of selected component (if applicable):
============================================================
glusterfs 3.6.0.30

How reproducible:
================
1/1

Steps to Reproduce:
==================
1. Create a 2x2 dist-rep volume and start it
2. Fuse and NFS mount the volume
3. Enable USS on the volume
4. Create some IO
   Fuse mount : for i in {1..10} ; do cp -rvf /etc etc.$i ; done
   NFS mount  : for i in {1..10} ; do cp -rvf /etc nfs_etc.$i ; done
5. While IO is going on, create a few snapshots on the volume
   for i in {1..10}; do gluster snapshot create snap"$i" vol0 ; done
6. After snapshot creation is completed, from the fuse mount, cd to .snaps

[root@dhcp-0-97 .snaps]# ll
total 0
d---------. 0 root root 0 Jan  1  1970 snap1
d---------. 0 root root 0 Jan  1  1970 snap10
d---------. 0 root root 0 Jan  1  1970 snap2
d---------. 0 root root 0 Jan  1  1970 snap3
d---------. 0 root root 0 Jan  1  1970 snap4
d---------. 0 root root 0 Jan  1  1970 snap5
d---------. 0 root root 0 Jan  1  1970 snap6
d---------. 0 root root 0 Jan  1  1970 snap7
d---------. 0 root root 0 Jan  1  1970 snap8
d---------. 0 root root 0 Jan  1  1970 snap9

cd into snap1 and listing the files and directories under it resulted in a snapd crash:

[root@dhcp-0-97 .snaps]# cd snap1
[root@dhcp-0-97 snap1]# ls
ls: cannot read symbolic link rc4.d: Transport endpoint is not connected
ls: cannot access cups: Transport endpoint is not connected
ls: cannot access cron.weekly: Transport endpoint is not connected
ls: cannot access quotatab: Transport endpoint is not connected
ls: reading directory .: File descriptor in bad state
aliases.db   cron.weekly  environment  magic       my.cnf  PackageKit  printcap  rc4.d  shells      xdg
cron.hourly  cups         gshadow-     modprobe.d  oddjob  plymouth    quotatab  rc.d   statetab.d  yum.conf
[root@dhcp-0-97 snap1]# ll
ls: cannot open directory .: Transport endpoint is not connected
[root@dhcp-0-97 snap1]# ls
ls: cannot open directory .: Transport endpoint is not connected
[root@dhcp-0-97 snap1]# ls
ls: cannot open directory .: Transport endpoint is not connected
[root@dhcp-0-97 snap1]# cd ..
bash: cd: ..: Transport endpoint is not connected
[root@dhcp-0-97 snap1]# cd ..
bash: cd: ..: Transport endpoint is not connected

gluster v status vol0
Status of volume: vol0
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick snapshot13.lab.eng.blr.redhat.com:/rhs/brick1/b1  49152   Y       16104
Brick snapshot14.lab.eng.blr.redhat.com:/rhs/brick1/b1  49152   Y       14350
Brick snapshot15.lab.eng.blr.redhat.com:/rhs/brick1/b1  49152   Y       14379
Brick snapshot16.lab.eng.blr.redhat.com:/rhs/brick1/b1  49152   Y       14046
Snapshot Daemon on localhost                            N/A     N       16315
NFS Server on localhost                                 2049    Y       16330
Self-heal Daemon on localhost                           N/A     Y       16257
Snapshot Daemon on snapshot14.lab.eng.blr.redhat.com    49160   Y       14527
NFS Server on snapshot14.lab.eng.blr.redhat.com         2049    Y       14542
Self-heal Daemon on snapshot14.lab.eng.blr.redhat.com   N/A     Y       14480
Snapshot Daemon on snapshot16.lab.eng.blr.redhat.com    49160   Y       14227
NFS Server on snapshot16.lab.eng.blr.redhat.com         2049    Y       14242
Self-heal Daemon on snapshot16.lab.eng.blr.redhat.com   N/A     Y       14179
Snapshot Daemon on snapshot15.lab.eng.blr.redhat.com    49160   Y       14560
NFS Server on snapshot15.lab.eng.blr.redhat.com         2049    Y       14567
Self-heal Daemon on snapshot15.lab.eng.blr.redhat.com   N/A     Y       14506

Task Status of Volume vol0
------------------------------------------------------------------------------

Actual results:
===============
snapd crash while trying to access the snap directory under .snaps

Expected results:
================
Accessing snaps under .snaps should not result in any crash

Additional info:
===============
snapd log snippet:
~~~~~~~~~~~~~~~~~~
[2014-11-04 05:58:04.581582] I [snapview-server-mgmt.c:27:mgmt_cbk_snap] 0-mgmt: list of snapshots changed
[2014-11-04 05:58:15.695738] W [dict.c:1307:dict_get_with_ref] (-->/usr/lib64/libglusterfs.so.0(default_lookup_resume+0x12c) [0x396aa271dc] (-->/usr/lib64/glusterfs/3.6.0.30/xlator/features/snapview-server.so(svs_lookup+0x2e3) [0x7f47ce276f03] (-->/usr/lib64/libglusterfs.so.0(dict_get_str_boolean+0x1f) [0x396aa1aabf]))) 0-dict: dict OR key (entry-point) is NULL
[2014-11-04 05:58:19.481945] W [dict.c:1307:dict_get_with_ref] (-->/usr/lib64/libglusterfs.so.0(default_lookup_resume+0x12c) [0x396aa271dc] (-->/usr/lib64/glusterfs/3.6.0.30/xlator/features/snapview-server.so(svs_lookup+0x2e3) [0x7f47ce276f03] (-->/usr/lib64/libglusterfs.so.0(dict_get_str_boolean+0x1f) [0x396aa1aabf]))) 0-dict: dict OR key (entry-point) is NULL

pending frames:
frame : type(0) op(2)
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2014-11-04 05:58:19
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.6.1
/usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xb6)[0x396aa1ff06]
/usr/lib64/libglusterfs.so.0(gf_print_trace+0x33f)[0x396aa3a59f]
/lib64/libc.so.6[0x343c8326a0]
/usr/lib64/libglusterfs.so.0(default_readlink+0x32)[0x396aa25a52]
/usr/lib64/libglusterfs.so.0(default_readlink_resume+0x137)[0x396aa293f7]
/usr/lib64/libglusterfs.so.0(call_resume+0x54e)[0x396aa41cde]
/usr/lib64/glusterfs/3.6.1/xlator/performance/io-threads.so(iot_worker+0x158)[0x7f47ce069348]
/lib64/libpthread.so.0[0x343cc079d1]
/lib64/libc.so.6(clone+0x6d)[0x343c8e89dd]
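Reading the backtrace: the queued readlink request is resumed by an io-threads worker (iot_worker -> call_resume -> default_readlink_resume) and ends up in the generic default_readlink path, where snapd takes a SIGSEGV; the commit above avoids this by giving the snap view server its own readlink handler. The fragment below is a generic, purely hypothetical illustration (not GlusterFS code) of how dispatching an operation that was never wired into a function-pointer table ends in a null-pointer crash unless it is either guarded or implemented; all names in it are invented.

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical translator-style fop table; readlink was never set. */
typedef int (*readlink_fn) (const char *path, char *buf, size_t size);

struct fop_table {
        readlink_fn readlink;   /* NULL when the fop is unimplemented */
};

static int
dispatch_readlink (struct fop_table *fops, const char *path,
                   char *buf, size_t size)
{
        if (fops->readlink == NULL) {
                /* Without this guard the call below would dereference a
                 * null pointer and the daemon would crash, analogous to
                 * the SIGSEGV seen in the backtrace above. */
                return -ENOSYS;
        }
        return fops->readlink (path, buf, size);
}

int
main (void)
{
        struct fop_table fops = { .readlink = NULL };
        char buf[256];

        int ret = dispatch_readlink (&fops, "/snap1/rc4.d", buf, sizeof (buf));
        if (ret < 0)
                printf ("readlink not implemented: %s\n", strerror (-ret));

        return 0;
}
```

The real fix takes the second option: rather than returning an error for symlinks inside snapshots, it implements the missing handler so readlink succeeds.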
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.6.2, please reopen this bug report. glusterfs-3.6.2 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should already be available or will become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. The fix for this bug is likely to be included in all future GlusterFS releases, i.e. releases > 3.6.2.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/5978
[2] http://news.gmane.org/gmane.comp.file-systems.gluster.user
[3] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137