Description of problem:
When printing the snapd status alone with "gluster volume status <volname> snapd", glusterd crashes on the local node.

Version-Release number of selected component (if applicable): mainline

How reproducible: 100%

Steps to Reproduce:
1. Create a volume.
2. Enable uss (user-serviceable snapshots).
3. Run "gluster volume status <volname> snapd".

Actual results: glusterd crashes.

Expected results: glusterd should not crash.

Additional info:
REVIEW: http://review.gluster.org/13759 (glusterd/snapshot: dereferencing null variable resulted in crash) posted (#1) for review on master by mohammed rafi kc (rkavunga)
REVIEW: http://review.gluster.org/13759 (glusterd/snapshot: dereferencing null variable resulted in crash) posted (#2) for review on master by mohammed rafi kc (rkavunga)
COMMIT: http://review.gluster.org/13759 committed in master by Atin Mukherjee (amukherj)
------
commit 696fbf9b18078a7ac28080d841f0de2306786b87
Author: Mohammed Rafi KC <rkavunga>
Date:   Thu Mar 17 13:37:59 2016 +0530

    glusterd/snapshot: dereferencing null variable resulted in crash

    When we add service details into the dictionary, snapd is a
    volume-scoped service, so its svc variable is stored in volinfo.
    But when we add details for the snapd node alone, we use a
    generic function, which does not have the svc variable
    initialized.

    Change-Id: I7e4abc477e6c520388d696548ffa260a43281827
    BUG: 1318544
    Signed-off-by: Mohammed Rafi KC <rkavunga>
    Reviewed-on: http://review.gluster.org/13759
    Smoke: Gluster Build System <jenkins.com>
    CentOS-regression: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Avra Sengupta <asengupt>
    Reviewed-by: Atin Mukherjee <amukherj>
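For illustration, below is a minimal, self-contained C sketch of the failure mode the commit message describes, and the kind of null-check-with-fallback fix it implies. All names here (my_svc_t, my_volinfo_t, add_node_details) are hypothetical stand-ins, not the actual glusterd structures or functions; see the committed patch at the review link above for the real change.

#include <stddef.h>
#include <stdio.h>

typedef struct {
    const char *name;
    int         online;
} my_svc_t;

typedef struct {
    my_svc_t snapd_svc;   /* snapd is per-volume, so its svc lives in volinfo */
} my_volinfo_t;

/* Generic helper: for node-level services the caller passes an
 * initialized svc. For snapd it may pass NULL, because snapd's svc is
 * stored in the volume info instead. */
static int
add_node_details (my_svc_t *svc, my_volinfo_t *volinfo)
{
    /* Buggy version dereferenced svc unconditionally:
     *     if (svc->online) ...    -> crash when svc == NULL (snapd path)
     *
     * Fixed version: fall back to the volume-scoped svc for snapd. */
    if (svc == NULL) {
        if (volinfo == NULL)
            return -1;             /* nothing to report */
        svc = &volinfo->snapd_svc;
    }

    printf ("%s: %s\n", svc->name, svc->online ? "online" : "offline");
    return 0;
}

int
main (void)
{
    my_volinfo_t vol = { .snapd_svc = { .name = "snapd", .online = 1 } };

    /* Simulates "gluster volume status <volname> snapd": the generic
     * code path passes NULL for svc; before the fix this crashed. */
    return add_node_details (NULL, &vol);
}

With the fallback in place, the snapd-only status query resolves the service from volinfo instead of dereferencing the uninitialized pointer.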
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user