Description of problem:
When running "gluster v status [vol] nfs clients" after "systemctl restart glusterd", gnfs crashes with a certain probability.

dump trace info:
/lib64/libglusterfs.so.0(+0x270f0)[0x7effb6c7b0f0]
/lib64/libglusterfs.so.0(gf_print_trace+0x334)[0x7effb6c854a4]
/lib64/libc.so.6(+0x35270)[0x7effb52e7270]
/usr/sbin/glusterfs(glusterfs_handle_node_status+0x155)[0x7effb7196905]
/lib64/libglusterfs.so.0(+0x63f40)[0x7effb6cb7f40]
/lib64/libc.so.6(+0x46d40)[0x7effb52f8d40]

Version-Release number of selected component (if applicable):

How reproducible:
Intermittent (certain probability)

Steps to Reproduce:
1. Create a replicate volume named rep
2. Set nfs.disable off on the rep volume
3. Run: "systemctl restart glusterd; gluster volume status rep nfs clients"

Actual results:
gnfs crashes with the backtrace above.

Expected results:
The NFS client status is reported without crashing gnfs.

Additional info:
REVIEW: https://review.gluster.org/21569 (glusterfsd: Do not process GLUSTERD_NODE_STATUS if graph is not ready) posted (#1) for review on master by Hu Jianfei
REVIEW: https://review.gluster.org/21569 (glusterfsd: Do not process GLUSTERD_NODE_STATUS if graph is not ready) posted (#4) for review on master by Amar Tumballi
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/