Description of problem:
When glfs_* methods operating on a glfd are invoked after calling glfs_close(), the program segfaults inside __GLFS_ENTRY_VALIDATE_FD while trying to dereference glfd->fd->inode, which is invalid.

Version-Release number of selected component (if applicable):
Master branch of glusterfs

Steps to Reproduce (example using the Python binding):

#!/usr/bin/env python
from glusterfs import gfapi
import os

v = gfapi.Volume("pp", "real")
v.mount()
f = v.open("file", os.O_RDONLY)  # assuming the file exists
f.close()
f.read()  # This will segfault

Actual results:
Segfault with core dump

Expected results:
Gracefully exit with errno set to EBADF
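The expected behavior mirrors ordinary POSIX file-descriptor semantics: an operation on a descriptor that has already been closed should fail with errno set to EBADF rather than crash. A minimal sketch of that semantics using plain os-level descriptors in Python (independent of gfapi):

```python
import errno
import os
import tempfile

# Open and immediately close a real file descriptor, then try to use it.
fd, path = tempfile.mkstemp()
os.close(fd)
try:
    os.read(fd, 16)          # operating on a closed fd...
except OSError as e:
    assert e.errno == errno.EBADF  # ...fails gracefully with EBADF
finally:
    os.unlink(path)
```

This is the contract the bug asks glfs_* calls on a closed glfd to honor.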
REVIEW: http://review.gluster.org/10759 (libgfapi: Gracefully exit when glfd is invalid) posted (#3) for review on master by Prashanth Pai (ppai)
REVIEW: http://review.gluster.org/10759 (libgfapi: Gracefully exit when glfd is invalid) posted (#4) for review on master by Prashanth Pai (ppai)
REVIEW: http://review.gluster.org/10759 (libgfapi: Gracefully exit when glfd is invalid) posted (#5) for review on master by Prashanth Pai (ppai)
COMMIT: http://review.gluster.org/10759 committed in master by Shyamsundar Ranganathan (srangana)
------
commit afa793ff16b349989ca7c958466eae15d2d003f9
Author: Prashanth Pai <ppai>
Date: Tue May 12 16:36:55 2015 +0530

    libgfapi: Gracefully exit when glfd is invalid

    When glfs_* methods operating on a glfd are invoked after calling
    glfs_close(), the program segfaults inside __GLFS_ENTRY_VALIDATE_FD
    while trying to dereference glfd->fd->inode, which is invalid.

    Also, returning EBADF seemed more specific than EINVAL.

    BUG: 1221008
    Change-Id: I13a92dca52da9a300252b69e026581b3a9e931fd
    Signed-off-by: Prashanth Pai <ppai>
    Reviewed-on: http://review.gluster.org/10759
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Shyamsundar Ranganathan <srangana>
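The shape of the fix can be sketched in Python terms: validate the handle before every operation and fail with EBADF instead of dereferencing an invalid pointer. The GlfsFile class and its _validate helper below are illustrative names only, not the real libgfapi or its Python binding; _validate plays the role of the __GLFS_ENTRY_VALIDATE_FD entry check described in the commit.

```python
import errno
import os

class GlfsFile:
    """Illustrative stand-in for a glfd-backed file object: operations on
    a closed handle raise OSError(EBADF) instead of crashing."""

    def __init__(self, data=b""):
        self._data = data      # stands in for the underlying fd/inode state
        self._closed = False

    def _validate(self):
        # Analogue of the __GLFS_ENTRY_VALIDATE_FD check: bail out with
        # EBADF when the handle is no longer valid.
        if self._closed:
            raise OSError(errno.EBADF, os.strerror(errno.EBADF))

    def read(self):
        self._validate()
        return self._data

    def close(self):
        self._closed = True

f = GlfsFile(b"hello")
f.close()
try:
    f.read()
except OSError as e:
    assert e.errno == errno.EBADF
```

The design choice matches the commit message: EBADF ("bad file descriptor") describes the failure more precisely than a generic EINVAL.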
REVIEW: http://review.gluster.org/11571 (libgfapi: Gracefully exit when glfd is invalid) posted (#1) for review on release-3.7 by Prashanth Pai (ppai)
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user