Description of problem:
When testing integration with the bareos backup/restore application, it was noticed that the brick process, glusterfsd, terminates with a core dump. Random extended attribute keys longer than 255 bytes consistently caused glusterfsd to crash while servicing a call to the glfs_lgetxattr() function. Input validation seems to be missing in libgfapi.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
Could you provide the core (backtrace), logs, and the steps to reproduce the issue?
Actually I faced a stack corruption problem while testing integration with a backup/restore application called bareos. There has been some root-cause analysis by Raghavendra G and Poornima G, and it was concluded that although the VFS doesn't allow operations on xattr keys longer than 255 bytes via the getfattr command, so such keys never reach the server, there isn't any validation in libgfapi, at least for this specific criterion in this specific API.

Poornima G also attempted to get/set an xattr via api/examples/glfsxmp.c with a key longer than 255 bytes and had a different outcome. Please consult her for more info.

Here's the uncommitted patch of the fix for reference:

diff --git a/api/src/glfs-fops.c b/api/src/glfs-fops.c
index ff85f7b..2d7a23c 100644
--- a/api/src/glfs-fops.c
+++ b/api/src/glfs-fops.c
@@ -2853,6 +2853,12 @@ glfs_getxattr_common (struct glfs *fs, const char *path, const char *name,
                 errno = EIO;
                 goto out;
         }
+
+        if (strlen(name) > 255) {
+                ret = -1;
+                errno = EINVAL;
+                goto out;
+        }
 retry:
         if (follow)
                 ret = glfs_resolve (fs, subvol, path, &loc, &iatt, reval);
Can you send out the fix upstream and change the status of the bug accordingly?
Gerrit review for upstream master available at: http://review.gluster.org/#/c/12207/
REVIEW: http://review.gluster.org/12462 (gfapi: function exit should use __GLFS_EXIT_FS) posted (#1) for review on master by Milind Changire (mchangir)
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user