Description of problem:
=======================
The following traceback on the aux-mount is observed during geo-rep sanity cases, which exercise the fops create, chmod, chown, chgrp, symlink, hardlink, truncate, rename and rmdir. This most likely occurs during rmdir.

[2016-05-22 17:45:05.458262] E [resource(/bricks/brick0/master_brick2):1292:inhibit] <top>: mount cleanup failure:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1290, in inhibit
    self.cleanup_mntpt(mntpt)
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1316, in cleanup_mntpt
    os.rmdir(mntpt)
OSError: [Errno 16] Device or resource busy: '/tmp/gsyncd-aux-mount-G3GogE'

[2016-05-22 17:45:05.460496] E [resource(/bricks/brick1/master_brick8):1292:inhibit] <top>: mount cleanup failure:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1290, in inhibit
    self.cleanup_mntpt(mntpt)
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1316, in cleanup_mntpt
    os.rmdir(mntpt)
OSError: [Errno 16] Device or resource busy: '/tmp/gsyncd-aux-mount-5BA95I'

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.9-5.el7rhgs.x86_64

How reproducible:
=================
1/2

Steps Carried:
==============
Found during automation cases of geo-rep {tarssh+fuse}
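For context on the failure mode: os.rmdir() on a directory that is still an active mount point fails with EBUSY ("Device or resource busy"), which is exactly what the traceback above shows when the FUSE aux mount has not been fully torn down before cleanup_mntpt() runs. The snippet below is only a minimal illustrative sketch, not the gsyncd code or the merged upstream fix; the helper name cleanup_aux_mount and the retry/delay parameters are hypothetical.

import errno
import os
import subprocess
import time

def cleanup_aux_mount(mntpt, retries=5, delay=1.0):
    """Best-effort removal of an aux mount point (illustrative sketch).

    Unmount first (falling back to a lazy unmount) and retry rmdir,
    so the EBUSY race with a still-attached FUSE mount is avoided.
    """
    for _ in range(retries):
        # Ignore unmount failures: the mount may already be gone.
        subprocess.call(["umount", mntpt])
        subprocess.call(["umount", "-l", mntpt])
        try:
            os.rmdir(mntpt)
            return True
        except OSError as e:
            if e.errno != errno.EBUSY:
                raise
            time.sleep(delay)
    return False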
Upstream patch sent: http://review.gluster.org/15686
Upstream patch https://review.gluster.org/#/c/17015/ fixes this issue (already merged upstream).
Verified with build: glusterfs-geo-replication-3.12.2-8.el7rhgs.x86_64

Ran the use case mentioned in comment 3 and the description. Haven't seen this issue. Moving this bug to the verified state. The regression cycle will cover this scenario, and if the issue is seen again, a new bug will be opened to triage it.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2018:2607