Bug 1579615

Summary: [geo-rep]: [Errno 39] Directory not empty
Product: [Community] GlusterFS
Component: geo-replication
Reporter: Raghavendra G <rgowdapp>
Assignee: bugs <bugs>
Status: CLOSED NOTABUG
Severity: high
Priority: unspecified
Version: mainline
CC: atumball, bugs, csaba, khiremat, rallan, rhinduja, rhs-bugs, sankarshan, sheggodu, storage-qa-internal
Hardware: Unspecified
OS: Unspecified
Clone Of: 1575553
Type: Bug
Last Closed: 2019-06-25 13:03:56 UTC
Bug Depends On: 1575553, 1661258

Comment 1 Raghavendra G 2018-05-18 03:34:48 UTC
Description of problem:
=======================
Ran automated test cases with a 3x3 master volume and a 3x3 slave volume (rsync + FUSE).

The geo-rep status was stuck in history crawl, with some workers in 'FAULTY' status.
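The per-worker state can be inspected with the geo-replication status command. The helper below is only an illustrative sketch; MASTER_VOL and SLAVE are placeholders, not names from this setup:

# Illustrative sketch; volume/slave names below are placeholders.
import subprocess

MASTER_VOL = "master"            # placeholder master volume name
SLAVE = "slavehost::slavevol"    # placeholder <slave host>::<slave volume>

def faulty_workers():
    """Return the geo-rep status lines for workers reported as Faulty."""
    out = subprocess.check_output(
        ["gluster", "volume", "geo-replication", MASTER_VOL, SLAVE, "status"],
        universal_newlines=True)
    return [line for line in out.splitlines() if "Faulty" in line]

if __name__ == "__main__":
    for line in faulty_workers():
        print(line)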


[root@master]# gluster v info



The worker crashed with 'Directory not empty':

Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 210, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 802, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1676, in service_loop
    g3.crawlwrap(oneshot=True)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 597, in crawlwrap
    self.crawl()
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1470, in crawl
    self.changelogs_batch_process(changes)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1370, in changelogs_batch_process
    self.process(batch)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1204, in process
    self.process_change(change, done, retry)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1114, in process_change
    failures = self.slave.server.entry_ops(entries)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 228, in __call__
    return self.ins(self.meth, *a)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 210, in __call__
    raise res
OSError: [Errno 39] Directory not empty: '.gfid/b6c0b18a-8a5a-408b-88ec-a01fb88c8bfe/level46'
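
For reference, [Errno 39] is ENOTEMPTY: rmdir() was attempted on a directory on the slave that still has entries, and entry_ops() propagated the failure back to the worker over RPC. The snippet below is a minimal sketch, not gsyncd code; the "skip and retry later" policy is an assumption for illustration only, showing how such a failure can be tolerated instead of crashing the worker:

# Minimal sketch (not gsyncd code): [Errno 39] is ENOTEMPTY from rmdir()
# on a directory that still has entries. The skip-on-ENOTEMPTY policy
# below is an assumption for illustration only.
import errno
import os
import tempfile

def rmdir_tolerant(path):
    """Remove a directory; report ENOTEMPTY/ENOENT instead of raising."""
    try:
        os.rmdir(path)
        return True
    except OSError as e:
        if e.errno in (errno.ENOTEMPTY, errno.ENOENT):
            # Still has children (or already gone): leave it for a
            # later pass instead of failing the whole batch.
            return False
        raise

if __name__ == "__main__":
    parent = tempfile.mkdtemp()
    child = os.path.join(parent, "level46")
    os.mkdir(child)
    open(os.path.join(child, "data"), "w").close()  # make it non-empty
    print(rmdir_tolerant(child))  # False: ENOTEMPTY was tolerated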



Version-Release number of selected component (if applicable):
=============================================================
[root@master]# rpm -qa | grep gluster
glusterfs-server-3.12.2-8.el7rhgs.x86_64
glusterfs-api-3.12.2-8.el7rhgs.x86_64
glusterfs-rdma-3.12.2-8.el7rhgs.x86_64
glusterfs-cli-3.12.2-8.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-libs-3.12.2-8.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-8.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.2.x86_64
vdsm-gluster-4.19.43-2.3.el7rhgs.noarch
glusterfs-events-3.12.2-8.el7rhgs.x86_64
glusterfs-3.12.2-8.el7rhgs.x86_64
glusterfs-fuse-3.12.2-8.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-8.el7rhgs.x86_64
python2-gluster-3.12.2-8.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64



How reproducible:
=================
1/1


Actual results:
==============
The worker crashed with 'Directory not empty' tracebacks, which flooded the logs.

Expected results:
================
There should be no worker crash.