Description of problem:
=======================
In one of the automation cases, rm -rf was executed from the master mount. It cleared all the content from the master, but the slave complained with the following errors:

[2016-11-21 23:01:51.599591] W [syncdutils(slave):506:errno_wrap] <top>: reached maximum retries (['ebdcd4fb-3c7e-406e-87b0-b94bdc858055', '.gfid/0091d587-073c-40bb-b415-1f6c3e3424b8/level80', '.gfid/0091d587-073c-40bb-b415-1f6c3e3424b8/level80'])...[Errno 39] Directory not empty: '.gfid/0091d587-073c-40bb-b415-1f6c3e3424b8/level80'
[2016-11-21 23:01:51.602191] W [resource(slave):733:entry_ops] <top>: Recursive remove ebdcd4fb-3c7e-406e-87b0-b94bdc858055 => .gfid/0091d587-073c-40bb-b415-1f6c3e3424b8/level80failed: Directory not empty
[2016-11-21 23:01:51.631956] W [syncdutils(slave):506:errno_wrap] <top>: reached maximum retries (['ebdcd4fb-3c7e-406e-87b0-b94bdc858055', '.gfid/0091d587-073c-40bb-b415-1f6c3e3424b8/level80', '.gfid/0091d587-073c-40bb-b415-1f6c3e3424b8/level80'])...[Errno 39] Directory not empty: '.gfid/0091d587-073c-40bb-b415-1f6c3e3424b8/level80'
[2016-11-21 23:01:51.632200] W [resource(slave):733:entry_ops] <top>: Recursive remove ebdcd4fb-3c7e-406e-87b0-b94bdc858055 => .gfid/0091d587-073c-40bb-b415-1f6c3e3424b8/level80failed: Directory not empty
[2016-11-21 23:01:57.232242] W [syncdutils(slave):506:errno_wrap] <top>: reached maximum retries (['0091d587-073c-40bb-b415-1f6c3e3424b8', '.gfid/471f647f-cc13-41ed-b25a-a909882490a5/level70', '.gfid/471f647f-cc13-41ed-b25a-a909882490a5/level70'])...[Errno 39] Directory not empty: '.gfid/471f647f-cc13-41ed-b25a-a909882490a5/level70/level80'
[2016-11-21 23:01:57.232630] W [resource(slave):733:entry_ops] <top>: Recursive remove 0091d587-073c-40bb-b415-1f6c3e3424b8 => .gfid/471f647f-cc13-41ed-b25a-a909882490a5/level70failed: Directory not empty

Following was the directory structure remaining at the slave client:

#ls /mnt/slave/
thread3
#ls /mnt/slave/thread3/
level00
#ls /mnt/slave/thread3/level00/
level10
#ls /mnt/slave/thread3/level00/level10/
level20
#ls /mnt/slave/thread3/level00/level10/level20/
level30
#ls /mnt/slave/thread3/level00/level10/level20/level30/
level40
#ls /mnt/slave/thread3/level00/level10/level20/level30/level40/
level50
#ls /mnt/slave/thread3/level00/level10/level20/level30/level40/level50/
level60
#ls /mnt/slave/thread3/level00/level10/level20/level30/level40/level50/level60/
level70
#ls /mnt/slave/thread3/level00/level10/level20/level30/level40/level50/level60/level70/
level80
#ls /mnt/slave/thread3/level00/level10/level20/level30/level40/level50/level60/level70/level80/
#

Re-executing rm -rf * directly on the slave client resolves the issue:

[root@dhcp37-64 slave]# rm -rf *
[root@dhcp37-64 slave]# ls -l
total 0
[root@dhcp37-64 slave]# ls -lR
.:
total 0
[root@dhcp37-64 slave]#

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.8.4-5.el6rhs.x86_64

How reproducible:
=================
The same case has been run multiple times on a RHEL7 based platform and this was not seen there. Will run it once again on a RHEL6 based platform.

Steps Carried:
==============
Ran the automated geo-replication suite (34).
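For context, both warnings come from the slave-side syncdaemon: entry_ops() requests a recursive remove of the directory via its .gfid path, and errno_wrap() keeps retrying the operation until it hits a retry limit, at which point it logs "reached maximum retries". The Python sketch below is only an illustration of that retry-on-ENOTEMPTY pattern; the function names, retry count and delay are assumptions for this example and are not the actual gsyncd code.

import errno
import logging
import os
import time

def errno_wrap(call, args, retry_errnos, max_retries=10, delay=0.1):
    # Call call(*args), retrying while it fails with one of retry_errnos.
    # After max_retries attempts, log a warning similar to the
    # "reached maximum retries" message above and give up.
    last_err = None
    for _ in range(max_retries):
        try:
            return call(*args)
        except OSError as exc:
            if exc.errno not in retry_errnos:
                raise
            last_err = exc
            time.sleep(delay)
    logging.warning("reached maximum retries (%s)... %s", args, last_err)

def recursive_remove(path):
    # Depth-first removal of path; the final rmdir raises
    # OSError(ENOTEMPTY) if entries are still present underneath,
    # which is the error the slave keeps retrying on.
    for name in os.listdir(path):
        full = os.path.join(path, name)
        if os.path.isdir(full) and not os.path.islink(full):
            recursive_remove(full)
        else:
            os.unlink(full)
    os.rmdir(path)

# Example (hypothetical path):
# errno_wrap(recursive_remove, ['/mnt/slave/thread3'], [errno.ENOTEMPTY])

If the lower levels of the tree are not yet (or no longer) visible to the slave worker when the parent is removed, every retry sees the directory as non-empty and the chain of levelNN directories is left behind, matching the listing above.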
*** Bug 1238068 has been marked as a duplicate of this bug. ***
Reducing the severity as this was not considered for the last 2 releases. Not working on this at the moment, as the use case (deleting the whole directory structure on the master) is not a priority from an engineering perspective.