Created attachment 1450404 [details]
active-mds.2 log

Description of problem:
With reference to BZ 1572555 comment 12, cleanup of directories in the mount point with 'rm -rf' hung (no I/Os ran in the mount point) with 2 FUSE clients only, no kernel clients.

Ceph status output:
==================
[root@ceph-prsurve-run488-node12-mds ceph]# ceph -s
  cluster:
    id:     8025f5f4-3a37-4974-886d-ba181cd7848a
    health: HEALTH_WARN
            1 MDSs report slow requests

  services:
    mon: 3 daemons, quorum ceph-prsurve-run488-node14-monmgr,ceph-prsurve-run488-node15-monmgr,ceph-prsurve-run488-node1-monmgrinstaller
    mgr: ceph-prsurve-run488-node15-monmgr(active), standbys: ceph-prsurve-run488-node14-monmgr, ceph-prsurve-run488-node1-monmgrinstaller
    mds: cephfs-2/2/2 up {0=ceph-prsurve-run488-node3-mds=up:active,1=ceph-prsurve-run488-node12-mds=up:active}, 2 up:standby
    osd: 12 osds: 12 up, 12 in

  data:
    pools:   3 pools, 192 pgs
    objects: 97 objects, 195 MB
    usage:   2000 MB used, 345 GB / 347 GB avail
    pgs:     192 active+clean
=================

The "1 MDSs report slow requests" warning occurred while running 'rm -rf'.

RAM utilization of mds.1:
==========================================
[root@ceph-prsurve-run488-node12-mds ceph]# free -h
              total        used        free      shared  buff/cache   available
Mem:           7.6G        446M        5.4G         17M        1.8G        6.9G
Swap:            0B          0B          0B
===========================================

RAM utilization of mds.0:
===========================================
[root@ceph-prsurve-run488-node3-mds ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           7.6G        257M        5.6G         17M        1.8G        7.0G
Swap:            0B          0B          0B
==========================================

Version-Release number of selected component (if applicable):
ceph version 12.2.5-25.el7cp (9987a3f5bbd16358cd1fdda2f48b93a757de9f95) luminous (stable)
RHEL 7.5

How reproducible:
Always

Steps to Reproduce:
1. Set up a Ceph 3.1 cluster, then follow BZ 1572555 comment 12.
2. Perform cleanup in the mount point.

Actual results:
The 'rm -rf' command hangs.

Expected results:
Cleanup in the mount point should succeed.

Additional info:
If the mds.1 service is restarted, the cluster's health returns to HEALTH_OK and the 'rm -rf' command completes successfully (see the sketch below).
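For reference, a minimal sketch of the inspection and workaround steps described above, assuming the MDS daemons are managed by the standard ceph-mds@<hostname> systemd unit on RHEL 7.5 and that the admin socket is available on the MDS node; hostnames are taken from the 'ceph -s' output:
==========================================
# Inspect the slow-request warning and the stuck operations on rank 1:
ceph health detail
ceph daemon mds.ceph-prsurve-run488-node12-mds dump_ops_in_flight

# Workaround: restart the rank 1 MDS daemon on its node:
systemctl restart ceph-mds@ceph-prsurve-run488-node12-mds

# Confirm the cluster has recovered (expect: health: HEALTH_OK):
ceph -s | grep health
==========================================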
Created attachment 1450405 [details]
active-mds_1 log
*** This bug has been marked as a duplicate of bug 1570597 ***