Bug 1590241 - [CephFS]: 'rm -rf' command hangs
Summary: [CephFS]: 'rm -rf' command hangs
Keywords:
Status: CLOSED DUPLICATE of bug 1570597
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: z1
Target Release: 3.2
Assignee: Patrick Donnelly
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-06-12 09:52 UTC by Persona non grata
Modified: 2018-11-01 18:17 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-21 23:35:02 UTC
Embargoed:


Attachments
active-mds.2 log (6.41 MB, text/plain)
2018-06-12 09:52 UTC, Persona non grata
active-mds_1 log (3.03 MB, text/plain)
2018-06-12 09:53 UTC, Persona non grata

Description Persona non grata 2018-06-12 09:52:27 UTC
Created attachment 1450404: active-mds.2 log

Description of problem:
With reference to BZ 1572555 comment 12, cleanup of directories in the mount point with 'rm -rf' hung (no I/Os ran in the mount point). Only 2 FUSE clients were mounted, no kernel clients.
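For triage, the client mix can be confirmed from the MDS admin socket. A minimal sketch, run on an active MDS host (the daemon name below is this cluster's; adjust to yours):
==========================================
# List client sessions; ceph-fuse mounts report a ceph_version entry
# in client_metadata, while kernel clients report kernel_version instead.
[root@ceph-prsurve-run488-node12-mds ~]# ceph daemon mds.ceph-prsurve-run488-node12-mds session ls
==========================================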
Ceph status output:
==================
[root@ceph-prsurve-run488-node12-mds ceph]# ceph -s
  cluster:
    id:     8025f5f4-3a37-4974-886d-ba181cd7848a
    health: HEALTH_WARN
            1 MDSs report slow requests
 
  services:
    mon: 3 daemons, quorum ceph-prsurve-run488-node14-monmgr,ceph-prsurve-run488-node15-monmgr,ceph-prsurve-run488-node1-monmgrinstaller
    mgr: ceph-prsurve-run488-node15-monmgr(active), standbys: ceph-prsurve-run488-node14-monmgr, ceph-prsurve-run488-node1-monmgrinstaller
    mds: cephfs-2/2/2 up  {0=ceph-prsurve-run488-node3-mds=up:active,1=ceph-prsurve-run488-node12-mds=up:active}, 2 up:standby
    osd: 12 osds: 12 up, 12 in
 
  data:
    pools:   3 pools, 192 pgs
    objects: 97 objects, 195 MB
    usage:   2000 MB used, 345 GB / 347 GB avail
    pgs:     192 active+clean
=================
"1 MDSs report slow requests" warning occurred while running 'rm -rf'. 
RAM utilization of mds.1:
==========================================
[root@ceph-prsurve-run488-node12-mds ceph]# free -h
              total        used        free      shared  buff/cache   available
Mem:           7.6G        446M        5.4G         17M        1.8G        6.9G
Swap:            0B          0B          0B
===========================================
RAM utilization of mds.0:
===========================================
[root@ceph-prsurve-run488-node3-mds ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           7.6G        257M        5.6G         17M        1.8G        7.0G
Swap:            0B          0B          0B
==========================================
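To see which operations are stuck behind the "1 MDSs report slow requests" warning, the standard admin-socket dump can be used. A minimal sketch (daemon name taken from this cluster):
==========================================
# Identify the MDS reporting slow requests, then dump its in-flight ops;
# with this bug the hung unlink/rmdir requests from 'rm -rf' show up here.
[root@ceph-prsurve-run488-node12-mds ceph]# ceph health detail
[root@ceph-prsurve-run488-node12-mds ceph]# ceph daemon mds.ceph-prsurve-run488-node12-mds dump_ops_in_flight
==========================================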



Version-Release number of selected component (if applicable):

ceph version 12.2.5-25.el7cp (9987a3f5bbd16358cd1fdda2f48b93a757de9f95) luminous (stable)
RHEL 7.5

How reproducible:
Always

Steps to Reproduce:
1. Set up a Ceph 3.1 cluster, then follow BZ 1572555 comment 12.
2. Clean up the mount point (a sketch follows this list).
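For step 2, a minimal sketch of the cleanup that hangs, assuming a ceph-fuse mount at /mnt/cephfs (the client host and paths are illustrative):
=================
# Mount via FUSE (no kernel clients involved), then remove the data
# left behind by the BZ 1572555 comment 12 run; the rm hangs here.
[root@client ~]# ceph-fuse -m ceph-prsurve-run488-node14-monmgr:6789 /mnt/cephfs
[root@client ~]# rm -rf /mnt/cephfs/*
=================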

Actual results:
command 'rm -rf' hangs

Expected results:
Cleanup in the mount point should be successful


Additional info:
If the mds.1 service is restarted, the cluster's health becomes HEALTH_OK and the 'rm -rf' command executes successfully.
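A sketch of that workaround, assuming a systemd deployment where the MDS instance is named after the host (as it is on this cluster):
==========================================
# Restart the rank 1 MDS, then confirm the cluster recovers; after this
# the previously hung 'rm -rf' completes.
[root@ceph-prsurve-run488-node12-mds ~]# systemctl restart ceph-mds@ceph-prsurve-run488-node12-mds
[root@ceph-prsurve-run488-node12-mds ~]# ceph -s
==========================================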

Comment 3 Persona non grata 2018-06-12 09:53:20 UTC
Created attachment 1450405: active-mds_1 log

Comment 7 Patrick Donnelly 2018-09-21 23:35:02 UTC

*** This bug has been marked as a duplicate of bug 1570597 ***

