Bug 2240079 - Ceph: MDS: 1 clients failing to respond to cache pressure
Summary: Ceph: MDS: 1 clients failing to respond to cache pressure
Keywords:
Status: CLOSED DUPLICATE of bug 2249571
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 5.3
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 5.3z6
Assignee: Venky Shankar
QA Contact: Hemanth Kumar
URL:
Whiteboard:
Depends On: 2239539 2249574
Blocks: 2240077
 
Reported: 2023-09-21 16:22 UTC by Manny
Modified: 2024-04-01 00:05 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 2239539
Environment:
Last Closed: 2023-12-18 10:12:06 UTC
Embargoed:


Links:
  Red Hat Issue Tracker RHCEPH-7514 (last updated 2023-09-21 16:27:34 UTC)
  Red Hat Knowledge Base (Solution) 7034296 (last updated 2023-09-21 19:00:38 UTC)

Description Manny 2023-09-21 16:22:58 UTC
+++ This bug was initially created as a clone of Bug #2239539 +++

Description of problem:  Ceph: MDS:  1 clients failing to respond to cache pressure

We are seeing this:
~~~
$ sudo cephadm shell -- ceph -s

 cluster:
    id:     8d2323ac-0820-49ad-b917-059ba00794f2
    health: HEALTH_WARN
            1 clients failing to respond to cache pressure

  services:
    mon: 3 daemons, quorum bmwcefesp000003,bmwcefesp000002,bmwcefesp000001 (age 2h)
    mgr: bmwcefesp000003(active, since 3w), standbys: bmwcefesp000002, bmwcefesp000001
    mds: 2/2 daemons up, 1 standby
    osd: 324 osds: 324 up (since 3d), 324 in (since 4w)

  data:
    volumes: 1/1 healthy
    pools:   13 pools, 10625 pgs
    objects: 111.98M objects, 56 TiB
    usage:   175 TiB used, 441 TiB / 616 TiB avail
    pgs:     10625 active+clean

  io:
    client:   56 MiB/s rd, 1.5 GiB/s wr, 1.75k op/s rd, 6.53k op/s wr


$ sudo cephadm shell -- ceph health detail
HEALTH_WARN 1 clients failing to respond to cache pressure
[WRN] MDS_CLIENT_RECALL: 1 clients failing to respond to cache pressure
    mds.cephfs.bmwcefesp000001.bnsdfe(mds.0): Client bmwcefesp000003:bmwcefesp000003 failing to respond to cache pressure client_id: 191198227
~~~

The MDS is configured like this:
~~~
  mds                 advanced  mds_bal_interval                                0
  mds                 basic     mds_cache_memory_limit                          68719476736
  mds                 advanced  mds_cache_trim_threshold                        1048576
  mds                 advanced  mds_min_caps_per_client                         5000
  mds                 advanced  mds_session_cap_acquisition_decay_rate          120.000000
  mds                 advanced  mds_session_cap_acquisition_throttle            500000
    mds.cephfs        basic     mds_join_fs                                     cephfs
~~~
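
For reference, individual options like these can be inspected or changed with `ceph config` (a minimal sketch, not taken from the case data; the value shown is the one configured above):
~~~
# Show the current value of one of the options listed above
sudo cephadm shell -- ceph config get mds mds_cache_memory_limit

# How such an override is applied (64 GiB, as configured above)
sudo cephadm shell -- ceph config set mds mds_cache_memory_limit 68719476736
~~~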

We created this script and collected data for ~20 minutes.
All of this information is in Support Shell:
~~~
#!/bin/bash

MDS=mds.cephfs.bmwcefesp000001.bnsdfe
XDR=/tmp/collectMDS

while true
do
    mkdir -p ${XDR} 2>/dev/null; XDT=$(date '+%F_%H-%M-%S')
    #ceph tell ${MDS} ops > ${XDR}/MDS.ops.${XDT} 2>&1
    sudo cephadm shell -- ceph tell ${MDS} ops > ${XDR}/MDS.ops.${XDT} 2>&1
    sudo cephadm shell -- ceph tell ${MDS} session ls > ${XDR}/MDS.sesls.${XDT} 2>&1
    sudo cephadm shell -- ceph tell ${MDS} perf dump > ${XDR}/MDS.pd.${XDT} 2>&1
    if [ -f /tmp/Stop_Collect ]; then exit 0; fi            #### touch /tmp/Stop_Collect to stop this script
    sleep 3; find ${XDR}/ -type f -mmin +59 -exec rm -f {} \;
done
~~~
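
A hedged usage note (the script body above is unchanged; the file name is illustrative): run it in the background and create the sentinel file to stop it:
~~~
# Assuming the script is saved as /tmp/collectMDS.sh (hypothetical name)
nohup bash /tmp/collectMDS.sh >/dev/null 2>&1 &

# ...after ~20 minutes of collection...
touch /tmp/Stop_Collect        # the loop exits on its next iteration
~~~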

From Ops In Flight:
~~~
-bash 5.1 $ grep initiated_at 0040-18092023.xz/18092023/collectMDS/MDS.ops.2023-09-18_18-12-03 | sort | head -2
            "initiated_at": "2023-09-18T18:12:03.484201+0000",
            "initiated_at": "2023-09-18T18:12:04.664094+0000",
-bash 5.1 $ grep initiated_at 0040-18092023.xz/18092023/collectMDS/MDS.ops.2023-09-18_18-12-03 | sort | tail -2
            "initiated_at": "2023-09-18T18:12:05.445152+0000",
            "initiated_at": "2023-09-18T18:12:05.453203+0000",

        {
            "description": "client_request(client.191198227:102697938 readdir #0x100009e011a//csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/d38ed12f-d463-4568-8574-c09604769fd7/jobs/application/jobs/fot-ingest/jobs/deployment/jobs/deploy-signal-synchronizer-config-from-branch/workspace@script/cicd/ansible/ansible-endurance-run/roles 2023-09-18T18:12:03.484230+0000 caller_uid=0, caller_gid=0{})",
            "initiated_at": "2023-09-18T18:12:03.484201+0000",
            "age": 1.991068649,
            "duration": 2.0573971590000002,
            "type_data": {
                "flag_point": "acquired locks",
                "reqid": "client.191198227:102697938",
                "op_type": "client_request",
                "client_info": {
                    "client": "client.191198227",
                    "tid": 102697938
                },
                "events": [
                    {
                        "time": "2023-09-18T18:12:03.484201+0000",
                        "event": "initiated"
                    },
                    {
                        "time": "2023-09-18T18:12:03.484202+0000",
                        "event": "throttled"
                    },
                    {
                        "time": "2023-09-18T18:12:03.484201+0000",
                        "event": "header_read"
                    },
                    {
                        "time": "2023-09-18T18:12:03.484209+0000",
                        "event": "all_read"
                    },
                    {
                        "time": "2023-09-18T18:12:03.484404+0000",
                        "event": "dispatched"
                    },
                    {
                        "time": "2023-09-18T18:12:03.484486+0000",
                        "event": "acquired locks"
                    },
                    {
                        "time": "2023-09-18T18:12:03.496816+0000",
                        "event": "acquired locks"
                    }
                ]
            }
        },
~~~
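
One way to summarize an MDS.ops.* capture (a hedged sketch; it assumes the files hold the full `ceph tell ... ops` JSON, of which the above is an excerpt):
~~~
# Count in-flight ops by flag_point
jq -r '.ops[].type_data.flag_point' MDS.ops.2023-09-18_18-12-03 | sort | uniq -c | sort -rn

# Show the five longest-running ops
jq -r '.ops[] | "\(.duration)\t\(.type_data.reqid)\t\(.type_data.flag_point)"' \
    MDS.ops.2023-09-18_18-12-03 | sort -rn | head -5
~~~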


We then looked at client_id 191198227 in the various logs collected.
Session LS:
~~~
    {
        "id": 191198227,
        "entity": {
            "name": {
                "type": "client",
                "num": 191198227
            },
            "addr": {
                "type": "any",
                "addr": "192.78.200.27:0",
                "nonce": 18250926
            }
        },
        "state": "open",
        "num_leases": 3,
        "num_caps": 1112974,
        "request_load_avg": 4678,
        "uptime": 1169547.251125531,
        "requests_in_flight": 1,
        "num_completed_requests": 1,
        "num_completed_flushes": 2,
        "reconnecting": false,
        "recall_caps": {
            "value": 2430149.3415465355,
            "halflife": 60
        },
        "release_caps": {
            "value": 598.09471021320235,
            "halflife": 60
        },
        "recall_caps_throttle": {
            "value": 66499.357655372936,
            "halflife": 1.5
        },
        "recall_caps_throttle2o": {
            "value": 30670.696254188886,
            "halflife": 0.5
        },
        "session_cache_liveness": {
            "value": 12272.634947810215,
            "halflife": 300
        },
        "cap_acquisition": {
            "value": 1489.122272882312,
            "halflife": 120
        },
        "delegated_inos": [
            {
                "start": "0x10098c2d720",
                "length": 337
            },
            {
                "start": "0x10098c2d873",
                "length": 163
            }
        ],
        "inst": "client.191198227 192.78.200.27:0/18250926",
        "completed_requests": [
            {
                "tid": 102697962,
                "created_ino": "0x0"
            }
        ],
        "prealloc_inos": [
            {
                "start": "0x10098c2d720",
                "length": 337
            },
            {
                "start": "0x10098c2d873",
                "length": 163
            },
            {
                "start": "0x1009b491c6f",
                "length": 375
            },
            {
                "start": "0x1009b493371",
                "length": 501
            }
        ],
        "client_metadata": {
            "client_features": {
                "feature_bits": "0x000000000004ffff"
            },
            "metric_spec": {
                "metric_flags": {
                    "feature_bits": "0x000000000000ffff"
                }
            },
            "ceph_sha1": "5d6355e2bccd18b5c6457a34cb666d773f21823d",
            "ceph_version": "ceph version 16.2.10-187.el8cp (5d6355e2bccd18b5c6457a34cb666d773f21823d) pacific (stable)",
            "entity_id": "bmwcefesp000003",
            "hostname": "bmwcefesp000003",
            "pid": "7",
            "root": "/"
        }
    }
]
~~~
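
The cap counters for the offending client can be pulled out of any MDS.sesls.* capture like this (a hedged sketch, assuming the files hold the JSON array shown above):
~~~
jq '.[] | select(.id == 191198227)
        | {num_caps, recall_caps, release_caps, recall_caps_throttle, cap_acquisition}' \
    MDS.sesls.2023-09-18_18-12-03
~~~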

Version-Release number of selected component (if applicable):  RHCS 5.3z4


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:  Kevan and I would like to review this with Engineering and resolve this issue (of course). More importantly, we'd like to write a proper KCS to help analyze situations like this and avoid future escalations.

BR
Manny

--- Additional comment from Manny on 2023-09-18 22:43:13 UTC ---

The customer is DXC.

--- Additional comment from Patrick Donnelly on 2023-09-19 01:07:22 UTC ---

(In reply to Manny from comment #0)
> Description of problem:  Ceph: MDS:  1 clients failing to respond to cache
> pressure
> 
> We are seeing this:
> ~~~
> $ sudo cephadm shell -- ceph -s
> 
>  cluster:
>     id:     8d2323ac-0820-49ad-b917-059ba00794f2
>     health: HEALTH_WARN
>             1 clients failing to respond to cache pressure
> 
>   services:
>     mon: 3 daemons, quorum bmwcefesp000003,bmwcefesp000002,bmwcefesp000001
> (age 2h)
>     mgr: bmwcefesp000003(active, since 3w), standbys: bmwcefesp000002,
> bmwcefesp000001
>     mds: 2/2 daemons up, 1 standby
>     osd: 324 osds: 324 up (since 3d), 324 in (since 4w)
> 
>   data:
>     volumes: 1/1 healthy
>     pools:   13 pools, 10625 pgs
>     objects: 111.98M objects, 56 TiB
>     usage:   175 TiB used, 441 TiB / 616 TiB avail
>     pgs:     10625 active+clean
> 
>   io:
>     client:   56 MiB/s rd, 1.5 GiB/s wr, 1.75k op/s rd, 6.53k op/s wr
> 
> 
> $ sudo cephadm shell -- ceph health detail
> HEALTH_WARN 1 clients failing to respond to cache pressure
> [WRN] MDS_CLIENT_RECALL: 1 clients failing to respond to cache pressure
>     mds.cephfs.bmwcefesp000001.bnsdfe(mds.0): Client
> bmwcefesp000003:bmwcefesp000003 failing to respond to cache pressure
> client_id: 191198227
> ~~~
> 
> The MDS is configured like this:
> ~~~
>   mds                 advanced  mds_bal_interval                            
> 0
>   mds                 basic     mds_cache_memory_limit                      
> 68719476736
>   mds                 advanced  mds_cache_trim_threshold                    
> 1048576
>   mds                 advanced  mds_min_caps_per_client                     
> 5000
>   mds                 advanced  mds_session_cap_acquisition_decay_rate      
> 120.000000
>   mds                 advanced  mds_session_cap_acquisition_throttle        
> 500000

These throttles don't seem to help for this particular client, which looks to be the ceph-mgr. Can you please collect regular perf dumps every minute and

> ceph tell mds.X session ls id=191198227

I don't yet have an idea why it's not releasing caps. Please collect ceph-mgr logs with:

> ceph config set mgr debug_client 20

for a few minutes then fetch the logs.
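
A hedged sketch of the requested collection (daemon and client IDs are copied from above; the interval and iteration count are illustrative, and the commands can be prefixed with `sudo cephadm shell --` as earlier):
~~~
ceph config get mgr debug_client          # note the current level so it can be restored
ceph config set mgr debug_client 20

for i in $(seq 1 10); do
    TS=$(date '+%F_%H-%M-%S')
    ceph tell mds.cephfs.bmwcefesp000001.bnsdfe session ls id=191198227 > /tmp/MDS.ses.${TS}
    ceph tell mds.cephfs.bmwcefesp000001.bnsdfe perf dump               > /tmp/MDS.pd.${TS}
    sleep 60
done

ceph config rm mgr debug_client           # drop the debug override again afterwards
~~~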

--- Additional comment from Manny on 2023-09-19 16:12:50 UTC ---

Hello Patrick,

As you have asked, so has the customer provided!
Please see the seventh and eighth attachments in Support Shell:
~~~
-bash 5.1 $ find 0070-19092023.tar.gz/ -type f -exec ls -l {} \;
-rw-rw-rw-+ 1 yank yank 1948105 Sep 19 14:13 0070-19092023.tar.gz/19092023/client_id_191198227.out
-rw-rw-rw-+ 1 yank yank 36077 Sep 19 14:13 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-13-18
-rw-rw-rw-+ 1 yank yank 36078 Sep 19 14:13 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-13-41
-rw-rw-rw-+ 1 yank yank 36075 Sep 19 14:14 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-14-03
-rw-rw-rw-+ 1 yank yank 36077 Sep 19 14:14 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-14-25
-rw-rw-rw-+ 1 yank yank 36080 Sep 19 14:14 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-14-47
-rw-rw-rw-+ 1 yank yank 36079 Sep 19 14:15 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-15-09
-rw-rw-rw-+ 1 yank yank 36060 Sep 19 14:15 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-15-31
-rw-rw-rw-+ 1 yank yank 36060 Sep 19 14:15 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-15-53
-rw-rw-rw-+ 1 yank yank 36061 Sep 19 14:16 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-16-16
-rw-rw-rw-+ 1 yank yank 36062 Sep 19 14:16 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-16-38
-rw-rw-rw-+ 1 yank yank 36062 Sep 19 14:17 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-17-00
-rw-rw-rw-+ 1 yank yank 36062 Sep 19 14:17 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-17-22
-rw-rw-rw-+ 1 yank yank 36062 Sep 19 14:17 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-17-44
-rw-rw-rw-+ 1 yank yank 36066 Sep 19 14:18 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-18-06
-rw-rw-rw-+ 1 yank yank 36062 Sep 19 14:18 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-18-28
-rw-rw-rw-+ 1 yank yank 36070 Sep 19 14:18 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-18-51
-rw-rw-rw-+ 1 yank yank 36063 Sep 19 14:19 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-19-13
-rw-rw-rw-+ 1 yank yank 44823 Sep 19 14:19 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-19-35
-rw-rw-rw-+ 1 yank yank 44824 Sep 19 14:19 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-19-57
-rw-rw-rw-+ 1 yank yank 44824 Sep 19 14:20 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-20-19
-rw-rw-rw-+ 1 yank yank 44824 Sep 19 14:20 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-20-41
-rw-rw-rw-+ 1 yank yank 36060 Sep 19 14:21 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-21-03
-rw-rw-rw-+ 1 yank yank 36070 Sep 19 14:21 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-21-26
-rw-rw-rw-+ 1 yank yank 36060 Sep 19 14:21 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-21-48
-rw-rw-rw-+ 1 yank yank 36075 Sep 19 14:22 0070-19092023.tar.gz/19092023/mgr.perf.2023-09-19_14-22-10


-bash 5.1 $ find 0080-mgr.logs.tar.gz/ -type f
0080-mgr.logs.tar.gz/mgr.logs
~~~

Best regards,
Manny

--- Additional comment from Patrick Donnelly on 2023-09-20 00:18:27 UTC ---

It appears there is a file handle leak in the volumes plugin:

> debug 2023-09-19T14:14:41.204+0000 7f9b7c4fd700 10 client.191198227 trim_caps mds.0 max 1172084 caps 1202075
> debug 2023-09-19T14:14:41.204+0000 7f9b7c4fd700 20 client.191198227  trying to trim dentries for 0x10093492d6d.head(faked_ino=0 nref=3 ll_ref=0 cap_refs={} open={} mode=42755 size=0/0 nlink=1 btime=2023-08-28T13:18:36.951355+0000 mtime=2023-04-09T13:43:26.000000+0000 ctime=2023-09-12T17:45:11.493954+0000 change_attr=1653 caps=pAsLsXsFsx(0=pAsLsXsFsx) dirty_caps=Fx 0x55643ca61e00)
> debug 2023-09-19T14:14:41.204+0000 7f9b7c4fd700 20 client.191198227 trim_caps counting as trimmed: 0x10093492d6d.head(faked_ino=0 nref=3 ll_ref=0 cap_refs={} open={} mode=42755 size=0/0 nlink=1 btime=2023-08-28T13:18:36.951355+0000 mtime=2023-04-09T13:43:26.000000+0000 ctime=2023-09-12T17:45:11.493954+0000 change_attr=1653 caps=pAsLsXsFsx(0=pAsLsXsFsx) dirty_caps=Fx 0x55643ca61e00)
> ...
> debug 2023-09-19T14:21:58.038+0000 7f9b7c4fd700 20 client.191198227 put_inode on 0x10099383d8f.head(faked_ino=0 nref=17 ll_ref=0 cap_refs={} open={} mode=42775 size=0/0 nlink=1 btime=2023-09-13T13:27:37.912820+0000 mtime=2022-08-15T13:29:03.000000+0000 ctime=2023-09-13T13:27:38.339331+0000 change_attr=4 caps=-(0=pAsLsXsFsx) dirty_caps=Fx 0x55645a6c5800) n = 1
> debug 2023-09-19T14:21:58.038+0000 7f9b7c4fd700 20 client.191198227  trying to trim dentries for 0x10099383d8e.head(faked_ino=0 nref=17 ll_ref=0 cap_refs={} open={} mode=42775 size=0/0 nlink=1 btime=2023-09-13T13:27:37.907988+0000 mtime=2022-08-15T13:29:03.000000+0000 ctime=2023-09-13T13:27:38.340402+0000 change_attr=4 caps=-(0=pAsLsXsFsx) dirty_caps=Fx 0x55640dd6e400)
> debug 2023-09-19T14:21:58.038+0000 7f9b7c4fd700 20 client.191198227 trim_caps counting as trimmed: 0x10099383d8e.head(faked_ino=0 nref=17 ll_ref=0 cap_refs={} open={} mode=42775 size=0/0 nlink=1 btime=2023-09-13T13:27:37.907988+0000 mtime=2022-08-15T13:29:03.000000+0000 ctime=2023-09-13T13:27:38.340402+0000 change_attr=4 caps=-(0=pAsLsXsFsx) dirty_caps=Fx 0x55640dd6e400)
> debug 2023-09-19T14:21:58.038+0000 7f9b7c4fd700 20 client.191198227 put_inode on 0x10099383d8e.head(faked_ino=0 nref=17 ll_ref=0 cap_refs={} open={} mode=42775 size=0/0 nlink=1 btime=2023-09-13T13:27:37.907988+0000 mtime=2022-08-15T13:29:03.000000+0000 ctime=2023-09-13T13:27:38.340402+0000 change_attr=4 caps=-(0=pAsLsXsFsx) dirty_caps=Fx 0x55640dd6e400) n = 1
> debug 2023-09-19T14:21:58.038+0000 7f9b7c4fd700 20 client.191198227  trying to trim dentries for 0x10099383d8d.head(faked_ino=0 nref=17 ll_ref=0 cap_refs={} open={} mode=42775 size=0/0 nlink=1 btime=2023-09-13T13:27:37.865732+0000 mtime=2022-08-15T13:29:03.000000+0000 ctime=2023-09-13T13:27:38.341610+0000 change_attr=4 caps=-(0=pAsLsXsFsx) dirty_caps=Fx 0x5564145c8600)
> debug 2023-09-19T14:21:58.038+0000 7f9b7c4fd700 20 client.191198227 trim_caps counting as trimmed: 0x10099383d8d.head(faked_ino=0 nref=17 ll_ref=0 cap_refs={} open={} mode=42775 size=0/0 nlink=1 btime=2023-09-13T13:27:37.865732+0000 mtime=2022-08-15T13:29:03.000000+0000 ctime=2023-09-13T13:27:38.341610+0000 change_attr=4 caps=-(0=pAsLsXsFsx) dirty_caps=Fx 0x5564145c8600)
> debug 2023-09-19T14:21:58.038+0000 7f9b7c4fd700 20 client.191198227 put_inode on 0x10099383d8d.head(faked_ino=0 nref=17 ll_ref=0 cap_refs={} open={} mode=42775 size=0/0 nlink=1 btime=2023-09-13T13:27:37.865732+0000 mtime=2022-08-15T13:29:03.000000+0000 ctime=2023-09-13T13:27:38.341610+0000 change_attr=4 caps=-(0=pAsLsXsFsx) dirty_caps=Fx 0x5564145c8600) n = 1

The client is spending at least **7** minutes processing a single recall caps operation (with debugging enabled), going through 50k inodes.
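
One way to gauge how many inodes the captured window trimmed is to count these entries in the collected mgr log (a hedged sketch; the file name is taken from the attachment listing above):
~~~
grep -c 'trim_caps counting as trimmed' mgr.logs
~~~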

I want to see if we can figure out a pattern with these inodes, please collect:

> ceph tell mds.cephfs:0 dump inode 0x10099383d1e
> ceph tell mds.cephfs:0 dump inode 0x10099383d1d
> ceph tell mds.cephfs:0 dump inode 0x10099383d1c
> ceph tell mds.cephfs:0 dump inode 0x10099383d10
> ceph tell mds.cephfs:0 dump inode 0x10099383d0f
> ceph tell mds.cephfs:0 dump inode 0x10099383d2e
> ceph tell mds.cephfs:0 dump inode 0x10099383d2d
> ceph tell mds.cephfs:0 dump inode 0x10099383d2c
> ceph tell mds.cephfs:0 dump inode 0x10099383d2b
> ceph tell mds.cephfs:0 dump inode 0x10099383d2a
> ceph tell mds.cephfs:0 dump inode 0x10099383d29
> ceph tell mds.cephfs:0 dump inode 0x10099383d36
> ceph tell mds.cephfs:0 dump inode 0x10099383d35
> ceph tell mds.cephfs:0 dump inode 0x10099383d34
> ceph tell mds.cephfs:0 dump inode 0x10099383d33
> ceph tell mds.cephfs:0 dump inode 0x10099383d32
> ceph tell mds.cephfs:0 dump inode 0x10099383d31
> ceph tell mds.cephfs:0 dump inode 0x10099383d28
> ceph tell mds.cephfs:0 dump inode 0x10099383d27
> ceph tell mds.cephfs:0 dump inode 0x10099383d43
> ceph tell mds.cephfs:0 dump inode 0x10099383d42
> ceph tell mds.cephfs:0 dump inode 0x10099383d41
> ceph tell mds.cephfs:0 dump inode 0x10099383d40
> ceph tell mds.cephfs:0 dump inode 0x10099383d3f
> ceph tell mds.cephfs:0 dump inode 0x10099383d3e
> ceph tell mds.cephfs:0 dump inode 0x10099383d4b
> ceph tell mds.cephfs:0 dump inode 0x10099383d4a
> ceph tell mds.cephfs:0 dump inode 0x10099383d49
> ceph tell mds.cephfs:0 dump inode 0x10099383d48
> ceph tell mds.cephfs:0 dump inode 0x10099383d47
> ceph tell mds.cephfs:0 dump inode 0x10099383d46
> ceph tell mds.cephfs:0 dump inode 0x10099383d3d
> ceph tell mds.cephfs:0 dump inode 0x10099383d3c
> ceph tell mds.cephfs:0 dump inode 0x10099383d53
> ceph tell mds.cephfs:0 dump inode 0x10099383d59
> ceph tell mds.cephfs:0 dump inode 0x10099383d58
> ceph tell mds.cephfs:0 dump inode 0x10099383d57
> ceph tell mds.cephfs:0 dump inode 0x10099383d56
> ceph tell mds.cephfs:0 dump inode 0x10099383d5f
> ceph tell mds.cephfs:0 dump inode 0x10099383d65
> ceph tell mds.cephfs:0 dump inode 0x10099383d69
> ceph tell mds.cephfs:0 dump inode 0x10099383d67
> ceph tell mds.cephfs:0 dump inode 0x10099383d6b
> ceph tell mds.cephfs:0 dump inode 0x10099383d6e
> ceph tell mds.cephfs:0 dump inode 0x10099383d5e
> ceph tell mds.cephfs:0 dump inode 0x10099383d5d
> ceph tell mds.cephfs:0 dump inode 0x10099383d5c
> ceph tell mds.cephfs:0 dump inode 0x10099383d5b
> ceph tell mds.cephfs:0 dump inode 0x10099383d55
> ceph tell mds.cephfs:0 dump inode 0x10099383d52
> ceph tell mds.cephfs:0 dump inode 0x10099383d71
> ceph tell mds.cephfs:0 dump inode 0x10099383d79
> ceph tell mds.cephfs:0 dump inode 0x10099383d7e
> ceph tell mds.cephfs:0 dump inode 0x10099383d7d
> ceph tell mds.cephfs:0 dump inode 0x10099383d80
> ceph tell mds.cephfs:0 dump inode 0x10099383d84
> ceph tell mds.cephfs:0 dump inode 0x10099383d78
> ceph tell mds.cephfs:0 dump inode 0x10099383d77
> ceph tell mds.cephfs:0 dump inode 0x10099383d76
> ceph tell mds.cephfs:0 dump inode 0x10099383d75
> ceph tell mds.cephfs:0 dump inode 0x10099383d74
> ceph tell mds.cephfs:0 dump inode 0x10099383d70
> ceph tell mds.cephfs:0 dump inode 0x10099383d51
> ceph tell mds.cephfs:0 dump inode 0x10099383d89
> ceph tell mds.cephfs:0 dump inode 0x10099383d50
> ceph tell mds.cephfs:0 dump inode 0x10099383d93
> ceph tell mds.cephfs:0 dump inode 0x10099383d92
> ceph tell mds.cephfs:0 dump inode 0x10099383d91
> ceph tell mds.cephfs:0 dump inode 0x10099383d90
> ceph tell mds.cephfs:0 dump inode 0x10099383d8f
> ceph tell mds.cephfs:0 dump inode 0x10099383d8e
> ceph tell mds.cephfs:0 dump inode 0x10099383d8d
> ceph tell mds.cephfs:0 dump inode 0x10093492d6d
> ceph tell mds.cephfs:0 dump inode 0x10093492d68
> ceph tell mds.cephfs:0 dump inode 0x10098c2d9c2
> ceph tell mds.cephfs:0 dump inode 0x10098c2d9c1
> ceph tell mds.cephfs:0 dump inode 0x10098c2d9c0
> ceph tell mds.cephfs:0 dump inode 0x10098c2d9bf
> ceph tell mds.cephfs:0 dump inode 0x10098c2d9be
> ceph tell mds.cephfs:0 dump inode 0x10098c2d9bd
> ceph tell mds.cephfs:0 dump inode 0x10098c2d9b8
> ceph tell mds.cephfs:0 dump inode 0x10098c2d9c4
> ceph tell mds.cephfs:0 dump inode 0x10098c2d9d1
> ceph tell mds.cephfs:0 dump inode 0x10098ce692b
> ceph tell mds.cephfs:0 dump inode 0x10098ce692a
> ceph tell mds.cephfs:0 dump inode 0x10098ce6929
> ceph tell mds.cephfs:0 dump inode 0x10098ce6928
> ceph tell mds.cephfs:0 dump inode 0x10098ce6927
> ceph tell mds.cephfs:0 dump inode 0x10098ce6926
> ceph tell mds.cephfs:0 dump inode 0x10098c2d9c8
> ceph tell mds.cephfs:0 dump inode 0x10098ce6933
> ceph tell mds.cephfs:0 dump inode 0x10098ce692d
> ceph tell mds.cephfs:0 dump inode 0x10098ce84d8
> ceph tell mds.cephfs:0 dump inode 0x10098ce84d2
> ceph tell mds.cephfs:0 dump inode 0x10098cea238
> ceph tell mds.cephfs:0 dump inode 0x10098cea232
> ceph tell mds.cephfs:0 dump inode 0x10098cecb4b
> ceph tell mds.cephfs:0 dump inode 0x10098cecb45
> ceph tell mds.cephfs:0 dump inode 0x10098ceea9d
> ceph tell mds.cephfs:0 dump inode 0x10098ceea97
> ceph tell mds.cephfs:0 dump inode 0x10098cf09eb
> ceph tell mds.cephfs:0 dump inode 0x10098cf09e5
> ceph tell mds.cephfs:0 dump inode 0x10098cf2330
> ceph tell mds.cephfs:0 dump inode 0x10098cf232a
> ceph tell mds.cephfs:0 dump inode 0x10098cf50a0
> ceph tell mds.cephfs:0 dump inode 0x10098cf5900
> ceph tell mds.cephfs:0 dump inode 0x10098cf58ff
> ceph tell mds.cephfs:0 dump inode 0x10098cf58fe
> ceph tell mds.cephfs:0 dump inode 0x10098cf58fd
> ceph tell mds.cephfs:0 dump inode 0x10098cf58fc
> ceph tell mds.cephfs:0 dump inode 0x10098cf58fb
> ceph tell mds.cephfs:0 dump inode 0x10098cf5099
> ceph tell mds.cephfs:0 dump inode 0x10098cf5908
> ceph tell mds.cephfs:0 dump inode 0x10098cf5902
> ceph tell mds.cephfs:0 dump inode 0x10098cf5939
> ceph tell mds.cephfs:0 dump inode 0x10098cf5933
> ceph tell mds.cephfs:0 dump inode 0x10098cf7949
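
A hedged sketch of running the dumps above in a loop rather than one at a time (INO_LIST is a hypothetical file holding the inode numbers, one per line):
~~~
while read -r ino; do
    sudo cephadm shell -- ceph tell mds.cephfs:0 dump inode "${ino}"
done < INO_LIST > /tmp/inode_dump.out 2>&1
~~~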

--- Additional comment from Manny on 2023-09-20 11:46:50 UTC ---

Hello Patrick,

Please see the ninth attachment for the data you requested:
The file is cached in all SS instances, but run "yank 03616273" so that your user ID is added to the ACL for these files. It should just take a few seconds.
~~~
-bash 5.1 $ ls -ld 0*
-rw-rw-rw-+ 1 yank yank 1969207 Sep 18 16:02 0010-info.txt
-rw-rw-rw-+ 1 yank yank 1987571 Sep 18 16:16 0020-info.txt
drwxrwxrwx+ 3 yank yank      75 Sep 18 17:54 0030-sosreport-bmwcefesp000001-03616273-2023-09-18-hhvshku.tar.xz
drwxrwxrwx+ 3 yank yank      30 Sep 18 18:41 0040-18092023.xz
-rw-rw-rw-+ 1 yank yank   20272 Sep 18 20:48 0050-fresh-info.txt
drwxrwxrwx+ 3 yank yank      75 Sep 18 22:44 0060-sosreport-bmwcefesp000001-03616273-2023-09-18-aqgunqb.tar.xz
drwxrwxrwx+ 3 yank yank      30 Sep 19 15:02 0070-19092023.tar.gz
drwxrwxrwx+ 2 yank yank      30 Sep 19 15:03 0080-mgr.logs.tar.gz
-rw-rw-rw-+ 1 yank yank  501449 Sep 20 09:58 0090-inode_dump.out
~~~


I looked at this much:
~~~
-bash 5.1 $ grep -A1 "\"path\"" 0090-inode_dump.out 
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/proj4-wrapper/src/test/scala/com",
    "ino": 1102082227486,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/proj4-wrapper/src/test/scala",
    "ino": 1102082227485,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/proj4-wrapper/src/test",
    "ino": 1102082227484,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/proj4-wrapper/src",
    "ino": 1102082227472,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/proj4-wrapper",
    "ino": 1102082227471,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging/src/main/scala/com/bmw/ad/common",
    "ino": 1102082227502,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging/src/main/scala/com/bmw/ad",
    "ino": 1102082227501,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging/src/main/scala/com/bmw",
    "ino": 1102082227500,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging/src/main/scala/com",
    "ino": 1102082227499,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging/src/main/scala",
    "ino": 1102082227498,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging/src/main",
    "ino": 1102082227497,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging/src/test/scala/com/bmw/ad/common",
    "ino": 1102082227510,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging/src/test/scala/com/bmw/ad",
    "ino": 1102082227509,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging/src/test/scala/com/bmw",
    "ino": 1102082227508,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging/src/test/scala/com",
    "ino": 1102082227507,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging/src/test/scala",
    "ino": 1102082227506,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging/src/test",
    "ino": 1102082227505,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging/src",
    "ino": 1102082227496,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging",
    "ino": 1102082227495,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging-java/src/main/java/com/bmw/ad/common",
    "ino": 1102082227523,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging-java/src/main/java/com/bmw/ad",
    "ino": 1102082227522,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging-java/src/main/java/com/bmw",
    "ino": 1102082227521,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging-java/src/main/java/com",
    "ino": 1102082227520,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging-java/src/main/java",
    "ino": 1102082227519,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging-java/src/main",
    "ino": 1102082227518,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging-java/src/test/java/com/bmw/ad/common",
    "ino": 1102082227531,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging-java/src/test/java/com/bmw/ad",
    "ino": 1102082227530,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging-java/src/test/java/com/bmw",
    "ino": 1102082227529,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging-java/src/test/java/com",
    "ino": 1102082227528,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging-java/src/test/java",
    "ino": 1102082227527,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging-java/src/test",
    "ino": 1102082227526,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging-java/src",
    "ino": 1102082227517,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-logging-java",
    "ino": 1102082227516,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/main/assembly",
    "ino": 1102082227539,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/main/scala/org/apache/spark/streaming",
    "ino": 1102082227545,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/main/scala/org/apache/spark",
    "ino": 1102082227544,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/main/scala/org/apache",
    "ino": 1102082227543,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/main/scala/org",
    "ino": 1102082227542,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/main/scala/com/bmw/ad/common/util",
    "ino": 1102082227551,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/main/scala/com/bmw/ad/common/exceptions",
    "ino": 1102082227557,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/main/scala/com/bmw/ad/common/spark/join",
    "ino": 1102082227561,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/main/scala/com/bmw/ad/common/spark",
    "ino": 1102082227559,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/main/scala/com/bmw/ad/common/kafka",
    "ino": 1102082227563,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/main/scala/com/bmw/ad/common/configuration",
    "ino": 1102082227566,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/main/scala/com/bmw/ad/common",
    "ino": 1102082227550,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/main/scala/com/bmw/ad",
    "ino": 1102082227549,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/main/scala/com/bmw",
    "ino": 1102082227548,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/main/scala/com",
    "ino": 1102082227547,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/main/scala",
    "ino": 1102082227541,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/main",
    "ino": 1102082227538,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/test/resources",
    "ino": 1102082227569,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/test/scala/com/bmw/ad/common/util",
    "ino": 1102082227577,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/test/scala/com/bmw/ad/common/spark/join",
    "ino": 1102082227582,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/test/scala/com/bmw/ad/common/spark",
    "ino": 1102082227581,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/test/scala/com/bmw/ad/common/kafka",
    "ino": 1102082227584,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/test/scala/com/bmw/ad/common/configuration",
    "ino": 1102082227588,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/test/scala/com/bmw/ad/common",
    "ino": 1102082227576,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/test/scala/com/bmw/ad",
    "ino": 1102082227575,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/test/scala/com/bmw",
    "ino": 1102082227574,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/test/scala/com",
    "ino": 1102082227573,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/test/scala",
    "ino": 1102082227572,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src/test",
    "ino": 1102082227568,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/src",
    "ino": 1102082227537,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils/bin",
    "ino": 1102082227593,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-utils",
    "ino": 1102082227536,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-jetty-module/src/main/java/com/bmw/ad/server/jetty",
    "ino": 1102082227603,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-jetty-module/src/main/java/com/bmw/ad/server",
    "ino": 1102082227602,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-jetty-module/src/main/java/com/bmw/ad",
    "ino": 1102082227601,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-jetty-module/src/main/java/com/bmw",
    "ino": 1102082227600,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-jetty-module/src/main/java/com",
    "ino": 1102082227599,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-jetty-module/src/main/java",
    "ino": 1102082227598,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/dxc_dust/jobs/E2E Nightly builds retry test/workspace@script/common/common-jetty-module/src/main",
    "ino": 1102082227597,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/814/workflow",
    "ino": 1101982674285,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/814",
    "ino": 1101982674280,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/4/libs/pdxc-snow-integration/src/io/dxc/devops",
    "ino": 1102074534338,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/4/libs/pdxc-snow-integration/src/io/dxc",
    "ino": 1102074534337,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/4/libs/pdxc-snow-integration/src/io",
    "ino": 1102074534336,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/4/libs/pdxc-snow-integration/src",
    "ino": 1102074534335,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/4/libs/pdxc-snow-integration",
    "ino": 1102074534334,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/4/libs",
    "ino": 1102074534333,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/4",
    "ino": 1102074534328,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/724",
    "ino": 1102074534340,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/112/workflow",
    "ino": 1102074534353,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/112/libs/pdxc-snow-integration/src/io/dxc/devops",
    "ino": 1102075291947,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/112/libs/pdxc-snow-integration/src/io/dxc",
    "ino": 1102075291946,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/112/libs/pdxc-snow-integration/src/io",
    "ino": 1102075291945,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/112/libs/pdxc-snow-integration/src",
    "ino": 1102075291944,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/112/libs/pdxc-snow-integration",
    "ino": 1102075291943,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/112/libs",
    "ino": 1102075291942,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/112",
    "ino": 1102074534344,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/666/workflow",
    "ino": 1102075291955,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/666",
    "ino": 1102075291949,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/295/workflow",
    "ino": 1102075299032,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/295",
    "ino": 1102075299026,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/800/workflow",
    "ino": 1102075306552,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/800",
    "ino": 1102075306546,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/348/workflow",
    "ino": 1102075317067,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/348",
    "ino": 1102075317061,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/846/workflow",
    "ino": 1102075325085,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/846",
    "ino": 1102075325079,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/381/workflow",
    "ino": 1102075333099,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/381",
    "ino": 1102075333093,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/638/workflow",
    "ino": 1102075339568,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/638",
    "ino": 1102075339562,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/64/workflow",
    "ino": 1102075351200,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/64/libs/pdxc-snow-integration/src/io/dxc/devops",
    "ino": 1102075353344,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/64/libs/pdxc-snow-integration/src/io/dxc",
    "ino": 1102075353343,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/64/libs/pdxc-snow-integration/src/io",
    "ino": 1102075353342,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/64/libs/pdxc-snow-integration/src",
    "ino": 1102075353341,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/64/libs/pdxc-snow-integration",
    "ino": 1102075353340,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/64/libs",
    "ino": 1102075353339,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/64",
    "ino": 1102075351193,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/42/workflow",
    "ino": 1102075353352,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/42",
    "ino": 1102075353346,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/842/workflow",
    "ino": 1102075353401,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/842",
    "ino": 1102075353395,
--
    "path": "/volumes/csi/csi-vol-0c4591c6-fb9c-11ed-ad6f-0a580a81060d/241b4be6-ea11-401b-ba03-6a6566972abd/jobs/application/jobs/smoke-test/jobs/isolated-data-pipeline-e2e-test-stage-nightly/builds/688/workflow",
    "ino": 1102075361609,
~~~
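
A quick way to tally the dumped paths by prefix and spot clustering (a hedged sketch over the 0090-inode_dump.out file above):
~~~
grep '"path"' 0090-inode_dump.out \
    | cut -d'"' -f4 \
    | awk -F/ '{ print $2 "/" $3 "/" $4 "/" $5 }' \
    | sort | uniq -c | sort -rn | head
~~~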

Best regards,
Manny

--- Additional comment from Patrick Donnelly on 2023-09-21 01:40:10 UTC ---

So these are all paths referring to living files in some CSI subvolume. I see

> debug 2023-09-19T14:14:41.204+0000 7f9b7c4fd700 20 client.191198227 trim_caps counting as trimmed: 0x10093492d6d.head(faked_ino=0 nref=3 ll_ref=0 cap_refs={} open={} mode=42755 size=0/0 nlink=1 btime=2023-08-28T13:18:36.951355+0000 mtime=2023-04-09T13:43:26.000000+0000 ctime=2023-09-12T17:45:11.493954+0000 change_attr=1653 caps=pAsLsXsFsx(0=pAsLsXsFsx) dirty_caps=Fx 0x55643ca61e00)

and all of these look to be directories (mode & 040000), have no cap_refs, no open handles, relatively old mtime, and dirty Fx caps.

I'm not yet sure what this indicates as far as the cause. I'll continue researching.
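
For the record, the directory check referenced above is just the S_IFDIR bit of the mode (a hedged illustration, not from the case data):
~~~
# 042755 (octal, as in the inode dumps above) has the directory bit 040000 set
printf '%o\n' $(( 042755 & 040000 ))     # prints 40000
~~~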

--- Additional comment from Geo Jose on 2023-09-21 07:31:32 UTC ---

Hello Patrick/Venky,

As per case #03616273 / BZ 2239539, we have instructed the customer not to restart the mgr while we are debugging this issue; we would like to leave the issue intact until we find the root cause.
The customer has a planned upgrade activity today (Sept 21, Thu 11:00-18:00 CET) from RHCS 5.3z4 to RHCS 5.3z5 + hotfix. The hotfix contains fixes for a CephFS issue, BZ 2235338 (CephFS blocked requests with warning "1 MDSs behind on trimming"), and for a Dashboard issue, BZ 2237391 (while evicting one client via the Ceph dashboard, it evicts all other client mounts of the Ceph filesystem).
Is it okay if they proceed with the upgrade, given that the upgrade may trigger mgr restarts?

Regards,
Geo Jose

--- Additional comment from Venky Shankar on 2023-09-21 09:37:07 UTC ---

(In reply to Geo Jose from comment #7)
> Hello Patrick/Venky,
> 
> As per the case#03616273 #BZ2239539 we have instructed customer not to
> restart mgr as we are debugging this issue and would like to leave the issue
> intact till we find the root cause.
> Customer has a planned upgrade activity today (Sept 21 Thu 11:00-18:00 CET)
> for RHCS 5.3z4 to RHCS 5.3z5+hotfix, this hotfix contains fixes for cephfs
> issue - BZ2235338 (CephFS blocked requests with warning 1 MDSs behind on
> trimming) and for the Dashboard issue BZ2237391 (While evicting one client
> via ceph dashboard, it evicts all other client mounts of the ceph filesystem)
> Is it okay if they proceed with upgrade as upgrade may trigger mgr restarts ?

We kind of know the issue the customer is running into that's causing these warnings to show up. The fix wasn't backported since it was mostly a performance enhancement for a specific multi-MDS case [1], but the change fixes the warning.

As far as the upgrade is concerned, I think the customer should continue with the upgrade, since the hotfix fixes a deadlock which is far more problematic than this warning.

[1]: https://tracker.ceph.com/issues/44916

--- Additional comment from Geo Jose on 2023-09-21 09:59:18 UTC ---

Hello,

Thanks for your quick response. I will ask the customer to proceed with the hotfix upgrade.

Regards,
Geo Jose

--- Additional comment from Patrick Donnelly on 2023-09-21 12:16:50 UTC ---

The suspected fix is https://github.com/ceph/ceph/commit/d4c175cd5e0f97ca75ee9e62592ac41e3da9c805

Yes, please proceed with the upgrade.

--- Additional comment from Patrick Donnelly on 2023-09-21 12:55:25 UTC ---

Also: https://github.com/ceph/ceph/pull/53571

which doesn't yet have a tracker ticket.

Comment 4 Venky Shankar 2023-12-18 10:12:06 UTC

*** This bug has been marked as a duplicate of bug 2249571 ***

