Bug 2030540
| Summary: | mds: opening connection to up:replay/up:creating daemon causes message drop | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Venky Shankar <vshankar> |
| Component: | CephFS | Assignee: | Patrick Donnelly <pdonnell> |
| Status: | CLOSED ERRATA | QA Contact: | Amarnath <amk> |
| Severity: | medium | Docs Contact: | Akash Raj <akraj> |
| Priority: | medium | | |
| Version: | 5.0 | CC: | akraj, ceph-eng-bugs, gfarnum, hyelloji, kdreyer, pdonnell, tserlin, vereddy |
| Target Milestone: | --- | Keywords: | Rebase |
| Target Release: | 5.2 | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-16.2.8-2.el8cp | Doc Type: | Bug Fix |
Doc Text:

.Race condition no longer causes confusion among MDS in a cluster
Previously, a race condition in the MDS during messenger setup would confuse the other MDS daemons in the cluster, causing them to refuse communication with the affected daemon.
With this fix, the race condition is resolved and the MDS daemons establish communication successfully.
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-08-09 17:36:46 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2102272 | | |
Description
Venky Shankar
2021-12-09 05:07:18 UTC
Patrick, please create a MR.

Hi,

I have tested this with the below steps:

1. Created a filesystem with 2 active MDS and 1 standby.
2. Mounted it on a client and started IOs on it.
3. Initiated `ceph mds fail 0`.
4. The standby MDS became active, moving through the states replay -> resolve -> reconnect -> rejoin -> clientreplay -> active.
5. I did not observe any message drops in the MDS logs.

The full transcript follows; a scripted form of these steps is sketched after it.

[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph fs status
cephfs - 2 clients
======
RANK  STATE   MDS                                       ACTIVITY    DNS    INOS   DIRS  CAPS
0     active  cephfs.ceph-amk-bz-2-c61ql9-node4.mdufsx  Reqs: 0 /s  33.1k  33.2k  6293  533
1     active  cephfs.ceph-amk-bz-2-c61ql9-node6.twwsmw  Reqs: 0 /s  27.6k  27.6k  4452  561
POOL                TYPE      USED   AVAIL
cephfs.cephfs.meta  metadata  1193M  54.1G
cephfs.cephfs.data  data      3586M  54.1G
STANDBY MDS
cephfs.ceph-amk-bz-2-c61ql9-node5.tyjkke
MDS version: ceph version 16.2.8-42.el8cp (c15e56a8d2decae9230567653130d1e31a36fe0a) pacific (stable)

[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph config set mds mds_sleep_rank_change 10000000.0
[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph config set mds mds_connect_bootstrapping True
[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph -s
  cluster:
    id:     ef320eb6-eb1c-11ec-8277-fa163eb9a4e9
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph-amk-bz-2-c61ql9-node1-installer,ceph-amk-bz-2-c61ql9-node2,ceph-amk-bz-2-c61ql9-node3 (age 18h)
    mgr: ceph-amk-bz-2-c61ql9-node1-installer.txjreu(active, since 18h), standbys: ceph-amk-bz-2-c61ql9-node2.jbmoqy
    mds: 2/2 daemons up, 1 standby
    osd: 12 osds: 12 up (since 18h), 12 in (since 18h)
  data:
    volumes: 1/1 healthy
    pools:   4 pools, 97 pgs
    objects: 51.77k objects, 1.6 GiB
    usage:   6.6 GiB used, 173 GiB / 180 GiB avail
    pgs:     97 active+clean

[root@ceph-amk-bz-2-c61ql9-node7 ~]# ls -lrt /mnt/cephfs_fusebo39r14l2j
total 1024003
drwxr-xr-x. 2 root root          0 Jun 13 09:44 98nr4kvgtr
drwxr-xr-x. 3 root root   81920020 Jun 13 09:46 mt5vbcwd2j
drwxr-xr-x. 2 root root          0 Jun 13 09:47 vibx51oy92
drwxr-xr-x. 3 root root   81920020 Jun 13 09:47 fidsc7yzrm
-rw-r--r--. 1 root root 1048576000 Jun 13 09:47 ceph-amk-bz-2-c61ql9-node7.txt
drwxr-xr-x. 5 root root   40960010 Jun 13 09:49 dir
drwxr-xr-x. 2 root root          0 Jun 14 03:53 run_ios

[root@ceph-amk-bz-2-c61ql9-node7 ~]# df /mnt/cephfs_fusebo39r14l2j
Filesystem  1K-blocks     Used     Available  Use%  Mounted on
ceph-fuse   57700352      2043904  55656448   4%    /mnt/cephfs_fusebo39r14l2j
[root@ceph-amk-bz-2-c61ql9-node7 ~]# df /mnt/cephfs_fusebo39r14l2j
Filesystem  1K-blocks     Used     Available  Use%  Mounted on
ceph-fuse   57397248      2265088  55132160   4%    /mnt/cephfs_fusebo39r14l2j
[root@ceph-amk-bz-2-c61ql9-node7 ~]# df /mnt/cephfs_fusebo39r14l2j
Filesystem  1K-blocks     Used     Available  Use%  Mounted on
ceph-fuse   57372672      2809856  54562816   5%    /mnt/cephfs_fusebo39r14l2j

[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph fs status
cephfs - 2 clients
======
RANK  STATE   MDS                                       ACTIVITY    DNS    INOS   DIRS  CAPS
0     active  cephfs.ceph-amk-bz-2-c61ql9-node4.mdufsx  Reqs: 6 /s  33.2k  33.2k  6314  591
1     active  cephfs.ceph-amk-bz-2-c61ql9-node6.twwsmw  Reqs: 4 /s  27.6k  27.6k  4493  639
POOL                TYPE      USED   AVAIL
cephfs.cephfs.meta  metadata  1201M  53.5G
cephfs.cephfs.data  data      5442M  53.5G
STANDBY MDS
cephfs.ceph-amk-bz-2-c61ql9-node5.tyjkke
MDS version: ceph version 16.2.8-42.el8cp (c15e56a8d2decae9230567653130d1e31a36fe0a) pacific (stable)

[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph fail mds 0
no valid command found; 10 closest matches:
pg stat
pg getmap
pg dump [all|summary|sum|delta|pools|osds|pgs|pgs_brief...]
pg dump_json [all|summary|sum|pools|osds|pgs...]
pg dump_pools_json
pg ls-by-pool <poolstr> [<states>...]
pg ls-by-primary <id|osd.id> [<pool:int>] [<states>...]
pg ls-by-osd <id|osd.id> [<pool:int>] [<states>...]
pg ls [<pool:int>] [<states>...]
pg dump_stuck [inactive|unclean|stale|undersized|degraded...] [<threshold:int>]
Error EINVAL: invalid command
[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph mds fail 0
failed mds gid 14520

[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph fs status
cephfs - 2 clients
======
RANK  STATE   MDS                                       ACTIVITY    DNS  INOS  DIRS  CAPS
0     replay  cephfs.ceph-amk-bz-2-c61ql9-node5.tyjkke              0    0     0     0
1     active  cephfs.ceph-amk-bz-2-c61ql9-node6.twwsmw  Reqs: 0 /s  92   96    55    62
POOL                TYPE      USED   AVAIL
cephfs.cephfs.meta  metadata  1265M  52.9G
cephfs.cephfs.data  data      5986M  52.9G
STANDBY MDS
cephfs.ceph-amk-bz-2-c61ql9-node4.mdufsx
MDS version: ceph version 16.2.8-42.el8cp (c15e56a8d2decae9230567653130d1e31a36fe0a) pacific (stable)

[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph fs status
cephfs - 2 clients
======
RANK  STATE   MDS                                       ACTIVITY    DNS  INOS  DIRS  CAPS
0     replay  cephfs.ceph-amk-bz-2-c61ql9-node5.tyjkke              0    0     0     0
1     active  cephfs.ceph-amk-bz-2-c61ql9-node6.twwsmw  Reqs: 0 /s  92   96    55    62
POOL                TYPE      USED   AVAIL
cephfs.cephfs.meta  metadata  1265M  52.9G
cephfs.cephfs.data  data      5986M  52.9G
STANDBY MDS
cephfs.ceph-amk-bz-2-c61ql9-node4.mdufsx
MDS version: ceph version 16.2.8-42.el8cp (c15e56a8d2decae9230567653130d1e31a36fe0a) pacific (stable)

[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph fs status
cephfs - 3 clients
======
RANK  STATE   MDS                                       ACTIVITY    DNS  INOS  DIRS  CAPS
0     replay  cephfs.ceph-amk-bz-2-c61ql9-node5.tyjkke              0    0     0     0
1     active  cephfs.ceph-amk-bz-2-c61ql9-node6.twwsmw  Reqs: 0 /s  92   96    55    62
POOL                TYPE      USED   AVAIL
cephfs.cephfs.meta  metadata  1265M  52.9G
cephfs.cephfs.data  data      5986M  52.9G
STANDBY MDS
cephfs.ceph-amk-bz-2-c61ql9-node4.mdufsx
MDS version: ceph version 16.2.8-42.el8cp (c15e56a8d2decae9230567653130d1e31a36fe0a) pacific (stable)

[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph fs status
cephfs - 3 clients
======
RANK  STATE    MDS                                       ACTIVITY    DNS  INOS  DIRS  CAPS
0     resolve  cephfs.ceph-amk-bz-2-c61ql9-node5.tyjkke              0    0     0     0
1     active   cephfs.ceph-amk-bz-2-c61ql9-node6.twwsmw  Reqs: 0 /s  92   96    55    62
POOL                TYPE      USED   AVAIL
cephfs.cephfs.meta  metadata  1265M  52.9G
cephfs.cephfs.data  data      5986M  52.9G
STANDBY MDS
cephfs.ceph-amk-bz-2-c61ql9-node4.mdufsx
MDS version: ceph version 16.2.8-42.el8cp (c15e56a8d2decae9230567653130d1e31a36fe0a) pacific (stable)

[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph fs status
cephfs - 2 clients
======
RANK  STATE    MDS                                       ACTIVITY    DNS    INOS   DIRS   CAPS
0     resolve  cephfs.ceph-amk-bz-2-c61ql9-node5.tyjkke              60.7k  60.3k  10.2k  0
1     active   cephfs.ceph-amk-bz-2-c61ql9-node6.twwsmw  Reqs: 0 /s  92     96     55     62
POOL                TYPE      USED   AVAIL
cephfs.cephfs.meta  metadata  1265M  52.9G
cephfs.cephfs.data  data      5986M  52.9G
STANDBY MDS
cephfs.ceph-amk-bz-2-c61ql9-node4.mdufsx
MDS version: ceph version 16.2.8-42.el8cp (c15e56a8d2decae9230567653130d1e31a36fe0a) pacific (stable)

[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph fs status
cephfs - 2 clients
======
RANK  STATE      MDS                                       ACTIVITY    DNS    INOS   DIRS   CAPS
0     reconnect  cephfs.ceph-amk-bz-2-c61ql9-node5.tyjkke              60.7k  60.3k  10.2k  0
1     active     cephfs.ceph-amk-bz-2-c61ql9-node6.twwsmw  Reqs: 4 /s  92     96     55     62
POOL                TYPE      USED   AVAIL
cephfs.cephfs.meta  metadata  1265M  52.9G
cephfs.cephfs.data  data      5986M  52.9G
STANDBY MDS
cephfs.ceph-amk-bz-2-c61ql9-node4.mdufsx
MDS version: ceph version 16.2.8-42.el8cp (c15e56a8d2decae9230567653130d1e31a36fe0a) pacific (stable)

[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph fs status
cephfs - 2 clients
======
RANK  STATE      MDS                                       ACTIVITY    DNS    INOS   DIRS   CAPS
0     reconnect  cephfs.ceph-amk-bz-2-c61ql9-node5.tyjkke              60.7k  60.3k  10.2k  0
1     active     cephfs.ceph-amk-bz-2-c61ql9-node6.twwsmw  Reqs: 4 /s  92     96     55     59
POOL                TYPE      USED   AVAIL
cephfs.cephfs.meta  metadata  1265M  52.9G
cephfs.cephfs.data  data      5986M  52.9G
STANDBY MDS
cephfs.ceph-amk-bz-2-c61ql9-node4.mdufsx
MDS version: ceph version 16.2.8-42.el8cp (c15e56a8d2decae9230567653130d1e31a36fe0a) pacific (stable)

[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph fs status
cephfs - 2 clients
======
RANK  STATE   MDS                                       ACTIVITY    DNS    INOS   DIRS   CAPS
0     rejoin  cephfs.ceph-amk-bz-2-c61ql9-node5.tyjkke              60.7k  60.3k  10.2k  82
1     active  cephfs.ceph-amk-bz-2-c61ql9-node6.twwsmw  Reqs: 0 /s  92     96     55     59
POOL                TYPE      USED   AVAIL
cephfs.cephfs.meta  metadata  1265M  52.9G
cephfs.cephfs.data  data      5986M  52.9G
STANDBY MDS
cephfs.ceph-amk-bz-2-c61ql9-node4.mdufsx
MDS version: ceph version 16.2.8-42.el8cp (c15e56a8d2decae9230567653130d1e31a36fe0a) pacific (stable)

[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph fs status
cephfs - 2 clients
======
RANK  STATE         MDS                                       ACTIVITY    DNS    INOS   DIRS   CAPS
0     clientreplay  cephfs.ceph-amk-bz-2-c61ql9-node5.tyjkke              60.7k  60.3k  10.2k  82
1     active        cephfs.ceph-amk-bz-2-c61ql9-node6.twwsmw  Reqs: 0 /s  92     96     55     59
POOL                TYPE      USED   AVAIL
cephfs.cephfs.meta  metadata  1265M  52.9G
cephfs.cephfs.data  data      5986M  52.9G
STANDBY MDS
cephfs.ceph-amk-bz-2-c61ql9-node4.mdufsx
MDS version: ceph version 16.2.8-42.el8cp (c15e56a8d2decae9230567653130d1e31a36fe0a) pacific (stable)

[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph fs status
cephfs - 2 clients
======
RANK  STATE         MDS                                       ACTIVITY    DNS    INOS   DIRS   CAPS
0     clientreplay  cephfs.ceph-amk-bz-2-c61ql9-node5.tyjkke              60.7k  60.3k  10.2k  83
1     active        cephfs.ceph-amk-bz-2-c61ql9-node6.twwsmw  Reqs: 0 /s  92     96     55     59
POOL                TYPE      USED   AVAIL
cephfs.cephfs.meta  metadata  1265M  52.9G
cephfs.cephfs.data  data      5986M  52.9G
STANDBY MDS
cephfs.ceph-amk-bz-2-c61ql9-node4.mdufsx
MDS version: ceph version 16.2.8-42.el8cp (c15e56a8d2decae9230567653130d1e31a36fe0a) pacific (stable)

[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph fs status
cephfs - 2 clients
======
RANK  STATE   MDS                                       ACTIVITY    DNS    INOS   DIRS   CAPS
0     active  cephfs.ceph-amk-bz-2-c61ql9-node5.tyjkke  Reqs: 0 /s  60.7k  60.3k  10.2k  44
1     active  cephfs.ceph-amk-bz-2-c61ql9-node6.twwsmw  Reqs: 0 /s  92     96     55     59
POOL                TYPE      USED   AVAIL
cephfs.cephfs.meta  metadata  1266M  52.9G
cephfs.cephfs.data  data      5986M  52.9G
STANDBY MDS
cephfs.ceph-amk-bz-2-c61ql9-node4.mdufsx
MDS version: ceph version 16.2.8-42.el8cp (c15e56a8d2decae9230567653130d1e31a36fe0a) pacific (stable)

[root@ceph-amk-bz-2-c61ql9-node7 ~]# ceph orch host ls
HOST                                  ADDR          LABELS                   STATUS
ceph-amk-bz-2-c61ql9-node1-installer  10.0.208.208  _admin mgr installer mon
ceph-amk-bz-2-c61ql9-node2            10.0.211.204  mgr osd mon
ceph-amk-bz-2-c61ql9-node3            10.0.210.106  osd mon
ceph-amk-bz-2-c61ql9-node4            10.0.211.133  mds nfs
ceph-amk-bz-2-c61ql9-node5            10.0.211.244  osd mds
ceph-amk-bz-2-c61ql9-node6            10.0.211.227  mds nfs
6 hosts in cluster
[root@ceph-amk-bz-2-c61ql9-node7 ~]#

Regards,
Amarnath
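For reference, below is a minimal scripted sketch of the verification steps above. It is not part of the original report: it assumes an admin node with the `ceph` CLI, a filesystem named `cephfs`, and client I/O already running, and it reuses only commands that appear in the transcript (the two test-injection `mds` settings, `ceph mds fail 0`, and `ceph fs status` polling). Log inspection is left to the operator because log locations depend on the deployment.

```bash
#!/usr/bin/env bash
# Sketch only: drive the same failover as the transcript above and watch rank 0
# walk through replay -> resolve -> reconnect -> rejoin -> clientreplay -> active.
set -euo pipefail

FS=cephfs   # assumption: the filesystem under test is named "cephfs"

# Baseline state before the failover.
ceph fs status "$FS"
ceph -s

# Test-injection settings used in the transcript to widen the race window.
ceph config set mds mds_sleep_rank_change 10000000.0
ceph config set mds mds_connect_bootstrapping true

# Fail rank 0 so the standby takes over (note the order: "ceph mds fail", not "ceph fail mds").
ceph mds fail 0

# Poll until rank 0 reports active again, printing each intermediate state.
until ceph fs status "$FS" | grep -Eq '^[[:space:]]*0[[:space:]]+active'; do
    ceph fs status "$FS" | grep -E '^[[:space:]]*0[[:space:]]+' || true
    sleep 5
done
ceph fs status "$FS"

# Finally, inspect the MDS logs for dropped messages; how the logs are reached
# (cephadm, journald, or plain files) depends on the deployment, so it is not scripted here.
```

The two `mds_*` settings are test-only knobs and can be reverted afterwards with `ceph config rm mds mds_sleep_rank_change` and `ceph config rm mds mds_connect_bootstrapping`.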
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5997
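As a quick, generic check (not part of the original report), the running daemon versions can be listed to confirm whether a cluster already carries the fixed build noted above (ceph-16.2.8-2.el8cp or later):

```bash
# Summarize the version each running daemon reports; the MDS entries should be
# at or above the fixed build for this bug.
ceph versions

# Per-daemon view, e.g. to spot an MDS still running an older build mid-upgrade.
ceph orch ps | grep mds
```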