Bug 2236385
| Summary: | The command `ceph mds metadata` doesn't list information for the active MDS server | ||
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Milind Changire <mchangir> |
| Component: | CephFS | Assignee: | Patrick Donnelly <pdonnell> |
| Status: | CLOSED ERRATA | QA Contact: | Hemanth Kumar <hyelloji> |
| Severity: | low | Docs Contact: | Akash Raj <akraj> |
| Priority: | unspecified | ||
| Version: | 5.2 | CC: | akraj, amk, ceph-eng-bugs, cephqe-warriors, gfarnum, hyelloji, lithomas, mchangir, nravinas, pdonnell, tserlin, vereddy, vshankar |
| Target Milestone: | --- | ||
| Target Release: | 6.1z2 | ||
| Hardware: | Unspecified | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | ceph-17.2.6-128.el9cp | Doc Type: | Bug Fix |
| Doc Text: |
.`ceph mds metadata` command now functions as expected across upgrades
Previously, monitors could lose track of MDS metadata during upgrades and cancel the PAXOS transactions, causing the MDS metadata to be unavailable.
With this fix, MDS metadata is added in batches with FSMap changes to ensure consistency. The `ceph mds metadata` command now functions as expected across upgrades.
| Story Points: | --- |
| Clone Of: | 2144472 | Environment: | |
| Last Closed: | 2023-10-12 16:34:37 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2235257 | | |
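As a usage note on the behaviour described in the Doc Text above: metadata can be requested for all MDS daemons at once or for a single daemon by name, and before this fix the entry for an active MDS could be missing after an upgrade. A minimal sketch follows (the daemon name is taken from the verification output in comment 1, purely as an example):

    # Dump metadata for every MDS daemon known to the monitors
    ceph mds metadata

    # Query one daemon by name (here the rank-0 active MDS from comment 1)
    ceph mds metadata cephfs.ceph-fs-6-bz-gebvp1-node4.dipxkm

    # Machine-readable output, useful for scripted post-upgrade checks
    ceph mds metadata --format json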
Comment 1
Venky Shankar
2023-09-05 09:08:05 UTC
Hi All,
We are observing metadata details for all the MDS servers:
[root@ceph-fs-6-bz-gebvp1-node7 ~]# ceph versions
{
    "mon": {
        "ceph version 17.2.6-136.el9cp (24003d91a44631e46f9397ab1b9c5b77dc9223bc) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.6-136.el9cp (24003d91a44631e46f9397ab1b9c5b77dc9223bc) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.6-136.el9cp (24003d91a44631e46f9397ab1b9c5b77dc9223bc) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.6-136.el9cp (24003d91a44631e46f9397ab1b9c5b77dc9223bc) quincy (stable)": 3
    },
    "overall": {
        "ceph version 17.2.6-136.el9cp (24003d91a44631e46f9397ab1b9c5b77dc9223bc) quincy (stable)": 20
    }
}
[root@ceph-fs-6-bz-gebvp1-node7 ~]# ceph mds metadata
[
    {
        "name": "cephfs.ceph-fs-6-bz-gebvp1-node6.kgmbmg",
        "addr": "[v2:10.0.210.120:6800/1495934188,v1:10.0.210.120:6801/1495934188]",
        "arch": "x86_64",
        "ceph_release": "quincy",
        "ceph_version": "ceph version 17.2.6-136.el9cp (24003d91a44631e46f9397ab1b9c5b77dc9223bc) quincy (stable)",
        "ceph_version_short": "17.2.6-136.el9cp",
        "container_hostname": "ceph-fs-6-bz-gebvp1-node6",
        "container_image": "registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:a59c0b2dfbe038c85ad5c8358fc3016f84da7670874d0e9a3b395056fabb8c5f",
        "cpu": "Intel Xeon Processor (Cascadelake)",
        "distro": "rhel",
        "distro_description": "Red Hat Enterprise Linux 9.2 (Plow)",
        "distro_version": "9.2",
        "hostname": "ceph-fs-6-bz-gebvp1-node6",
        "kernel_description": "#1 SMP PREEMPT_DYNAMIC Thu Jul 20 09:11:28 EDT 2023",
        "kernel_version": "5.14.0-284.25.1.el9_2.x86_64",
        "mem_swap_kb": "0",
        "mem_total_kb": "3750064",
        "os": "Linux"
    },
    {
        "name": "cephfs.ceph-fs-6-bz-gebvp1-node4.dipxkm",
        "addr": "[v2:10.0.209.2:6800/4207929190,v1:10.0.209.2:6801/4207929190]",
        "arch": "x86_64",
        "ceph_release": "quincy",
        "ceph_version": "ceph version 17.2.6-136.el9cp (24003d91a44631e46f9397ab1b9c5b77dc9223bc) quincy (stable)",
        "ceph_version_short": "17.2.6-136.el9cp",
        "container_hostname": "ceph-fs-6-bz-gebvp1-node4",
        "container_image": "registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:a59c0b2dfbe038c85ad5c8358fc3016f84da7670874d0e9a3b395056fabb8c5f",
        "cpu": "Intel Xeon Processor (Icelake)",
        "distro": "rhel",
        "distro_description": "Red Hat Enterprise Linux 9.2 (Plow)",
        "distro_version": "9.2",
        "hostname": "ceph-fs-6-bz-gebvp1-node4",
        "kernel_description": "#1 SMP PREEMPT_DYNAMIC Thu Jul 20 09:11:28 EDT 2023",
        "kernel_version": "5.14.0-284.25.1.el9_2.x86_64",
        "mem_swap_kb": "0",
        "mem_total_kb": "3748036",
        "os": "Linux"
    },
    {
        "name": "cephfs.ceph-fs-6-bz-gebvp1-node5.kbimfq",
        "addr": "[v2:10.0.208.24:6832/3072745219,v1:10.0.208.24:6833/3072745219]",
        "arch": "x86_64",
        "ceph_release": "quincy",
        "ceph_version": "ceph version 17.2.6-136.el9cp (24003d91a44631e46f9397ab1b9c5b77dc9223bc) quincy (stable)",
        "ceph_version_short": "17.2.6-136.el9cp",
        "container_hostname": "ceph-fs-6-bz-gebvp1-node5",
        "container_image": "registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:a59c0b2dfbe038c85ad5c8358fc3016f84da7670874d0e9a3b395056fabb8c5f",
        "cpu": "Intel Xeon Processor (Cascadelake)",
        "distro": "rhel",
        "distro_description": "Red Hat Enterprise Linux 9.2 (Plow)",
        "distro_version": "9.2",
        "hostname": "ceph-fs-6-bz-gebvp1-node5",
        "kernel_description": "#1 SMP PREEMPT_DYNAMIC Thu Jul 20 09:11:28 EDT 2023",
        "kernel_version": "5.14.0-284.25.1.el9_2.x86_64",
        "mem_swap_kb": "0",
        "mem_total_kb": "3750056",
        "os": "Linux"
    }
]
[root@ceph-fs-6-bz-gebvp1-node7 ~]#
[root@ceph-fs-6-bz-gebvp1-node7 ~]# ceph fs status
cephfs - 2 clients
======
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active cephfs.ceph-fs-6-bz-gebvp1-node4.dipxkm Reqs: 0 /s 46.2k 46.2k 7204 34.0k
1 active cephfs.ceph-fs-6-bz-gebvp1-node6.kgmbmg Reqs: 0 /s 14.2k 14.2k 3222 14.1k
POOL TYPE USED AVAIL
cephfs.cephfs.meta metadata 1160M 54.0G
cephfs.cephfs.data data 3586M 54.0G
STANDBY MDS
cephfs.ceph-fs-6-bz-gebvp1-node5.kbimfq
MDS version: ceph version 17.2.6-136.el9cp (24003d91a44631e46f9397ab1b9c5b77dc9223bc) quincy (stable)
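A quick cross-check of the output above (a minimal sketch, assuming jq is available on the admin node; the field names follow the JSON shown in this comment): every daemon listed by `ceph fs status`, including the active ranks, should also appear in `ceph mds metadata`, and the entry count should match the MDS count from `ceph versions` (3 here).

    # Names of all MDS daemons that currently report metadata
    ceph mds metadata --format json | jq -r '.[].name' | sort

    # Number of metadata entries; expected to equal the "mds" count in ceph versions
    ceph mds metadata --format json | jq 'length'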
Regards,
Amarnath
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security, enhancement, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2023:5693