Bug 2346094
| Summary: | [SMB][CTDB]Cephadm Log Displays Warning: "fence_old_ranks: Unsupported" | ||
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Mohit Bisht <mobisht> |
| Component: | smb | Assignee: | avan <athakkar> |
| smb sub component: | ctdb | QA Contact: | Mohit Bisht <mobisht> |
| Status: | CLOSED CURRENTRELEASE | Docs Contact: | Rivka Pollack <rpollack> |
| Severity: | low | ||
| Priority: | unspecified | CC: | anoopcs, aramteke, cephqe-warriors, gdeschner, jmulligan, rpollack, sprabhu, tserlin, vdas |
| Version: | 8.0 | Keywords: | External |
| Target Milestone: | --- | ||
| Target Release: | 8.1 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | ceph-19.2.1-30.el9cp | Doc Type: | Bug Fix |
| Doc Text: |
.Out of date rank maps no longer cause warning messages for the Ceph cluster
Previously, out-of-date rank maps emitted the "fence_old_ranks: Unsupported." warning to the logs.
With this fix, the rank maps are updated to reflect the current state of the nodes in the Ceph cluster, and the warning message is no longer emitted to the logs.
|
Story Points: | --- |
| Clone Of: | Environment: | ||
| Last Closed: | 2025-07-23 15:51:16 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | |||
| Bug Blocks: | 2351689 | ||
Description of problem:
========================
[SMB][CTDB] Cephadm log displays warning: "fence_old_ranks: Unsupported"

After a successful Samba service deployment, the cephadm log continuously displays the warning "fence_old_ranks: Unsupported."

Note:
1. There are no functional issues observed.
2. The warning keeps appearing in the cephadm log.

```
# ceph smb cluster create smb1 user --define_user_pass user1%passwd --placement label:smb --clustering default --public_addrs 10.8.131.254/21
{
  "resource": {
    "resource_type": "ceph.smb.cluster",
    "cluster_id": "smb1",
    "auth_mode": "user",
    "intent": "present",
    "user_group_settings": [
      {
        "source_type": "resource",
        "ref": "smb1ajycysxa"
      }
    ],
    "placement": {
      "label": "smb"
    },
    "clustering": "default",
    "public_addrs": [
      {
        "address": "10.8.131.254/21"
      }
    ]
  },
  "state": "created",
  "additional_results": [
    {
      "resource": {
        "resource_type": "ceph.smb.usersgroups",
        "users_groups_id": "smb1ajycysxa",
        "intent": "present",
        "values": {
          "users": [
            {
              "name": "user1",
              "password": "passwd"
            }
          ],
          "groups": []
        },
        "linked_to_cluster": "smb1"
      },
      "state": "created",
      "success": true
    }
  ],
  "success": true
}

# ceph smb share create smb1 share1 cephfs / --subvolume smb/sv1
{
  "resource": {
    "resource_type": "ceph.smb.share",
    "cluster_id": "smb1",
    "share_id": "share1",
    "intent": "present",
    "name": "share1",
    "readonly": false,
    "browseable": true,
    "cephfs": {
      "volume": "cephfs",
      "path": "/",
      "subvolumegroup": "smb",
      "subvolume": "sv1",
      "provider": "samba-vfs"
    }
  },
  "state": "created",
  "success": true
}

# ceph orch ls | grep smb.smb1
smb.smb1    3/3  3s ago  26s  label:smb

# smbclient -U user1%passwd //10.8.131.254/share1 -c ls
  .                                   D        0  Thu Feb 13 19:44:56 2025
  ..                                  D        0  Thu Feb 13 19:44:56 2025

                4633575424 blocks of size 1024. 4633522176 blocks available

# ceph log last 100 cephadm
2025-02-17T10:56:01.755596+0000 mgr.argo012.dbhxzr (mgr.11009982) 322 : cephadm [WRN] fence_old_ranks: Unsupported {0: {0: 'smb1.0.0.argo012.napmty'}, 1: {0: 'smb1.1.0.argo013.zvchho'}, 2: {0: 'smb1.2.0.argo014.euxvgo'}} 3
2025-02-17T10:56:09.386343+0000 mgr.argo012.dbhxzr (mgr.11009982) 326 : cephadm [WRN] fence_old_ranks: Unsupported {0: {0: 'smb1.0.0.argo012.napmty'}, 1: {0: 'smb1.1.0.argo013.zvchho'}, 2: {0: 'smb1.2.0.argo014.euxvgo'}}
```

Version-Release number of selected component (if applicable):
==============================================================
19.2.0-53

How reproducible:
=================
Always

Steps to Reproduce:
====================
1. Deploy the smb service.
2. Check the cephadm logs: "ceph log last 100 cephadm"

Actual results:
===============
The cephadm log continuously displays the warning "fence_old_ranks: Unsupported."

Expected results:
==================
The cephadm log should not show any warning.
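For context on the fix described in the Doc Text: the warning prints cephadm's stored rank map, a mapping of rank to generation to daemon name, which had fallen out of date relative to the daemons actually deployed. The Python sketch below is a hypothetical illustration of pruning stale entries from such a map so it reflects only live daemons; `prune_stale_ranks` and the data shapes are assumptions modeled on the map printed in the warning, not the actual cephadm implementation.

```python
def prune_stale_ranks(rank_map, live_daemons):
    """Drop rank/generation entries whose daemon no longer exists.

    rank_map: {rank: {generation: daemon_name}} (shape taken from the
        warning message in this bug report)
    live_daemons: set of daemon names currently deployed
    Returns a new map containing only entries for live daemons.
    """
    pruned = {}
    for rank, generations in rank_map.items():
        kept = {gen: name for gen, name in generations.items()
                if name in live_daemons}
        if kept:  # drop ranks with no surviving daemon entirely
            pruned[rank] = kept
    return pruned


# Rank map as printed in the "fence_old_ranks: Unsupported" warning:
rank_map = {
    0: {0: 'smb1.0.0.argo012.napmty'},
    1: {0: 'smb1.1.0.argo013.zvchho'},
    2: {0: 'smb1.2.0.argo014.euxvgo'},
}
# Hypothetical scenario: the daemon at rank 2 was removed or redeployed,
# so only the first two daemons are still live.
live = {'smb1.0.0.argo012.napmty', 'smb1.1.0.argo013.zvchho'}
print(prune_stale_ranks(rank_map, live))
```

Updating the stored map this way, rather than leaving removed daemons in it, is the kind of reconciliation the Doc Text describes: once the map matches the cluster's current state, there is nothing stale left to warn about.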