Bug 1892173

Summary: Unable to retrieve the current connection scores via connection scores dump command
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Pawan <pdhiran>
Component: RADOS Assignee: Neha Ojha <nojha>
Status: CLOSED ERRATA QA Contact: Pawan <pdhiran>
Severity: high Docs Contact:
Priority: unspecified    
Version: 4.2 CC: akupczyk, bhubbard, ceph-eng-bugs, dzafman, kchai, nojha, rzarzyns, sseshasa, tserlin, vereddy, vumrao
Target Milestone: ---   
Target Release: 4.2   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: ceph-14.2.11-81.el8cp, ceph-14.2.11-81.el7cp Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2021-01-12 14:58:09 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Pawan 2020-10-28 05:31:27 UTC
Description of problem:
Executing the command ceph daemon mon.`hostname` connection scores dump does not return the current connection scores maintained by the monitor nodes. The command errors out with "no valid command found".

Also executed ceph daemon mon.`hostname` get_command_descriptions to list all the available admin socket commands, but the connection scores dump command was not present.
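
For reference, a quick way to confirm whether the command is registered on the monitor's admin socket (not part of the original report; this simply greps the raw get_command_descriptions output for the command string):

# ceph daemon mon.`hostname` get_command_descriptions | grep -i "connection scores"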

Version-Release number of selected component (if applicable):
ceph version 14.2.11-62.el7cp

How reproducible:
Always

Steps to Reproduce:
As per the documentation (https://docs.ceph.com/en/latest/rados/operations/change-mon-elections/), the monitors maintain connection scores even if they aren't in the connectivity election mode.
1. Deploy a 4.2 ceph cluster
2. Execute the command ceph daemon mon.{name} connection scores dump to get the connection scores maintained by each monitor node
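
For completeness, the election mode in use can be checked before running the dump (an optional step, not part of the original reproducer; it assumes the election_strategy field is present in the mon map dump of this backported build, and the set command is the one described in the documentation page above):

# ceph mon dump | grep election_strategy
# ceph mon set election_strategy connectivity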

Actual results:
Command not found error:

# ceph daemon mon.`hostname` connection scores dump
no valid command found; 10 closest matches:
git_version
get_command_descriptions
dump_historic_ops_by_duration
dump_historic_ops
dump_mempools
dump_historic_slow_ops
config set <var> <val> [<val>...]
config help {<var>}
config unset <var>
config show
admin_socket: invalid command


Expected results:
The connection scores should be displayed

Additional info:
Ran the scenario on ceph cluster :
# ceph versions
{
    "mon": {
        "ceph version 14.2.11-62.el7cp (e078eac465cf91c4a561c38e62321b756d5c213d) nautilus (stable)": 3
    },
    "mgr": {
        "ceph version 14.2.11-62.el7cp (e078eac465cf91c4a561c38e62321b756d5c213d) nautilus (stable)": 1
    },
    "osd": {
        "ceph version 14.2.11-62.el7cp (e078eac465cf91c4a561c38e62321b756d5c213d) nautilus (stable)": 17
    },
    "mds": {
        "ceph version 14.2.11-62.el7cp (e078eac465cf91c4a561c38e62321b756d5c213d) nautilus (stable)": 2
    },
    "rgw": {
        "ceph version 14.2.11-62.el7cp (e078eac465cf91c4a561c38e62321b756d5c213d) nautilus (stable)": 2
    },
    "rgw-nfs": {
        "ceph version 14.2.11-62.el7cp (e078eac465cf91c4a561c38e62321b756d5c213d) nautilus (stable)": 1
    },
    "overall": {
        "ceph version 14.2.11-62.el7cp (e078eac465cf91c4a561c38e62321b756d5c213d) nautilus (stable)": 26
    }
}

Comment 1 Pawan 2020-10-28 05:32:50 UTC
Changing the target release to 4.2.

Comment 2 Yaniv Kaul 2020-11-12 19:41:28 UTC
Is that even supposed to work on Nautilus?
https://docs.ceph.com/en/nautilus/rados/operations/change-mon-elections/ gives 404.

Comment 3 Neha Ojha 2020-11-12 19:56:41 UTC
(In reply to Yaniv Kaul from comment #2)
> Is that even supposed to work on Nautilus?

I think it should work in RHCS 4.2 unless Greg thinks otherwise.

> https://docs.ceph.com/en/nautilus/rados/operations/change-mon-elections/
> gives 404.

That's because the stretch cluster patches have not been merged in upstream nautilus. You can see it here https://github.com/ceph/ceph/pull/37173/files#diff-79982f24c7e6a2a7bb35efb9d37ec86c1de01d7fdaa6565b744363d17b3ae562R4

Comment 4 Pawan 2020-11-17 04:31:11 UTC
(In reply to Yaniv Kaul from comment #2)
> Is that even supposed to work on Nautilus?
> https://docs.ceph.com/en/nautilus/rados/operations/change-mon-elections/
> gives 404.

Yes, I think it should work in Nautilus. Getting the scores is important, as they determine the leader mon of the cluster.
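
For context, the currently elected leader can be checked alongside the scores (a side note, not from the original comment; assumes jq is installed on the node):

# ceph quorum_status | jq -r .quorum_leader_name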

Comment 13 Pawan 2020-11-25 05:45:12 UTC
The scores are now being displayed via connection scores dump:
# ceph daemon mon.`hostname` connection scores dump
{
    "rank": 1,
    "epoch": 50,
    "version": 145179,
    "half_life": 43200,
    "persist_interval": 10,
    "reports": {
        "report": {
            "rank": -1,
            "epoch": 0,
            "version": 0,
            "peer_scores": {}
        },
        "report": {
            "rank": 0,
            "epoch": 50,
            "version": 145176,
            "peer_scores": {
                "peer": {
                    "peer_rank": 1,
                    "peer_score": 0.99930078301421887,
                    "peer_alive": true
                },
                "peer": {
                    "peer_rank": 2,
                    "peer_score": 0.99946124202110298,
                    "peer_alive": true
                }
            }
        },
        "report": {
            "rank": 1,
            "epoch": 50,
            "version": 145179,
            "peer_scores": {
                "peer": {
                    "peer_rank": 0,
                    "peer_score": 0.9994008327669448,
                    "peer_alive": true
                },
                "peer": {
                    "peer_rank": 1,
                    "peer_score": 1,
                    "peer_alive": true
                },
                "peer": {
                    "peer_rank": 2,
                    "peer_score": 0.99946122365040913,
                    "peer_alive": true
                }
            }
        },
        "report": {
            "rank": 2,
            "epoch": 50,
            "version": 145185,
            "peer_scores": {
                "peer": {
                    "peer_rank": 0,
                    "peer_score": 0.99928003179216351,
                    "peer_alive": true
                },
                "peer": {
                    "peer_rank": 1,
                    "peer_score": 0.99920069830486058,
                    "peer_alive": true
                }
            }
        }
    }
}

# ceph versions
{
    "mon": {
        "ceph version 14.2.11-82.el7cp (d85597cdd8076cf36951d04d95e49d139804b9c5) nautilus (stable)": 3
    },
    "mgr": {
        "ceph version 14.2.11-82.el7cp (d85597cdd8076cf36951d04d95e49d139804b9c5) nautilus (stable)": 1
    },
    "osd": {
        "ceph version 14.2.11-82.el7cp (d85597cdd8076cf36951d04d95e49d139804b9c5) nautilus (stable)": 14
    },
    "mds": {
        "ceph version 14.2.11-82.el7cp (d85597cdd8076cf36951d04d95e49d139804b9c5) nautilus (stable)": 2
    },
    "rgw": {
        "ceph version 14.2.11-82.el7cp (d85597cdd8076cf36951d04d95e49d139804b9c5) nautilus (stable)": 2
    },
    "overall": {
        "ceph version 14.2.11-82.el7cp (d85597cdd8076cf36951d04d95e49d139804b9c5) nautilus (stable)": 22
    }
}
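
As a quick sanity check on the dump above (not part of the original verification), the peer entries can be grepped to confirm that every reported peer is marked alive:

# ceph daemon mon.`hostname` connection scores dump | grep '"peer_alive"' | sort | uniq -c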

Comment 15 errata-xmlrpc 2021-01-12 14:58:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0081