Description of problem:
-----------------------
While setting a host into maintenance mode, the following traceback is logged:

~~~
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 125, in wrapper
    return OrchResult(f(*args, **kwargs))
  File "/usr/share/ceph/mgr/cephadm/module.py", line 1749, in host_ok_to_stop
    raise OrchestratorError(msg, errno=rc)
orchestrator._interface.OrchestratorError: unsafe to stop osd(s) at this time (21 PGs are or would become offline)

2022-07-27T12:31:42.451729+0000 mgr.ceph101.zcvrte (mgr.1244104) 77504 : cephadm [ERR] unsafe to stop osd(s) at this time (21 PGs are or would become offline)
Note: Warnings can be bypassed with the --force flag

Traceback (most recent call last):
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 125, in wrapper
    return OrchResult(f(*args, **kwargs))
  File "/usr/share/ceph/mgr/cephadm/module.py", line 130, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/ceph/mgr/cephadm/module.py", line 1799, in enter_host_maintenance
    msg + '\nNote: Warnings can be bypassed with the --force flag', errno=rc)
orchestrator._interface.OrchestratorError: unsafe to stop osd(s) at this time (21 PGs are or would become offline)
Note: Warnings can be bypassed with the --force flag

2022-07-27T12:31:48.913602+0000 mgr.ceph101.zcvrte (mgr.1244104) 77510 : cephadm [ERR] unsafe to stop osd(s) at this time (21 PGs are or would become offline)
Note: Warnings can be bypassed with the --force flag

Traceback (most recent call last):
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 125, in wrapper
    return OrchResult(f(*args, **kwargs))
  File "/usr/share/ceph/mgr/cephadm/module.py", line 130, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/ceph/mgr/cephadm/module.py", line 1799, in enter_host_maintenance
    msg + '\nNote: Warnings can be bypassed with the --force flag', errno=rc)
orchestrator._interface.OrchestratorError: unsafe to stop osd(s) at this time (21 PGs are or would become offline)
~~~

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHCS 5.1z2 - 16.2.7-126.el8cp

How reproducible:
Every time

Steps to Reproduce:
1. Install RHCS 5.1z2 with 3 OSD nodes and the failure domain set at host level.
2. Create a pool with size/min_size of 2/2.
3. Set any one OSD host into maintenance mode, then check the cephadm/mgr logs.

Actual results:
---------------
The traceback above appears in the cephadm/mgr logs.

Expected results:
-----------------
The error should be handled by the code, without any traceback. With min_size equal to size, stopping any one of the three hosts would leave the affected PGs with fewer active replicas than min_size, so the orchestrator's refusal is expected behavior and should surface as a clean error message rather than an unhandled exception.
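The fix requested under "Expected results" amounts to catching OrchestratorError at the call site and reporting it as an ordinary error. Below is a minimal sketch of that idea, not the actual cephadm patch: the helper name enter_maintenance_safely is hypothetical, and the OrchestratorError class is a local stand-in for orchestrator._interface.OrchestratorError; only the exception name and message shape come from the traceback above.

~~~python
import logging

logger = logging.getLogger("cephadm")


class OrchestratorError(Exception):
    """Local stand-in for orchestrator._interface.OrchestratorError."""

    def __init__(self, msg: str, errno: int = 0) -> None:
        super().__init__(msg)
        self.errno = errno


def enter_maintenance_safely(enter_host_maintenance, hostname: str):
    """Call the maintenance entry point (hypothetical wrapper), turning an
    expected refusal such as "unsafe to stop osd(s) at this time" into a
    single clean log line instead of an unhandled traceback."""
    try:
        return enter_host_maintenance(hostname)
    except OrchestratorError as e:
        # Expected, user-facing condition: log it once, with no stack trace.
        logger.error("Cannot place host %s into maintenance: %s", hostname, e)
        return None
~~~

Handled along these lines, the "unsafe to stop osd(s)" refusal would still reach the operator (and could still be overridden with --force), but the mgr log would record a single [ERR] line instead of a Python traceback.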
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2023:7780
The needinfo request[s] on this closed bug have been removed, as they remained unresolved for 120 days.