Bug 2111539 - [cee/sd][ceph-mgr] While setting host into maintenance, getting a traceback "raise OrchestratorError(msg, errno=rc)"
Summary: [cee/sd][ceph-mgr] While setting host into maintenance, getting a traceback "raise OrchestratorError(msg, errno=rc)"
Keywords:
Status: POST
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 7.0
Assignee: Adam King
QA Contact: Manasa
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-07-27 13:14 UTC by Lijo Stephen Thomas
Modified: 2023-08-15 13:55 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:




Links
Red Hat Issue Tracker RHCEPH-4946 (last updated 2022-07-27 13:17:22 UTC)

Description Lijo Stephen Thomas 2022-07-27 13:14:33 UTC
Description of problem:
-----------------------
While setting a host into maintenance mode, the following traceback is seen in the cephadm/mgr log:
~~~
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 125, in wrapper
    return OrchResult(f(*args, **kwargs))
  File "/usr/share/ceph/mgr/cephadm/module.py", line 1749, in host_ok_to_stop
    raise OrchestratorError(msg, errno=rc)
orchestrator._interface.OrchestratorError: unsafe to stop osd(s) at this time (21 PGs are or would become offline)
2022-07-27T12:31:42.451729+0000 mgr.ceph101.zcvrte (mgr.1244104) 77504 : cephadm [ERR] unsafe to stop osd(s) at this time (21 PGs are or would become offline)
Note: Warnings can be bypassed with the --force flag
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 125, in wrapper
    return OrchResult(f(*args, **kwargs))
  File "/usr/share/ceph/mgr/cephadm/module.py", line 130, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/ceph/mgr/cephadm/module.py", line 1799, in enter_host_maintenance
    msg + '\nNote: Warnings can be bypassed with the --force flag', errno=rc)
orchestrator._interface.OrchestratorError: unsafe to stop osd(s) at this time (21 PGs are or would become offline)
Note: Warnings can be bypassed with the --force flag
2022-07-27T12:31:48.913602+0000 mgr.ceph101.zcvrte (mgr.1244104) 77510 : cephadm [ERR] unsafe to stop osd(s) at this time (21 PGs are or would become offline)
Note: Warnings can be bypassed with the --force flag
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 125, in wrapper
    return OrchResult(f(*args, **kwargs))
  File "/usr/share/ceph/mgr/cephadm/module.py", line 130, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/ceph/mgr/cephadm/module.py", line 1799, in enter_host_maintenance
    msg + '\nNote: Warnings can be bypassed with the --force flag', errno=rc)
orchestrator._interface.OrchestratorError: unsafe to stop osd(s) at this time (21 PGs are or would become offline)
~~~
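
The refusal itself is expected behaviour here: with only three OSD hosts and a pool at size/min_size 2/2, stopping all OSDs on one host would take PGs below min_size, so the ok-to-stop check correctly blocks the operation. The problem reported in this bug is only that the refusal surfaces as a raw Python traceback in the cephadm/mgr log instead of a clean error message. As the log message notes, the check can be bypassed when taking the host down is acceptable; a minimal sketch, using an example hostname taken from the log above:

~~~
# Bypass the ok-to-stop safety check when the operator accepts the PG availability impact.
# "ceph101" is an example hostname; substitute the host being placed into maintenance.
ceph orch host maintenance enter ceph101 --force
~~~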

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHCS 5.1z2 - 16.2.7-126.el8cp

How reproducible:
Every time

Steps to Reproduce:
1. Install RHCS 5.1z2 with 3 OSD nodes and the failure domain set at the host level.
2. Create a pool with size/min_size of 2/2.
3. Set any one OSD host into maintenance mode and check the cephadm/mgr logs (see the command sketch below).
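
A minimal command-line sketch of the reproduction steps, assuming an already-deployed 3-node cluster; the pool name, PG count, and hostname below are placeholders:

~~~
# Create a replicated pool and set replication to size/min_size 2/2
# ("testpool" and the PG count of 32 are placeholders).
ceph osd pool create testpool 32
ceph osd pool set testpool size 2
ceph osd pool set testpool min_size 2

# Put any one OSD host into maintenance mode ("ceph101" is a placeholder hostname).
ceph orch host maintenance enter ceph101

# Check the cephadm channel of the cluster log for the traceback.
ceph log last 50 debug cephadm
~~~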

Actual results:
---------------
The traceback shown above is logged in the cephadm/mgr log.


Expected results:
-----------------
The error should be handled by cephadm and reported as a clear message, without a traceback in the log.
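
Until the handling is fixed, the same safety check can be run explicitly before entering maintenance, so the refusal is reported up front rather than as a logged traceback; a hedged sketch, assuming the orch host ok-to-stop command is available in this release ("ceph101" is again a placeholder hostname):

~~~
# Check whether all daemons on the host can be stopped safely before entering maintenance.
ceph orch host ok-to-stop ceph101
~~~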

