
Bug 2111539

Summary: [cee/sd][ceph-mgr] While setting host into maintenance, getting a traceback "raise OrchestratorError(msg, errno=rc)"
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Lijo Stephen Thomas <lithomas>
Component: Cephadm
Assignee: Adam King <adking>
Status: CLOSED ERRATA
QA Contact: Sayalee <saraut>
Severity: medium
Docs Contact: Rivka Pollack <rpollack>
Priority: unspecified
Version: 5.1
CC: adking, cephqe-warriors, gjose, kdreyer, mmuench, rpollack, saraut, vumrao
Target Milestone: ---
Target Release: 7.0
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: ceph-18.2.0-1
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-12-13 15:19:09 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 2237662

Description Lijo Stephen Thomas 2022-07-27 13:14:33 UTC
Description of problem:
-----------------------
While setting a host into maintenance mode, the following traceback is observed:
~~~
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 125, in wrapper
    return OrchResult(f(*args, **kwargs))
  File "/usr/share/ceph/mgr/cephadm/module.py", line 1749, in host_ok_to_stop
    raise OrchestratorError(msg, errno=rc)
orchestrator._interface.OrchestratorError: unsafe to stop osd(s) at this time (21 PGs are or would become offline)
2022-07-27T12:31:42.451729+0000 mgr.ceph101.zcvrte (mgr.1244104) 77504 : cephadm [ERR] unsafe to stop osd(s) at this time (21 PGs are or would become offline)
Note: Warnings can be bypassed with the --force flag
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 125, in wrapper
    return OrchResult(f(*args, **kwargs))
  File "/usr/share/ceph/mgr/cephadm/module.py", line 130, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/ceph/mgr/cephadm/module.py", line 1799, in enter_host_maintenance
    msg + '\nNote: Warnings can be bypassed with the --force flag', errno=rc)
orchestrator._interface.OrchestratorError: unsafe to stop osd(s) at this time (21 PGs are or would become offline)
Note: Warnings can be bypassed with the --force flag
2022-07-27T12:31:48.913602+0000 mgr.ceph101.zcvrte (mgr.1244104) 77510 : cephadm [ERR] unsafe to stop osd(s) at this time (21 PGs are or would become offline)
Note: Warnings can be bypassed with the --force flag
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 125, in wrapper
    return OrchResult(f(*args, **kwargs))
  File "/usr/share/ceph/mgr/cephadm/module.py", line 130, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/ceph/mgr/cephadm/module.py", line 1799, in enter_host_maintenance
    msg + '\nNote: Warnings can be bypassed with the --force flag', errno=rc)
orchestrator._interface.OrchestratorError: unsafe to stop osd(s) at this time (21 PGs are or would become offline)
~~~

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHCS 5.1z2 - 16.2.7-126.el8cp

How reproducible:
Every time

Steps to Reproduce:
1. Install RHCS 5.1z2 with three OSD nodes and the failure domain set at the host level.
2. Create a replicated pool with size/min_size set to 2/2.
3. Set any one OSD host into maintenance mode, then check the cephadm/mgr logs (a rough CLI sketch of these steps is shown below).
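
For reference, a minimal repro sketch that drives the ceph CLI from Python. The pool name, pg_num, and host name below are placeholders (assumptions), not values taken from the original report:

~~~
# Hedged repro sketch: shells out to the ceph CLI via subprocess.
# "testpool", pg_num 32, and "ceph-node-1" are placeholder values.
import subprocess

def ceph(*args):
    """Run a ceph CLI command and return the completed process."""
    return subprocess.run(["ceph", *args], capture_output=True, text=True)

# Step 2: a replicated pool whose PGs cannot lose a replica (size/min_size 2/2).
ceph("osd", "pool", "create", "testpool", "32")
ceph("osd", "pool", "set", "testpool", "size", "2")
ceph("osd", "pool", "set", "testpool", "min_size", "2")

# Step 3: put one OSD host into maintenance; on the affected build this is
# where the OrchestratorError traceback appears in the cephadm/mgr log.
result = ceph("orch", "host", "maintenance", "enter", "ceph-node-1")
print(result.stdout or result.stderr)
~~~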

Actual results:
---------------
The traceback shown above is raised and logged by the cephadm/mgr module.


Expected results:
-----------------
The error should be handled by the code (without any traceback).
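
For illustration only, a minimal sketch of how such an error could be handled, assuming the handling lives in the OrchResult wrapper shown at the top of the traceback. This is not the actual patch shipped in ceph-18.2.0-1, and the classes below are simplified stand-ins for the real ones in orchestrator/_interface.py:

~~~
# Simplified stand-ins; the real definitions live in orchestrator/_interface.py.
from functools import wraps

class OrchestratorError(Exception):
    def __init__(self, msg, errno=0):
        super().__init__(msg)
        self.errno = errno

class OrchResult:
    def __init__(self, result=None, exception=None):
        self.result = result
        self.exception = exception

def handle_orch_error(f):
    """Wrap an orchestrator call so expected failures become error results."""
    @wraps(f)
    def wrapper(*args, **kwargs):
        try:
            return OrchResult(f(*args, **kwargs))
        except OrchestratorError as e:
            # An expected precondition failure such as "unsafe to stop osd(s)"
            # is returned as an error result, so the CLI can print the message
            # (including the --force hint) without a stack trace landing in
            # the cephadm/mgr log.
            return OrchResult(None, exception=e)
    return wrapper
~~~

With handling along these lines, `ceph orch host maintenance enter` would still refuse to proceed with the same "unsafe to stop osd(s)" message, but the mgr log would carry only the error text rather than the traceback.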

Comment 20 errata-xmlrpc 2023-12-13 15:19:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780

Comment 21 Red Hat Bugzilla 2024-04-12 04:25:09 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.