Description of problem:
Orchestrator operations can sometimes get stuck on nodes for various reasons. Example:

--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib64/python3.6/logging/__init__.py", line 998, in emit
    self.flush()
  File "/usr/lib64/python3.6/logging/__init__.py", line 978, in flush
    self.stream.flush()
OSError: [Errno 28] No space left on device

When this happens, Ceph does not report anything back to the user; the operation simply hangs with no notification, even in the DEBUG logs. This BZ is a downstream tracker for an ongoing effort to add timeouts, so users know the operation was actually attempted but timed out due to some underlying cause.
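For context, a minimal sketch of the general pattern such a fix follows (a watchdog timeout around a call that may hang, with an explicit error surfaced to the user). The run_host_command helper and the 300-second default here are illustrative assumptions, not the actual cephadm code:

import logging
import subprocess

logger = logging.getLogger("orchestrator")

# Hypothetical default; the real fix exposes a configurable timeout.
DEFAULT_TIMEOUT = 300  # seconds

def run_host_command(argv, timeout=DEFAULT_TIMEOUT):
    """Run a per-host command, surfacing a clear error instead of hanging forever."""
    try:
        return subprocess.run(argv, capture_output=True, check=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        # Tell the user the operation was attempted but timed out,
        # instead of leaving it stuck with no notification.
        logger.error("Command %r timed out after %s seconds", argv, timeout)
        raise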
*** Bug 2149606 has been marked as a duplicate of this bug. ***
*** Bug 2149564 has been marked as a duplicate of this bug. ***
*** Bug 2102485 has been marked as a duplicate of this bug. ***
*** Bug 2133406 has been marked as a duplicate of this bug. ***
*** Bug 2153709 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 6.1 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:4473