Description of problem:
===============================
Currently, if the node hosting the noobaa-db pod is shut down, one has to explicitly force delete the noobaa-db pod, since it stays stuck in the Terminating state (as it has an RWO PVC mounted). Bug 1783961 was closed as WONTFIX in the OCS 4.2 timeframe; the stated reason relates to noobaa-db being managed by a statefulset:

NAME                           READY   AGE
statefulset.apps/noobaa-core   1/1     4d1h
statefulset.apps/noobaa-db     1/1     4d1h

But recently, on an ask from IBM, changes were made for automated force deletion of OSD and MON pods, which used to exhibit the same behavior when their hosting node was shut down. With the fixes for Bug 1830015 and Bug 1835908 (OCS 4.4.1 - Bug 1848184), the rook-ceph operator now force deletes an OSD or MON pod so that it can be rescheduled on another spare node.

>> Can a similar functionality or approach be adopted by the noobaa-operator (or
>> another operator) to handle the force deletion of the Terminating noobaa-db pod?

If this ask is invalid and cannot be resolved due to other constraints, please let me know. Otherwise, I am raising it as an RFE in case it can be achieved in a later release (not necessarily as a priority).

Reasons for ask:
----------------
1. Until one force deletes the pod or the shutdown node is powered back on, the noobaa-db pod stays in the Terminating state, and hence the noobaa DB is inaccessible (the manual workaround is shown below).
2. As a result, the noobaa-endpoint and other noobaa pods are also affected and keep cycling into CrashLoopBackOff.

Version-Release number of selected component (if applicable):
==========================================
Since OCS 4.2; the issue is documented in the Known Issues section of every release note.

How reproducible:
Always
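For reference, the current manual workaround is a force delete of the stuck pod. The pod name below is an example (statefulset pods are typically suffixed -0) and the namespace may differ per deployment:

  $ oc delete pod noobaa-db-0 -n openshift-storage --grace-period=0 --force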
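A minimal sketch of what an operator-side force delete could look like with client-go, following the rook-ceph approach for OSDs/MONs. This is an illustration only, not the actual noobaa-operator or rook code; the package, function name, and the stuck-pod detection condition are assumptions:

package forcedelete

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// forceDeletePodIfStuck force deletes a pod that is stuck in Terminating
// because its hosting node is down. On a powered-off node the kubelet can
// never confirm graceful termination, so the pod keeps its deletion
// timestamp indefinitely unless it is force deleted.
func forceDeletePodIfStuck(ctx context.Context, c kubernetes.Interface, pod *corev1.Pod) error {
	if pod.DeletionTimestamp == nil {
		// Not terminating; nothing to do.
		return nil
	}
	// Grace period 0 is the programmatic equivalent of
	// `oc delete pod --grace-period=0 --force`: the API server removes
	// the pod object immediately instead of waiting for the (dead)
	// kubelet, which lets the statefulset controller schedule the
	// replacement pod on a healthy node.
	zero := int64(0)
	if err := c.CoreV1().Pods(pod.Namespace).Delete(ctx, pod.Name, metav1.DeleteOptions{
		GracePeriodSeconds: &zero,
	}); err != nil {
		return fmt.Errorf("force deleting pod %s/%s: %v", pod.Namespace, pod.Name, err)
	}
	return nil
}

A real implementation would presumably first confirm that the hosting node is actually down (e.g. NotReady), as rook-ceph does, since force deleting a statefulset pod from a node that is merely network-partitioned risks two instances mounting the same RWO volume.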
Won't make it to 4.6, should push to 4.7
Following a triage with QE, moving to 4.7
*** Bug 1898969 has been marked as a duplicate of this bug. ***
*** Bug 1931940 has been marked as a duplicate of this bug. ***
*** Bug 1889616 has been marked as a duplicate of this bug. ***
*** Bug 1949727 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat OpenShift Data Foundation 4.9.0 enhancement, security, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:5086