Bug 2132366 - Remove workaround to restart RBD mirror daemon every hour in the ceph cluster config
Summary: Remove workaround to restart RBD mirror daemon every hour in the ceph cluster config
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.12
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.12.0
Assignee: Mudit Agarwal
QA Contact: Pratik Surve
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-10-05 12:11 UTC by Shyamsundar
Modified: 2023-08-09 17:00 UTC
CC: 10 users

Fixed In Version: 4.12.0-74
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-02-08 14:06:28 UTC
Embargoed:


Attachments:


Links
- Github red-hat-storage/ocs-operator pull 1834 (open): remove rbd mirror daemon restart config. Last updated 2022-10-10 12:03:52 UTC
- Github red-hat-storage/ocs-operator pull 1835 (open): BUG 2132366: remove rbd mirror daemon restart config. Last updated 2022-10-10 13:56:29 UTC

Description Shyamsundar 2022-10-05 12:11:27 UTC
Description of problem (please be as detailed as possible and provide log snippets):

A workaround to restart the RBD mirror daemon was added to the ocs-operator code; more specifically, 'rbd_mirror_die_after_seconds' was set to 3600 seconds in the Ceph cluster config. This was done because the mirror daemon needed a periodic restart to clear its state and allow further mirroring, due to issues in the mirroring code.

These issues are now fixed in Ceph, so the workaround/setting needs to be removed from the product as well.

PR introducing the change: https://github.com/red-hat-storage/ocs-operator/pull/1740
BZ introducing the change: https://bugzilla.redhat.com/show_bug.cgi?id=2093266

Version of all relevant components (if applicable):
Needs to be fixed from 4.12 onward.

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?
No


Is there any workaround available to the best of your knowledge?
NA


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
NA

Is this issue reproducible?
NA

