Bug 2082530 - [DR] rbd-mirror pod does not restart even after enabling rbd_mirror_die_after_seconds option
Summary: [DR] rbd-mirror pod does not restart even after enabling rbd_mirror_die_after_seconds option
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ceph
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.12.0
Assignee: Ilya Dryomov
QA Contact: Aman Agrawal
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-05-06 10:58 UTC by Pratik Surve
Modified: 2023-08-09 16:37 UTC (History)
12 users

Fixed In Version: 4.11.0-69
Doc Type: No Doc Update
Doc Text:
Clone Of:
: 2083746 2086471
Environment:
Last Closed: 2023-01-31 00:19:21 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2023:0551 0 None None None 2023-01-31 00:19:30 UTC

Description Pratik Surve 2022-05-06 10:58:00 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
[DR] rbd-mirror pod does not restart even after enabling the rbd_mirror_die_after_seconds option


Version of all relevant components (if applicable):

OCP version:- 4.11.0-0.nightly-2022-05-05-015322
ODF version:- 4.11.0-63
CEPH version:- ceph version 16.2.7-109.el8cp (9f20d0292c5f7b341bc2bc580fb119416323fdc1) pacific (stable)

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
yes

Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
2

Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy the RDR cluster
2. Enable the rbd_mirror_die_after_seconds option via: ceph config set global rbd_mirror_die_after_seconds 3600
3. Check the rbd-mirror pod's last restart time (see the sketch after this list)
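
For reference, a minimal sketch for verifying the option took effect and watching for the expected restart. The namespace and label selector are assumptions based on a typical Rook/ODF deployment; adjust to your cluster:

# Confirm the option is set globally
ceph config dump | grep rbd_mirror_die_after_seconds

# Watch the rbd-mirror pod's RESTARTS column; it should increment roughly
# every 3600 seconds once the death timer fires
oc get pods -n openshift-storage -l app=rook-ceph-rbd-mirror -w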
 

Actual results:

1:- rook-ceph-rbd-mirror-a-6c86b84c75-v5s7f                           2/2     Running     1 (4h16m ago)   5h16m


2:- Output from rbd-mirror pod
debug 2022-05-06T06:28:01.468+0000 7f4c32607540  0 rbd::mirror::Mirror: 0x55f48e2598c0 init: rbd_mirror_die_after_seconds=3600, arming death timer
debug 2022-05-06T07:28:01.468+0000 7f4c1d1b8700 -1 rbd::mirror::Mirror: 0x55f48e2598c0 operator(): stopping due to rbd_mirror_die_after_seconds
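
The timestamps show the death timer arming at 06:28 and firing exactly 3600 seconds later at 07:28, yet the process keeps running instead of exiting. A quick way to confirm the daemon never exited (pod and container names are examples taken from the output above; this assumes ps is available in the image):

# An etime longer than rbd_mirror_die_after_seconds means the daemon did not die
oc -n openshift-storage exec rook-ceph-rbd-mirror-a-6c86b84c75-v5s7f \
    -c rbd-mirror -- ps -o pid,etime,comm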

Expected results:
The rbd-mirror pod should restart as per the design (the daemon exits once rbd_mirror_die_after_seconds elapses and the pod is restarted).

Additional info:
I see this issue only on the secondary site.

This causes the daemon state to go down, as the pool status transition below shows:

#C1:- bash-4.4$  rbd mirror pool status ocs-storagecluster-cephblockpool
health: WARNING
daemon health: OK
image health: WARNING
images: 108 total
    108 unknown

#C2:- bash-4.4$  rbd mirror pool status ocs-storagecluster-cephblockpool
health: ERROR
daemon health: WARNING
image health: ERROR
images: 108 total
    93 error
    5 stopping_replay
    10 stopped
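
For per-image detail on the error/stopped states above, the verbose form of the same command can be used (pool name as in the outputs above):

rbd mirror pool status --verbose ocs-storagecluster-cephblockpool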

Comment 31 errata-xmlrpc 2023-01-31 00:19:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.12.0 enhancement and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:0551

