Bug 1674549 - [cee/sd][ceph-mgr] luminous: deadlock in standby ceph-mgr daemons
Summary: [cee/sd][ceph-mgr] luminous: deadlock in standby ceph-mgr daemons
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RADOS
Version: 3.2
Hardware: Unspecified
OS: Linux
Target Milestone: z2
Target Release: 3.2
Assignee: Brad Hubbard
QA Contact: Manohar Murthy
Docs Contact: Aron Gunn
Depends On:
Blocks: 1629656
Reported: 2019-02-11 15:40 UTC by Tomas Petr
Modified: 2019-04-30 19:12 UTC
CC List: 15 users

Fixed In Version: RHEL: ceph-12.2.8-113.el7cp Ubuntu: ceph_12.2.8-96redhat1xenial
Doc Type: Bug Fix
Doc Text:
.A race condition was causing threads to deadlock with the standby `ceph-mgr` daemon
Some threads could race when acquiring a local lock and the Python global interpreter lock (GIL), deadlocking each other: each thread held one of the two locks while trying to acquire the other, so neither could proceed. With this release, the code closes the window for the race by moving the lock acquisition and releasing the appropriate locks. As a result, the threads no longer deadlock and the standby daemon can make progress.
Clone Of:
Last Closed: 2019-04-30 15:56:46 UTC
Target Upstream Version:

Attachments

System ID Priority Status Summary Last Updated
Ceph Project Bug Tracker 35985 None None None 2019-02-11 15:40:21 UTC
Red Hat Product Errata RHSA-2019:0911 None None None 2019-04-30 15:57:00 UTC

Description Tomas Petr 2019-02-11 15:40:22 UTC
Description of problem:
From upstream tracker:
StandbyPyModule::get_config is using state.with_config without dropping the GIL around taking the lock.

The standby mgr process hangs and stops responding; it is removed from the mgrmap and does not take over the active role when the active mgr stops.

Without an MGR daemon, Ceph reports 0 available space, which impacts OSP spawning new instances, as the available space is checked during provisioning.
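The hang described above is a classic two-lock deadlock, which can be demonstrated with a small hypothetical sketch (not the ceph-mgr source): one thread holds lock A and waits for lock B while another holds B and waits for A. Here `lock_a` stands in for the GIL and `lock_b` for the standby module's local lock; the barrier guarantees both threads hold their first lock before trying for the second.

```python
import threading

lock_a = threading.Lock()   # stands in for the GIL
lock_b = threading.Lock()   # stands in for the module's local lock
barrier = threading.Barrier(2)

def t1():
    with lock_a:
        barrier.wait()      # both threads now hold their first lock
        with lock_b:        # blocks forever: t2 holds lock_b
            pass

def t2():
    with lock_b:
        barrier.wait()
        with lock_a:        # blocks forever: t1 holds lock_a
            pass

threads = [threading.Thread(target=f, daemon=True) for f in (t1, t2)]
for t in threads:
    t.start()
for t in threads:
    t.join(timeout=1)       # give up waiting after a second

deadlocked = any(t.is_alive() for t in threads)
print("deadlocked:", deadlocked)  # prints: deadlocked: True
```

Like the stuck standby mgr, the deadlocked threads stay alive but make no progress and log nothing, which is why the process appears running while being unresponsive.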

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

Actual results:
The standby mgr process stops responding, but the process is still running; no messages are logged.

Expected results:
The standby mgr process keeps responding; if the active mgr stops, one of the standby mgrs becomes active.

Additional info:

Comment 5 Brad Hubbard 2019-02-22 23:54:04 UTC
See the analysis in https://tracker.ceph.com/issues/35985

Comment 6 Brad Hubbard 2019-02-24 21:42:45 UTC

Comment 10 Brad Hubbard 2019-03-14 22:03:13 UTC
We are still waiting on thread dumps to confirm this issue is the same as https://tracker.ceph.com/issues/35985.

Comment 19 errata-xmlrpc 2019-04-30 15:56:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

