Bug 1568179 - enabled manager modules are lost upon restart of the ceph-mgr container
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Container
Version: 3.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: 3.1
Assignee: Erwan Velu
QA Contact: Sidhant Agrawal
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2018-04-16 23:52 UTC by Paul Cuzner
Modified: 2018-09-26 19:17 UTC (History)
8 users

Fixed In Version: rhceph:ceph-3.1-rhel-7-containers-candidate-56869-20180521200710
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-26 19:16:42 UTC
Target Upstream Version:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph-container pull 1052 0 None closed mgr: remove the modules code 2021-02-03 15:55:12 UTC
Red Hat Product Errata RHBA-2018:2820 0 None None None 2018-09-26 19:17:25 UTC

Description Paul Cuzner 2018-04-16 23:52:19 UTC
Description of problem:
After deploying a container-based cluster with ceph-ansible, I enabled the prometheus manager module. However, if the container is restarted, the module is no longer enabled. This issue was reported upstream:
https://github.com/ceph/ceph-container/issues/928

This looks to be an issue with the start_mgr.sh script.
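For illustration only, a hedged sketch of this class of bug; the variable names and module list below are invented, not taken from the actual start_mgr.sh:

```shell
# Hypothetical sketch of the failure mode, NOT the real start_mgr.sh code:
# if a container entrypoint re-applies a fixed module list on every start,
# it clobbers modules an operator enabled at runtime.

default_modules="restful status"   # list baked into the startup script (assumed)
runtime_enabled="prometheus"       # module enabled later via `ceph mgr module enable`

start_mgr() {
  # On every container (re)start the script resets the enabled set to its
  # fixed default, discarding anything enabled at runtime.
  current="$default_modules"
}

current="$default_modules $runtime_enabled"  # cluster state before the restart
start_mgr                                    # restart re-runs the entrypoint
echo "$current"                              # prometheus is no longer in the list
```

This is consistent with the linked fix (ceph-container pull 1052, "mgr: remove the modules code"): removing module management from the entrypoint leaves the module list stored by the cluster untouched across restarts.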


Version-Release number of selected component (if applicable):
RHCS 3.0 and upstream ceph 12.2.4

How reproducible:
every time


Steps to Reproduce:
1. Create a containerized Ceph cluster
2. Enable the prometheus module: ceph mgr module enable prometheus
3. Restart the mgr container
4. List the enabled modules: ceph mgr module ls
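The steps above can be sketched as a shell transcript; it requires a running containerized cluster with an admin keyring, and the mgr container name here is a placeholder, not taken from this report:

```shell
# Requires a live containerized Ceph cluster; "ceph-mgr-$(hostname)" is an
# assumed container name -- substitute the actual mgr container on your host.
ceph mgr module enable prometheus     # step 2: enable the module
ceph mgr module ls                    # prometheus appears under enabled_modules

docker restart ceph-mgr-$(hostname)   # step 3: restart the mgr container

ceph mgr module ls                    # step 4: prometheus is missing again
```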

Actual results:
After the container restarts, the previously enabled modules are no longer listed

Expected results:
Any enabled module should persist across container restarts



Additional info:

Comment 3 Sébastien Han 2018-05-18 11:24:11 UTC
Erwan is going to take care of the backport downstream and trigger a new container build.

Comment 4 Erwan Velu 2018-05-18 14:42:48 UTC
I cannot push, as the push is rejected with:
remote: ***   Unapproved:
remote: ***     rhbz#1568179 (qa_ack+, ceph-3.y?, pm_ack+, devel_ack?)

Can someone approve it, please?

Comment 5 Erwan Velu 2018-05-18 14:43:35 UTC
I'm pushing on ceph-3.0-rhel-7

Comment 6 Erwan Velu 2018-05-18 15:03:57 UTC
Pushed on ceph-3.1-rhel-7

Comment 11 errata-xmlrpc 2018-09-26 19:16:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2820

