Bug 1783223 - [ceph-ansible] : switch from rpm to containerized - default osd health check retry needs to be higher
Summary: [ceph-ansible] : switch from rpm to containerized - default osd health check retry needs to be higher
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.3
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 4.1
Assignee: Guillaume Abrioux
QA Contact: Vasishta
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-12-13 11:10 UTC by Vasishta
Modified: 2020-05-19 17:31 UTC
CC List: 9 users

Fixed In Version: ceph-ansible-4.0.15-1.el8, ceph-ansible-4.0.15-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-19 17:31:24 UTC
Embargoed:
hyelloji: needinfo-




Links
Github ceph/ceph-ansible pull 5013 (closed): switch_to_containers: increase health check values. Last Updated: 2020-05-13 20:20:44 UTC
Red Hat Product Errata RHSA-2020:2231. Last Updated: 2020-05-19 17:31:57 UTC

Description Vasishta 2019-12-13 11:10:35 UTC
Description of problem:
When converting a cluster from an RPM-based (non-containerized) deployment to a containerized one, the playbook exited without waiting long enough for the cluster to reach clean PGs.

Version-Release number of selected component (if applicable):
ceph-ansible-3.2.37-1.el7cp.noarch

How reproducible:
Always

Steps to Reproduce:
1. Configure 3.x non-containerized cluster 
2. Fill some data
3. run  switch-from-non-containerized-to-containerized-ceph-daemons.yml

Actual results:
The playbook fails because it waits for PGs to become clean for only 5 retries.
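
For context, a minimal sketch of the kind of wait-for-clean-PGs loop involved, assuming the health_osd_check_retries / health_osd_check_delay variable names used elsewhere in ceph-ansible (this is not the verbatim task from the switch playbook):

# Illustrative Ansible task only, not the verbatim ceph-ansible code.
# With retries at the old default of 5, a cluster that is still
# backfilling/recovering after the OSD restarts exhausts the loop and the
# playbook fails.
- name: waiting for clean pgs...
  command: "ceph --cluster {{ cluster }} -s --format json"
  register: ceph_status
  delegate_to: "{{ groups[mon_group_name][0] }}"
  retries: "{{ health_osd_check_retries | default(5) }}"
  delay: "{{ health_osd_check_delay | default(10) }}"
  until: >
    (ceph_status.stdout | from_json).pgmap.num_pgs ==
    ((ceph_status.stdout | from_json).pgmap.pgs_by_state
     | selectattr('state_name', 'equalto', 'active+clean')
     | map(attribute='count') | list | sum)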

Expected results:
The playbook must wait longer (more retries and/or a longer delay between checks) to allow the cluster to reach clean PGs.
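
As a possible interim workaround, assuming these variable names are what the playbook consumes, the retry/delay values can be raised at run time. Extra-vars take precedence over play-level defaults, so passing an overrides file with -e is the safer route:

# overrides.yml (hypothetical file name), passed as:
#   ansible-playbook ... switch-from-non-containerized-to-containerized-ceph-daemons.yml -e @overrides.yml
# The numbers below are examples, not the defaults shipped with the fix.
health_osd_check_retries: 50
health_osd_check_delay: 30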

Additional info:
As we already set OSD flags during rolling_update, we can consider setting OSD flags during the switch of the OSDs to the containerized version as well.
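
A rough sketch of what that could look like, modeled on rolling_update; the flag choice (noout/nodeep-scrub) and the task placement are assumptions, not the merged change:

# Illustrative only: wrap the OSD conversion with cluster flags so PG states
# do not churn while the daemons restart.
- name: set osd flags
  command: "ceph --cluster {{ cluster }} osd set {{ item }}"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
  loop:
    - noout
    - nodeep-scrub

# ... switch the OSD daemons to their containerized systemd units here ...

- name: unset osd flags
  command: "ceph --cluster {{ cluster }} osd unset {{ item }}"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
  loop:
    - noout
    - nodeep-scrub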

Comment 9 errata-xmlrpc 2020-05-19 17:31:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:2231

