Bug 2049132 - [RFE] [ceph-ansible] : Upgrade from 4.x to 5.x : switch-from-non-containerized-to-containerized-ceph-daemons - Block migration if number of monitors are less than 3
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 4.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.3z1
Assignee: Guillaume Abrioux
QA Contact: Ameena Suhani S H
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-02-01 15:23 UTC by Vasishta
Modified: 2022-09-22 11:21 UTC
CC List: 8 users

Fixed In Version: ceph-ansible-4.0.70.5-1.el8cp, ceph-ansible-4.0.70.5-1.el7cp
Doc Type: Enhancement
Doc Text:
Feature: The switch-from-non-containerized-to-containerized-ceph-daemons playbook now fails if fewer than 3 monitors are present in the monitor group.
Reason: This migration does not support clusters with fewer than 3 monitors.
Result: The playbook fails early when this requirement is not met.
Clone Of:
Environment:
Last Closed: 2022-09-22 11:21:06 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph-ansible pull 7093 0 None Merged [skip ci] switch2containers: fail if less than 3 monitors 2022-02-22 08:24:34 UTC
Github ceph ceph-ansible pull 7097 0 None Merged switch2containers: fail if less than 3 monitors (backport #7093) 2022-02-22 08:24:41 UTC
Red Hat Issue Tracker RHCEPH-3087 0 None None None 2022-02-01 15:27:06 UTC
Red Hat Product Errata RHBA-2022:6684 0 None None None 2022-09-22 11:21:34 UTC

Description Vasishta 2022-02-01 15:23:00 UTC
Description of problem:
We block rolling_update if the monitor count is less than 3.
We need to implement the same check for switch-from-non-containerized-to-containerized-ceph-daemons as well.

Version-Release number of selected component (if applicable):
RHCS 4.3

Steps to Reproduce:
1. Configure RHCS 4.x non-containerized cluster with single mon
2. Try to migrate from non-containerized to containerized (see the playbook invocation below)
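
For reference, the migration in step 2 is started by running the switch playbook against the cluster inventory, as in the verification output in comment 5:

#ansible-playbook -i hosts infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml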

Actual results:
The switch-from-non-containerized-to-containerized-ceph-daemons migration runs even if the user has a single mon.

Expected results:
The switch-from-non-containerized-to-containerized-ceph-daemons migration should not proceed if the user has fewer than 3 monitors.

Additional info:
switch-from-non-containerized-to-containerized-ceph-daemons is mandatory for a user on a 4.x bare-metal deployment, and it stops mon services as part of the migration, so we must ensure that the user has at least 3 mons before the procedure runs.
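
For illustration, here is a minimal sketch of the kind of pre-flight guard this RFE asks for, assuming ceph-ansible's mon_group_name variable (which defaults to "mons"); the task name and failure message mirror the verification output in comment 5, but the authoritative implementation is in the pull requests linked above:

- name: fail when less than three monitors
  # Guard runs early, before any mon service is stopped, so a too-small
  # cluster is left untouched when the check fails.
  fail:
    msg: This playbook requires at least three monitors.
  when: groups[mon_group_name] | length | int < 3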

Comment 5 Ameena Suhani S H 2022-05-25 02:38:18 UTC
Verified using 
4.0.70.6-1.el8cp.noarch

#ceph -s
cluster:
    id:     3a3dbf50-95b8-4569-af0a-de40d8fbc7ba
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim
 
  services:
    mon: 2 daemons, quorum ceph-amsyedha-t5xc5x-node1-installer,ceph-amsyedha-t5xc5x-node3 (age 3m)
    mgr: ceph-amsyedha-t5xc5x-node1-installer(active, since 35m), standbys: ceph-amsyedha-t5xc5x-node2
    osd: 12 osds: 12 up (since 29m), 12 in (since 6h)
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   12 GiB used, 168 GiB / 180 GiB avail
    pgs:     

#ansible-playbook -vvvv -i hosts infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml

TASK [fail when less than three monitors] ****************************************************************************************************************************************************************************************************
task path: /usr/share/ceph-ansible/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml:20
Tuesday 24 May 2022  22:35:48 -0400 (0:00:04.951)       0:00:04.951 *********** 
fatal: [localhost]: FAILED! => changed=false 
  msg: This playbook requires at least three monitors.

Comment 10 errata-xmlrpc 2022-09-22 11:21:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 4.3 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:6684

