Bug 2008402 - [cee/sd] Need steps to clean up the ceph packages gracefully in order to install latest ceph-common packages on host.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 5.2
Assignee: Guillaume Abrioux
QA Contact: Rajendra Khambadkar
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On: 2075067
Blocks: 2102272
 
Reported: 2021-09-28 07:11 UTC by Lijo Stephen Thomas
Modified: 2023-09-15 01:36 UTC
CC: 17 users

Fixed In Version: cephadm-ansible-1.2.0-1.el8cp
Doc Type: Bug Fix
Doc Text:
.The `ceph-common` packages can now be installed without dependency errors
Previously, after upgrading {storage-product} 4 to {storage-product} 5, a few packages were left behind, which caused dependency errors. With this fix, the leftover {storage-product} 4 packages are removed and the `ceph-common` packages can now be installed during preflight playbook execution without any errors.
Clone Of:
Environment:
Last Closed: 2022-08-09 17:36:05 UTC
Embargoed:


Attachments: None


Links
System ID                                  Status   Summary                                          Last Updated
Github ceph cephadm-ansible pull 65        Merged   preflight/clients: workaround packaging issue    2022-07-07 05:12:56 UTC
Red Hat Issue Tracker RHCEPH-1909          None     None                                             2021-09-28 07:15:01 UTC
Red Hat Product Errata RHSA-2022:5997      None     None                                             2022-08-09 17:36:30 UTC

Description Lijo Stephen Thomas 2021-09-28 07:11:04 UTC
Description of problem:
-----------------------
After upgrading an RHCS 4 (baremetal) cluster to RHCS 5 (containerized), the cephadm-preflight.yml playbook fails to install the ceph-common package (ceph-common-16.2.0-117.el8cp.x86_64.rpm) on the host, because older RHCS 4 ceph packages are still installed (the original cluster was baremetal).
Removing the older ceph packages also removes the /etc/ceph directory and takes down all running ceph services. We need a way to carry out the package cleanup gracefully in order to be able to install the latest ceph-common.

Edit:
Note: the preflight playbook is executed after the upgrade activity (after running the rolling_update and cephadm-adopt playbooks).
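
As a rough sketch of the kind of graceful cleanup being asked for (not a supported procedure; the package list and backup path below are assumptions and will differ per host), the idea is to preserve /etc/ceph before removing the leftover RHCS 4 RPMs:

    # Back up the Ceph configuration and keyrings before removing any packages
    cp -a /etc/ceph /root/etc-ceph-backup

    # Remove the leftover RHCS 4 packages (list is illustrative, adjust per host)
    dnf remove -y ceph-base ceph-common ceph-osd ceph-mon ceph-mgr ceph-mds ceph-radosgw

    # Restore the configuration so the adopted containerized daemons and client tools
    # keep their config, then re-run cephadm-preflight.yml to install the RHCS 5 ceph-common
    mkdir -p /etc/ceph
    cp -a /root/etc-ceph-backup/. /etc/ceph/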

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHCS 5.0


How reproducible:
-----------------
Every time


Steps to Reproduce:
-------------------
1. Install RHCS 4 baremetal cluster.
2. Switch the cluster to a containerized deployment in preparation for the RHCS 5 upgrade.
3. Upgrade the cluster to RHCS 5.
4. Run the cephadm-preflight.yml playbook
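
For reference, step 4 is normally run from the cephadm-ansible directory roughly as follows (the inventory file name is an example; ceph_origin=rhcs selects the Red Hat repositories):

    # Run the preflight playbook against the cluster hosts; on hosts upgraded from an
    # RHCS 4 baremetal install, the ceph-common installation step fails here
    cd /usr/share/cephadm-ansible
    ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"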

Comment 18 errata-xmlrpc 2022-08-09 17:36:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5997

Comment 19 Red Hat Bugzilla 2023-09-15 01:36:16 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 365 days

