
Bug 2226808

Summary: [cee/sd][ceph-ansible] Cephadm-preflight playbook stops all the ceph services from node if older ceph rpms are present on the host.
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Teoman ONAY <tonay>
Component: Ceph-Ansible
Assignee: Teoman ONAY <tonay>
Status: CLOSED ERRATA
QA Contact: Vinayak Papnoi <vpapnoi>
Severity: medium
Docs Contact: Akash Raj <akraj>
Priority: unspecified
Version: 5.3
CC: akraj, ceph-eng-bugs, cephqe-warriors, gmeno, tserlin
Target Milestone: ---
Target Release: 5.3z5
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: cephadm-ansible-1.17.0-1.el8cp
Doc Type: Bug Fix
Doc Text:
.`ceph-base` and `ceph-common` can now be successfully upgraded without any conflicts
Previously, the remaining RPM packages, such as `ceph-mon`, `ceph-osd`, and the like from a previous Red Hat Ceph Storage 4.x version would prevent the upgrade of `ceph-base` and `ceph-common` due to dependency conflicts. With this fix, uninstalling the remaining Red Hat Ceph Storage 4.x packages before running the upgrade allows the successful completion of the upgrade.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-08-28 09:40:56 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Teoman ONAY 2023-07-26 15:30:35 UTC
This bug was initially created as a copy of Bug #2211324

I am copying this bug because: 



Description of problem:
-----------------------
- After upgrading the cluster from RHCS 4.3z1 (bare metal) to RHCS 5.3z3 / RHCS 5.3z2, running the cephadm-preflight playbook to install the latest ceph-common and cephadm packages on the Ceph nodes stops the ceph.target service, which in turn stops all the Ceph services running on the host.

This happens only when Ceph RPMs such as ceph-common, ceph-base, ceph-mon, ceph-osd, etc. from the older version (RHCS 4.3z1) still exist on the hosts (because the cluster was migrated from bare metal to containers).
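
A quick way to confirm whether a host is in this state is to list the Ceph RPMs still installed before running the playbook. This is only an illustrative check, not part of the playbook itself:

    # list any Ceph RPMs left over from the pre-containerization RHCS 4.x deployment
    rpm -qa 'ceph*'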


Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHCS 5.3


How reproducible:
-----------------
Every time.


Steps to Reproduce:
--------------------
1. Deploy RHCS 4.3z1  baremetal cluster
2. Convert the Ceph services to containerized
3. Upgrade the cluster to RHCS 5.3z2 / RHCS 5.3z3
4. Run the cephadm-preflight playbook to upgrade the ceph-common and cephadm packages on the host (example invocation below).
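
For reference, the preflight playbook is typically invoked as shown below; the inventory file name and the ceph_origin value are assumptions and may differ in your environment:

    # run from the admin node where cephadm-ansible is installed
    ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"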


Actual results:
---------------
The ceph packages are upgraded, but all Ceph services on the host are stopped.

Expected results:
-----------------
The ceph packages should be upgraded and no services should be impacted.
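
In line with the eventual fix described in the Doc Text (uninstalling the remaining RHCS 4.x packages before the upgrade), a manual cleanup along these lines should avoid the problem. This is an illustrative sketch only; verify the actual leftover package list with rpm -qa first:

    # the daemons already run in containers, so these stale RHCS 4.x RPMs are unused;
    # remove them, then re-run the cephadm-preflight playbook
    dnf remove ceph-mon ceph-osd ceph-mgr ceph-mds ceph-radosgw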

Comment 9 errata-xmlrpc 2023-08-28 09:40:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.3 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:4760