
Bug 1677567

Summary: overcloud ceph-upgrade run should not re-update already up-to-date nodes
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Ganesh Kadam <gkadam>
Component: Ceph-Ansible
Assignee: Dimitri Savineau <dsavinea>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: medium
Docs Contact:
Priority: medium
Version: 3.2
CC: aschoen, ceph-eng-bugs, dsavinea, gabrioux, gfidente, ggrimaux, gmeno, johfulto, nthomas, sankarshan
Target Milestone: rc
Keywords: Reopened
Target Release: 4.0
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-07-18 15:48:09 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1624388

Description Ganesh Kadam 2019-02-15 09:19:33 UTC
Description of problem:

- Currently "overcloud ceph-upgrade run" triggers ceph-ansible playbooks that cause downtime on ceph-osd nodes even when those nodes are already fully updated and the command is re-executed.

- There appears to be no check of the container version running on ceph-osd nodes that would allow the playbook to skip shutting down OSDs that already run the latest container image.

- As a result, "overcloud ceph-upgrade run" is unnecessarily intrusive when re-run (for example, after a partial update failure).



Version-Release number of selected component (if applicable):
RHOSP 13

How reproducible:

Execute overcloud ceph-upgrade run 


Actual results:

There is no check of the container version running on ceph-osd nodes, so OSDs are shut down and restarted even when they already run the latest container image.

Expected results:

There should be a detection mechanism in place to check the container version, so that the upgrade does not re-run on already upgraded Ceph nodes.
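As a rough sketch of the requested behavior (not the actual ceph-ansible implementation), the playbook could compare the image each OSD container is running against the target image and only restart the container when they differ. The function name and image tags below are illustrative; in practice the running image could be obtained with something like `docker inspect --format '{{.Config.Image}}' <container>`.

```shell
# Hypothetical per-node check: restart an OSD container only when its
# running image differs from the image the upgrade is rolling out.
needs_upgrade() {
    running_image="$1"   # image the OSD container currently runs
    target_image="$2"    # image requested by the upgrade playbook
    [ "$running_image" != "$target_image" ]
}

if needs_upgrade "rhceph:3.1" "rhceph:3.2"; then
    echo "restart OSD container"
fi
if ! needs_upgrade "rhceph:3.2" "rhceph:3.2"; then
    echo "skip: already on target image"
fi
```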

Comment 1 Giulio Fidente 2019-02-15 15:46:55 UTC
Can you say which version of ceph-ansible you are using?

Comment 2 ggrimaux 2019-02-18 18:11:18 UTC
@Giulio:

[root@os2-prd-director01 ~]# rpm -qa|grep ceph-ansible
ceph-ansible-3.1.5-1.el7cp.noarch

Comment 4 John Fulton 2019-02-20 15:16:00 UTC
This is already possible. A ceph-ansible playbook will run its tasks on the nodes in its inventory. If you don't want it to run those tasks on certain nodes, then the recommendation from the ceph-ansible team is to omit those nodes from the inventory. TripleO will omit nodes from the ceph-ansible inventory if you blacklist them as described in the following document:

https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html-single/director_installation_and_usage/#Scaling-Blacklisting_Nodes
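The document above describes the `DeploymentServerBlacklist` parameter; a minimal environment file might look like the fragment below (the node names are illustrative, and the file would be passed to the overcloud upgrade command with `-e`):

```yaml
# blacklist.yaml - exclude already-upgraded nodes from further runs
parameter_defaults:
  DeploymentServerBlacklist:
    - overcloud-cephstorage-0
    - overcloud-cephstorage-1
```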

Comment 16 Red Hat Bugzilla 2023-09-14 05:23:42 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days