Bug 1342621 - [ceph-ansible] rolling_update playbook needs more implementation
Summary: [ceph-ansible] rolling_update playbook needs more implementation
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat
Component: ceph-ansible
Version: 2
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Assignee: Sébastien Han
QA Contact: Tamil
Depends On:
Reported: 2016-06-03 17:08 UTC by Tamil
Modified: 2016-08-23 19:54 UTC (History)
12 users

Fixed In Version: ceph-ansible-1.0.5-25.el7scon
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2016-08-23 19:54:25 UTC
Target Upstream Version:


System ID: Red Hat Product Errata RHEA-2016:1754
Priority: normal
Status: SHIPPED_LIVE
Summary: New packages: Red Hat Storage Console 2.0
Last Updated: 2017-04-18 19:09:06 UTC

Description Tamil 2016-06-03 17:08:19 UTC
Description of problem:
This bug is about upgrading to later ceph packages after the RH Ceph 2.0 release using the rolling_update playbook in ceph-ansible.

Currently the issues are:

The rolling_update playbook should support systemd; this is being tracked upstream at https://github.com/ceph/ceph-ansible/issues/814.

Also, there is currently no information telling the user where to specify the source and destination packages for the upgrade; it would be nice to have this information included in rolling_update.yml.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. install ceph using ceph-ansible
2. modify rolling_update.yml to reflect the source and destination packages for the upgrade
3. ansible-playbook rolling_update.yml
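The reproduction steps above can be sketched as shell commands; the inventory file name and working directory are assumptions, not taken from this report (note the command is ansible-playbook, with a hyphen):

```shell
# From the ceph-ansible directory (path assumed)
cd /usr/share/ceph-ansible

# 1. Install ceph with the site playbook (inventory file "hosts" is an assumption)
ansible-playbook -i hosts site.yml

# 2. Edit rolling_update.yml to reflect the source and destination
#    packages for the upgrade, then:

# 3. Run the rolling update playbook
ansible-playbook -i hosts rolling_update.yml
```

These commands require a live cluster and the ceph-ansible package; they are a sketch of the steps, not a verified transcript.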

Actual results:
TASK: [Gracefully stop the OSDs (Sysvinit)] ***********************************
failed: [] => {"failed": true}
msg: Job for ceph.service failed because the control process exited with error code. See "systemctl status ceph.service" and "journalctl -xe" for details

Expected results:
OSDs should be restarted and the ceph upgrade should succeed.

Additional info:

Comment 3 Ken Dreyer (Red Hat) 2016-06-21 16:54:28 UTC
Need Tamil to check leseb's fix upstream

Comment 4 Tamil 2016-06-22 00:06:52 UTC
Retesting leseb's fix; will update on the status soon.

Comment 5 Ken Dreyer (Red Hat) 2016-06-22 16:51:10 UTC
gmeno to provide link to patches

Comment 9 seb 2016-07-06 09:49:40 UTC
I just merged the github branch, as Tamil reported that she was able to successfully perform an upgrade.
Tamil, any more comments?

Comment 15 Tamil 2016-07-11 20:22:40 UTC
Sure Sebastien, I was able to upgrade from 10.2.1 to 10.2.2 for testing purposes, to make sure the rolling_update playbook worked, and it DID! :)

please let me know if we need to be doing anything else.

Comment 16 Tamil 2016-07-13 21:54:43 UTC
Tested a ceph upgrade from RH Ceph v10.2.2-12 to RH Ceph v10.2.2-19 using the ceph-ansible rolling_update playbook, and it worked fine!

source version - [http://download.eng.bos.redhat.com/rcm-guest/ceph-drops/auto/ceph-2-rhel-7-compose/Ceph-2-RHEL-7-20160630.t.0/] 

destination version - [http://download.eng.bos.redhat.com/rcm-guest/ceph-drops/auto/ceph-2-rhel-7-compose/Ceph-2.0-RHEL-7-20160712.t.0/]  

Test setup: 3 mons and 9 OSDs across 4 nodes
magna011: 1 mon
magna066: 3 OSDs
magna094: 1 mon, 3 OSDs
magna095: 1 mon, 3 OSDs

ceph-ansible version: ceph-ansible-1.0.5-25.el7scon


1. Install ceph v10.2.2-12 by copying the repos locally, running the ceph-ansible site.yml, and setting up a cluster.
2. Set ceph_origin: distro and ceph_stable_rh_storage: true in /usr/share/ceph-ansible/group_vars/all.
3. Copy the destination repos (the ones to upgrade to) to /etc/yum.repos.d using yum-config-manager.
4. Set serial hosts for the OSDs and mons. In my case, it was 3 mon hosts and 3 OSD hosts.
5. Run the ceph-ansible rolling_update playbook.

Make sure the ceph version is upgraded to the desired version.
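As a sketch, the settings from steps 2 and 4 above map onto ceph-ansible configuration roughly as follows; these are illustrative excerpts under stated assumptions, not copied from the tested tree:

```yaml
# /usr/share/ceph-ansible/group_vars/all (excerpt) -- step 2
ceph_origin: distro            # install ceph from the repos already configured on the host
ceph_stable_rh_storage: true   # use Red Hat Ceph Storage packages

# rolling_update.yml (structural sketch, assumed) -- step 4: the standard
# Ansible `serial` play keyword limits how many hosts are upgraded at a time
# - hosts: mons
#   serial: 1
```

The `serial` keyword is a generic Ansible feature; how rolling_update.yml exposes it in ceph-ansible-1.0.5 may differ from this sketch.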

Comment 18 errata-xmlrpc 2016-08-23 19:54:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

