Bug 1373919 - [ceph-ansible] : rolling update will fail if cluster name is other than 'ceph'
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat
Component: ceph-ansible
Version: 2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 2
Assignee: Sébastien Han
QA Contact: Rachana Patel
Blocks: Console-2-Async
Reported: 2016-09-07 12:28 UTC by Rachana Patel
Modified: 2016-10-21 14:09 UTC
CC: 11 users

Fixed In Version: ceph-ansible-1.0.5-34.el7scon
Doc Type: Bug Fix
Doc Text:
Previously, a rolling upgrade failed when a custom cluster name (anything other than "ceph") was used, causing the ceph-ansible play to abort. With this fix, the playbook passes flags that indicate the cluster name, defaulting to 'ceph' when unspecified. As a result, the Ansible playbook succeeds with custom cluster names.
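A minimal sketch of a cluster-aware task of the kind described above; the 'cluster' variable name, its default, and the delegation target follow ceph-ansible conventions but are assumptions here, not a quote of the shipped patch:

# Hedged sketch, not the actual fix: pass the cluster name to the ceph CLI,
# falling back to 'ceph' when no custom name is set.
- name: set osd flags
  command: ceph --cluster {{ cluster | default('ceph') }} osd set {{ item }}
  with_items:
    - noout
    - noscrub
    - nodeep-scrub
  delegate_to: "{{ groups['mons'][0] }}"  # assumes the usual 'mons' group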
Last Closed: 2016-10-19 15:22:20 UTC

Links:
Red Hat Product Errata RHSA-2016:2082 (normal, SHIPPED_LIVE): Moderate: Red Hat Storage Console 2 security and bug fix update. Last updated: 2017-04-18 19:29:02 UTC

Description Rachana Patel 2016-09-07 12:28:55 UTC
Description of problem:
======================
rolling_update.yml fails at multiple steps if the cluster name is anything other than 'ceph'.

e.g.
TASK: [set osd flags] ********************************************************* 
failed: [magna100 -> magna095] => (item=noout) => {"changed": true, "cmd": ["ceph", "osd", "set", "noout"], "delta": "0:00:00.067159", "end": "2016-09-01 21:15:46.597727", "item": "noout", "rc": 1, "start": "2016-09-01 21:15:46.530568", "warnings": []}
stderr: Error initializing cluster client: Error('error calling conf_read_file: error code 22',)
failed: [magna100 -> magna095] => (item=noscrub) => {"changed": true, "cmd": ["ceph", "osd", "set", "noscrub"], "delta": "0:00:00.067466", "end": "2016-09-01 21:15:46.833746", "item": "noscrub", "rc": 1, "start": "2016-09-01 21:15:46.766280", "warnings": []}
stderr: Error initializing cluster client: Error('error calling conf_read_file: error code 22',)
failed: [magna100 -> magna095] => (item=nodeep-scrub) => {"changed": true, "cmd": ["ceph", "osd", "set", "nodeep-scrub"], "delta": "0:00:00.067018", "end": "2016-09-01 21:15:47.069549", "item": "nodeep-scrub", "rc": 1, "start": "2016-09-01 21:15:47.002531", "warnings": []}
stderr: Error initializing cluster client: Error('error calling conf_read_file: error code 22',)

FATAL: all hosts have already failed -- aborting
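For context: error code 22 is EINVAL, returned by conf_read_file because a bare 'ceph' invocation only reads /etc/ceph/ceph.conf, while a cluster named anything else keeps its configuration at /etc/ceph/<name>.conf. A hypothetical contrast, using 'mycluster' as a stand-in name:

# Fails on a custom-named cluster: only /etc/ceph/ceph.conf is consulted.
- name: set osd flag
  command: ceph osd set noout

# Works: --cluster makes the CLI read /etc/ceph/mycluster.conf instead.
- name: set osd flag
  command: ceph --cluster mycluster osd set noout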

Version-Release number of selected component (if applicable):
============================================================
Update from 10.2.2-38.el7cp.x86_64 to 10.2.2-39.el7cp.x86_64


How reproducible:
=================
always



Steps to Reproduce:
===================
1. Create a cluster via ceph-ansible with 3 MON, 3 OSD, and 1 RGW node (10.2.2-38.el7cp.x86_64). Make sure the cluster name is something other than 'ceph'.
2. Create a repo file on all nodes that points to the 10.2.2-39.el7cp.x86_64 bits.
3. Change the value of 'serial:' to adjust the number of servers to be updated (see the sketch after these steps).
4. Use rolling_update.yml to update all nodes.
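A hedged sketch of the two knobs referenced in steps 1 and 3; 'cluster' follows the ceph-ansible variable convention, 'mycluster' is a hypothetical name, and the play header values are illustrative:

# group_vars/all -- step 1: name the cluster something other than 'ceph'
cluster: mycluster

# rolling_update.yml play header -- step 3: 'serial' is the batch size,
# i.e. how many servers are updated at a time
- hosts: mons
  serial: 1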

Actual results:
===============
TASK: [set osd flags] ********************************************************* 
failed: [magna100 -> magna095] => (item=noout) => {"changed": true, "cmd": ["ceph", "osd", "set", "noout"], "delta": "0:00:00.067159", "end": "2016-09-01 21:15:46.597727", "item": "noout", "rc": 1, "start": "2016-09-01 21:15:46.530568", "warnings": []}
stderr: Error initializing cluster client: Error('error calling conf_read_file: error code 22',)
failed: [magna100 -> magna095] => (item=noscrub) => {"changed": true, "cmd": ["ceph", "osd", "set", "noscrub"], "delta": "0:00:00.067466", "end": "2016-09-01 21:15:46.833746", "item": "noscrub", "rc": 1, "start": "2016-09-01 21:15:46.766280", "warnings": []}
stderr: Error initializing cluster client: Error('error calling conf_read_file: error code 22',)
failed: [magna100 -> magna095] => (item=nodeep-scrub) => {"changed": true, "cmd": ["ceph", "osd", "set", "nodeep-scrub"], "delta": "0:00:00.067018", "end": "2016-09-01 21:15:47.069549", "item": "nodeep-scrub", "rc": 1, "start": "2016-09-01 21:15:47.002531", "warnings": []}
stderr: Error initializing cluster client: Error('error calling conf_read_file: error code 22',)

FATAL: all hosts have already failed -- aborting


Expected results:
=================
The update should work regardless of the cluster name.


Additional info:

Comment 4 seb 2016-09-13 09:44:43 UTC
Fix upstream https://github.com/ceph/ceph-ansible/pull/972

Comment 9 Rachana Patel 2016-10-06 13:38:26 UTC
Rolling update is working for a cluster name other than 'ceph'; hence, moving to VERIFIED.

Comment 13 errata-xmlrpc 2016-10-19 15:22:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2016:2082

