| Summary: | [ceph-ansible]: rolling update will fail if cluster name is other than 'ceph' | | |
|---|---|---|---|
| Product: | Red Hat Storage Console | Reporter: | Rachana Patel <racpatel> |
| Component: | ceph-ansible | Assignee: | Sébastien Han <shan> |
| Status: | CLOSED ERRATA | QA Contact: | Rachana Patel <racpatel> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 2 | CC: | adeza, aschoen, ceph-eng-bugs, gmeno, kdreyer, nthomas, racpatel, rghatvis, sankarshan, seb, vsarmila |
| Target Milestone: | --- | | |
| Target Release: | 2 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-ansible-1.0.5-34.el7scon | Doc Type: | Bug Fix |
| Doc Text: | Previously, a rolling upgrade failed when a custom cluster name other than "ceph" was used, causing the ceph-ansible play to abort. With this update, the relevant commands include a flag indicating the cluster name, defaulting to 'ceph' when unspecified. As a result, the Ansible playbook succeeds with custom cluster names (see the sketch after this table). | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-10-19 15:22:20 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Bug Depends On: | | | |
| Bug Blocks: | 1357777 | | |
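The fix described in the Doc Text follows a simple pattern: each `ceph` command the playbook runs passes the cluster name explicitly via the CLI's `--cluster` flag, falling back to 'ceph' when no custom name is set. Below is a minimal sketch of such a task, assuming a `cluster` variable and a `mons` inventory group; both names are assumptions here, not the verbatim change from the upstream pull request:

```yaml
# Minimal sketch of the pattern the fix describes (not the verbatim
# upstream change). The "cluster" variable name and the "mons" group
# name are assumptions; the cluster name defaults to 'ceph' when unset.
- name: set osd flags
  command: ceph --cluster {{ cluster | default('ceph') }} osd set {{ item }}
  with_items:
    - noout
    - noscrub
    - nodeep-scrub
  delegate_to: "{{ groups['mons'][0] }}"
```

Without the `--cluster` flag, the `ceph` client looks for /etc/ceph/ceph.conf, which does not exist on a cluster deployed under a different name; that mismatch is what produces the "Error initializing cluster client" failures shown in the logs below.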
Fix upstream: https://github.com/ceph/ceph-ansible/pull/972

Rolling update is working for a cluster name other than 'ceph'; hence, moving to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2016:2082
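The failure mode can also be checked by hand on a monitor node of a custom-named cluster, using the same command the playbook runs. The cluster name 'mycluster' below is hypothetical:

```
# On a monitor node of a cluster deployed with the hypothetical name "mycluster":

# Fails: without a flag the client reads /etc/ceph/ceph.conf, which does not exist here
ceph osd set noout

# Succeeds: --cluster points the client at /etc/ceph/mycluster.conf
ceph --cluster mycluster osd set noout
```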
Description of problem:
======================
rolling_update.yml will fail at multiple steps if the cluster name is other than 'ceph'.

e.g.

TASK: [set osd flags] *********************************************************
failed: [magna100 -> magna095] => (item=noout) => {"changed": true, "cmd": ["ceph", "osd", "set", "noout"], "delta": "0:00:00.067159", "end": "2016-09-01 21:15:46.597727", "item": "noout", "rc": 1, "start": "2016-09-01 21:15:46.530568", "warnings": []}
stderr: Error initializing cluster client: Error('error calling conf_read_file: error code 22',)

failed: [magna100 -> magna095] => (item=noscrub) => {"changed": true, "cmd": ["ceph", "osd", "set", "noscrub"], "delta": "0:00:00.067466", "end": "2016-09-01 21:15:46.833746", "item": "noscrub", "rc": 1, "start": "2016-09-01 21:15:46.766280", "warnings": []}
stderr: Error initializing cluster client: Error('error calling conf_read_file: error code 22',)

failed: [magna100 -> magna095] => (item=nodeep-scrub) => {"changed": true, "cmd": ["ceph", "osd", "set", "nodeep-scrub"], "delta": "0:00:00.067018", "end": "2016-09-01 21:15:47.069549", "item": "nodeep-scrub", "rc": 1, "start": "2016-09-01 21:15:47.002531", "warnings": []}
stderr: Error initializing cluster client: Error('error calling conf_read_file: error code 22',)

FATAL: all hosts have already failed -- aborting

Version-Release number of selected component (if applicable):
============================================================
update from 10.2.2-38.el7cp.x86_64 to 10.2.2-39.el7cp.x86_64

How reproducible:
=================
always

Steps to Reproduce:
===================
1. Create a cluster via ceph-ansible with 3 MON, 3 OSD, and 1 RGW node (10.2.2-38.el7cp.x86_64). Make sure the cluster name is other than 'ceph' (see the configuration sketch at the end of this report).
2. Create a repo file on all nodes that points to the 10.2.2-39.el7cp.x86_64 bits.
3. Change the value of 'serial:' to adjust the number of servers to be updated at a time.
4. Use rolling_update.yml to update all nodes.

Actual results:
===============
The "set osd flags" task fails with "Error initializing cluster client: Error('error calling conf_read_file: error code 22',)" for each flag (noout, noscrub, nodeep-scrub), exactly as shown in the description above, and the play aborts:

FATAL: all hosts have already failed -- aborting

Expected results:
=================
The update should work with any cluster name.

Additional info:
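The configuration sketch referenced in step 1: a custom cluster name is set in the ceph-ansible group variables before the initial deployment. The `cluster` variable name and the value 'mycluster' are assumptions for illustration:

```yaml
# group_vars/all (sketch; the "cluster" variable name is an assumption)
cluster: mycluster
```

The rolling update is then driven from the same inventory, e.g.:

```
ansible-playbook -i <inventory-file> rolling_update.yml
```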