Bug 1333339
| Summary: | [ceph-ansible] Mon addition on ceph 2.0 ubuntu cluster returns a failure | ||
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Storage Console | Reporter: | Tejas <tchandra> |
| Component: | ceph-installer | Assignee: | Andrew Schoen <aschoen> |
| Status: | CLOSED DUPLICATE | QA Contact: | sds-qe-bugs |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 2 | CC: | adeza, aschoen, ceph-eng-bugs, hnallurv, kdreyer, nthomas, sankarshan |
| Target Milestone: | --- | ||
| Target Release: | 2 | ||
| Hardware: | Unspecified | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-05-09 19:58:32 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
This looks like it might be the same issue as bug 1331881.

(In reply to Ken Dreyer (Red Hat) from comment #2)
> This looks like it might be the same issue as bug 1331881.

Agreed, this should be fixed by bug 1331881. I'm closing this as a duplicate of bug 1331881.

*** This bug has been marked as a duplicate of bug 1331881 ***
Description of problem:
I am trying to add a mon node to an existing Ubuntu cluster. Rerunning the `ansible-playbook site.yml` command adds the mon but returns a failure result.

Version-Release number of selected component (if applicable):
Ceph: 10.2.0-4redhat1xenial
Ceph-ansible: ceph-ansible-1.0.5-7.el7scon.noarch

How reproducible:
Always

Steps to Reproduce:
1. Use the `ansible-playbook site.yml` command to create a new Ceph Ubuntu cluster.
2. Add a mon node to the hosts file and rerun the above command (see the inventory sketch at the end of this report).
3. The mon is added to the cluster, but the command returns a failure result.

Actual results:
The mon node is added, but the playbook fails.

Expected results:
No failure status should be returned.

Additional info:

```
root@magna052:~# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 5.39694 root default
-2 2.69847     host magna058
 0 0.89949         osd.0           up  1.00000          1.00000
 3 0.89949         osd.3           up  1.00000          1.00000
 5 0.89949         osd.5           up  1.00000          1.00000
-3 2.69847     host magna077
 1 0.89949         osd.1           up  1.00000          1.00000
 2 0.89949         osd.2           up  1.00000          1.00000
 4 0.89949         osd.4           up  1.00000          1.00000
```

```
NOTIFIED: [ceph.ceph-common | restart ceph osds on ubuntu] ********************
skipping: [magna031]
failed: [magna077] => {"changed": true, "cmd": "for id in $(ls /var/lib/ceph/osd/ |grep -oh '[0-9]*'); do\n initctl restart ceph-osd cluster=ceph id=$id\n done", "delta": "0:00:00.003228", "end": "2016-05-05 09:18:16.381543", "rc": 127, "start": "2016-05-05 09:18:16.378315", "warnings": []}
stderr: /bin/sh: 2: initctl: not found
/bin/sh: 2: initctl: not found
/bin/sh: 2: initctl: not found
failed: [magna058] => {"changed": true, "cmd": "for id in $(ls /var/lib/ceph/osd/ |grep -oh '[0-9]*'); do\n initctl restart ceph-osd cluster=ceph id=$id\n done", "delta": "0:00:00.005232", "end": "2016-05-05 09:18:16.390665", "rc": 127, "start": "2016-05-05 09:18:16.385433", "warnings": []}
stderr: /bin/sh: 2: initctl: not found
/bin/sh: 2: initctl: not found
/bin/sh: 2: initctl: not found
```

A restart of the mons and OSDs is failing.
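The `rc: 127` / `initctl: not found` errors show why: the `restart ceph osds on ubuntu` handler drives restarts through `initctl`, which is Upstart, but Ubuntu 16.04 (Xenial) is systemd-based and does not ship `initctl`. For illustration only, a minimal sketch of what the same per-OSD restart loop would look like under systemd, assuming the `ceph-osd@<id>` unit template that the Xenial Ceph packages provide; this is not the actual ceph-ansible fix, which is tracked through bug 1331881:

```sh
# Sketch of the failing handler's loop rewritten for systemd.
# Assumes the ceph-osd@<id> systemd unit template from the Xenial
# ceph packages; the Upstart initctl binary does not exist there.
for id in $(ls /var/lib/ceph/osd/ | grep -oh '[0-9]*'); do
    systemctl restart "ceph-osd@${id}"
done
```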
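For reference, the inventory change in step 2 above amounts to adding the new host under the existing `[mons]` group and rerunning `site.yml`. A minimal sketch of the hosts file, assuming the stock ceph-ansible group names; the hostnames here are placeholders, not the actual nodes of this cluster:

```ini
; Hypothetical ceph-ansible inventory ("hosts" file) sketch.
[mons]
mon-existing-1
mon-new-1        ; newly added mon node

[osds]
osd-host-1
osd-host-2
```

Rerunning `ansible-playbook site.yml` against the updated inventory configures the new mon and then notifies the restart handlers on every host, which is where the failure above is raised.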