Bug 1333339 - [ceph-ansible] Mon addition on ceph 2.0 ubuntu cluster returns a failure
Summary: [ceph-ansible] Mon addition on ceph 2.0 ubuntu cluster returns a failure
Keywords:
Status: CLOSED DUPLICATE of bug 1331881
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: ceph-installer
Version: 2
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 2
Assignee: Andrew Schoen
QA Contact: sds-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-05 10:22 UTC by Tejas
Modified: 2016-05-09 19:58 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-09 19:58:32 UTC
Embargoed:


Attachments


Links
System: Red Hat Bugzilla   ID: 1331881   Private: No   Priority: unspecified   Status: CLOSED   Summary: ceph-ansible does not use systemd on Xenial   Last Updated: 2021-02-22 00:41:40 UTC

Internal Links: 1331881

Description Tejas 2016-05-05 10:22:36 UTC
Description of problem:
I am trying to add a mon node to an existing Ubuntu cluster by rerunning the ansible-playbook site.yml command, but the playbook returns a failure result.

Version-Release number of selected component (if applicable):
Ceph: 10.2.0-4redhat1xenial 
Ceph-ansible: ceph-ansible-1.0.5-7.el7scon.noarch


How reproducible:
Always

Steps to Reproduce:
1. Use the "ansible-playbook site.yml" command to create a new Ceph Ubuntu cluster.
2. Add a mon node to the hosts file (see the inventory sketch below), and rerun the above command.
3. The mon is added to the cluster, but the command returns a failure result.
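
For illustration, the change in step 2 is just an edit to the Ansible inventory. A hypothetical hosts file for this cluster might look like the following (the new mon hostname is made up; only magna031, magna058, and magna077 appear in the output below):

[mons]
magna031
new-mon-node        # hypothetical hostname for the added mon

[osds]
magna058
magna077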

Actual results:
The mon node is added, but the playbook fails.

Expected results:
The playbook should complete without returning a failure status.

Additional info:



root@magna052:~# ceph osd tree
ID WEIGHT  TYPE NAME         UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 5.39694 root default                                        
-2 2.69847     host magna058                                   
 0 0.89949         osd.0          up  1.00000          1.00000 
 3 0.89949         osd.3          up  1.00000          1.00000 
 5 0.89949         osd.5          up  1.00000          1.00000 
-3 2.69847     host magna077                                   
 1 0.89949         osd.1          up  1.00000          1.00000 
 2 0.89949         osd.2          up  1.00000          1.00000 
 4 0.89949         osd.4          up  1.00000          1.00000 

NOTIFIED: [ceph.ceph-common | restart ceph osds on ubuntu] ******************** 
skipping: [magna031]
failed: [magna077] => {"changed": true, "cmd": "for id in $(ls /var/lib/ceph/osd/ |grep -oh '[0-9]*'); do\n initctl restart ceph-osd cluster=ceph id=$id\n done", "delta": "0:00:00.003228", "end": "2016-05-05 09:18:16.381543", "rc": 127, "start": "2016-05-05 09:18:16.378315", "warnings": []}
stderr: /bin/sh: 2: initctl: not found
/bin/sh: 2: initctl: not found
/bin/sh: 2: initctl: not found
failed: [magna058] => {"changed": true, "cmd": "for id in $(ls /var/lib/ceph/osd/ |grep -oh '[0-9]*'); do\n initctl restart ceph-osd cluster=ceph id=$id\n done", "delta": "0:00:00.005232", "end": "2016-05-05 09:18:16.390665", "rc": 127, "start": "2016-05-05 09:18:16.385433", "warnings": []}
stderr: /bin/sh: 2: initctl: not found
/bin/sh: 2: initctl: not found
/bin/sh: 2: initctl: not found


A restart of the mons and OSDs is failing: the handler invokes initctl (Upstart), which does not exist on Ubuntu 16.04 (Xenial), where services are managed by systemd.
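
For reference, here is a minimal sketch of the equivalent restart loop using systemd, which is what Xenial ships in place of Upstart (ceph-osd@<id> is the standard systemd unit for a Ceph OSD; this only illustrates the mismatch, it is not the actual ceph-ansible fix):

for id in $(ls /var/lib/ceph/osd/ | grep -oh '[0-9]*'); do
    # systemd manages each OSD as a templated unit, ceph-osd@<id>
    systemctl restart ceph-osd@$id
done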

Comment 2 Ken Dreyer (Red Hat) 2016-05-05 13:57:44 UTC
This looks like it might be the same issue as bug 1331881.

Comment 4 Andrew Schoen 2016-05-06 20:35:47 UTC

(In reply to Ken Dreyer (Red Hat) from comment #2)
> This looks like it might be the same issue as bug 1331881.

Agreed, this should be fixed by bug 1331881.

Comment 5 Andrew Schoen 2016-05-09 19:58:32 UTC
I"m closing this as a duplicate of bug 1331881

*** This bug has been marked as a duplicate of bug 1331881 ***

