Bug 1323131 - [ceph-ansible] : installation of osd on directories returns a fatal failure, but osd's are created.
Summary: [ceph-ansible] : installation of osd on directories returns a fatal failure, but osd's are created.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: ceph-installer
Version: 2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 2
Assignee: Christina Meno
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Duplicates: 1332234
Depends On:
Blocks:
 
Reported: 2016-04-01 10:53 UTC by Tejas
Modified: 2016-08-23 19:49 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-23 19:49:00 UTC
Embargoed:




Links
System ID:     Red Hat Product Errata RHEA-2016:1754
Private:       0
Priority:      normal
Status:        SHIPPED_LIVE
Summary:       New packages: Red Hat Storage Console 2.0
Last Updated:  2017-04-18 19:09:06 UTC

Description Tejas 2016-04-01 10:53:33 UTC
Description of problem:

When installing OSDs on directories, the ansible-playbook command exits with a fatal failure even though the OSDs are created and running.

Version-Release number of selected component (if applicable):
2.0

How reproducible:
Always

Steps to Reproduce:
1. Run the ansible-playbook command to install a new cluster with OSDs on directories (a minimal configuration sketch follows these steps).
2. The playbook completes the installation and the OSDs come up and running, but the run ends with a fatal failure.
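
For reference, a minimal directory-based OSD configuration of the kind used in this run could look like the sketch below. The variable names follow the ceph-ansible osd_directory scenario, and the playbook/inventory names are assumptions rather than the exact files from this cluster; the directory paths are taken from the log in "Additional info".

    # group_vars/osds -- hypothetical sketch of a directory-based OSD setup
    osd_directory: true
    osd_directories:
      - /mnt/osdb/osd1
      - /mnt/osdb/osd2
      - /mnt/osdc/osd5
      - /mnt/osdd/osd9

    # then run the site playbook against the cluster inventory, e.g.:
    #   ansible-playbook site.yml.sample -i hosts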


Actual results:
The ansible-playbook command exits with a fatal failure, even though the OSDs are created and running.

Expected results:
The ansible-playbook command should exit successfully.

Additional info:

TASK: [ceph-osd | activate OSD(s)] ******************************************** 
ok: [magna077] => (item=/mnt/osdb/osd1)
ok: [magna052] => (item=/mnt/osdb/osd1)
ok: [magna058] => (item=/mnt/osdb/osd1)
ok: [magna077] => (item=/mnt/osdb/osd2)
ok: [magna058] => (item=/mnt/osdb/osd2)
ok: [magna052] => (item=/mnt/osdb/osd2)
ok: [magna077] => (item=/mnt/osdb/osd3)
ok: [magna052] => (item=/mnt/osdb/osd3)
ok: [magna058] => (item=/mnt/osdb/osd3)
ok: [magna077] => (item=/mnt/osdb/osd4)
ok: [magna052] => (item=/mnt/osdb/osd4)
ok: [magna058] => (item=/mnt/osdb/osd4)
ok: [magna058] => (item=/mnt/osdc/osd5)
ok: [magna077] => (item=/mnt/osdc/osd5)
ok: [magna052] => (item=/mnt/osdc/osd5)
ok: [magna058] => (item=/mnt/osdc/osd6)
ok: [magna052] => (item=/mnt/osdc/osd6)
ok: [magna077] => (item=/mnt/osdc/osd6)
ok: [magna052] => (item=/mnt/osdc/osd7)
ok: [magna058] => (item=/mnt/osdc/osd7)
ok: [magna077] => (item=/mnt/osdc/osd7)
ok: [magna058] => (item=/mnt/osdc/osd8)
ok: [magna052] => (item=/mnt/osdc/osd8)
ok: [magna077] => (item=/mnt/osdc/osd8)
ok: [magna058] => (item=/mnt/osdd/osd9)
ok: [magna052] => (item=/mnt/osdd/osd9)
ok: [magna077] => (item=/mnt/osdd/osd9)
ok: [magna058] => (item=/mnt/osdd/osd10)
ok: [magna052] => (item=/mnt/osdd/osd10)
ok: [magna077] => (item=/mnt/osdd/osd10)
ok: [magna052] => (item=/mnt/osdd/osd11)
ok: [magna058] => (item=/mnt/osdd/osd11)
ok: [magna077] => (item=/mnt/osdd/osd11)
ok: [magna058] => (item=/mnt/osdd/osd12)
ok: [magna052] => (item=/mnt/osdd/osd12)
ok: [magna077] => (item=/mnt/osdd/osd12)

TASK: [ceph-osd | start and add that the OSD service to the init sequence] **** 
fatal: [magna052] => error while evaluating conditional: ansible_service_mgr != "systemd"
fatal: [magna058] => error while evaluating conditional: ansible_service_mgr != "systemd"
fatal: [magna077] => error while evaluating conditional: ansible_service_mgr != "systemd"

FATAL: all hosts have already failed -- aborting

PLAY RECAP ******************************************************************** 
           to retry, use: --limit @/root/site.sample.retry

magna009                   : ok=70   changed=14   unreachable=0    failed=0   
magna031                   : ok=70   changed=14   unreachable=0    failed=0   
magna046                   : ok=70   changed=14   unreachable=0    failed=0   
magna052                   : ok=97   changed=10   unreachable=1    failed=0   
magna058                   : ok=97   changed=10   unreachable=1    failed=0   
magna077                   : ok=97   changed=10   unreachable=1    failed=0  





[root@magna009 yum.repos.d]# ceph osd tree
ID WEIGHT   TYPE NAME         UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 32.73102 root default                                        
-2 10.91034     host magna077                                   
 0  0.90919         osd.0          up  1.00000          1.00000 
 3  0.90919         osd.3          up  1.00000          1.00000 
 6  0.90919         osd.6          up  1.00000          1.00000 
 9  0.90919         osd.9          up  1.00000          1.00000 
12  0.90919         osd.12         up  1.00000          1.00000 
16  0.90919         osd.16         up  1.00000          1.00000 
20  0.90919         osd.20         up  1.00000          1.00000 
22  0.90919         osd.22         up  1.00000          1.00000 
26  0.90919         osd.26         up  1.00000          1.00000 
29  0.90919         osd.29         up  1.00000          1.00000 
32  0.90919         osd.32         up  1.00000          1.00000 
35  0.90919         osd.35         up  1.00000          1.00000 
-3 10.91034     host magna052                                   
 1  0.90919         osd.1          up  1.00000          1.00000 
 5  0.90919         osd.5          up  1.00000          1.00000 
 8  0.90919         osd.8          up  1.00000          1.00000 
10  0.90919         osd.10         up  1.00000          1.00000 
13  0.90919         osd.13         up  1.00000          1.00000 
17  0.90919         osd.17         up  1.00000          1.00000 
19  0.90919         osd.19         up  1.00000          1.00000 
23  0.90919         osd.23         up  1.00000          1.00000 
25  0.90919         osd.25         up  1.00000          1.00000 
28  0.90919         osd.28         up  1.00000          1.00000 
31  0.90919         osd.31         up  1.00000          1.00000 
33  0.90919         osd.33         up  1.00000          1.00000 
-4 10.91034     host magna058                                   
 2  0.90919         osd.2          up  1.00000          1.00000 
 4  0.90919         osd.4          up  1.00000          1.00000 
 7  0.90919         osd.7          up  1.00000          1.00000 
11  0.90919         osd.11         up  1.00000          1.00000 
14  0.90919         osd.14         up  1.00000          1.00000 
15  0.90919         osd.15         up  1.00000          1.00000 
18  0.90919         osd.18         up  1.00000          1.00000 
21  0.90919         osd.21         up  1.00000          1.00000 
24  0.90919         osd.24         up  1.00000          1.00000 
27  0.90919         osd.27         up  1.00000          1.00000 
30  0.90919         osd.30         up  1.00000          1.00000 
34  0.90919         osd.34         up  1.00000          1.00000

Comment 2 Christina Meno 2016-04-09 13:47:07 UTC
This is not something we'll propose that customers do.

Comment 4 Matt Thompson 2016-04-28 14:17:13 UTC
I believe this is a ceph-ansible bug.  See https://github.com/ceph/ceph-ansible/issues/741 for more info.
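
For context, the failing conditional appears to blow up because the ansible_service_mgr fact is not defined on those hosts (for example, when the installed Ansible version does not provide that fact), so the expression cannot be evaluated at all and the task aborts instead of being skipped. A guard of the following shape avoids that class of error; this is only an illustrative sketch under that assumption, not the exact ceph-ansible task or the exact upstream fix.

    # hypothetical guarded form of the "start and add that the OSD service" task
    - name: start and add that the OSD service to the init sequence
      service:
        name: ceph
        state: started
        enabled: yes
      # only compare the fact when it exists; if it is undefined, run this
      # (non-systemd) branch instead of failing the evaluation
      when: ansible_service_mgr is not defined or ansible_service_mgr != "systemd"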

Comment 7 Christina Meno 2016-05-04 21:04:12 UTC
*** Bug 1332234 has been marked as a duplicate of this bug. ***

Comment 8 Alfredo Deza 2016-05-23 19:18:54 UTC
This was fixed by commit ed2b7757d43ba7af96d97a9f345aba38e4410bd2, which was released a while ago and should be in the latest puddle.

Comment 9 Tejas 2016-05-25 11:55:39 UTC
Alfredo,

Could you move this to ON_QA rather than CLOSED?

Thanks,
Tejas

Comment 13 errata-xmlrpc 2016-08-23 19:49:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1754

