Bug 1393684 - OSDs are not coming up while preparing a cluster with the latest ceph-ansible
Summary: OSDs are not coming up while preparing a cluster with the latest ceph-ansible
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat
Component: ceph-ansible
Version: 2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 2
Assignee: Sébastien Han
QA Contact: rakesh
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-10 07:22 UTC by rakesh
Modified: 2016-11-22 23:42 UTC
CC List: 9 users

Fixed In Version: ceph-ansible-1.0.5-43.el7scon
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-22 23:42:59 UTC
Target Upstream Version:


Attachments
teuthology.log (1.57 MB, text/plain)
2016-11-10 07:22 UTC, rakesh
log file of ansible-playbook (using ceph-ansible-1.0.5-42.el7scon.noarch, without teuthology) (587.72 KB, text/plain)
2016-11-10 13:51 UTC, Vasishta


Links
System: Red Hat Product Errata  ID: RHBA-2016:2817  Private: 0  Priority: normal  Status: SHIPPED_LIVE  Summary: ceph-iscsi-ansible and ceph-ansible bug fix update  Last Updated: 2017-04-18 19:50:43 UTC

Description rakesh 2016-11-10 07:22:04 UTC
Created attachment 1219212 [details]
teuthology.log

Description of problem:

While preparing the cluster with the latest ceph-ansible, the OSDs are not created. This is with a co-located journal configuration.
 sudo ceph -s 
    cluster 39cf1820-20a1-4381-8ad6-73db167d04b0
     health HEALTH_ERR
            192 pgs are stuck inactive for more than 300 seconds
            192 pgs stuck inactive
            no osds
     monmap e1: 1 mons at {magna054=10.8.128.54:6789/0}
            election epoch 3, quorum 0 magna054
     osdmap e2: 0 osds: 0 up, 0 in
            flags sortbitwise
      pgmap v3: 192 pgs, 2 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 192 creating


ceph-ansible version: ceph-ansible-1.0.5-42.el7scon.noarch. 

This was installed using the ceph-ansible task with teuthology.

I am attaching the teuthology log here.
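
For context, a co-located journal run with this ceph-ansible line is driven by the OSD group variables. The snippet below is only a rough sketch under assumed defaults: journal_collocation and devices are the variable names from the ceph-ansible 1.0.x osd role, the group_vars file path and playbook/inventory names are the usual ones rather than taken from this run, and the device paths are placeholders, not the disks actually used on the magna nodes.

# Sketch only: minimal OSD group variables for a co-located journal layout.
# journal_collocation/devices follow the ceph-ansible 1.0.x osd role defaults;
# /dev/sdb-/dev/sdd are placeholder data disks.
cat > group_vars/osds <<'EOF'
journal_collocation: true
devices:
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
EOF

# Inventory and playbook names assumed (hosts, site.yml).
ansible-playbook -i hosts site.yml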

Comment 4 Vasishta 2016-11-10 13:22:46 UTC
Hi, 

I tried this on fresh machines with just ceph-ansible-1.0.5-42.el7scon.noarch, without teuthology, and hit the same issue.


Regards,
Vasishta

Comment 5 Vasishta 2016-11-10 13:51:57 UTC
Created attachment 1219404 [details]
log file of ansible-playbook (using ceph-ansible-1.0.5-42.el7scon.noarch, without teuthology)

Comment 6 Ken Dreyer (Red Hat) 2016-11-10 19:28:09 UTC
I've reproduced this error with ceph-ansible-1.0.5-42 and ceph-installer.

We are going to drop the dmcrypt feature in order to solve this regression. ceph-ansible-1.0.5-43 will have the fix for this.
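
Once the respin is available, the installed build can be checked before re-running the playbook; the commands below are ordinary rpm/yum usage, given only as a convenience and not as part of the fix itself.

# Verify which ceph-ansible build is installed; expect 1.0.5-43.el7scon or later.
rpm -q ceph-ansible
# If the older build is still present, update it before re-running the playbook.
yum update ceph-ansible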

Comment 8 rakesh 2016-11-11 14:54:37 UTC
Ceph installed properly with ceph-ansible; the OSDs are created and are up and running.

rpm -qa | grep ceph-ansible
ceph-ansible-1.0.5-43.el7scon.noarch
 

sudo ceph -s
    cluster 1b085b66-c23e-4c8c-926f-76e265ff0483
     health HEALTH_OK
     monmap e1: 1 mons at {magna054=10.8.128.54:6789/0}
            election epoch 3, quorum 0 magna054
     osdmap e54: 9 osds: 9 up, 9 in
            flags sortbitwise
      pgmap v146: 704 pgs, 6 pools, 1636 bytes data, 171 objects
            328 MB used, 8370 GB / 8370 GB avail
                 704 active+clean


sudo ceph osd tree
ID WEIGHT  TYPE NAME         UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 8.17465 root default                                        
-2 2.72488     host magna054                                   
 0 0.90829         osd.0          up  1.00000          1.00000 
 3 0.90829         osd.3          up  1.00000          1.00000 
 6 0.90829         osd.6          up  1.00000          1.00000 
-3 2.72488     host magna081                                   
 1 0.90829         osd.1          up  1.00000          1.00000 
 5 0.90829         osd.5          up  1.00000          1.00000 
 8 0.90829         osd.8          up  1.00000          1.00000 
-4 2.72488     host magna037                                   
 2 0.90829         osd.2          up  1.00000          1.00000 
 4 0.90829         osd.4          up  1.00000          1.00000 
 7 0.90829         osd.7          up  1.00000          1.00000
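
As a quicker check than the full tree, the standard ceph CLI summaries report the same state (added here only as a shorthand, not part of the original verification run):

# One-line OSD summary; expect "9 osds: 9 up, 9 in" on this cluster.
sudo ceph osd stat
# Overall health; expect HEALTH_OK once all PGs are active+clean.
sudo ceph health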

Comment 10 errata-xmlrpc 2016-11-22 23:42:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:2817

