| Summary: | osds are not coming up while preparing cluster with the latest ceph-ansible | | |
|---|---|---|---|
| Product: | Red Hat Storage Console | Reporter: | rakesh <rgowdege> |
| Component: | ceph-ansible | Assignee: | Sébastien Han <shan> |
| Status: | CLOSED ERRATA | QA Contact: | rakesh <rgowdege> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 2 | CC: | adeza, aschoen, ceph-eng-bugs, gmeno, hnallurv, kdreyer, nthomas, sankarshan, vashastr |
| Target Milestone: | --- | | |
| Target Release: | 2 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-ansible-1.0.5-43.el7scon | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-11-22 23:42:59 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Attachments: | | | |
Hi,

I tried this on fresh machines with just ceph-ansible-1.0.5-42.el7scon.noarch, without teuthology, and faced the same issue.

Regards,
Vasishta

Created attachment 1219404 [details]
log file of ansible-playbook (using ceph-ansible-1.0.5-42.el7scon.noarch, without teuthology)
I've reproduced this error with ceph-ansible-1.0.5-42 and ceph-installer. We are going to drop the dmcrypt feature in order to resolve this regression; ceph-ansible-1.0.5-43 will carry the fix.
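For context, a minimal sketch of the group_vars/osds.yml settings involved in this scenario: a co-located journal with the dmcrypt variant left disabled. This is an illustrative sketch, not taken from this report; the variable names follow upstream ceph-ansible of that era, and the device paths are hypothetical.

```yaml
# Hedged sketch of group_vars/osds.yml for the co-located journal
# scenario in this bug; device paths are hypothetical examples.
devices:
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd

# Place the journal on the same device as the OSD data.
journal_collocation: true

# The dmcrypt variant of this layout is the feature dropped in
# ceph-ansible-1.0.5-43 to resolve this regression.
dmcrypt_journal_collocation: false
```

With settings along these lines, the OSD role prepares each listed device and co-locates its journal partition on the same disk, which is the layout exercised by this bug.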
Ceph installed properly with ceph-ansible, and the OSDs are created and up and running.

```
rpm -qa | grep ceph-ansible
ceph-ansible-1.0.5-43.el7scon.noarch
```
```
sudo ceph -s
    cluster 1b085b66-c23e-4c8c-926f-76e265ff0483
     health HEALTH_OK
     monmap e1: 1 mons at {magna054=10.8.128.54:6789/0}
            election epoch 3, quorum 0 magna054
     osdmap e54: 9 osds: 9 up, 9 in
            flags sortbitwise
      pgmap v146: 704 pgs, 6 pools, 1636 bytes data, 171 objects
            328 MB used, 8370 GB / 8370 GB avail
                 704 active+clean
```

```
sudo ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 8.17465 root default
-2 2.72488     host magna054
 0 0.90829         osd.0           up  1.00000          1.00000
 3 0.90829         osd.3           up  1.00000          1.00000
 6 0.90829         osd.6           up  1.00000          1.00000
-3 2.72488     host magna081
 1 0.90829         osd.1           up  1.00000          1.00000
 5 0.90829         osd.5           up  1.00000          1.00000
 8 0.90829         osd.8           up  1.00000          1.00000
-4 2.72488     host magna037
 2 0.90829         osd.2           up  1.00000          1.00000
 4 0.90829         osd.4           up  1.00000          1.00000
 7 0.90829         osd.7           up  1.00000          1.00000
```
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:2817
Created attachment 1219212 [details]
teuthology.log

Description of problem: While preparing the cluster with the latest ceph-ansible, the OSDs are not created. This is with a co-located journal.

```
sudo ceph -s
    cluster 39cf1820-20a1-4381-8ad6-73db167d04b0
     health HEALTH_ERR
            192 pgs are stuck inactive for more than 300 seconds
            192 pgs stuck inactive
            no osds
     monmap e1: 1 mons at {magna054=10.8.128.54:6789/0}
            election epoch 3, quorum 0 magna054
     osdmap e2: 0 osds: 0 up, 0 in
            flags sortbitwise
      pgmap v3: 192 pgs, 2 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 192 creating
```

ceph-ansible version: ceph-ansible-1.0.5-42.el7scon.noarch. This was installed using the ceph-ansible task with teuthology; I am attaching the teuthology logs here.
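When a deployment ends in this "0 osds" state, the usual first checks on an OSD node look something like the following. This is a hedged sketch, not taken from the attached logs; which node to inspect and how many log lines to read are assumptions.

```
# Inspect how ceph-disk classified the data and journal devices.
sudo ceph-disk list

# Review logs of any OSD units that were created but failed to start.
sudo journalctl -u 'ceph-osd@*' --no-pager | tail -n 50
```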