Bug 1304367 - overcloud deployment finished successfully and Ceph's OSDs are down
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-puppet-modules
Version: 8.0 (Liberty)
Hardware: x86_64 Linux
Priority: high  Severity: high
Target Milestone: ga
Target Release: 8.0 (Liberty)
Assigned To: Emilien Macchi
QA Contact: Yogev Rabl
Duplicates: 1309926
Depends On:
Blocks: 1261979 1310828
Reported: 2016-02-03 07:35 EST by Yogev Rabl
Modified: 2016-04-26 10:47 EDT (History)
22 users

See Also:
Fixed In Version: openstack-puppet-modules-7.0.10-1.el7ost
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2016-04-07 17:27:31 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments (Terms of Use)
overcloud deployment log (7.09 MB, text/plain)
2016-02-03 07:35 EST, Yogev Rabl

External Trackers
Tracker ID Priority Status Summary Last Updated
OpenStack gerrit 276141 None None None 2016-02-25 14:50 EST

Description Yogev Rabl 2016-02-03 07:35:54 EST
Created attachment 1120745
overcloud deployment log

Description of problem:
The deployment of the overcloud installed 3 Ceph storage nodes, each with 4 hard drives (1 for the OS, 3 for the OSDs), and finished successfully with a return value of 0.
Though the deployment was a success, the OSDs are down. The services didn't start and had to be started manually.
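For reference, a sketch of the manual start that was needed (assumptions: sysvinit-managed Ceph Hammer as shipped with OSP 8, and the OSD ids from one node below; the exact service names depend on the Ceph release). The loop only prints the commands rather than running them:

```shell
# Print the manual start command for each OSD id reported down on this node.
# The OSD ids (0, 4, 7) and the sysvinit-style service invocation are
# assumptions for illustration, not taken from the deployment logs.
for id in 0 4 7; do
  echo "sudo service ceph start osd.${id}"
done
```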

Version-Release number of selected component (if applicable):

(though I know the same happens with OSPD 7.2 and 7.3) 

How reproducible:

Steps to Reproduce:
1. Add additional hard drives to the would-be Ceph storage nodes
2. Set the ceph.yaml file with the additional hard drives, one entry per OSD device (the device paths were not captured in this report):
       ceph::profile::params::osds:
         '<osd device>':
           journal: ''
         '<osd device>':
           journal: ''
         '<osd device>':
           journal: ''
3. Deploy the overcloud
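For illustration only, a ceph.yaml fragment of the shape used here, assuming the three extra data disks on each storage node are /dev/sdb, /dev/sdc and /dev/sdd (hypothetical paths; substitute the real devices):

```yaml
# Hypothetical device paths; substitute the actual data disks on the nodes.
ceph::profile::params::osds:
  '/dev/sdb':
    journal: ''   # empty value: no separate journal device
  '/dev/sdc':
    journal: ''
  '/dev/sdd':
    journal: ''
```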

Actual results:
The OSDs are down

[heat-admin@overcloud-controller-0 ~]$ sudo ceph osd tree
ID WEIGHT  TYPE NAME                         UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.59995 root default                                                       
-2 0.29997     host overcloud-cephstorage-2                                   
 0 0.09999         osd.0                       down        0          1.00000 
 4 0.09999         osd.4                       down        0          1.00000 
 7 0.09999         osd.7                       down        0          1.00000 
-3 0.29997     host overcloud-cephstorage-1                                   
 1 0.09999         osd.1                       down        0          1.00000 
 6 0.09999         osd.6                       down        0          1.00000 
 8 0.09999         osd.8                       down        0          1.00000 
 2       0 osd.2                               down        0          1.00000 
 3       0 osd.3                               down        0          1.00000 
 5       0 osd.5                               down        0          1.00000

Expected results:
All the OSDs should be up

Additional info:
Comment 2 Alan Bishop 2016-02-05 14:01:06 EST
Could this be a duplicate of #1298620?
Comment 3 arkady kanevsky 2016-02-15 23:47:59 EST
Yes, but this is really the equivalent BZ of 1297251, targeted for OSP 8.
Comment 4 Alan Bishop 2016-02-19 08:36:44 EST
There are many BZs with the same root cause (udev rules cause OSDs to be down after deployment), one of which is 1309926, which is targeted for OSP 8. That BZ is being actively worked, and its external tracker (https://review.openstack.org/276141) is nearly resolved. I think this BZ should be marked as a duplicate of 1309926.
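For context on the root cause named above: on these systems, OSD activation is driven by udev rules shipped with ceph-disk. The fragment below is an abridged sketch of the shape of those rules, not the verbatim file contents; the GUID shown is the standard Ceph OSD data partition-type GUID.

```
# /lib/udev/rules.d/95-ceph-osd.rules (abridged sketch, not verbatim)
# When a block device with the Ceph OSD data partition-type GUID appears,
# hand it to ceph-disk, which mounts it and starts the OSD daemon.
ACTION=="add", SUBSYSTEM=="block", \
  ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
  RUN+="/usr/sbin/ceph-disk activate /dev/$name"
```

If these rules never fire for disks that are already present during deployment, the OSDs stay down until activated by hand (e.g. with `ceph-disk activate-all`), which matches the behavior reported here.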
Comment 5 Emilien Macchi 2016-02-25 17:12:25 EST
*** Bug 1309926 has been marked as a duplicate of this bug. ***
Comment 8 Yogev Rabl 2016-03-24 11:20:51 EDT
The deployment finished successfully with the OSDs up and running.

Comment 9 errata-xmlrpc 2016-04-07 17:27:31 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

