Bug 1486830

Summary: Some containerized OSDs do not start after reboot
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Yogev Rabl <yrabl>
Component: Ceph-Ansible Assignee: Sébastien Han <shan>
Status: CLOSED ERRATA QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: urgent Docs Contact:
Priority: urgent    
Version: 2.4CC: adeza, aschoen, ceph-eng-bugs, dbecker, gmeno, jbrier, jefbrown, mburns, morazi, nthomas, rhel-osp-director-maint, sankarshan, seb, shan, yrabl
Target Milestone: rcKeywords: Triaged, ZStream
Target Release: 3.1   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: Doc Type: Bug Fix
Doc Text:
.Containerized OSDs start after reboot
Previously, in a containerized environment, some OSDs might not start after the Ceph storage nodes were rebooted. This was due to a race condition. The race condition has been resolved, and now all OSDs start properly after a reboot.
Story Points: ---
Clone Of: Environment:
Last Closed: 2018-09-26 18:16:41 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1584264    

Description Yogev Rabl 2017-08-30 15:18:44 UTC
Description of problem:
After rebooting Ceph storage nodes some of the OSDs are not starting.
The status of the cluster is:

ID WEIGHT  TYPE NAME                        UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.82077 root default
-2 0.27539     host overcloud-cephstorage-1
 0 0.09270         osd.0                       down        0          1.00000
 3 0.09270         osd.3                       down        0          1.00000
 4 0.09000         osd.4                         up  1.00000          1.00000
-3 0.27269     host overcloud-cephstorage-2
 2 0.09270         osd.2                       down        0          1.00000
 5 0.09000         osd.5                         up  1.00000          1.00000
 7 0.09000         osd.7                         up  1.00000          1.00000
-4 0.27269     host overcloud-cephstorage-0
 1 0.09270         osd.1                       down        0          1.00000
 8 0.09000         osd.8                         up  1.00000          1.00000
 6 0.09000         osd.6                         up  1.00000          1.00000
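
On an affected storage node, the failed OSD units and their logs can be inspected along these lines (the ceph-osd@ per-device unit name is the usual ceph-ansible convention and an assumption here, as is docker as the container runtime):

systemctl --failed                     # units that failed during this boot
systemctl list-units 'ceph-osd@*'      # state of the containerized OSD units
journalctl -b -u 'ceph-osd@*'          # OSD unit logs since the current boot
docker ps -a | grep ceph-osd           # OSD containers, including exited ones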

Version-Release number of selected component (if applicable):
ceph-ansible-3.0.0-0.1.rc3.el7cp.noarch

How reproducible:
100%

Steps to Reproduce:
1. Once an overcloud is successfully deployed, reboot the nodes with Ceph OSDs
2. Check the status of the Ceph cluster (see the example commands below)
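For example, from the undercloud (the heat-admin SSH user is the usual TripleO default and an assumption here; resolve the node addresses however is appropriate for the environment):

# reboot each Ceph storage node
for node in overcloud-cephstorage-0 overcloud-cephstorage-1 overcloud-cephstorage-2; do
    ssh heat-admin@"$node" 'sudo reboot'
done

# once the nodes are back, check the cluster from a monitor/controller node;
# in a containerized deployment the ceph CLI may need to run inside the mon container
ceph -s
ceph osd tree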

Actual results:
Not all OSDs are up after the reboot.

Expected results:
All OSDs are up and running
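
A quick way to check for the expected state (a sketch; the exact wording of the ceph osd stat output differs slightly between Ceph releases):

ceph osd stat                          # e.g. "9 osds: 9 up, 9 in" when everything started
if ceph osd tree | grep -wq down; then
    echo "some OSDs are still down"
else
    echo "all OSDs are up"
fi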

Additional info:

Comment 2 seb 2017-08-31 22:18:38 UTC
Deployment logs, please?

Comment 4 Yogev Rabl 2017-10-26 18:02:12 UTC
Tested; the bug was not reproduced.

Comment 12 Sébastien Han 2018-09-25 09:15:59 UTC
LGTM, thanks for taking care of it.

Comment 14 errata-xmlrpc 2018-09-26 18:16:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2819

Comment 15 John Brier 2018-10-02 20:20:09 UTC
Release Notes Doc Text: changed "may" to "might" per the IBM Style Guide.