Bug 1486830 - Some Containerized OSDs don't start after reboot
Status: CLOSED ERRATA
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Ceph-Ansible
Version: 2.4
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 3.1
Assigned To: leseb
QA Contact: ceph-qe-bugs
Keywords: Triaged, ZStream
Depends On:
Blocks: 1584264
Reported: 2017-08-30 11:18 EDT by Yogev Rabl
Modified: 2018-10-02 16:20 EDT
CC: 15 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
.Containerized OSDs start after reboot
Previously, in a containerized environment, some OSDs might not have started after the Ceph storage nodes were rebooted. This was due to a race condition. The race condition has been resolved, and now all OSDs start properly after a reboot. (See the illustrative sketch below the tracker fields.)
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-09-26 14:16:41 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2018:2819 None None None 2018-09-26 14:17 EDT

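The Doc Text above attributes the failure to a boot-time race. The snippet below is only a rough sketch of the kind of systemd-level retry that can work around such a race in a containerized deployment; it is not the actual ceph-ansible fix, and the ceph-osd@.service template name is an assumption that varies by deployment.

# Illustrative workaround only, not the shipped ceph-ansible change.
# Assumes the containerized OSDs run under a systemd template named
# ceph-osd@.service; adjust to match the units on your storage nodes.
sudo mkdir -p /etc/systemd/system/ceph-osd@.service.d
sudo tee /etc/systemd/system/ceph-osd@.service.d/retry.conf <<'EOF'
[Service]
Restart=on-failure
RestartSec=10s
StartLimitInterval=0
EOF
sudo systemctl daemon-reload
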
Description Yogev Rabl 2017-08-30 11:18:44 EDT
Description of problem:
After rebooting the Ceph storage nodes, some of the OSDs do not start.
The status of the cluster is:

ID WEIGHT  TYPE NAME                        UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.82077 root default
-2 0.27539     host overcloud-cephstorage-1
 0 0.09270         osd.0                       down        0          1.00000
 3 0.09270         osd.3                       down        0          1.00000
 4 0.09000         osd.4                         up  1.00000          1.00000
-3 0.27269     host overcloud-cephstorage-2
 2 0.09270         osd.2                       down        0          1.00000
 5 0.09000         osd.5                         up  1.00000          1.00000
 7 0.09000         osd.7                         up  1.00000          1.00000
-4 0.27269     host overcloud-cephstorage-0
 1 0.09270         osd.1                       down        0          1.00000
 8 0.09000         osd.8                         up  1.00000          1.00000
 6 0.09000         osd.6                         up  1.00000          1.00000
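
A rough sketch of tracing the down OSDs above back to their containers (the container filter and unit names below are assumptions and vary by deployment):

# On a monitor node: print only the OSDs the cluster reports as down.
ceph osd tree | awk '$3 ~ /^osd\./ && $4 == "down"'

# On the storage node hosting a down OSD: check its container and unit;
# the "ceph-osd" filter and "ceph-osd@sdb" unit name are placeholders.
docker ps -a --filter name=ceph-osd
systemctl list-units --all 'ceph-osd*'
journalctl -b -u ceph-osd@sdb --no-pager | tail -n 50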

Version-Release number of selected component (if applicable):
ceph-ansible-3.0.0-0.1.rc3.el7cp.noarch

How reproducible:
100%

Steps to Reproduce:
1. Once an overcloud is successfully deployed, reboot the nodes that host Ceph OSDs.
2. Check the status of the Ceph cluster (see the sketch below).
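A minimal sketch of the check in step 2, run from a monitor node (assumes the admin keyring is available there):

# After the storage nodes come back up:
ceph -s          # overall health; reports degraded or down OSDs
ceph osd stat    # up/in counts, e.g. "9 osds: 6 up, 9 in" when the bug reproduces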

Actual results:
Not all OSDs are up.

Expected results:
All OSDs are up and running

Additional info:
Comment 2 seb 2017-08-31 18:18:38 EDT
Deployment logs, please?
Comment 4 Yogev Rabl 2017-10-26 14:02:12 EDT
Tested; the bug was not reproduced.
Comment 12 leseb 2018-09-25 05:15:59 EDT
LGTM, thanks for taking care of it.
Comment 14 errata-xmlrpc 2018-09-26 14:16:41 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2819
Comment 15 John Brier 2018-10-02 16:20:09 EDT
Release Notes Doc Text: changed "may" to "might" per the IBM Style Guide.
