Description of problem:
After rebooting the Ceph storage nodes, some of the OSDs do not start. The status of the cluster is:

ID WEIGHT  TYPE NAME                        UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.82077 root default
-2 0.27539     host overcloud-cephstorage-1
 0 0.09270         osd.0                       down        0          1.00000
 3 0.09270         osd.3                       down        0          1.00000
 4 0.09000         osd.4                         up  1.00000          1.00000
-3 0.27269     host overcloud-cephstorage-2
 2 0.09270         osd.2                       down        0          1.00000
 5 0.09000         osd.5                         up  1.00000          1.00000
 7 0.09000         osd.7                         up  1.00000          1.00000
-4 0.27269     host overcloud-cephstorage-0
 1 0.09270         osd.1                       down        0          1.00000
 8 0.09000         osd.8                         up  1.00000          1.00000
 6 0.09000         osd.6                         up  1.00000          1.00000

Version-Release number of selected component (if applicable):
ceph-ansible-3.0.0-0.1.rc3.el7cp.noarch

How reproducible:
100%

Steps to Reproduce:
1. Once an overcloud is successfully deployed, reboot the nodes running Ceph OSDs
2. Check the status of the Ceph cluster

Actual results:
Not all OSDs are up.

Expected results:
All OSDs are up and running.

Additional info:
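For reference, one way to confirm which OSDs are down and try to recover them by hand (assuming systemd-managed OSD units named ceph-osd@<id>, which is the usual layout for this ceph-ansible version; exact unit names depend on the deployment):

  # On a monitor/admin node: confirm which OSDs are down
  ceph osd tree
  ceph health detail

  # On the affected storage node (e.g. overcloud-cephstorage-1):
  systemctl status ceph-osd@0 ceph-osd@3   # inspect the failed OSD units
  journalctl -u ceph-osd@0 --no-pager      # look for why the unit did not start
  systemctl start ceph-osd@0 ceph-osd@3    # attempt a manual start

If the units start cleanly when run by hand, that would point at the OSD activation path at boot time rather than the OSDs themselves.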
Deployment logs, please?
Tested; the bug was not reproduced.
LGTM, thanks for taking care of it.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:2819
Release Notes Doc Text: changed "may" to "might" per the IBM Style Guide.