Bug 1332348

Summary: Installation guide upgrade OSD issue.
Product: Red Hat Ceph Storage
Reporter: Warren <wusui>
Component: Documentation
Assignee: ceph-docs <ceph-docs>
Status: CLOSED DUPLICATE
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 1.3.2
CC: asriram, kdreyer
Target Milestone: rc
Target Release: 2.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-05-03 11:17:17 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Warren 2016-05-03 01:08:42 UTC
Looking at 
https://access.qa.redhat.com/documentation/en/red-hat-ceph-storage/version-2/installation-guide-for-red-hat-enterprise-linux/#upgrading_ceph_storage_cluster

Section 5.2 step 6 says:

Verify you have a HEALTH_OK Ceph storage cluster and all placement groups are active+clean before moving on to the next OSD host:

At this point in the process, the noout and norebalance flags are set, so the cluster always reports HEALTH_WARN. It will never reach HEALTH_OK while those flags remain set, so waiting for HEALTH_OK here waits forever.

Something better would be:

Verify that the only health warnings are that the noout and norebalance flags are set.

ceph health

HEALTH_WARN noout,norebalance flag(s) set

After step 7 (once the flags are unset), run ceph -s to verify that the health status is HEALTH_OK.
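The suggested check could be scripted roughly as follows. This is a minimal sketch, not from the guide: it assumes the HEALTH_WARN output format shown above, and it parses a sample status line rather than calling ceph health directly.

```shell
#!/bin/sh
# Hypothetical helper: decide whether it is safe to move to the next
# OSD host. In practice you would capture the live status with:
#   health=$(ceph health)
# Here we use a sample line matching the output quoted above.
health="HEALTH_WARN noout,norebalance flag(s) set"

case "$health" in
    HEALTH_OK*)
        # Flags already unset and cluster fully healthy.
        msg="cluster healthy" ;;
    "HEALTH_WARN noout,norebalance flag(s) set")
        # The only warnings are the flags we set ourselves,
        # so it is safe to continue with the next OSD host.
        msg="only the expected flag warnings; safe to continue" ;;
    *)
        # Any other warning or error needs investigation first.
        msg="unexpected health state: $health" ;;
esac
echo "$msg"
```

With noout and norebalance set, the middle branch fires; after step 7 unsets the flags, a healthy cluster takes the first branch instead.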

Comment 2 Anjana Suparna Sriram 2016-05-03 11:17:17 UTC

*** This bug has been marked as a duplicate of bug 1332347 ***