Bug 1332347

Summary: Installation guide upgrade OSD issue.
Product: Red Hat Ceph Storage Reporter: Warren <wusui>
Component: Documentation    Assignee: Aron Gunn <agunn>
Status: CLOSED CURRENTRELEASE QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: medium Docs Contact:
Priority: medium    
Version: 1.3.2    CC: agunn, asriram, hyelloji, kdreyer
Target Milestone: rc   
Target Release: 2.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2016-09-30 17:20:37 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:

Description Warren 2016-05-03 01:08:29 UTC
Looking at 
https://access.qa.redhat.com/documentation/en/red-hat-ceph-storage/version-2/installation-guide-for-red-hat-enterprise-linux/#upgrading_ceph_storage_cluster

Section 5.2 step 6 says:

Verify you have a HEALTH_OK Ceph storage cluster and all placement groups are active+clean before moving on to the next OSD host:

At this point in the process, the noout and norebalance flags are set, so the cluster will always report HEALTH_WARN. Waiting for HEALTH_OK here means waiting indefinitely, because the status will not change while those flags remain set.
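
For context, a minimal sketch of the commands involved at this point in the procedure (standard ceph CLI; the exact warning text can vary by release):

# Flags set earlier in the upgrade procedure:
ceph osd set noout
ceph osd set norebalance

# While they are set, a health check reports a warning rather than HEALTH_OK:
ceph health
HEALTH_WARN noout,norebalance flag(s) set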

Something better would be:

Verify that the only health warnings are that the noout and norebalance flags are set.

ceph health

HEALTH_WARN noout,norebalance flag(s) set

After step 7, run ceph -s to verify that the health status is HEALTH_OK.
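
For illustration, and assuming step 7 is where the noout and norebalance flags are cleared again after the OSD host upgrade, the final check might look like this (standard ceph CLI):

ceph osd unset noout
ceph osd unset norebalance

# Once recovery and rebalancing finish, the cluster should return to HEALTH_OK
# with all placement groups active+clean:
ceph -s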

Comment 2 Anjana Suparna Sriram 2016-05-03 11:17:17 UTC
*** Bug 1332348 has been marked as a duplicate of this bug. ***

Comment 4 Hemanth Kumar 2016-05-31 12:13:47 UTC
Doc looks good to me. Moving to verified state.