Bug 1332348 - Installation guide upgrade osd issue.
Summary: Installation guide upgrade osd issue.
Keywords:
Status: CLOSED DUPLICATE of bug 1332347
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Documentation
Version: 1.3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 2.0
Assignee: ceph-docs@redhat.com
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-03 01:08 UTC by Warren
Modified: 2016-05-03 11:17 UTC
2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-03 11:17:17 UTC
Target Upstream Version:



Description Warren 2016-05-03 01:08:42 UTC
Looking at 
https://access.qa.redhat.com/documentation/en/red-hat-ceph-storage/version-2/installation-guide-for-red-hat-enterprise-linux/#upgrading_ceph_storage_cluster

Section 5.2 step 6 says:

Verify you have a HEALTH_OK Ceph storage cluster and all placement groups are active+clean before moving on to the next OSD host:

At this point in the procedure, the noout and norebalance flags are set, so ceph health always reports HEALTH_WARN. Waiting for HEALTH_OK at this step will never succeed.

Something better would be:

Verify that the only health warnings are that the noout and norebalance flags are set.

ceph health

HEALTH_WARN noout,norebalance flag(s) set

After step 7, once the flags are unset, run ceph -s to verify that the health status is HEALTH_OK.
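The suggested check could also be scripted. A minimal sketch, assuming the ceph health output format shown above; the helper name only_upgrade_flags_set is hypothetical, not part of the ceph CLI:

```shell
#!/bin/sh
# Hypothetical helper: succeeds (exit 0) if the given `ceph health`
# output is HEALTH_OK, or if the only warning is that the
# noout/norebalance flags are set (the expected state mid-upgrade).
only_upgrade_flags_set() {
  case "$1" in
    HEALTH_OK*) return 0 ;;
    "HEALTH_WARN noout,norebalance flag(s) set"*) return 0 ;;
    *) return 1 ;;
  esac
}

# Example usage against a live cluster:
#   only_upgrade_flags_set "$(ceph health)" && echo "safe to proceed to the next OSD host"
```

Any other warning (for example, placement groups that are not active+clean) makes the check fail, which is the point: the operator should stop and investigate rather than continue the rolling upgrade.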

Comment 2 Anjana Suparna Sriram 2016-05-03 11:17:17 UTC

*** This bug has been marked as a duplicate of bug 1332347 ***

