Bug 1339393 - Documentation: make it clear how to resolve degraded or non-active pgs on a 1/2 osd cluster
Summary: Documentation: make it clear how to resolve degraded or non-active pgs on a 1/2 osd cluster
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Documentation
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: rc
Target Release: 2.1
Assignee: John Wilkins
QA Contact: Ramakrishnan Periyasamy
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-24 21:11 UTC by Alexander Chuzhoy
Modified: 2016-11-02 15:59 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-02 15:59:09 UTC
Embargoed:



Description Alexander Chuzhoy 2016-05-24 21:11:24 UTC
Documentation: Need to document the confusing degraded state of Ceph, and the workaround for it, when deploying OSP with director using only a single OSD.


What happens is that if a deployment with director is done with a single OSD,
the cluster status returns:
     health HEALTH_WARN
and there are PGs in a degraded/unclean/undersized state once data is created or loaded.
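
For reference, the affected PGs can be inspected from a node with the client admin keyring using standard commands such as the following (exact output varies by release):

ceph -s
ceph health detail
ceph pg dump_stuck unclean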


The workaround for it would be:

1) For every existing pool, run:
ceph osd pool set <poolname> size 1

2) Set the following in /etc/ceph/ceph.conf, so that new pools are created with a single replica:
osd_pool_default_size = 1
and restart the Ceph services. A rough example of both steps is sketched below.
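
For example, as a rough sketch (pool names are taken from the running cluster; the restart unit is an assumption for a systemd-based node, so adjust it for the actual deployment):

# Step 1: reduce the replica count of every existing pool to 1
for pool in $(ceph osd pool ls); do
    ceph osd pool set "$pool" size 1
done

# Step 2: make new pools default to a single replica by adding
# the following to the [global] section of /etc/ceph/ceph.conf:
#   osd_pool_default_size = 1
# then restart the Ceph daemons, e.g.:
systemctl restart ceph.target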

Comment 2 Samuel Just 2016-05-24 21:21:06 UTC
There seems to be a lot of confusion in cases where a 1-OSD test/PoC cluster comes up with degraded/non-active PGs due to the default size=3/min_size=2 values.

http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/

should, at the least, have examples of clusters with both degraded and non-active PGs.  It should also be clear how to resolve this (both by setting the config value for new pools and by changing existing pools).
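
As a hedged sketch of the min_size side of this: with a single OSD, PGs stay non-active as long as a pool's min_size is 2, because min_size is the minimum number of replicas that must be up before a PG serves I/O. Whether min_size is adjusted automatically when size is lowered can vary, so it is worth checking and setting it explicitly:

# existing pools
ceph osd pool get <poolname> min_size
ceph osd pool set <poolname> min_size 1

# new pools: add to the [global] section of /etc/ceph/ceph.conf
#   osd_pool_default_min_size = 1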

