Documentation: We need to document the confusion and the workaround for the degraded state of Ceph when deploying OSP with director using only a single OSD. If a deployment with director is done with a single OSD, the status returns health HEALTH_WARN and there are PGs in a degraded/unclean/undersized state as soon as data is created or loaded. The workaround is:
1) For every created pool, run: ceph osd pool set <poolname> size 1
2) Set osd_pool_default_size = 1 in /etc/ceph/ceph.conf and restart the Ceph services.
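A minimal sketch of that workaround, assuming a running cluster with client.admin access; the min_size settings and the exact restart unit are assumptions (min_size may still be 2 on pools created under the old defaults, and the service/unit name varies by Ceph release):

    # 1) Drop the replication requirement on every existing pool.
    #    'rados lspools' prints one pool name per line.
    for pool in $(rados lspools); do
        ceph osd pool set "$pool" size 1
        ceph osd pool set "$pool" min_size 1   # assumption: min_size may still be 2 from pool creation
    done

    # 2) Make new pools default to a single replica by adding to the [global]
    #    section of /etc/ceph/ceph.conf:
    #      osd_pool_default_size = 1
    #      osd_pool_default_min_size = 1      # assumption: keeps PGs active with a single replica
    # ...then restart the Ceph daemons, e.g. on a systemd host:
    systemctl restart ceph.target              # assumption: the unit name depends on the release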
There seems to be a lot of confusion in cases where a 1-OSD test/PoC cluster comes up with degraded/non-active PGs due to the default size=3 / min_size=2 values. http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/ should, at the very least, include examples of clusters with both degraded and non-active PGs. It should also make clear how to resolve this, both by setting the config value so that new pools pick it up and by changing existing pools.
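As a hedged sketch of what such an example could walk through to surface the degraded and non-active PGs on a 1-OSD cluster (these are standard ceph CLI calls; the pool name rbd in the last command is only an assumption):

    ceph -s                         # overall status; shows HEALTH_WARN and the degraded/undersized PG counts
    ceph health detail              # lists the individual PGs that are degraded/undersized/stuck
    ceph pg dump_stuck unclean      # PGs that never reached active+clean
    ceph pg dump_stuck inactive     # PGs that are not active (not serving I/O)
    ceph osd pool get rbd size      # confirm the pool still carries the default size of 3 (pool name is an assumption)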