Description of problem:
[RFE] osd_pool_default_min_size=2 should be set by default, not 1.

File: /usr/share/openstack-tripleo-heat-templates/puppet/hieradata/ceph.yaml
[...]
ceph::profile::params::osd_pool_default_size: 3
ceph::profile::params::osd_pool_default_min_size: 1 <===
[...]

Version-Release number of selected component (if applicable):
Red Hat OpenStack Platform 8

- In Ceph we recommend min_size=2, which prevents data loss, incomplete PGs and unfound objects if 2 or more failure domains (default: host) go down.
- With min_size=2 we pause writes to the Ceph pools in that situation, whereas with min_size=1 we allow writes with only 1 failure domain up.
- Maybe we can make osd_pool_default_min_size configurable, as is being done for size in https://bugzilla.redhat.com/show_bug.cgi?id=1283721, but the default should be 2.
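One possible shape of the requested change in that hieradata file (illustrative sketch only, not a proposed patch):

ceph::profile::params::osd_pool_default_size: 3
ceph::profile::params::osd_pool_default_min_size: 2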
The best thing to do is to leave this option 'undef' in puppet-ceph, so if we don't declare it, Ceph will pick the right option for us.
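In hieradata terms that would mean shipping roughly the following (a sketch, not the actual patch), with the min_size key simply absent:

ceph::profile::params::osd_pool_default_size: 3
# osd_pool_default_min_size left unset; Ceph derives it from size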
(In reply to seb from comment #1)
> The best thing to do is to leave this option 'undef' in puppet-ceph, so if
> we don't declare it, Ceph will pick the right option for us.

Thank you Sebastien, yes, that would be much better: Ceph then falls back to the formula size - size/2, as per the default in the Ceph source:

OPTION(osd_pool_default_min_size, OPT_INT, 0) // 0 means no specific default; ceph will use size-size/2
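For reference, with integer division that fallback works out to: size=3 gives min_size = 3 - 3/2 = 3 - 1 = 2, size=2 gives 2 - 1 = 1, and size=1 gives 1 - 0 = 1. So with the default of three replicas, leaving the option unset yields exactly the min_size=2 this RFE asks for, while a single-replica pool stays writable.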
Changing the default in Director is possible and it should be safe on upgrade because the setting won't change for pre-existing pools.

For new deployments with a single ceph-osd running though (which is a pretty common test/PoC scenario), min_size will automatically be set to 2 and the cluster will not be writable unless the operator explicitly sets min_size to 1.

I'll see if it is possible to have an automated mechanism to enforce min_size to 1 when there is a single ceph-osd, so we don't impact existing use-cases.
This bugzilla has been removed from the release and needs to be reviewed and Triaged for another Target Release.
Hello everyone,

As Seb suggested in comment#1, the best thing would be to leave this option undefined so that Ceph takes care of it by default. I am changing the bug title accordingly.

Regards,
Vikhyat
(In reply to Giulio Fidente from comment #4)
> Changing the default in Director is possible and it should be safe on
> upgrade because the setting won't change for pre-existing pools.

As I said, we do not have to change the default; we need to remove this option. And yes, it should not change the pre-existing pools.

> For new deployments with a single ceph-osd running though (which is a
> pretty common test/PoC scenario), min_size will automatically be set to 2
> and the cluster will not be writable unless the operator explicitly sets
> min_size to 1.

For a new test/PoC deployment with a single OSD you need to set the replication size (osd_pool_default_size) to 1, and that will take care of min_size being 1 as well (see the sketch at the end of this comment). BTW, a PoC/test should not be done with a single OSD; it should use a *minimum* of 3 OSDs.

> I'll see if it is possible to have an automated mechanism to enforce
> min_size to 1 when there is a single ceph-osd, so we don't impact existing
> use-cases.

Same as above.
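A sketch of such a single-OSD test override, using the same ExtraConfig mechanism as the workaround below (illustrative values only, not a recommended production setting):

parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_pool_default_size: 1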
While we work on the fix, for new deployments the min_size can be set via an environment file with:

parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_pool_default_min_size: 2
KCS: https://access.redhat.com/solutions/2999651
The upstream patch has been merged into master with updated release notes.

Keith
verified
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:3462