Bug 1404459
| Summary: | [RFE] Remove osd_pool_default_min_size=1, keep it undefined so Ceph will take care of the min_size | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Vikhyat Umrao <vumrao> |
| Component: | openstack-tripleo-heat-templates | Assignee: | Keith Schincke <kschinck> |
| Status: | CLOSED ERRATA | QA Contact: | Yogev Rabl <yrabl> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 8.0 (Liberty) | CC: | amaumene, aschultz, gfidente, jcall, jomurphy, jquinn, linuxkidd, mburns, mhackett, nlevinki, rhel-osp-director-maint, seb, shan, tvignaud, vaggarwa, vumrao |
| Target Milestone: | Upstream M2 | Keywords: | FutureFeature, Triaged |
| Target Release: | 12.0 (Pike) | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | openstack-tripleo-heat-templates-7.0.0-0.20170616123155.el7ost | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-12-13 20:54:56 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description

Vikhyat Umrao, 2016-12-13 22:02:11 UTC

seb (comment #1):
The best thing to do is to leave this option 'undef' in puppet-ceph, so that if we don't declare it, Ceph will pick the right option for us.

Vikhyat Umrao:
(In reply to seb from comment #1)
> The best thing to do is to leave this option 'undef' in puppet-ceph so if we
> don't declare it Ceph will pick the right option for us.

Thank you, Sebastien. Yes, that would be much better, using the formula size - size/2, as given below for the default:

    OPTION(osd_pool_default_min_size, OPT_INT, 0) // 0 means no specific default; ceph will use size - size/2

Giulio Fidente (comment #4):
Changing the default in Director is possible, and it should be safe on upgrade because the setting won't change for pre-existing pools.

For new deployments with a single ceph-osd running though (which is a pretty common test/PoC scenario), min_size will automatically be set to 2 and the cluster will not be writable unless the operator explicitly sets min_size to 1.

I'll see if it is possible to have an automated mechanism to enforce min_size to 1 when there is a single ceph-osd, so we don't impact existing use cases.

This bugzilla has been removed from the release and needs to be reviewed and triaged for another Target Release.

Vikhyat Umrao:
Hello everyone, the best approach would be, as Seb suggested in comment #1, to leave this option undefined so that by default Ceph will take care of it. I am changing the bug title accordingly.

Regards,
Vikhyat

Vikhyat Umrao:
(In reply to Giulio Fidente from comment #4)
> Changing the default in Director is possible and it should be safe on
> upgrade because the setting won't change for pre-existing pools.

As I said, we do not have to change the default; we need to remove this option, and yes, it should not change the pre-existing pools.

> For new deployments with a single ceph-osd running though (which is a pretty
> common test/poc scenario), min will automatically be set to 2 and the
> cluster will not be writable unless the operator explicitly set min_size to
> 1.

For a new test/PoC deployment with a single OSD, you need to set the replication size (osd_pool_default_size) to 1, and that will take care of setting min_size to 1. By the way, a PoC/test should not be done with a single OSD; it should use a *minimum* of 3 OSDs.

> I'll see if it is possible to have an automated mechanism to enforce
> min_size to 1 when there is a single ceph-osd so we don't impact existing
> use-cases.

Same as above.

While we work on the fix, for new deployments the min_size can be set via an environment file with:

    parameter_defaults:
      ExtraConfig:
        ceph::profile::params::osd_pool_default_min_size: 2

This bugzilla has been removed from the release and needs to be reviewed and triaged for another Target Release.

The upstream patch has been merged into master with updated release notes.

Keith verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3462
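Conversely, for the single-OSD test/PoC scenario discussed above, the replication size itself can be lowered through the same environment-file mechanism shown in the comments. This is a sketch following that pattern; it assumes puppet-ceph exposes `osd_pool_default_size` under the same `ceph::profile::params` class as the min_size parameter:

```yaml
parameter_defaults:
  ExtraConfig:
    # Single-OSD PoC only; production should use at least 3 OSDs.
    ceph::profile::params::osd_pool_default_size: 1
```

With size=1 and osd_pool_default_min_size left undefined, the derived min_size is 1 and the cluster remains writable.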
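The fallback discussed above (when osd_pool_default_min_size is 0, Ceph derives min_size as size - size/2 with integer division) can be illustrated with a short sketch. This is an illustrative Python model of the formula quoted from the OPTION() line, not Ceph's actual C++ code; the function name is made up for this example:

```python
def effective_min_size(size: int, osd_pool_default_min_size: int = 0) -> int:
    """Model Ceph's fallback: 0 means "no specific default", in which
    case min_size becomes size - size // 2 (integer division)."""
    if osd_pool_default_min_size > 0:
        return osd_pool_default_min_size
    return size - size // 2

# Default replicated pool, size=3: min_size falls back to 2.
print(effective_min_size(3))                                # 2
# Single-OSD test/PoC with osd_pool_default_size=1: min_size falls back to 1.
print(effective_min_size(1))                                # 1
# Explicitly pinned, as the old template did with osd_pool_default_min_size=1.
print(effective_min_size(3, osd_pool_default_min_size=1))   # 1
```

This shows why removing the hardcoded `osd_pool_default_min_size=1` is safe for the common size=3 case (the derived value, 2, is the safer choice) while the single-OSD PoC case still works once the replication size itself is set to 1.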