Bug 1404459 - [RFE] Remove osd_pool_default_min_size=1, keep it undefined so Ceph will take care of the min_size.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 8.0 (Liberty)
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: Upstream M2
Target Release: 12.0 (Pike)
Assignee: Keith Schincke
QA Contact: Yogev Rabl
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-12-13 22:02 UTC by Vikhyat Umrao
Modified: 2021-03-11 14:51 UTC
CC: 16 users

Fixed In Version: openstack-tripleo-heat-templates-7.0.0-0.20170616123155.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-13 20:54:56 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Launchpad 1686773 0 None None None 2017-04-27 15:58:47 UTC
OpenStack gerrit 464183 0 None MERGED Remove osd_pool_default_min_size to allow Ceph cluster to do the right thing by default 2020-09-09 15:36:15 UTC
Red Hat Knowledge Base (Solution) 2999651 0 None None None 2017-04-13 19:00:33 UTC
Red Hat Product Errata RHEA-2017:3462 0 normal SHIPPED_LIVE Red Hat OpenStack Platform 12.0 Enhancement Advisory 2018-02-16 01:43:25 UTC

Description Vikhyat Umrao 2016-12-13 22:02:11 UTC
Description of problem:
[RFE] osd_pool_default_min_size=2 should be set by default, not 1

File: /usr/share/openstack-tripleo-heat-templates/puppet/hieradata/ceph.yaml
[...]
ceph::profile::params::osd_pool_default_size: 3
ceph::profile::params::osd_pool_default_min_size: 1 <===
[...]


Version-Release number of selected component (if applicable):
Red Hat OpenStack Platform 8

- In Ceph we recommend min_size=2, which prevents data loss, incomplete PGs, and unfound objects if 2 or more failure domains (default: host) go down.

- With min_size=2 we pause writes to the Ceph pools when fewer than 2 failure domains are up, whereas with min_size=1 we allow writes with only 1 failure domain up.

- Maybe we can make osd_pool_default_min_size configurable, as we are doing for size here: https://bugzilla.redhat.com/show_bug.cgi?id=1283721, but the default should be 2 (existing pools can be checked or adjusted as shown below).
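
For reference, the effective values on a running cluster can be checked, and adjusted per pool if needed, with the standard ceph CLI; the pool name 'volumes' below is only an example:

ceph osd dump | grep pool              # shows size and min_size for every pool
ceph osd pool get volumes min_size     # query a single pool
ceph osd pool set volumes min_size 2   # raise min_size on an existing pool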

Comment 1 seb 2016-12-14 17:00:57 UTC
The best thing to do is to leave this option 'undef' in puppet-ceph so if we don't declare it Ceph will pick the right option for us.

Comment 2 Vikhyat Umrao 2016-12-14 17:48:46 UTC
(In reply to seb from comment #1)
> The best thing to do is to leave this option 'undef' in puppet-ceph so if we
> don't declare it Ceph will pick the right option for us.

Thank you, Sebastien. Yes, that would be much better; Ceph then applies the formula size - size/2.

As given below for default:

OPTION(osd_pool_default_min_size, OPT_INT, 0)  // 0 means no specific default; ceph will use size-size/2
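
To make that default concrete (integer division, as in the source line above):

size = 3  ->  min_size = 3 - 3/2 = 3 - 1 = 2
size = 2  ->  min_size = 2 - 2/2 = 2 - 1 = 1
size = 1  ->  min_size = 1 - 1/2 = 1 - 0 = 1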

Comment 4 Giulio Fidente 2017-04-12 19:36:07 UTC
Changing the default in Director is possible and it should be safe on upgrade because the setting won't change for pre-existing pools.

For new deployments with a single ceph-osd running though (which is a pretty common test/PoC scenario), min_size will automatically be set to 2 and the cluster will not be writable unless the operator explicitly sets min_size to 1.

I'll see if it is possible to have an automated mechanism to enforce min_size to 1 when there is a single ceph-osd so we don't impact existing use-cases.

Comment 5 Red Hat Bugzilla Rules Engine 2017-04-12 19:36:35 UTC
This bugzilla has been removed from the release and needs to be reviewed and Triaged for another Target Release.

Comment 6 Vikhyat Umrao 2017-04-12 22:32:58 UTC
Hello everyone,

The best thing would be, as Seb noted in comment #1, to leave this option undefined so that Ceph takes care of it by default. I am changing the bug title accordingly.

Regards,
Vikhyat

Comment 7 Vikhyat Umrao 2017-04-12 22:53:45 UTC
(In reply to Giulio Fidente from comment #4)
> Changing the default in Director is possible and it should be safe on
> upgrade because the setting won't change for pre-existing pools.

As I said, we do not have to change the default; we need to remove this option, and yes, it should not change pre-existing pools.

> 
> For new deployments with a single ceph-osd running though (which is a pretty
> common test/PoC scenario), min_size will automatically be set to 2 and the
> cluster will not be writable unless the operator explicitly sets min_size to
> 1.
> 

For a new test/PoC deployment with a single OSD you need to set the replication size (osd_pool_default_size) to 1, and that will take care of setting min_size to 1 as well (see the example below).
BTW, a PoC/test should not be done with a single OSD; it should have a *minimum* of 3 OSDs.
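
For completeness, such a test-only override can be expressed via ExtraConfig in an environment file (illustrative only, not a recommended production setting):

parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_pool_default_size: 1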


> I'll see if it is possible to have an automated mechanism to enforce
> min_size to 1 when there is a single ceph-osd so we don't impact existing
> use-cases.

Same as above.

Comment 9 Giulio Fidente 2017-04-13 08:50:16 UTC
While we work on the fix, for new deployments the min_size can be set via an environment file with:

parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_pool_default_min_size: 2
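
The environment file (the name ~/min_size.yaml below is just an example) is then passed to the deployment with the usual -e option:

openstack overcloud deploy --templates -e ~/min_size.yaml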

Comment 10 Vikhyat Umrao 2017-04-13 19:01:27 UTC
KCS: https://access.redhat.com/solutions/2999651

Comment 11 Red Hat Bugzilla Rules Engine 2017-04-25 15:32:22 UTC
This bugzilla has been removed from the release and needs to be reviewed and Triaged for another Target Release.

Comment 12 Keith Schincke 2017-05-25 14:45:02 UTC
The upstream patch has been merged into master with an updated release note.

Keith

Comment 14 Yogev Rabl 2017-09-28 12:51:26 UTC
verified

Comment 17 errata-xmlrpc 2017-12-13 20:54:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3462

