Bug 1314855 - API for failure domain configuration
Status: CLOSED ERRATA
Product: Red Hat Storage Console
Classification: Red Hat
Component: ceph-installer
Version: 2
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: urgent
Assigned To: Alfredo Deza
QA Contact: sds-qe-bugs
Depends On:
Blocks: 1291304
Reported: 2016-03-04 11:51 EST by Nishanth Thomas
Modified: 2016-08-23 15:47 EDT

See Also:
Fixed In Version: ceph-installer-1.0.0
Doc Type: Bug Fix
Last Closed: 2016-08-23 15:47:59 EDT
Type: Bug


Description Nishanth Thomas 2016-03-04 11:51:28 EST
Ceph installer should provide an API to set the failure domain information
Comment 2 Alfredo Deza 2016-03-08 09:12:10 EST
(In reply to Nishanth Thomas from comment #0)
> Ceph installer should provide an API to set the failure domain information

Would you mind clarifying this further? What does it mean to "set the failure domain"?
Comment 3 Nishanth Thomas 2016-03-12 02:24:24 EST
Failure Domains:

Failure domain configuration requires changes to ceph.conf as well as to the CRUSH maps. The configuration is somewhat similar to the CRUSH configuration.

These are the different bucket types supported:
- type 9 region
- type 8 datacenter
- type 7 room
- type 6 pod
- type 5 pdu
- type 4 row
- type 3 rack
- type 2 chassis
- type 1 root

Each host can be placed in a single level or a combination of these hierarchy levels, for example region(APAC) -> datacenter(BLR) -> room() -> rack -> chassis, and so on. By default, all hosts are added to the root bucket.
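
To make the shape of such a hierarchy concrete, here is a small illustrative sketch in Python (not part of the original comment); the region and datacenter names come from the example above, while the remaining bucket names are hypothetical:

    # Illustrative only: one host's failure-domain placement, expressed as a
    # mapping from bucket type to bucket name. "APAC" and "BLR" come from the
    # example above; the other names are hypothetical.
    host_failure_domain = {
        "region": "APAC",
        "datacenter": "BLR",
        "room": "room-1",       # hypothetical
        "rack": "rack-3",       # hypothetical
        "chassis": "chassis-7"  # hypothetical
    }

    # A host with no explicit placement ends up under the root bucket.
    default_failure_domain = {"root": "default"}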

For the USM UX flow, please have a look at https://docs.google.com/a/redhat.com/presentation/d/1MAgpVG2Fi2UtBYUhuyMScO8zObAYvhiHLp2HQ3_YpMI/edit?usp=sharing, slides 17-23.
Comment 4 Alfredo Deza 2016-03-14 13:33:28 EDT
(In reply to Nishanth Thomas from comment #3)
> Failure Domains:
> 
> Failure domain configuration requires changes to ceph.conf as well as to
> the CRUSH maps. The configuration is somewhat similar to the CRUSH
> configuration.
> 

ceph.conf changes should not be a problem

> These are the different bucket types supported:
> - type 9 region
> - type 8 datacenter
> - type 7 room
> - type 6 pod
> - type 5 pdu
> - type 4 row
> - type 3 rack
> - type 2 chassis
> - type 1 root

Can you create a separate ticket for crush maps? That way we can track that work more specifically.


> 
> Each host can be placed in a single level or a combination of these
> hierarchy levels, for example region(APAC) -> datacenter(BLR) -> room() ->
> rack -> chassis, and so on. By default, all hosts are added to the root
> bucket.
> 
> For USM UX flow, please have a look at
> https://docs.google.com/a/redhat.com/presentation/d/
> 1MAgpVG2Fi2UtBYUhuyMScO8zObAYvhiHLp2HQ3_YpMI/edit?usp=sharing  slides 17-23
Comment 5 Alfredo Deza 2016-03-14 13:38:34 EDT
Pull request opened to address ceph.conf configuration:

https://github.com/ceph/ceph-installer/pull/120
Comment 6 Alfredo Deza 2016-03-14 15:24:02 EDT
Marking this as POST - the pull request was merged, but it addresses the ability to modify ceph.conf, not CRUSH maps.

A separate ticket should be opened for the work needed on CRUSH maps.
Comment 8 Mike McCune 2016-03-28 18:11:59 EDT
This bug was accidentally moved from POST to MODIFIED due to an error in automation; please contact mmccune@redhat.com with any questions.
Comment 15 Daniel Horák 2016-08-04 10:05:18 EDT
Failure Domains (described in comment 3) are not part of USM 2, so is this feature really implemented in ceph-installer?

And if yes, is there any documentation we can follow to test this feature?
Comment 16 Alfredo Deza 2016-08-09 07:33:46 EDT
For both OSD and MON, a client is allowed to fully override `ceph.conf` via a key-value mapping of sections.

Both sections document this:

http://docs.ceph.com/ceph-installer/docs/#post--api-osd-configure-
http://docs.ceph.com/ceph-installer/docs/#post--api-mon-configure-

    conf (object) – (optional) An object that maps ceph.conf sections
    (only global, mon, osd, rgw, mds allowed) to keys and values. 
    Anything defined in this mapping will override existing settings.

The feature is not documented as "failure domain" because it only implements the ability to change ceph.conf; it does not deal with anything related to CRUSH maps.
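
For illustration, here is a minimal sketch (in Python with the requests library; not part of the original comment) of how a client could pass such a conf override to the /api/mon/configure/ endpoint. The installer URL and port, host names, addresses, fsid, and monitor secret below are placeholder assumptions; the shape of the payload follows the documented parameters and the example in comment 17.

    # Illustrative sketch only. All host names, addresses, the fsid, the
    # monitor secret, and the installer URL/port are placeholders.
    import requests

    payload = {
        "host": "mon1.example.com",
        "address": "192.0.2.10",
        "fsid": "00000000-0000-0000-0000-000000000000",
        "monitor_secret": "<monitor-keyring-secret>",
        "cluster_name": "ceph",
        # The documented "conf" mapping: only global, mon, osd, rgw and mds
        # sections are allowed, and anything defined here overrides existing
        # ceph.conf settings.
        "conf": {
            "global": {"osd crush update on start": False}
        }
    }

    response = requests.post(
        "http://ceph-installer.example.com:8181/api/mon/configure/",
        json=payload
    )
    response.raise_for_status()
    print(response.json())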
Comment 17 Martin Kudlej 2016-08-09 09:35:38 EDT
The Console uses this API during installation; for example:
{"address":"172.16.180.73","calamari":false,"cluster_name":"clustr1","cluster_network":"172.16.180.0/24","conf":{"global":{"osd crush update on start":false}},"fsid":"41e9a939-fc9b-4a6b-b0d6-1efedd0a4164","host":"mkudlej-usm1-mon2.os1.phx2.redhat.com","monitor_secret":"AQA7P8dWAAAAABAAH/tbiZQn/40Z8pr959UmEA==","monitors":[{"address":"172.16.180.5","host":"mkudlej-usm1-mon1.os1.phx2.redhat.com"}],"public_network":"172.16.180.0/24","redhat_storage":true}

It works in the Console with the following packages:
ceph-ansible-1.0.5-32.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
rhscon-ceph-0.0.40-1.el7scon.x86_64
rhscon-core-0.0.41-1.el7scon.x86_64
rhscon-core-selinux-0.0.41-1.el7scon.noarch
rhscon-ui-0.0.52-1.el7scon.noarch
Comment 19 errata-xmlrpc 2016-08-23 15:47:59 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1754
