Bug 1626647 - OSP-13 : Ceph Health Warn : application not enabled on 2 pool
Summary: OSP-13 : Ceph Health Warn : application not enabled on 2 pool
Keywords:
Status: CLOSED DUPLICATE of bug 1583333
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: ceph-ansible
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: John Fulton
QA Contact: Yogev Rabl
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-09-07 20:03 UTC by karan singh
Modified: 2022-03-13 15:31 UTC (History)
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-01-24 07:53:34 UTC
Target Upstream Version:
Embargoed:



Description karan singh 2018-09-07 20:03:39 UTC
Description of problem:

I already know that, starting with RHCS 3, releases provide additional protection for pools to prevent unauthorized access by requiring an application type to be enabled on each pool.

This is not a blocker; please consider this BZ a feature request for RHHI4C 13 (OSP 13 + Ceph 3).

Request Description: OSPd (tripleo) triggers ceph-ansible to deploy the Ceph cluster. Tripleo later configures cinder, glance, and nova to use Ceph as the backend.

By default, tripleo creates all the required pools, such as images, vms, volumes, metrics, and backups. What it does not do currently is assign an application type to those Ceph pools; as a result, cluster health stays in HEALTH_WARN on a freshly deployed OSP-13 + RHCS 3 system.

We know the application type is RBD for all the default pools created by tripleo, so tripleo should set RBD as the application type by default. (A manual workaround is sketched after the command output below.)

[heat-admin@controller-0 ~]$ ceph -s
  cluster:
    id:     1ed62898-b2ad-11e8-916e-2047478ccfaa
    health: HEALTH_WARN
            application not enabled on 2 pool(s)
            too few PGs per OSD (8 < min 30)

  services:
    mon: 1 daemons, quorum controller-0
    mgr: controller-0(active)
    osd: 60 osds: 60 up, 60 in

  data:
    pools:   5 pools, 160 pgs
    objects: 1298 objects, 1692 MB
    usage:   7939 MB used, 218 TB / 218 TB avail
    pgs:     160 active+clean

[heat-admin@controller-0 ~]$
[heat-admin@controller-0 ~]$
[heat-admin@controller-0 ~]$ ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    218T      218T        7939M             0
POOLS:
    NAME        ID     USED       %USED     MAX AVAIL     OBJECTS
    images      1      45056k         0        67022G          12
    metrics     2           0         0        67022G           0
    backups     3           0         0        67022G           0
    vms         4       1648M         0        67022G        1286
    volumes     5           0         0        67022G           0
[heat-admin@controller-0 ~]$
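
As a manual workaround (not a substitute for fixing the deployment tooling), the application can be enabled on each pool with the standard RHCS 3 / Luminous command. A minimal sketch, assuming the five default pool names from the ceph df output above:

# Enable the rbd application on each default OpenStack pool
for pool in images metrics backups vms volumes; do
    ceph osd pool application enable $pool rbd
done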


Version-Release number of selected component (if applicable):

ceph-ansible-3.1.2-1.el7.noarch

ansible-tripleo-ipsec-8.1.1-0.20180308133440.8f5369a.el7ost.noarch
openstack-tripleo-common-containers-8.6.1-23.el7ost.noarch
openstack-tripleo-ui-8.3.1-3.el7ost.noarch
openstack-tripleo-puppet-elements-8.0.0-2.el7ost.noarch
openstack-tripleo-heat-templates-8.0.2-43.el7ost.noarch
puppet-tripleo-8.3.2-8.el7ost.noarch
openstack-tripleo-common-8.6.1-23.el7ost.noarch
openstack-tripleo-validations-8.4.1-5.el7ost.noarch
python-tripleoclient-9.2.1-13.el7ost.noarch
openstack-tripleo-image-elements-8.0.1-1.el7ost.noarch

How reproducible:

Always (100%)

Steps to Reproduce:

1. Deploy OSP-13 with RHCS 3
2. Login to overcloud controller node
3. Run ceph -s

Actual results:

After tripleo deploys the Ceph cluster, the cluster remains in HEALTH_WARN state because no application type is set on the Ceph pools.

Expected results:

After tripleo deploys the Ceph cluster, the application type must be set automatically on the Ceph pools.
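
For reference, once the application type is set, a quick cluster-level check is possible with the standard Luminous CLI (a sketch; POOL_APP_NOT_ENABLED is the health check code behind this warning):

# The POOL_APP_NOT_ENABLED warning should no longer be listed
ceph health detail
ceph -s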

Additional info:

Comment 1 karan singh 2018-09-07 20:08:35 UTC
It looks like, out of the 5 default pools (images, vms, volumes, metrics, and backups), the application is not set on 2 pools: images and volumes.

That means tripleo already has the required bits to enable the application on Ceph pools; we just need to instruct tripleo (by default) to also set the application type on the images and volumes pools.

Or am I missing something?
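
One way to confirm which pools are missing an application (standard Luminous CLI; pool names taken from the ceph df output in the description):

# Pools with the application enabled should print {"rbd": {}}; pools without print {}
for pool in images metrics backups vms volumes; do
    echo -n "$pool: "; ceph osd pool application get $pool
done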

Comment 4 karan singh 2018-09-17 10:25:14 UTC
Hi John

Here is the output from the overcloud that was deployed very recently.

https://paste.fedoraproject.org/paste/MYNMx5x9OJrL~Qxp7pU-uw/raw


[heat-admin@controller-0 ~]$ ceph -s
  cluster:
    id:     1ed62898-b2ad-11e8-916e-2047478ccfaa
    health: HEALTH_WARN
            application not enabled on 1 pool(s)
            too few PGs per OSD (8 < min 30)

  services:
    mon: 1 daemons, quorum controller-0
    mgr: controller-0(active)
    osd: 60 osds: 60 up, 60 in

  data:
    pools:   5 pools, 160 pgs
    objects: 1286 objects, 10240 MB
    usage:   37260 MB used, 218 TB / 218 TB avail
    pgs:     160 active+clean

[heat-admin@controller-0 ~]$

[heat-admin@controller-0 ~]$ ceph osd dump | grep -i pool
pool 1 'images' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 88 flags hashpspool stripe_width 0 expected_num_objects 1
pool 2 'metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 76 flags hashpspool stripe_width 0 expected_num_objects 1
pool 3 'backups' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 78 flags hashpspool stripe_width 0 expected_num_objects 1
pool 4 'vms' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 80 flags hashpspool stripe_width 0 expected_num_objects 1
pool 5 'volumes' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 82 flags hashpspool stripe_width 0 expected_num_objects 1
[heat-admin@controller-0 ~]$

Comment 5 Giulio Fidente 2019-01-24 07:53:34 UTC
Fixed in openstack-tripleo-heat-templates-8.0.4-3.el7ost

*** This bug has been marked as a duplicate of bug 1583333 ***
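
For operators still on a tripleo-heat-templates build without the fix, a hedged sketch of the kind of CephPools override involved. The exact schema of the CephPools parameter and whether the 'application' key is honored depend on the THT and ceph-ansible versions in use, so treat the structure below as an assumption to verify rather than a confirmed interface:

# Sketch only: verify the CephPools schema against the deployed THT/ceph-ansible versions.
cat > ~/ceph-pool-application.yaml <<'EOF'
parameter_defaults:
  CephPools:
    - {name: images,  pg_num: 32, pgp_num: 32, application: rbd}
    - {name: metrics, pg_num: 32, pgp_num: 32, application: rbd}
    - {name: backups, pg_num: 32, pgp_num: 32, application: rbd}
    - {name: vms,     pg_num: 32, pgp_num: 32, application: rbd}
    - {name: volumes, pg_num: 32, pgp_num: 32, application: rbd}
EOF
# Pass the file to the overcloud deploy command with an extra -e argument.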

