Bug 1583333
Summary: | THT should default to setting ceph pool application type when using luminous | |
---|---|---|---
Product: | Red Hat OpenStack | Reporter: | John Fulton <johfulto>
Component: | openstack-tripleo-heat-templates | Assignee: | John Fulton <johfulto>
Status: | CLOSED ERRATA | QA Contact: | Yogev Rabl <yrabl>
Severity: | high | Docs Contact: |
Priority: | high | |
Version: | 13.0 (Queens) | CC: | gamado, gfidente, jcollin, johfulto, jschluet, karan, mburns, rbartal, tbarron
Target Milestone: | z2 | Keywords: | Triaged, ZStream
Target Release: | 13.0 (Queens) | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | openstack-tripleo-heat-templates-8.0.4-3.el7ost | Doc Type: | Bug Fix
Doc Text: |
Cause: Not all Ceph pools had an application type set.
Consequence: Ceph would emit a HEALTH_WARN message because not every pool created during the deployment had an application type set.
Fix: Apply an application type to every pool created during the deployment.
Result: Ceph status does not warn about any pool missing an application type.
|
Story Points: | --- | |
Clone Of: | | Environment: |
Last Closed: | 2018-08-29 16:36:45 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1562220 | |
Bug Blocks: | | |
Description
John Fulton
2018-05-28 18:54:12 UTC
Master change has merged, but I need this backported to queens before I can move this to POST. https://review.openstack.org/#/c/571196/

*** Bug 1598046 has been marked as a duplicate of this bug. ***

How to reproduce: deploy the overcloud, SSH into one of the monitors, and run `ceph osd dump | grep pool`. The bug is fixed if an application tag is listed for each pool. In the example below, each pool has "rbd" after "application", with the exception of metrics, which has the application tag "openstack_gnocchi". Note that in this sample output the 'backups' pool (pool 3) is still missing its tag, so it would still trigger the warning; a manual workaround is sketched after the output.

```
# ceph osd dump | grep pool
pool 1 'images' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2048 pgp_num 2048 last_change 1120 lfor 0/1118 flags hashpspool stripe_width 0 application rbd
pool 2 'metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 148 flags hashpspool stripe_width 0 application openstack_gnocchi
pool 3 'backups' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 131 flags hashpspool stripe_width 0
pool 4 'vms' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 4096 pgp_num 4096 last_change 1129 lfor 0/1127 flags hashpspool stripe_width 0 application rbd
pool 5 'volumes' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 61741 pgp_num 61741 last_change 1175 lfor 0/1173 flags hashpspool stripe_width 0 application rbd
(overcloud) root@overcloud-controller-0:~ #
```
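For clusters deployed before the fix, the warning can be cleared by tagging the affected pools by hand. Below is a minimal sketch using the stock Luminous CLI; the pool name 'backups' and the application name 'rbd' are taken from the untagged pool in the dump above.

```
# Check cluster health; an untagged pool surfaces as, for example:
#   HEALTH_WARN application not enabled on 1 pool(s)
ceph health detail

# Tag the pool (command available since Luminous); rbd is the
# application type the deployment uses for its block pools
ceph osd pool application enable backups rbd

# Confirm the tag now appears on the pool
ceph osd dump | grep pool
```

A fixed deployment applies an application type to every pool it creates, so this is only needed on clusters stood up before the fix.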
Verified: deployed on sealusa18 using IR, core_puddle=2018-08-07.5.

```
(undercloud) [stack@undercloud-0 ~]$ rpm -aq| grep ceph-ansibl
ceph-ansible-3.1.0-0.1.rc10.el7cp.noarch
```

Result as expected on the controller:

```
Last login: Thu Aug 9 18:19:19 2018 from 192.168.24.1
[heat-admin@controller-0 ~]$ ceph osd dump | grep pool
pool 1 'images' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 13 flags hashpspool stripe_width 0 expected_num_objects 1 application rbd
pool 2 'metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 14 flags hashpspool stripe_width 0 expected_num_objects 1 application openstack_gnocchi
pool 3 'backups' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 15 flags hashpspool stripe_width 0 expected_num_objects 1 application rbd
pool 4 'vms' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 16 flags hashpspool stripe_width 0 expected_num_objects 1 application rbd
pool 5 'volumes' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 17 flags hashpspool stripe_width 0 expected_num_objects 1 application rbd
```

This bug is marked for inclusion in the errata but does not currently contain draft documentation text. To ensure the timely release of this advisory, please provide draft documentation text for this bug as soon as possible. If you do not think this bug requires errata documentation, set the requires_doc_text flag to "-". To add draft documentation text:

* Select the documentation type from the "Doc Type" drop-down field.
* A template will be provided in the "Doc Text" field based on the "Doc Type" value selected. Enter draft text in the "Doc Text" field.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2574

*** Bug 1626647 has been marked as a duplicate of this bug. ***
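For reference, operators who define additional pools through TripleO can carry the application type in the same override. The following is a hypothetical sketch, not output from this bug: the file name and pool name are illustrative, and passing an "application" key per custom pool is an assumption based on the behavior this fix adds for the default pools.

```
# Hypothetical environment file; "mycustompool" is illustrative and the
# "application" key is an assumption mirroring what the fix sets on the
# default pools.
cat > custom-pools.yaml <<'EOF'
parameter_defaults:
  CephPools:
    - name: mycustompool
      pg_num: 32
      application: rbd
EOF
```

The file would then be passed with `-e custom-pools.yaml` on the `openstack overcloud deploy` command line.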