Bug 1758655
| Summary: | Prevent creation of unnecessary Ceph pools | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | John Fulton <johfulto> |
| Component: | openstack-tripleo-heat-templates | Assignee: | Francesco Pantano <fpantano> |
| Status: | CLOSED ERRATA | QA Contact: | Nathan Weinberg <nweinber> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 16.0 (Train) | CC: | dsorrent, fpantano, gfidente, mburns, moddi |
| Target Milestone: | z2 | Keywords: | Triaged |
| Target Release: | 16.0 (Train on RHEL 8.1) | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | openstack-tripleo-heat-templates-11.3.2-0.20200315025718.033aae9.el8ost | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-05-14 12:15:28 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
I didn't deploy with "-e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml", so having a backups pool is unexpected.

Detecting settings like the following, in order to decide which pools to skip:

```
NovaEnableRbdBackend: false
GlanceBackend: swift
```

might be overkill though. Perhaps it's best to make CephPools override the default list of pools.
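To illustrate the requested behavior: under the proposed semantics (hypothetical, this is not how the templates behave today), an environment file like the one below would create only the volumes pool and none of the default vms/images/backups pools.

```
# Hypothetical desired semantics: CephPools replaces the default pool
# list entirely, so only 'volumes' is created.
parameter_defaults:
  CephPools:
    - {"name": volumes, "pg_num": 512, "pgp_num": 512, "application": rbd, "size": 3}
```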
Deployment options used:

```
openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e ~/containers-env-file.yaml \
  -e ceph.yaml \
  -e overrides.yaml
```
```
# cat ceph.yaml
---
parameter_defaults:
  CephAnsiblePlaybookVerbosity: 3
  CephPoolDefaultSize: 3
  CephConfigOverrides:
    osd_recovery_op_priority: 3
    osd_recovery_max_active: 3
    osd_max_backfills: 1
  LocalCephAnsibleFetchDirectoryBackup: /tmp/fetch_dir
  CephAnsibleDisksConfig:
    osd_scenario: lvm
    osd_objectstore: bluestore
    devices:
      - /dev/sda
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
      - /dev/sdf
      - /dev/sdg
      - /dev/sdh
      - /dev/sdi
      - /dev/sdj
      - /dev/sdk
      - /dev/sdl
  CephAnsibleExtraConfig:
    ceph_osd_docker_cpu_limit: 1
    is_hci: true
  CephAnsibleEnvironmentVariables:
    ANSIBLE_PRIVATE_KEY_FILE: '/home/stack/.ssh/id_rsa'
    ANSIBLE_HOST_KEY_CHECKING: 'False'
  EnableRhcs4Beta: true
  NovaEnableRbdBackend: false
  GlanceBackend: swift
  CephPools:
    - {"name": volumes, "pg_num": 512, "pgp_num": 512, "application": rbd, "size": 3}
#
```
```
# cat overrides.yaml
---
parameter_defaults:
  NtpServer:
    - clock.redhat.com
    - clock2.redhat.com
  ControllerCount: 3
  ComputeCount: 0
  ComputeHCICount: 3
  OvercloudControlFlavor: baremetal
  OvercloudComputeFlavor: baremetal
  OvercloudComputeHCIFlavor: baremetal
  ControllerSchedulerHints:
    'capabilities:node': '0-controller-%index%'
  ComputeHCISchedulerHints:
    'capabilities:node': '0-ceph-%index%'
#
```
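After deploying with the files above, the pools that were actually created can be checked from a node running a Ceph monitor. A minimal sketch, assuming the same monitor container name used elsewhere in this report:

```
# List all pools by ID and name; before the fix, expect to see
# vms, images, and backups in addition to the requested 'volumes'.
podman exec ceph-mon-overcloud-controller-1 ceph osd lspools
```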
WORKAROUND: After deployment or stack update, do the following:

1. ssh into a server where the ceph monitor is running
2. identify the name of the ceph monitor container by running `podman ps | grep ceph-mon`
3. run a shell script like the following (this example assumes that step 2 told you the monitor container is called ceph-mon-overcloud-controller-1):

```
MON=ceph-mon-overcloud-controller-1
podman exec $MON ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
for POOL in backups; do
  podman exec $MON ceph osd pool rm $POOL $POOL --yes-i-really-really-mean-it
done
podman exec $MON ceph tell mon.\* injectargs '--mon-allow-pool-delete=false'
```

The above can be modified to add additional pools to the loop. For example:

```
for POOL in vms backups; do
```

Be careful not to delete any pools you actually want to keep.

*** Bug 1674526 has been marked as a duplicate of this bug. ***

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2114
If I deploy with the following:

```
parameter_defaults:
  NovaEnableRbdBackend: false
  GlanceBackend: swift
  CephPools:
    - {"name": volumes, "pg_num": 512, "pgp_num": 512, "application": rbd, "size": 3}
```

then it would be nice if the CephPools parameter was used to create only the volumes pool. Instead I end up with the unwanted images, vms, and backups pools:

```
[root@overcloud-controller-1 ~]# podman exec ceph-mon-overcloud-controller-1 ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       1.7 TiB     1.7 TiB     331 MiB     36 GiB       2.06
    TOTAL     1.7 TiB     1.7 TiB     331 MiB     36 GiB       2.06

POOLS:
    POOL        ID     STORED     OBJECTS     USED     %USED     MAX AVAIL
    volumes     1      0 B        0           0 B      0         546 GiB
    backups     2      0 B        0           0 B      0         546 GiB
    vms         3      0 B        0           0 B      0         546 GiB
    images      4      0 B        0           0 B      0         546 GiB
[root@overcloud-controller-1 ~]#
```
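Given the output above, the workaround loop from the earlier comment would need to name all three unwanted pools for this deployment. A sketch, reusing the same container-name assumption as that comment:

```
MON=ceph-mon-overcloud-controller-1
# Temporarily allow pool deletion, remove each unwanted pool, then re-disable.
podman exec $MON ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
for POOL in backups vms images; do
  podman exec $MON ceph osd pool rm $POOL $POOL --yes-i-really-really-mean-it
done
podman exec $MON ceph tell mon.\* injectargs '--mon-allow-pool-delete=false'
```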