If I deploy with the following:

  parameter_defaults:
    NovaEnableRbdBackend: false
    GlanceBackend: swift
    CephPools:
      - {"name": volumes, "pg_num": 512, "pgp_num": 512, "application": rbd, "size": 3}

then it would be nice if the CephPools parameter were used to create only the volumes pool. Instead I end up with the unwanted images, vms and backups pools:

[root@overcloud-controller-1 ~]# podman exec ceph-mon-overcloud-controller-1 ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       1.7 TiB     1.7 TiB     331 MiB     36 GiB       2.06
    TOTAL     1.7 TiB     1.7 TiB     331 MiB     36 GiB       2.06

POOLS:
    POOL        ID     STORED     OBJECTS     USED     %USED     MAX AVAIL
    volumes     1      0 B        0           0 B      0         546 GiB
    backups     2      0 B        0           0 B      0         546 GiB
    vms         3      0 B        0           0 B      0         546 GiB
    images      4      0 B        0           0 B      0         546 GiB
[root@overcloud-controller-1 ~]#
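My guess (an assumption, I haven't traced it through the Heat templates) is that CephPools is being appended to the per-service default pool list rather than replacing it, which is why vms, images and backups still show up. A quick way to see exactly what was created, including size, pg_num and the application tag, is to ask the monitor directly, reusing the container name from the ceph df output above:

MON=ceph-mon-overcloud-controller-1
# shows replication size, pg_num/pgp_num and application for every pool
podman exec $MON ceph osd pool ls detail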
I didn't deploy with "-e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml", so having a backups pool is unexpected. Detecting this:

  NovaEnableRbdBackend: false
  GlanceBackend: swift

might be overkill, though. Perhaps it's best to make CephPools override the default list of pools.

Deployment options used:

openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates \
-e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
-e ~/containers-env-file.yaml \
-e ceph.yaml \
-e overrides.yaml

# cat ceph.yaml
---
parameter_defaults:
  CephAnsiblePlaybookVerbosity: 3
  CephPoolDefaultSize: 3
  CephConfigOverrides:
    osd_recovery_op_priority: 3
    osd_recovery_max_active: 3
    osd_max_backfills: 1
  LocalCephAnsibleFetchDirectoryBackup: /tmp/fetch_dir
  CephAnsibleDisksConfig:
    osd_scenario: lvm
    osd_objectstore: bluestore
    devices:
      - /dev/sda
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
      - /dev/sdf
      - /dev/sdg
      - /dev/sdh
      - /dev/sdi
      - /dev/sdj
      - /dev/sdk
      - /dev/sdl
  CephAnsibleExtraConfig:
    ceph_osd_docker_cpu_limit: 1
    is_hci: true
  CephAnsibleEnvironmentVariables:
    ANSIBLE_PRIVATE_KEY_FILE: '/home/stack/.ssh/id_rsa'
    ANSIBLE_HOST_KEY_CHECKING: 'False'
  EnableRhcs4Beta: true
  NovaEnableRbdBackend: false
  GlanceBackend: swift
  CephPools:
    - {"name": volumes, "pg_num": 512, "pgp_num": 512, "application": rbd, "size": 3}
#

# cat overrides.yaml
---
parameter_defaults:
  NtpServer:
    - clock.redhat.com
    - clock2.redhat.com
  ControllerCount: 3
  ComputeCount: 0
  ComputeHCICount: 3
  OvercloudControlFlavor: baremetal
  OvercloudComputeFlavor: baremetal
  OvercloudComputeHCIFlavor: baremetal
  ControllerSchedulerHints:
    'capabilities:node': '0-controller-%index%'
  ComputeHCISchedulerHints:
    'capabilities:node': '0-ceph-%index%'
#
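One possible stopgap until CephPools overrides the defaults (untested, and it assumes the default pool names accept the same dict format used for volumes above): list the default pools explicitly so that their pg_num and size at least match intent, even though the unwanted pools still get created. The file name and pg_num values below are only placeholders:

# hypothetical extra environment file; pass it with an additional -e on the deploy command
cat > ceph-pools-stopgap.yaml <<'EOF'
parameter_defaults:
  CephPools:
    - {"name": volumes, "pg_num": 512, "pgp_num": 512, "application": rbd, "size": 3}
    - {"name": vms, "pg_num": 32, "pgp_num": 32, "application": rbd, "size": 3}
    - {"name": images, "pg_num": 32, "pgp_num": 32, "application": rbd, "size": 3}
    - {"name": backups, "pg_num": 32, "pgp_num": 32, "application": rbd, "size": 3}
EOF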
WORKAROUND: After deployment or stack update, do the following:

1. ssh into a server where the ceph monitor is running
2. identify the name of the ceph monitor container by running: `podman ps | grep ceph-mon`
3. run a shell script like the following (this example assumes that step 2 told you the monitor container is called ceph-mon-overcloud-controller-1):

MON=ceph-mon-overcloud-controller-1
podman exec $MON ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
for POOL in backups; do \
  podman exec $MON ceph osd pool rm $POOL $POOL --yes-i-really-really-mean-it; done
podman exec $MON ceph tell mon.\* injectargs '--mon-allow-pool-delete=false'

The above can be modified to delete additional pools by adding them to the loop. For example:

for POOL in vms backups; do \

Be careful not to delete pools you still need.
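After running the workaround it's worth confirming that only the intended pools remain; a minimal check, reusing the same container name:

MON=ceph-mon-overcloud-controller-1
# the deleted pools should no longer be listed
podman exec $MON ceph osd lspools
podman exec $MON ceph df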
*** Bug 1674526 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2114