Bug 1758655 - Prevent creation of unnecessary Ceph pools
Summary: Prevent creation of unnecessary Ceph pools
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 16.0 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: z2
Target Release: 16.0 (Train on RHEL 8.1)
Assignee: Francesco Pantano
QA Contact: Nathan Weinberg
URL:
Whiteboard:
Duplicates: 1674526
Depends On:
Blocks:
 
Reported: 2019-10-04 18:03 UTC by John Fulton
Modified: 2020-05-14 12:16 UTC
CC List: 5 users

Fixed In Version: openstack-tripleo-heat-templates-11.3.2-0.20200315025718.033aae9.el8ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-14 12:15:28 UTC
Target Upstream Version:
Embargoed:


Attachments: None


Links
Launchpad 1864477: 2020-02-24 13:52:13 UTC
OpenStack gerrit 709083 (MERGED): Add a new set of tasks to build the openstack pool and key list, 2020-05-11 07:36:11 UTC
OpenStack gerrit 709297 (MERGED): Add CephBasePoolVars and CephKeyVars structures, 2020-05-11 07:36:11 UTC
OpenStack gerrit 712131 (MERGED): Add CephBasePoolVars and CephKeyVars structures, 2020-05-11 07:36:11 UTC
OpenStack gerrit 713106 (MERGED): Add a new set of tasks to build the openstack pool and key list, 2020-05-11 07:36:11 UTC
Red Hat Product Errata RHBA-2020:2114: 2020-05-14 12:16:13 UTC

Description John Fulton 2019-10-04 18:03:12 UTC
If I deploy with the following:

parameter_defaults:
  NovaEnableRbdBackend: false
  GlanceBackend: swift
  CephPools:
    - {"name": volumes,  "pg_num": 512, "pgp_num": 512, "application": rbd, "size": 3}

Then it would be nice if the CephPools parameter were used to create only the volumes pool.
Instead, I end up with the unwanted images, vms, and backups pools:

[root@overcloud-controller-1 ~]# podman exec ceph-mon-overcloud-controller-1 ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED 
    hdd       1.7 TiB     1.7 TiB     331 MiB       36 GiB          2.06 
    TOTAL     1.7 TiB     1.7 TiB     331 MiB       36 GiB          2.06 
 
POOLS:
    POOL        ID     STORED     OBJECTS     USED     %USED     MAX AVAIL 
    volumes      1        0 B           0      0 B         0       546 GiB 
    backups      2        0 B           0      0 B         0       546 GiB 
    vms          3        0 B           0      0 B         0       546 GiB 
    images       4        0 B           0      0 B         0       546 GiB 
[root@overcloud-controller-1 ~]#

Comment 1 John Fulton 2019-10-04 18:11:31 UTC
I didn't deploy with "-e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml" so having backups is unexpected.

Detecting this:
  NovaEnableRbdBackend: false
  GlanceBackend: swift
might be overkill, though.

Perhaps it's best to make CephPools override the default list of pools.
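
For illustration only (a hypothetical environment file sketching that proposed behavior, not a merged implementation), an operator who wants nothing beyond the volumes and images pools would then list exactly those entries, and no other pools would be created:

# cat pools-only.yaml    (hypothetical file name)
---
parameter_defaults:
  CephPools:
    # Pool names, PG counts, and sizes below are illustrative only.
    - {"name": volumes, "pg_num": 512, "pgp_num": 512, "application": rbd, "size": 3}
    - {"name": images,  "pg_num": 128, "pgp_num": 128, "application": rbd, "size": 3}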


Deployment options used:

openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e ~/containers-env-file.yaml \
  -e ceph.yaml \
  -e overrides.yaml


# cat ceph.yaml
---
parameter_defaults:
  CephAnsiblePlaybookVerbosity: 3
  CephPoolDefaultSize: 3
  CephConfigOverrides:
    osd_recovery_op_priority: 3
    osd_recovery_max_active: 3
    osd_max_backfills: 1
  LocalCephAnsibleFetchDirectoryBackup: /tmp/fetch_dir
  CephAnsibleDisksConfig:
    osd_scenario: lvm
    osd_objectstore: bluestore
    devices:
      - /dev/sda
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
      - /dev/sdf
      - /dev/sdg
      - /dev/sdh
      - /dev/sdi
      - /dev/sdj
      - /dev/sdk
      - /dev/sdl
  CephAnsibleExtraConfig:
    ceph_osd_docker_cpu_limit: 1
    is_hci: true
  CephAnsibleEnvironmentVariables:
    ANSIBLE_PRIVATE_KEY_FILE: '/home/stack/.ssh/id_rsa'
    ANSIBLE_HOST_KEY_CHECKING: 'False'
  EnableRhcs4Beta: true
  NovaEnableRbdBackend: false
  GlanceBackend: swift
  CephPools:
    - {"name": volumes,  "pg_num": 512, "pgp_num": 512, "application": rbd, "size": 3}
# 

# cat overrides.yaml
---
parameter_defaults:
  NtpServer:
    - clock.redhat.com
    - clock2.redhat.com
  ControllerCount: 3
  ComputeCount: 0
  ComputeHCICount: 3
  OvercloudControlFlavor: baremetal
  OvercloudComputeFlavor: baremetal
  OvercloudComputeHCIFlavor: baremetal
  ControllerSchedulerHints:
    'capabilities:node': '0-controller-%index%'
  ComputeHCISchedulerHints:
    'capabilities:node': '0-ceph-%index%'
#

Comment 2 John Fulton 2019-10-04 18:26:10 UTC
WORKAROUND:

After deployment or stack update do the following:

1. ssh into a server where the ceph monitor is running
2. identify the name of the ceph monitor container by running: `podman ps | grep ceph-mon`
3. run a shell script like the following (this example assumes that step 2 showed the monitor container is called ceph-mon-overcloud-controller-1):

MON=ceph-mon-overcloud-controller-1
podman exec $MON ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
for POOL in backups; do
  podman exec $MON ceph osd pool rm $POOL $POOL --yes-i-really-really-mean-it
done
podman exec $MON ceph tell mon.\* injectargs '--mon-allow-pool-delete=false'


The above can be modified to add additional pools to the loop. For example:

for POOL in vms backups; do

Be careful not to delete pools that are still in use.
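
As a sanity check after the cleanup (assuming the same monitor container name as above), you can list the pools that remain; only the ones you intended to keep should appear:

# Verify the remaining pools; container name assumed from the example above.
MON=ceph-mon-overcloud-controller-1
podman exec $MON ceph osd pool ls
podman exec $MON ceph df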

Comment 6 Francesco Pantano 2020-03-13 14:07:09 UTC
*** Bug 1674526 has been marked as a duplicate of this bug. ***

Comment 12 errata-xmlrpc 2020-05-14 12:15:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2114

