Bug 1822705 - TripleO's Ceph integration should be tested with ceph-ansible's class/pool/crush_rule creation feature
Summary: TripleO's Ceph integration should be tested with ceph-ansible's class/pool/crush_rule creation feature
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: ceph-ansible
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z13
Target Release: 13.0 (Queens)
Assignee: Francesco Pantano
QA Contact: Yogev Rabl
URL:
Whiteboard:
Depends On: 1636508 1793525 1812927 1821194
Blocks: 1851047
 
Reported: 2020-04-09 16:32 UTC by Francesco Pantano
Modified: 2021-01-28 17:44 UTC
CC: 4 users

Fixed In Version: ceph-ansible-3.2.41-1.el7cp
Doc Type: Enhancement
Doc Text:
Clone Of: 1793525
Environment:
Last Closed: 2021-01-28 17:44:24 UTC
Target Upstream Version:
Embargoed:


Comment 7 Yogev Rabl 2020-10-14 13:48:20 UTC
Verified on ceph-ansible-3.2.49-1.el7cp.noarch

$ cat /home/stack/overcloud_deploy.sh
openstack overcloud deploy \
--timeout 100 \
--templates /usr/share/openstack-tripleo-heat-templates \
  --environment-file /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml \
--stack overcloud \
--libvirt-type kvm \
--ntp-server clock1.rdu2.redhat.com \
-e /home/stack/virt/internal.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/virt/network/network-environment.yaml \
-e /home/stack/virt/inject-trust-anchor.yaml \
-e /home/stack/virt/hostnames.yml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
-e /home/stack/virt/debug.yaml \
-e /home/stack/virt/nodes_data.yaml \
-e /home/stack/virt/ceph-osd-encryption.yaml \
-e /home/stack/virt/set-nova-scheduler-filter.yaml \
-e /home/stack/virt/nova-resize-on-the-same-host.yaml \
-e /home/stack/virt/ceph-crush-class.yaml \
-e /home/stack/virt/docker-images.yaml \
--log-file overcloud_deployment_13.log

$ cat /home/stack/virt/internal.yaml
parameter_defaults:
    CephAnsiblePlaybookVerbosity: 3
    CinderEnableIscsiBackend: false
    CinderEnableRbdBackend: true
    CinderEnableNfsBackend: false
    NovaEnableRbdBackend: true
    GlanceBackend: rbd
    CinderRbdPoolName: "volumes"
    NovaRbdPoolName: "vms"
    GlanceRbdPoolName: "images"
    CephPoolDefaultPgNum: 32
    CephAnsibleDisksConfig:
        devices:
            - '/dev/vdb'
            - '/dev/vdc'
            - '/dev/vdd'
            - '/dev/vde'
            - '/dev/vdf'
        osd_scenario: lvm
        osd_objectstore: bluestore

        journal_size: 512
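
The pool names set here (volumes, vms, images) can be cross-checked on a controller once the deploy finishes; the commands below are a suggested check rather than captured output, and assume the same ceph-mon-$HOSTNAME container naming used later in this comment:

# list pools with their application, size and crush_rule
[heat-admin@controller-0 ~]$ sudo docker exec ceph-mon-$HOSTNAME ceph osd pool ls detail
# confirm the rbd application tag on the Cinder pool
[heat-admin@controller-0 ~]$ sudo docker exec ceph-mon-$HOSTNAME ceph osd pool application get volumes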

$ cat /home/stack/virt/ceph-crush-class.yaml
parameter_defaults:
    CephAnsibleDisksConfig:
        lvm_volumes:
        -   crush_device_class: hdd
            data: /dev/vdb
        -   crush_device_class: hdd
            data: /dev/vdc
        -   crush_device_class: hdd
            data: /dev/vdd
        -   crush_device_class: ssd
            data: /dev/vde
        -   crush_device_class: ssd
            data: /dev/vdf
        osd_objectstore: bluestore
        osd_scenario: lvm
    CephAnsibleExtraConfig:
        create_crush_tree: true
        crush_rule_config: true
        crush_rules:
        -   class: hdd
            default: true
            name: HDD
            root: default
            type: host
        -   class: ssd
            default: false
            name: SSD
            root: default
            type: host
    CephPools:
    -   application: rbd
        name: fastpool
        pg_num: 32
        rule_name: SSD
    CinderRbdExtraPools: fastpool
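
The crush_rules defined above should show up in the cluster's crush map; a minimal follow-up check (not part of the captured verification, assuming the same monitor container naming as below) would be:

# list the crush rules created by ceph-ansible (HDD should be the default rule, SSD the extra one)
[heat-admin@controller-0 ~]$ sudo docker exec ceph-mon-$HOSTNAME ceph osd crush rule ls
# dump the SSD rule to confirm it selects class ssd under root default
[heat-admin@controller-0 ~]$ sudo docker exec ceph-mon-$HOSTNAME ceph osd crush rule dump SSD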

[heat-admin@controller-0 ~]$ sudo docker exec ceph-mon-$HOSTNAME ceph -s 
  cluster:
    id:     002b8580-0d66-11eb-b0b0-5254005c6887
    health: HEALTH_WARN
            too few PGs per OSD (19 < min 30)
 
  services:
    mon: 3 daemons, quorum controller-2,controller-1,controller-0
    mgr: controller-2(active), standbys: controller-1, controller-0
    osd: 30 osds: 30 up, 30 in
 
  data:
    pools:   6 pools, 192 pgs
    objects: 7.85k objects, 209MiB
    usage:   56.9GiB used, 303GiB / 360GiB avail
    pgs:     192 active+clean

[heat-admin@controller-0 ~]$ sudo docker exec ceph-mon-$HOSTNAME ceph osd tree
ID  CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF 
 -1       0.35065 root default                            
-16       0.05844     host ceph-0                         
  0   hdd 0.01169         osd.0       up  1.00000 1.00000 
  7   hdd 0.01169         osd.7       up  1.00000 1.00000 
 13   hdd 0.01169         osd.13      up  1.00000 1.00000 
 19   ssd 0.01169         osd.19      up  1.00000 1.00000 
 25   ssd 0.01169         osd.25      up  1.00000 1.00000 
 -7       0.05844     host ceph-1                         
  2   hdd 0.01169         osd.2       up  1.00000 1.00000 
 10   hdd 0.01169         osd.10      up  1.00000 1.00000 
 15   hdd 0.01169         osd.15      up  1.00000 1.00000 
 22   ssd 0.01169         osd.22      up  1.00000 1.00000 
 29   ssd 0.01169         osd.29      up  1.00000 1.00000 
 -4       0.05844     host ceph-2                         
  3   hdd 0.01169         osd.3       up  1.00000 1.00000 
 11   hdd 0.01169         osd.11      up  1.00000 1.00000 
 17   hdd 0.01169         osd.17      up  1.00000 1.00000 
 23   ssd 0.01169         osd.23      up  1.00000 1.00000 
 28   ssd 0.01169         osd.28      up  1.00000 1.00000 
-13       0.05844     host ceph-3                         
  5   hdd 0.01169         osd.5       up  1.00000 1.00000 
  9   hdd 0.01169         osd.9       up  1.00000 1.00000 
 16   hdd 0.01169         osd.16      up  1.00000 1.00000 
 21   ssd 0.01169         osd.21      up  1.00000 1.00000 
 27   ssd 0.01169         osd.27      up  1.00000 1.00000 
-10       0.05844     host ceph-4                         
  4   hdd 0.01169         osd.4       up  1.00000 1.00000 
  8   hdd 0.01169         osd.8       up  1.00000 1.00000 
 14   hdd 0.01169         osd.14      up  1.00000 1.00000 
 20   ssd 0.01169         osd.20      up  1.00000 1.00000 
 26   ssd 0.01169         osd.26      up  1.00000 1.00000 
-19       0.05844     host ceph-5                         
  1   hdd 0.01169         osd.1       up  1.00000 1.00000 
  6   hdd 0.01169         osd.6       up  1.00000 1.00000 
 12   hdd 0.01169         osd.12      up  1.00000 1.00000 
 18   ssd 0.01169         osd.18      up  1.00000 1.00000 
 24   ssd 0.01169         osd.24      up  1.00000 1.00000
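
As a final check of the pool-to-rule mapping configured in ceph-crush-class.yaml, the commands below could be run from the same monitor container; this is a sketch whose output was not captured as part of this verification:

# fastpool should reference the SSD rule, the default pools the HDD rule
[heat-admin@controller-0 ~]$ sudo docker exec ceph-mon-$HOSTNAME ceph osd pool get fastpool crush_rule
[heat-admin@controller-0 ~]$ sudo docker exec ceph-mon-$HOSTNAME ceph osd pool get volumes crush_rule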

