Verified.

Environment: openstack-puppet-modules-2015.1.8-51.el7ost.noarch

Deployed with:

export THT=/usr/share/openstack-tripleo-heat-templates
openstack overcloud deploy --templates $THT \
-e $THT/environments/network-isolation.yaml \
-e $THT/environments/storage-environment.yaml \
-e /home/stack/network-environment.yaml \
-e /home/stack/ssl-heat-templates/environments/enable-tls.yaml \
-e /home/stack/ssl-heat-templates/environments/inject-trust-anchor.yaml \
--control-scale 3 \
--compute-scale 2 \
--ceph-storage-scale 3 \
--compute-flavor compute --control-flavor control --ceph-storage-flavor ceph-storage \
--neutron-disable-tunneling \
--neutron-network-type vlan \
--neutron-network-vlan-ranges tenantvlan:<vlan range> \
--neutron-bridge-mappings datacentre:br-ex,tenantvlan:br-nic4 \
--ntp-server x.x.x.x \
--rhel-reg --reg-method satellite --reg-sat-url <url> --reg-org <id> --reg-activation-key <key> --reg-force \
--timeout 180

[stack@undercloud ~]$ cat /usr/share/openstack-tripleo-heat-templates/puppet/hieradata/ceph.yaml
ceph::profile::params::osd_journal_size: 2048
ceph::profile::params::osd_pool_default_pg_num: 128
ceph::profile::params::osd_pool_default_pgp_num: 128
ceph::profile::params::osd_pool_default_size: 3
ceph::profile::params::osd_pool_default_min_size: 1
ceph::profile::params::manage_repo: false
ceph::profile::params::authentication_type: cephx
ceph::profile::params::osds:
  '/dev/sdb':
    journal: '/dev/sdd'
  '/dev/sdc':
    journal: '/dev/sdd'
ceph_pools:
  - "%{hiera('cinder_rbd_pool_name')}"
  - "%{hiera('nova::compute::rbd::libvirt_images_rbd_pool')}"
  - "%{hiera('glance::backend::rbd::rbd_store_pool')}"
ceph_osd_selinux_permissive: true
ceph_classes: []

Note: the disks had to be zapped and labeled with GPT before deploying:

sudo sgdisk -Z /dev/<name> && sudo sgdisk -o /dev/<name>

After deployment Ceph is up (an image, a volume, and a VM were successfully created on it):

    cluster dfa4fdec-d669-11e5-90ca-5254003ec993
     health HEALTH_OK
     monmap e1: 3 mons at {overcloud-controller-0=x.x.x.x:6789/0,overcloud-controller-1=x.x.x.x:6789/0,overcloud-controller-2=x.x.x.x:6789/0}
            election epoch 6, quorum 0,1,2 overcloud-controller-1,overcloud-controller-2,overcloud-controller-0
     osdmap e22: 6 osds: 6 up, 6 in
      pgmap v47: 256 pgs, 4 pools, 12891 kB data, 8 objects
            253 MB used, 11053 GB / 11053 GB avail
                 256 active+clean

[root@overcloud-cephstorage-2 ~]# ceph osd tree
ID WEIGHT   TYPE NAME                        UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 10.79993 root default
-2  3.59998     host overcloud-cephstorage-1
 0  1.79999         osd.0                         up  1.00000          1.00000
 3  1.79999         osd.3                         up  1.00000          1.00000
-3  3.59998     host overcloud-cephstorage-2
 1  1.79999         osd.1                         up  1.00000          1.00000
 4  1.79999         osd.4                         up  1.00000          1.00000
-4  3.59998     host overcloud-cephstorage-0
 2  1.79999         osd.2                         up  1.00000          1.00000
 5  1.79999         osd.5                         up  1.00000          1.00000
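For anyone reproducing the zap step above across several nodes, the per-disk sgdisk commands can be looped. This is only a sketch: the disk names sdb/sdc/sdd are taken from the osds hieradata in this comment, and the loop is written as a dry run (it echoes the commands instead of executing them) -- remove the echo to actually wipe the disks.

# Dry-run sketch: print the sgdisk zap/relabel commands for each OSD and
# journal disk named in the hieradata above. DESTRUCTIVE once echo is removed.
for disk in sdb sdc sdd; do
  echo sudo sgdisk -Z /dev/$disk   # zap GPT and MBR data structures
  echo sudo sgdisk -o /dev/$disk   # write a fresh empty GPT
done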
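As a side note on the osd_pool_default_pg_num: 128 setting above: the usual Ceph rule of thumb (from the upstream placement-group docs, not from this report) is roughly (OSDs x 100) / replica size, rounded up to a power of two. A quick sketch of that arithmetic for this 6-OSD, size-3 cluster:

# Rule-of-thumb PG count: (num_osds * 100) / replicas, rounded up to the
# next power of two. Values 6 and 3 match the cluster in this comment.
num_osds=6
replicas=3
target=$(( num_osds * 100 / replicas ))
pg=1
while [ "$pg" -lt "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "$pg"   # prints 256 for 6 OSDs at size 3

The template's 128 per pool is in the same ballpark (the status above shows 256 PGs total across 4 pools), so the defaults look reasonable for this deployment size.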
Is this still relevant? As far as I am aware, OSP8, OSP9, and OSP10 all deploy Ceph without any issues.
Closing. If this issue is not resolved, please reopen this bug or file a new one. Thanks.