Bug 1309812 - Overcloud Deploy OSP7 y2 on RHEL 7.2 fails on Ceph Install
Summary: Overcloud Deploy OSP7 y2 on RHEL 7.2 fails on Ceph Install
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-puppet-modules
Version: 7.0 (Kilo)
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: async
Target Release: 7.0 (Kilo)
Assignee: Emilien Macchi
QA Contact: Alexander Chuzhoy
URL:
Whiteboard:
Depends On: 1297251
Blocks: 1191185
 
Reported: 2016-02-18 18:41 UTC by Scott Lewis
Modified: 2019-10-10 11:15 UTC
CC List: 30 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1297251
Environment:
Last Closed: 2016-12-01 12:18:28 UTC
Target Upstream Version:
Embargoed:




Links
System            ID      Private  Priority  Status  Summary  Last Updated
OpenStack gerrit  276141  0        None      None    None     2016-02-18 18:41:53 UTC

Comment 2 Alexander Chuzhoy 2016-02-18 22:18:21 UTC
Verified:
Environment:
openstack-puppet-modules-2015.1.8-51.el7ost.noarch


Deployed with:
export THT=/usr/share/openstack-tripleo-heat-templates
openstack overcloud deploy --templates $THT \
-e $THT/environments/network-isolation.yaml \
-e $THT/environments/storage-environment.yaml \
-e /home/stack/network-environment.yaml \
-e /home/stack/ssl-heat-templates/environments/enable-tls.yaml \
-e /home/stack/ssl-heat-templates/environments/inject-trust-anchor.yaml \
--control-scale 3 \
--compute-scale 2 \
--ceph-storage-scale 3 \
--compute-flavor compute --control-flavor control --ceph-storage-flavor ceph-storage \
--neutron-disable-tunneling \
--neutron-network-type vlan \
--neutron-network-vlan-ranges tenantvlan:<vlan range> \
--neutron-bridge-mappings datacentre:br-ex,tenantvlan:br-nic4 \
--ntp-server x.x.x.x \
--rhel-reg --reg-method satellite --reg-sat-url <url> --reg-org <id> --reg-activation-key <key> --reg-force \
--timeout 180
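
To confirm the deployment completed, the overcloud stack can be checked from the undercloud. A minimal sketch, assuming the usual stackrc location for the stack user (the heat CLI was current for Kilo):

# On the undercloud, verify the overcloud stack reached CREATE_COMPLETE:
source ~/stackrc
heat stack-list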



[stack@undercloud ~]$  cat /usr/share/openstack-tripleo-heat-templates/puppet/hieradata/ceph.yaml
ceph::profile::params::osd_journal_size: 2048
ceph::profile::params::osd_pool_default_pg_num: 128
ceph::profile::params::osd_pool_default_pgp_num: 128
ceph::profile::params::osd_pool_default_size: 3
ceph::profile::params::osd_pool_default_min_size: 1
ceph::profile::params::manage_repo: false
ceph::profile::params::authentication_type: cephx
ceph::profile::params::osds:
    '/dev/sdb':
        journal: '/dev/sdd'
    '/dev/sdc':
        journal: '/dev/sdd'

ceph_pools:
  - "%{hiera('cinder_rbd_pool_name')}"
  - "%{hiera('nova::compute::rbd::libvirt_images_rbd_pool')}"
  - "%{hiera('glance::backend::rbd::rbd_store_pool')}"

ceph_osd_selinux_permissive: true
ceph_classes: []
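
These values can be sanity-checked on a deployed node. A minimal sketch, assuming the stock puppet hiera config path on the overcloud nodes (the exact path is an assumption):

# Resolve the OSD layout and journal size the puppet-ceph profile will see
# (run on a Ceph storage node):
sudo hiera -c /etc/puppet/hiera.yaml ceph::profile::params::osds
sudo hiera -c /etc/puppet/hiera.yaml ceph::profile::params::osd_journal_size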


Note: had to zap the disks and label them with GPT before deploying (a per-disk loop is sketched below):
sudo sgdisk -Z /dev/<name> && sudo sgdisk -o /dev/<name>
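
When the same prep is needed on every disk, a small loop helps. A sketch, assuming the disk names from the hieradata above (sdb, sdc, sdd):

# Zap any existing partition data and write a fresh, empty GPT on each OSD/journal disk:
for disk in sdb sdc sdd; do
    sudo sgdisk -Z /dev/$disk   # destroy the GPT and MBR data structures
    sudo sgdisk -o /dev/$disk   # create a new, empty GPT
done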
After deployment, Ceph is UP (was able to create an image, a volume, and a VM on it):
    cluster dfa4fdec-d669-11e5-90ca-5254003ec993
     health HEALTH_OK
     monmap e1: 3 mons at {overcloud-controller-0=x.x.x.x:6789/0,overcloud-controller-1=x.x.x.x:6789/0,overcloud-controller-2=x.x.x.x:6789/0}
            election epoch 6, quorum 0,1,2 overcloud-controller-1,overcloud-controller-2,overcloud-controller-0
     osdmap e22: 6 osds: 6 up, 6 in
      pgmap v47: 256 pgs, 4 pools, 12891 kB data, 8 objects
            253 MB used, 11053 GB / 11053 GB avail
                 256 active+clean
[root@overcloud-cephstorage-2 ~]# ceph osd tree
ID WEIGHT   TYPE NAME                        UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 10.79993 root default
-2  3.59998     host overcloud-cephstorage-1
 0  1.79999         osd.0                         up  1.00000          1.00000
 3  1.79999         osd.3                         up  1.00000          1.00000
-3  3.59998     host overcloud-cephstorage-2
 1  1.79999         osd.1                         up  1.00000          1.00000
 4  1.79999         osd.4                         up  1.00000          1.00000
-4  3.59998     host overcloud-cephstorage-0
 2  1.79999         osd.2                         up  1.00000          1.00000
 5  1.79999         osd.5                         up  1.00000          1.00000
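
For a scripted version of this check, the standard ceph CLI can confirm the OSD count and overall health. A minimal sketch (the expected count of 6 matches this 3-node, 2-OSDs-per-node environment):

# Expect "6 osds: 6 up, 6 in" and HEALTH_OK:
ceph osd stat
ceph health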

Comment 3 arkady kanevsky 2016-12-01 02:12:44 UTC
Is this still relevant?
As far as I am aware, OSP8, OSP9, and OSP10 deploy Ceph without any issues.

Comment 4 Mike Burns 2016-12-01 12:18:28 UTC
Closing. If this issue is not resolved, please reopen this bug or file a new bug. Thanks.

