Description of problem:
We should allow the deployer to customize the name of the Ceph client user configured in the OpenStack services, and the names of the Ceph pools used by Glance, Cinder and Nova.

Version-Release number of selected component (if applicable):
openstack-tripleo-heat-templates-0.8.6-46.el7ost.noarch
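For reference, the version under test can be confirmed on the undercloud with rpm (illustrative; the output simply mirrors the version listed above):

[stack@instack ~]$ rpm -q openstack-tripleo-heat-templates
openstack-tripleo-heat-templates-0.8.6-46.el7ost.noarch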
The overcloud was deployed with:

openstack overcloud deploy --templates ~/templates/my-overcloud \
  -e ~/templates/my-overcloud/environments/network-isolation.yaml \
  -e ~/templates/network-environment.yaml \
  --control-scale 3 --compute-scale 1 --ceph-storage-scale 3 \
  --ntp-server 192.168.0.1 \
  -e ~/templates/snmpd.yaml \
  -e ~/templates/ceph-environment.yaml \
  --libvirt-type qemu

[stack@instack ~]$ cat templates/ceph-environment.yaml
parameters:
  CephClientUserName: marius
  NovaEnableRbdBackend: true
  CinderEnableRbdBackend: true
  GlanceBackend: rbd
  NovaRbdPoolName: nova_vms
  CinderRbdPoolName: cinder_volumes
  GlanceRbdPoolName: glance_images
  # finally we disable the Cinder LVM backend
  CinderEnableIscsiBackend: false

[stack@instack ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| 88babf7c-752d-403c-b105-e7a62de79b67 | overcloud-cephstorage-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.7  |
| 6dbd8c0e-17f3-45ff-864f-52e8216e5b5b | overcloud-cephstorage-1 | ACTIVE | -          | Running     | ctlplane=192.0.2.8  |
| 5bcbe911-6ed0-4943-b7c0-38328d41882c | overcloud-cephstorage-2 | ACTIVE | -          | Running     | ctlplane=192.0.2.9  |
| f2361647-ab94-4047-8fcd-9026a08c27bc | overcloud-compute-0     | ACTIVE | -          | Running     | ctlplane=192.0.2.10 |
| a1f83a62-9568-444b-b016-0c5fe2cb228b | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.12 |
| 2bf4567c-6615-4500-9da6-021c48378591 | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.0.2.11 |
| 017e7fc0-3f03-462a-b750-e3dc60ed9ad0 | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.0.2.13 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+

Right after deployment the custom pools exist and are empty:

[stack@instack ~]$ ssh heat-admin@192.0.2.12 'sudo rados df'
pool name        KB        objects  clones  degraded  unfound  rd    rd KB  wr    wr KB
cinder_volumes   0         0        0       0         0        0     0      0     0
glance_images    0         0        0       0         0        0     0      0     0
nova_vms         0         0        0       0         0        0     0      0     0
rbd              0         0        0       0         0        0     0      0     0
  total used     23261752         0
  total avail    105671204
  total space    128932956

Uploading an image writes to glance_images only:

[stack@instack ~]$ source overcloudrc; glance image-create --name Fedora22 \
  --file Fedora-Cloud-Base-22-20150521.x86_64.qcow2 --disk-format qcow2 \
  --container-format bare --is-public true --progress

[stack@instack ~]$ ssh heat-admin@192.0.2.12 'sudo rados df'
pool name        KB        objects  clones  degraded  unfound  rd    rd KB  wr    wr KB
cinder_volumes   0         0        0       0         0        0     0      0     0
glance_images    223242    31       0       0         0        47    38     65    223243
nova_vms         0         0        0       0         0        0     0      0     0
rbd              0         0        0       0         0        0     0      0     0
  total used     23725640        31
  total avail    105207316
  total space    128932956

Booting an instance writes its disk to nova_vms:

[stack@instack ~]$ nova boot --image Fedora22 --nic net-id=97ed68f3-0987-4e7a-9a7b-f959c47ecf43 --flavor m1.demo vm0

[stack@instack ~]$ ssh heat-admin@192.0.2.12 'sudo rados df'
pool name        KB        objects  clones  degraded  unfound  rd    rd KB  wr    wr KB
cinder_volumes   0         0        0       0         0        0     0      0     0
glance_images    223242    31       0       0         0        229   180    165   446486
nova_vms         700417    174      0       0         0        64    52     350   700417
rbd              0         0        0       0         0        0     0      0     0
  total used     25975876       205
  total avail    102957080
  total space    128932956

Creating a volume writes to cinder_volumes:

[stack@instack ~]$ cinder create --display-name vol0 5

[stack@instack ~]$ ssh heat-admin@192.0.2.12 'sudo rados df'
pool name        KB        objects  clones  degraded  unfound  rd    rd KB  wr     wr KB
cinder_volumes   1         3        0       0         0        5     3      7      1
glance_images    223242    31       0       0         0        229   180    165    446486
nova_vms         705533    176      0       0         0        2167  37684  1922   713157
rbd              0         0        0       0         0        0     0      0      0
  total used     26101972       210
  total avail    102830984
  total space    128932956
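The pool placement can also be cross-checked from the RBD side. A minimal sketch, assuming the admin keyring is present on the controller; the exact image names will differ per deployment:

[stack@instack ~]$ ssh heat-admin@192.0.2.12 'sudo rbd ls glance_images'
# expected: one image named after the Glance image UUID
[stack@instack ~]$ ssh heat-admin@192.0.2.12 'sudo rbd ls nova_vms'
# expected: the instance disk, named <instance-uuid>_disk
[stack@instack ~]$ ssh heat-admin@192.0.2.12 'sudo rbd ls cinder_volumes'
# expected: the volume, named volume-<volume-uuid>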
The custom user and pool names are reflected in the service configuration files:

[stack@instack ~]$ ssh heat-admin@192.0.2.12 'sudo grep rbd /etc/glance/glance-api.conf | grep -v ^#'
stores=glance.store.http.Store,glance.store.rbd.Store
default_store=rbd
rbd_store_ceph_conf=/etc/ceph/ceph.conf
rbd_store_user=marius
rbd_store_pool=glance_images
rbd_store_chunk_size=8

[stack@instack ~]$ ssh heat-admin@192.0.2.12 'sudo grep rbd /etc/cinder/cinder.conf | grep -v ^#'
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_flatten_volume_from_snapshot=False
rbd_max_clone_depth=5
rbd_pool=cinder_volumes
rbd_secret_uuid=03370f4a-639f-11e5-9ed2-525400c35932
rbd_user=marius
rbd_ceph_conf=/etc/ceph/ceph.conf

[stack@instack ~]$ ssh heat-admin@192.0.2.10 'sudo grep rbd /etc/nova/nova.conf | grep -v ^#'
images_type=rbd
images_rbd_pool=nova_vms
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=marius
rbd_secret_uuid=03370f4a-639f-11e5-9ed2-525400c35932
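It is also worth confirming that the custom client user exists on the Ceph side and that the libvirt secret referenced by rbd_secret_uuid is in place on the compute node. A sketch, assuming the default TripleO key placement:

[stack@instack ~]$ ssh heat-admin@192.0.2.12 'sudo ceph auth get client.marius'
# expected: the client.marius key, with caps granting access to the three custom pools
[stack@instack ~]$ ssh heat-admin@192.0.2.10 'sudo virsh secret-list'
# expected: a secret with UUID 03370f4a-639f-11e5-9ed2-525400c35932, matching rbd_secret_uuid above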
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2015:1862