Description of problem:

After https://review.openstack.org/#/c/197734/ lands in tripleo-heat-templates (tht), tuskar-based deploys will fail with:

ERROR: openstack ERROR: Property error : CephClusterConfig: ceph_storage_count The Parameter (Ceph-Storage-1::CephStorageCount) was not provided.

This is because the scaling param CephStorageCount, which that patch uses to signal 'enable_ceph', is namespaced as Ceph-Storage-1::CephStorageCount. I think this may be due to the special handling for the count params; am still investigating. To be clear, once that patch lands, all --tuskar deploys will fail.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Use tripleo-heat-templates that include https://review.openstack.org/#/c/197734
2. openstack overcloud deploy plan --overcloud --compute-scale 1 --control-scale 1
3. ERROR: openstack ERROR: Property error : CephClusterConfig: ceph_storage_count The Parameter (Ceph-Storage-1::CephStorageCount) was not provided.

Actual results:

ERROR: openstack ERROR: Property error : CephClusterConfig: ceph_storage_count The Parameter (Ceph-Storage-1::CephStorageCount) was not provided.

Expected results:

No explosions.

Additional info:
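For context, the relevant wiring looks roughly like this (a sketch put together from the error message and the parameter names above, not a verbatim excerpt of the patch):

  parameters:
    CephStorageCount:    # scaling param, used by the patch to signal enable_ceph
      type: number
      default: 0

  resources:
    CephClusterConfig:
      type: OS::TripleO::CephClusterConfig::SoftwareConfig
      properties:
        ceph_storage_count: {get_param: CephStorageCount}

With tuskar, the get_param reference gets namespaced to Ceph-Storage-1::CephStorageCount, and no parameter of that name exists in the generated plan, hence "was not provided".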
So it is indeed a clash with the special handling we do for the count params. Basically, CephStorageCount becomes CephStorage-1::count and the original param is no more. The external ceph patch tries to reference it as get_param: Ceph-Storage-1::CephStorageCount (tuskar also namespaces it). Long story short, using any other param name here works fine. Am trying 'LeParam', for example, in overcloud-without-mergepy.yaml:

(around line 1181)
  CephClusterConfig:
    type: OS::TripleO::CephClusterConfig::SoftwareConfig
    properties:
      ceph_storage_count: {get_param: LeParam}    # instead of CephStorageCount
      ceph_fsid: {get_param: CephClusterFSID}

(around line 624)
  CephStorageCount:
    type: number
    default: 0
  LeParam:
    type: number
    default: 0

Using this, the deploy with tuskar is OK. Am trying to come up with a happy compromise in the templates.
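To make the clash concrete, after tuskar's count handling the plan ends up with parameters roughly like this (a sketch based on the renaming described above, not actual tuskar output):

  parameters:
    'CephStorage-1::count':    # renamed from CephStorageCount; the original is gone
      type: number
      default: 0
    # 'Ceph-Storage-1::CephStorageCount' no longer exists, yet the namespaced
    # get_param reference still points at it -> "was not provided".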
proposed fixup @ https://review.openstack.org/#/c/213179/ (still needs more testing, but FYI)
Given that we are removing tuskar, we can probably get away with not fixing this.
*** Bug 1258631 has been marked as a duplicate of this bug. ***
It turns out that this breaks all deployments with tuskar, which currently includes the UI. Unless we plan on the UI not being an option even for a POC, we need to at least make it work. One possible hackish solution would be to hard-code the UI to always pass 0 for this value and document that you can't do any ceph deployment with the UI.
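In template terms, the hack would amount to the UI always supplying the namespaced count, roughly like this (a sketch only; the parameter name comes from the error message, and where the UI would inject it is an assumption, not actual tuskar-ui code):

  parameters:
    'Ceph-Storage-1::CephStorageCount': 0    # always 0, so Ceph can never be deployed via the UI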
Ana, would you please sync up with marios about perhaps setting this in the UI? He can give you some guidance on where it could possibly be done.
We decided to go with the tuskar fix rather than disable all Ceph deployments in the GUI.
Since Tuskar is deprecated, I think we should close this ticket?
I think the fix was still needed because of the UI; see comment 7.
Deployed two configurations from the web UI:
a) 1 controller + 1 compute + 1 Ceph storage
b) 1 controller + 1 compute

Each type of node had a specific flavor, which was associated (if defined) with the matching deployment role. The remaining roles were assigned to the baremetal flavor.

Verified on a RHEL 7.1 environment; relevant packages:
openstack-puppet-modules-2015.1.8-21.el7ost.noarch
openstack-tripleo-0.0.7-0.1.1664e566.el7ost.noarch
openstack-tripleo-common-0.0.1.dev6-3.git49b57eb.el7ost.noarch
openstack-tripleo-heat-templates-0.8.6-69.el7ost.noarch
openstack-tripleo-image-elements-0.9.6-10.el7ost.noarch
openstack-tripleo-puppet-elements-0.0.1-5.el7ost.noarch
openstack-tuskar-0.4.18-4.el7ost.noarch
openstack-tuskar-ui-0.4.0-3.el7ost.noarch
openstack-tuskar-ui-extras-0.0.4-1.el7ost.noarch
python-tuskarclient-0.1.18-4.el7ost.noarch
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2015:1862