Dell testing of A3 has run into this; it is a regression from their A2 testing.

+++ This bug was initially created as a clone of Bug #1082187 +++

Description of problem:
After an icehouse-3 foreman install, the default role in keystone gets created as _member_, but horizon's local_settings has this config:

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"

This mismatch prevents quite a few operations, such as quota modification and user editing. Setting OPENSTACK_KEYSTONE_DEFAULT_ROLE to _member_ and running `service httpd restart` fixes the issue.

Version-Release number of selected component (if applicable):
icehouse-3

How reproducible:
Always

Steps to Reproduce:
1. Deploy the Controller (Neutron) hostgroup using foreman
2. Log in to Horizon and attempt to modify the admin quota

Actual results:
Horizon errors out

Expected results:
The quota is modified

Additional info:
I just took a quick look at the openstack-puppet-modules, and the defaults in fact do not match between keystone/roles/admin.pp and horizon/init.pp. I think we should be able to fix this by amending our horizon config in controller_common.pp to have: keystone_default_role => '_member_', which would match keystone.
Is neutron required to reproduce this issue, or does it also happen with Nova networking?
The keystone configuration should be consistent between nova networking and neutron controllers, as it is specified in one place for both. If you want to fix your own setup before the next release, you can simply edit /usr/share/openstack-foreman-installer/puppet/modules/quickstack/manifests/controller_common.pp. The change goes roughly at line 288, in this block:

https://github.com/redhat-openstack/astapor/blob/openstack-foreman-installer-1.0.4/puppet/modules/quickstack/manifests/controller_common.pp#L288

Simply add anywhere in that block:

keystone_default_role => '_member_',

and that should fix your setup on the next agent run.
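For illustration, here is a sketch of how the amended block in controller_common.pp would look. Only the keystone_default_role line is the actual change; the other parameter shown is a placeholder for whatever your installed file already contains, so verify against your copy rather than pasting this wholesale:

```puppet
# Sketch only (not the verbatim astapor source): the one-line addition inside
# the existing horizon block in
# /usr/share/openstack-foreman-installer/puppet/modules/quickstack/manifests/controller_common.pp.
class { '::horizon':
  secret_key            => $horizon_secret_key,  # illustrative existing parameter
  keystone_default_role => '_member_',           # added: match keystone's _member_ default
}
```

After the next puppet agent run, /etc/openstack-dashboard/local_settings should render with OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_".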
Patch posted: https://github.com/redhat-openstack/astapor/pull/157
The above patch was already merged, but some concern has been expressed that if we set horizon to _member_ to match the new keystone puppet, and the user then tries to edit a previously created member with the old 'Member' role, there could be issues. A full fix that does not break existing installations requires more research and probably another patch. The existing patch should be fine for new installations, though.
This is not fixed in 4.0 A4. The keystone role is still "_member_" and the dashboard's local_settings still reads OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member", as before. openstack-puppet-modules-2013.2-9.el6ost.noarch
Temporary workaround:
1. sed -i 's/^OPENSTACK_KEYSTONE_DEFAULT_ROLE =.*/OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"/g' /etc/openstack-dashboard/local_settings
2. service httpd restart
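The two steps above can be wrapped in a small script. This is a sketch only; the helper name fix_default_role is mine, and the path and the httpd restart assume the stock RHEL dashboard layout, so adjust for your setup and keep the backup it writes:

```shell
#!/bin/sh
# Hedged workaround sketch: make Horizon's default role match keystone's "_member_".
# fix_default_role is a hypothetical helper; it takes the local_settings path.
fix_default_role() {
    conf="$1"
    cp "$conf" "$conf.bak"   # keep a backup before editing in place
    sed -i 's/^OPENSTACK_KEYSTONE_DEFAULT_ROLE *=.*/OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"/' "$conf"
}

# On a real controller (then restart Apache so Horizon reloads the file):
#   fix_default_role /etc/openstack-dashboard/local_settings
#   service httpd restart
```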
Udi, what version of foreman are you testing? The openstack-puppet-modules version is correct for A4, but it has no changes related to this that I know of. We now set the _horizon_ default role to _member_ in both standalone controllers and HA, with the output landing in /etc/openstack-dashboard/local_settings, which will look like this:

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"

I just verified this on a fresh setup we have, using the openstack-foreman-installer-1.0.6-1.el6ost rpm. There is currently no setting on the keystone puppet side that would let us change the role name there, so this is the best we can do for this timeframe. Further flexibility would be needed in openstack-puppet-modules/keystone to make that configurable as well.
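To confirm a deployment is in the fixed state, you can extract the role Horizon will hand to keystone and compare it against the roles keystone actually has. A sketch, assuming the stock local_settings path and admin credentials sourced for the keystone CLI; extract_default_role is a hypothetical helper name:

```shell
#!/bin/sh
# Hedged verification sketch: print the quoted value of
# OPENSTACK_KEYSTONE_DEFAULT_ROLE from a local_settings file.
extract_default_role() {
    sed -n 's/^OPENSTACK_KEYSTONE_DEFAULT_ROLE *= *"\([^"]*\)".*/\1/p' "$1"
}

# On a controller (with keystonerc_admin sourced):
#   extract_default_role /etc/openstack-dashboard/local_settings   # expect: _member_
#   keystone role-list | grep _member_                             # role must exist
```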
Verified in Havana A4 with a foreman install. Will open a separate bug on the packstack issue.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHSA-2014-0517.html
I have just tested RHOS 5 packstack and the issue still exists; reopening the bug.
happens in RDO icehouse packages as well
openstack-dashboard-2014.1-1.el6.noarch
openstack-packstack-puppet-2014.1.1-0.12.dev1068.el6.noarch
This bug is not for packstack; it is for openstack-foreman-installer. Udi mentioned above that a separate bug was created for packstack. This one is verified.
I'm wondering why this is filed against foreman, since we don't use foreman at all and it happens while using packstack from RDO.