Description of problem:

https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/12/html-single/integrate_with_identity_service/

> Section 1.8.3, Step 2 and Step 3
This has you restart the nova services with systemctl, but these services are containerized and are not managed by systemd.

> Section 1.8.4, Step 2
This has you restart openstack-cinder-api with systemctl; it looks like this service runs under httpd (based on how my lab is configured). The other cinder services are correctly documented here.

In my lab:

[heat-admin@controller-0 ~]$ sudo systemctl list-units | grep 'cinder\|nova'
openstack-cinder-scheduler.service   loaded active running   OpenStack Cinder Scheduler Server
openstack-cinder-volume.service      loaded active running   Cluster Controlled openstack-cinder-volume

[heat-admin@controller-0 ~]$ sudo docker ps | grep nova
f95323744c1a  192.168.24.1:8787/rhosp12/openstack-nova-api:12.0-20171201.1            "kolla_start"  4 days ago   Up 4 days (healthy)   nova_metadata
0cecf6deaad0  192.168.24.1:8787/rhosp12/openstack-nova-api:12.0-20171201.1            "kolla_start"  4 days ago   Up 4 days (healthy)   nova_api
8c03f020136e  192.168.24.1:8787/rhosp12/openstack-nova-conductor:12.0-20171201.1      "kolla_start"  4 days ago   Up 4 days (healthy)   nova_conductor
98c271fe3679  192.168.24.1:8787/rhosp12/openstack-nova-novncproxy:12.0-20171201.1     "kolla_start"  4 days ago   Up 4 days (healthy)   nova_vnc_proxy
413c1c5b63b5  192.168.24.1:8787/rhosp12/openstack-nova-consoleauth:12.0-20171201.1    "kolla_start"  4 days ago   Up 4 days (healthy)   nova_consoleauth
3c2ef1d8a8ba  192.168.24.1:8787/rhosp12/openstack-nova-api:12.0-20171201.1            "kolla_start"  4 days ago   Up 4 days (healthy)   nova_api_cron
ab7d6fa4e144  192.168.24.1:8787/rhosp12/openstack-nova-scheduler:12.0-20171201.1      "kolla_start"  4 days ago   Up 4 days (healthy)   nova_scheduler
92b8c84e7678  192.168.24.1:8787/rhosp12/openstack-nova-placement-api:12.0-20171201.1  "kolla_start"  11 days ago  Up 11 days            nova_placement
The following commands must be reviewed:

# systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-conductor.service openstack-nova-consoleauth.service openstack-nova-novncproxy.service openstack-nova-scheduler.service
# systemctl restart openstack-nova-compute.service
# systemctl restart openstack-cinder-api
# systemctl restart openstack-cinder-scheduler
# pcs resource restart openstack-cinder-volume
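Pending SME confirmation, a possible container-based equivalent for the nova restarts is sketched below. This is only a sketch: the controller container names are taken from the `docker ps` output in the description, the compute-side container name is an assumption, and the openstack-cinder-api case (which the description says appears to run under httpd) is left for the SME.

~~~
# Controller -- container names taken from the `docker ps` output above;
# note there is no separate container for openstack-nova-cert in that output:
sudo docker restart nova_api nova_conductor nova_consoleauth nova_vnc_proxy nova_scheduler

# Compute node -- "nova_compute" is an assumed container name, not shown in
# the output above:
sudo docker restart nova_compute
~~~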
Checking with SME
These should be ok:

# systemctl restart openstack-cinder-scheduler
# pcs resource restart openstack-cinder-volume

There is also an issue with the config file location. Section 1.8.3 of the documentation points to /etc/nova/nova.conf, but the nova containers pull their config from a different directory:

controller: /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf
compute:    /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf

All of this also applies to chapters 2 and 4 of this document.
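As an illustrative aside (not from the documentation), the host-side file and the copy the running container uses can be compared with something like the following; auth_url is only an example option, and depending on how the config is mounted or copied into the container, a host-side edit may only show up inside the container after it is restarted:

~~~
# Host-side (controller) copy of the config, per the path above:
sudo grep auth_url /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf

# The config as seen by the running nova_api container:
sudo docker exec -it nova_api grep auth_url /etc/nova/nova.conf
~~~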
(In reply to nalmond from comment #3)
> There is also an issue with the config file location.

Thanks, I'll update these. I do think the docs need a definitive listing of where all the configuration files now reside for OSP12.
Republished the guide with updated nova.conf paths: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/12/html-single/integrate_with_identity_service/
Further container-related issues within section "2.8.3. Configure Compute to use keystone v3":

~~~
2. Restart these services on the controller to apply the changes:
# systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-conductor.service openstack-nova-consoleauth.service openstack-nova-novncproxy.service openstack-nova-scheduler.service
# sudo docker exec -it keystone pkill -HUP -f keystone
~~~

- None of the service unit files have been tailored to restart or reload the containers. The same procedure is required as for the keystone service.
- Why sudo for the docker command? If sudo here, why not sudo everywhere?

~~~
3. Restart these services on each Compute node to apply the changes:
# systemctl restart openstack-nova-compute.service
~~~

- Same issue here on the compute node: either send SIGHUP to the process inside the container or restart the container altogether.

Cheers,
MM
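For illustration only, the SIGHUP alternative mentioned above could look like the following on a compute node, mirroring the documented keystone command; the container name nova_compute and the process match pattern are assumptions that would need SME confirmation:

~~~
# Send SIGHUP to the nova-compute process inside its container (assumed name):
sudo docker exec -it nova_compute pkill -HUP -f nova-compute

# ...or restart the container altogether:
sudo docker restart nova_compute
~~~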
When using ldaps, I was seeing:

2018-03-28 20:10:12.878 29 ERROR keystone.common.wsgi BackendError: {'info': "TLS error -8179:Peer's Certificate issuer is not recognized.", 'desc': "Can't contact LDAP server"}

In addition to what is currently in section 1.7, I did the following on each controller to get this to work (I'm not sure whether all of these steps are needed; this should be verified if possible):

$ sudo mkdir -p /var/lib/config-data/puppet-generated/keystone/etc/pki/ca-trust/source/anchors/
$ sudo cp /etc/pki/ca-trust/source/anchors/addc.lab.local.pem /var/lib/config-data/puppet-generated/keystone/etc/pki/ca-trust/source/anchors/
$ sudo docker restart keystone
$ sudo docker exec -it keystone update-ca-trust
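As an optional, unverified sanity check (not part of the documented steps), the trust can be exercised from inside the keystone container; this assumes the container ships openssl and that addc.lab.local:636 is the ldaps endpoint implied by the certificate name above:

~~~
# "Verify return code: 0 (ok)" indicates the AD DC certificate now chains to
# a trusted CA inside the container; the CA bundle path is the standard RHEL
# location refreshed by update-ca-trust.
sudo docker exec -it keystone sh -c \
  'echo | openssl s_client -connect addc.lab.local:636 -CAfile /etc/pki/tls/certs/ca-bundle.crt | grep "Verify return code"'
~~~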
Section 1.8.2 'Configure the controller', step 4 has you edit:

/etc/openstack-dashboard/local_settings

and then restart httpd. In OSP 12, this file should be:

/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/local_settings

and then restart the horizon container:

$ sudo docker restart horizon
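Putting that together, a minimal end-to-end sketch (not the documented procedure; the `docker ps` check is just a quick confirmation that the container came back up):

~~~
# Edit the puppet-generated dashboard settings on the controller host:
sudo vi /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/local_settings

# Restart the horizon container and confirm it is running again:
sudo docker restart horizon
sudo docker ps | grep horizon
~~~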
Moving the remaining work to BZ#1568068. *** This bug has been marked as a duplicate of bug 1568068 ***