Bug 1552749 - Integrate with Identity Service guide overlooks some containerized services
Summary: Integrate with Identity Service guide overlooks some containerized services
Keywords:
Status: CLOSED DUPLICATE of bug 1568068
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: documentation
Version: 12.0 (Pike)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Martin Lopes
QA Contact: RHOS Documentation Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-03-07 16:31 UTC by nalmond
Modified: 2022-08-16 11:03 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-19 02:26:11 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker OSP-4923 0 None None None 2022-08-16 11:03:53 UTC

Description nalmond 2018-03-07 16:31:06 UTC
Description of problem:

https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/12/html-single/integrate_with_identity_service/

> Section 1.8.3, Step 2 and Step 3
This step has you restart the nova services with systemctl, but on OSP 12 these services are containerized and are not managed by systemd (see the sketch below).

> Section 1.8.4 Step 2
This step has you restart openstack-cinder-api with systemctl, but it looks like this service runs under httpd (based on how my lab is configured). The other cinder services are documented correctly here.
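For reference, the containerized nova restart on the controller would look something like this (a sketch only; the container names are taken from the docker ps output below, so verify them against your own deployment):

$ sudo docker restart nova_api nova_metadata nova_conductor nova_consoleauth nova_vnc_proxy nova_scheduler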



In my lab:
[heat-admin@controller-0 ~]$ sudo systemctl list-units |grep 'cinder\|nova'
  openstack-cinder-scheduler.service                                                                    loaded active running   OpenStack Cinder Scheduler Server
  openstack-cinder-volume.service                                                                       loaded active running   Cluster Controlled openstack-cinder-volume
[heat-admin@controller-0 ~]$ sudo docker ps |grep nova
f95323744c1a        192.168.24.1:8787/rhosp12/openstack-nova-api:12.0-20171201.1                  "kolla_start"            4 days ago          Up 4 days (healthy)                              nova_metadata
0cecf6deaad0        192.168.24.1:8787/rhosp12/openstack-nova-api:12.0-20171201.1                  "kolla_start"            4 days ago          Up 4 days (healthy)                              nova_api
8c03f020136e        192.168.24.1:8787/rhosp12/openstack-nova-conductor:12.0-20171201.1            "kolla_start"            4 days ago          Up 4 days (healthy)                              nova_conductor
98c271fe3679        192.168.24.1:8787/rhosp12/openstack-nova-novncproxy:12.0-20171201.1           "kolla_start"            4 days ago          Up 4 days (healthy)                              nova_vnc_proxy
413c1c5b63b5        192.168.24.1:8787/rhosp12/openstack-nova-consoleauth:12.0-20171201.1          "kolla_start"            4 days ago          Up 4 days (healthy)                              nova_consoleauth
3c2ef1d8a8ba        192.168.24.1:8787/rhosp12/openstack-nova-api:12.0-20171201.1                  "kolla_start"            4 days ago          Up 4 days (healthy)                              nova_api_cron
ab7d6fa4e144        192.168.24.1:8787/rhosp12/openstack-nova-scheduler:12.0-20171201.1            "kolla_start"            4 days ago          Up 4 days (healthy)                              nova_scheduler
92b8c84e7678        192.168.24.1:8787/rhosp12/openstack-nova-placement-api:12.0-20171201.1        "kolla_start"            11 days ago         Up 11 days                                       nova_placement

Comment 1 Martin Lopes 2018-03-08 01:20:28 UTC
The following commands must be reviewed:

# systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-conductor.service openstack-nova-consoleauth.service openstack-nova-novncproxy.service openstack-nova-scheduler.service

# systemctl restart openstack-nova-compute.service

# systemctl restart openstack-cinder-api

# systemctl restart openstack-cinder-scheduler

# pcs resource restart openstack-cinder-volume
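A quick way to check which of these are still systemd-managed and which have moved into containers (a sketch, mirroring the lab commands from the description):

To list the systemd-managed services:
$ sudo systemctl list-units | grep 'cinder\|nova'

To list the containerized services:
$ sudo docker ps | grep 'cinder\|nova'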

Comment 2 Martin Lopes 2018-03-08 01:20:43 UTC
Checking with SME

Comment 3 nalmond 2018-03-09 21:55:56 UTC
These should be OK as documented:

# systemctl restart openstack-cinder-scheduler
# pcs resource restart openstack-cinder-volume

There is also an issue with the config file locations. Section 1.8.3 of the documentation points to /etc/nova/nova.conf, but the nova containers pull their config from a different directory:

controller:
/var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf
compute:
/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf
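As a sketch of the resulting edit-and-apply flow on the controller (assuming, as with other containerized services, that the container must be restarted before changes under /var/lib/config-data/puppet-generated/ take effect):

$ sudo vi /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf
$ sudo docker restart nova_api nova_conductor nova_scheduler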

All of this also applies to chapters 2 and 4 of this document.

Comment 5 Martin Lopes 2018-03-19 03:47:51 UTC
(In reply to nalmond from comment #3)

> There is also an issue with the config file location.

Thanks, I'll update these. I do think the docs need a definitive listing of where all the configuration files now reside for OSP 12.

Comment 7 Martin Lopes 2018-03-19 05:37:06 UTC
Republished the guide with updated nova.conf paths:

https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/12/html-single/integrate_with_identity_service/

Comment 10 Marc Methot 2018-03-21 17:03:55 UTC
Further container-related issues in section "2.8.3. Configure Compute to use keystone v3"

~~~
2. Restart these services on the controller to apply the changes: 
# systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-conductor.service openstack-nova-consoleauth.service openstack-nova-novncproxy.service openstack-nova-scheduler.service
# sudo docker exec -it keystone pkill -HUP -f keystone
~~~
- None of the service unit files have been tailored to restart or reload the containers. The same procedure is required as for the keystone service.
- Why use sudo for the docker command? If sudo is needed here, why not everywhere?

~~~
3. Restart these services on each Compute node to apply the changes: 
# systemctl restart openstack-nova-compute.service
~~~
- The same issue applies here on the Compute node: either send SIGHUP to the process inside the container or restart the container altogether (see the sketch below).
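Something along these lines on the compute node (a sketch; the container name nova_compute is an assumption, since the docker ps output above was taken on a controller, so verify with docker ps first):

$ sudo docker exec -it nova_compute pkill -HUP -f nova-compute

or restart the container outright:

$ sudo docker restart nova_compute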


Cheers,
MM

Comment 11 nalmond 2018-03-29 21:55:49 UTC
When using ldaps, I was seeing:

2018-03-28 20:10:12.878 29 ERROR keystone.common.wsgi BackendError: {'info': "TLS error -8179:Peer's Certificate issuer is not recognized.", 'desc': "Can't contact LDAP server"}

In addition to what is currently in section 1.7, I did the following on each controller to get this to work (I'm not sure whether all of these steps are needed; this should be verified if possible):

$ sudo mkdir -p /var/lib/config-data/puppet-generated/keystone/etc/pki/ca-trust/source/anchors/
$ sudo cp /etc/pki/ca-trust/source/anchors/addc.lab.local.pem /var/lib/config-data/puppet-generated/keystone/etc/pki/ca-trust/source/anchors/
$ sudo docker restart keystone
$ sudo docker exec -it keystone update-ca-trust
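To confirm the certificate actually landed inside the container, a quick check along these lines may help (a sketch; same container and path as above):

$ sudo docker exec -it keystone ls /etc/pki/ca-trust/source/anchors/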

Comment 12 nalmond 2018-04-03 15:43:15 UTC
Section 1.8.2 'Configure the controller' step 4 has you edit:

/etc/openstack-dashboard/local_settings

and then restart httpd. In OSP 12, this file should be:

/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/local_settings

and then restart the horizon container:

$ sudo docker restart horizon

Comment 16 Martin Lopes 2018-04-19 02:26:11 UTC
Moving the remaining work to BZ#1568068.

*** This bug has been marked as a duplicate of bug 1568068 ***

