Bug 1653816 - nova_wait_for_db_sync container doesn't mount ipa certificate causing tls everywhere deployment to fail
Summary: nova_wait_for_db_sync container doesn't mount ipa certificate causing tls everywhere deployment to fail
Keywords:
Status: CLOSED DUPLICATE of bug 1652287
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 14.0 (Rocky)
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: Emilien Macchi
QA Contact: Gurenko Alex
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-11-27 16:59 UTC by August Simonelli
Modified: 2018-11-28 13:08 UTC
CC: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-28 12:55:36 UTC
Target Upstream Version:
Embargoed:



Description August Simonelli 2018-11-27 16:59:05 UTC
Description of problem:

When deploying TLS everywhere in OSP 14, the deployment fails when starting the nova_wait_for_db_sync container, with errors reflecting failed MySQL logins.
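
For context on where the error surfaces: the wait container just needs a working connection to the nova_api database. The following is only a rough sketch of such a readiness loop, NOT the actual /docker-config-scripts/nova_wait_for_db_sync.py; the connection URL, retry count and delay are illustrative assumptions.

# Rough sketch of a "wait for DB sync" readiness loop; not the real script.
import logging
import time

from sqlalchemy import create_engine, text

LOG = logging.getLogger("nova_wait_for_db_sync")

def wait_for_db(url, attempts=30, delay=10):
    engine = create_engine(url)
    for _ in range(attempts):
        try:
            with engine.connect() as conn:
                conn.execute(text("SELECT 1"))
            return True
        except Exception as err:
            # With the IPA CA not available inside the container, every attempt
            # fails with pymysql.err.OperationalError (1045, "Access denied ...").
            LOG.error("database not ready yet: %s", err)
            time.sleep(delay)
    return False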

Version-Release number of selected component (if applicable):
openstack-tripleo-heat-templates-9.0.1-0.20181013060890.el7ost.noarch

rhosp14/openstack-nova-placement-api:14.0-84

8a5cea405ce8        172.16.0.1:8787/rhosp14/openstack-nova-placement-api:14.0-84          "/docker-config-sc..."   2 hours ago         Exited (0) 2 hours ago                       nova_wait_for_db_sync

How reproducible:
Every time

Steps to Reproduce:
1. Deploy TLS everywhere as per the normal steps.
2.
3.

Actual results:
Deployment fails with the following two errors:

        "Error running ['docker', 'run', '--name', 'nova_wait_for_db_sync', '--label', 'config_id=tripleo_step3', '--label', 'container_name=nova_wait_for_db_sync', '--label', 'managed_by=paunch', '--label', 'config_data={\"start_order\": 1, \"image\": \"172.16.0.1:8787/rhosp14/openstack-nova-placement-api:14.0-84\", \"command\": \"/docker-config-scripts/nova_wait_for_db_sync.py\", \"user\": \"root\", \"volumes\": [\"/var/lib/nova:/var/lib/nova:shared\", \"/var/lib/docker-config-scripts/:/docker-config-scripts/\", \"/var/lib/config-data/puppet-generated/nova_placement/etc/nova:/etc/nova:ro\"], \"net\": \"host\", \"detach\": false, \"privileged\": false}', '--net=host', '--privileged=false', '--user=root', '--volume=/var/lib/nova:/var/lib/nova:shared', '--volume=/var/lib/docker-config-scripts/:/docker-config-scripts/', '--volume=/var/lib/config-data/puppet-generated/nova_placement/etc/nova:/etc/nova:ro', '172.16.0.1:8787/rhosp14/openstack-nova-placement-api:14.0-84', '/docker-config-scripts/nova_wait_for_db_sync.py']. [1]",    


 "ERROR:nova_wait_for_db_sync:uuups something went wrong: %s (pymysql.err.OperationalError) (1045, u\"Access denied for user 'nova_api'@'172.17.1.201' (using password: YES)\")",


Expected results:

Deployment succeeds


Additional info:

SSL-enabled cnf file is created correctly:

[root@lab-controller01 my.cnf.d]# cat /var/lib/config-data/puppet-generated/nova_placement/etc/my.cnf.d/tripleo.cnf
[tripleo]
bind-address=172.17.1.201
ssl=1
ssl-ca=/etc/ipa/ca.crt

Connection built correctly:

connection=mysql+pymysql://nova_api:oGyhPvnh2PHyxURyHoBdtH9ES.redhat.local/nova_api?read_default_group=tripleo&read_default_file=/etc/my.cnf.d/tripleo.cnf
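
Note that the SSL options are not embedded in the URL itself: the client only picks up ssl-ca=/etc/ipa/ca.crt because read_default_file points at the tripleo.cnf shown above. Below is a minimal PyMySQL sketch of that mechanism; host, password and database values are illustrative, not copied from this node.

# Illustrative PyMySQL equivalent of the connection URL above; values are examples.
import pymysql

conn = pymysql.connect(
    host="172.17.1.201",
    user="nova_api",
    password="SECRET",
    database="nova_api",
    # PyMySQL reads extra client options from the [tripleo] section of this file,
    # including ssl-ca=/etc/ipa/ca.crt. If those files are not mounted into the
    # container, the client cannot set up TLS and the login fails with
    # (1045) "Access denied", as seen in the deployment error above.
    read_default_file="/etc/my.cnf.d/tripleo.cnf",
    read_default_group="tripleo",
)
conn.close()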

Non-secure logins fail as expected:

[root@lab-controller01 my.cnf.d]# mysql -u nova_api -h 172.17.1.201 -p
Enter password:
ERROR 1045 (28000): Access denied for user 'nova_api'@'172.17.1.201' (using password: YES)

Secure logins using the cert work as expected:

[root@lab-controller01 my.cnf.d]# mysql -u nova_api -h 172.17.1.201 -p --ssl --ssl-ca=/etc/ipa/ca.crt
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 7411
Server version: 10.1.20-MariaDB MariaDB Server

The nova_wait_for_db_sync container doesn't mount the certificate, but another db-sync container (in this case nova_api_db_sync) does:

[root@lab-controller01 my.cnf.d]# docker inspect nova_wait_for_db_sync | grep ipa
[root@lab-controller01 my.cnf.d]# docker inspect nova_api_db_sync | grep ipa
                "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro",
                "Source": "/etc/ipa/ca.crt",
                "Destination": "/etc/ipa/ca.crt",
                "config_data": "{\"start_order\": 0, \"command\": \"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'\", \"user\": \"root\", \"volumes\": [\"/etc/hosts:/etc/hosts:ro\", \"/etc/localtime:/etc/localtime:ro\", \"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro\", \"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro\", \"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro\", \"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro\", \"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro\", \"/dev/log:/dev/log\", \"/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro\", \"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro\", \"/etc/puppet:/etc/puppet:ro\", \"/var/log/containers/nova:/var/log/nova\", \"/var/log/containers/httpd/nova-api:/var/log/httpd\", \"/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro\", \"/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro\"], \"image\": \"172.16.0.1:8787/rhosp14/openstack-nova-api:14.0-87\", \"detach\": false, \"net\": \"host\"}",

Comment 1 Juan Antonio Osorio 2018-11-28 12:42:30 UTC
That code was introduced here https://review.openstack.org/#/c/610966/

And subsequently reverted here: https://review.openstack.org/#/c/619607/

Is the revert scheduled to merge downstream too?

Comment 2 Raildo Mascena de Sousa Filho 2018-11-28 12:47:17 UTC
Moving this BZ to DFG:Compute since it was a nova change that caused this issue.

Comment 3 Lee Yarwood 2018-11-28 12:55:36 UTC
Marking this as a duplicate of 1652287 where the reworked fix for this issue will be landing shortly.

*** This bug has been marked as a duplicate of bug 1652287 ***

Comment 4 Martin Schuppert 2018-11-28 13:08:57 UTC
(In reply to Juan Antonio Osorio from comment #1)
> That code was introduced here https://review.openstack.org/#/c/610966/
> 
> And subsequently reverted here: https://review.openstack.org/#/c/619607/
> 
> Is the revert scheduled to merge downstream too?

Yes, we reverted it downstream and are waiting for the reworked fix to merge so we can backport it.

[1] https://review.openstack.org/619586

