Bug 1806100
| Summary: | OpenStack All-In-One post-install Gateway Not Found | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Brian J. Atkisson <batkisso> |
| Component: | openstack-tripleo | Assignee: | Brent Eagles <beagles> |
| Status: | CLOSED INSUFFICIENT_DATA | QA Contact: | nlevinki <nlevinki> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 16.1 (Train) | CC: | apetrich, aschultz, beagles, bhaley, dalvarez, dhill, jbeaudoi, jlibosva, mburns, msufiyan, njohnston, thashimo |
| Target Milestone: | z2 | Keywords: | Triaged |
| Target Release: | 16.0 (Train on RHEL 8.1) | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-04-30 17:18:18 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description: Brian J. Atkisson, 2020-02-22 06:15:10 UTC
This is likely a docs bug. It's timing out because you likely don't have correct access to the network that you have created. Since ControlPlaneStaticRoutes: [] is empty, the 192.168.25.x network is likely not routable to the external host that you are using for horizon. If you use something like sshuttle to provide 192.168.25.x to your host, does it work?

Hrm, the machine I'm running the openstack cli and loading the horizon web ui from is on the same subnet, 192.168.25.0/24:

    14:54:36 [aioadmin] seraph ~ $ traceroute 192.168.25.2
    traceroute to 192.168.25.2 (192.168.25.2), 64 hops max, 52 byte packets
     1  192.168.25.2 (192.168.25.2)  0.435 ms  0.240 ms  0.202 ms

    14:55:15 [aioadmin] seraph ✘ ~ $ ping -c 2 192.168.25.2
    PING 192.168.25.2 (192.168.25.2): 56 data bytes
    64 bytes from 192.168.25.2: icmp_seq=0 ttl=64 time=0.431 ms
    64 bytes from 192.168.25.2: icmp_seq=1 ttl=64 time=0.198 ms

    14:56:57 [] seraph ✘ ~ $ nc -d -v 192.168.25.2 9696
    Connection to 192.168.25.2 port 9696 [tcp/*] succeeded!

    14:53:00 [aioadmin] seraph ✘ ~ $ openstack --debug --insecure network list
    [...]
    Network client initialized using OpenStack SDK: <openstack.network.v2._proxy.Proxy object at 0x10d862450>
    Instantiating identity client: <class 'keystoneclient.v3.client.Client'>
    REQ: curl -g -i --insecure -X GET http://192.168.25.2:9696/v2.0/networks -H "Accept: application/json" -H "User-Agent: openstacksdk/0.39.0 keystoneauth1/3.18.0 python-requests/2.22.0 CPython/3.7.6" -H "X-Auth-Token: {SHA256}f5a1197a339fe12d2da4697d56518621777444f321d7cfbc4bb2a90587b2b30f"
    Starting new HTTP connection (1): 192.168.25.2:9696

The port is clearly open and reachable, but the connection just hangs. The strange part is that this exact same config worked fine with the beta, which is why I suspect a bug here.

I was able to reproduce it. It seems to be related to the neutron api, as other calls like 'openstack endpoint list' work just fine.
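For reference, the sshuttle suggestion above could be tried roughly like the following. This is only an illustrative sketch; the SSH user and host (stack@aio-host) are placeholders and not taken from this report:

```bash
# Illustrative only: tunnel the 192.168.25.0/24 control-plane subnet through
# the all-in-one node so an external workstation can reach horizon/neutron.
# "stack@aio-host" is a placeholder SSH user and host.
sshuttle -r stack@aio-host 192.168.25.0/24

# Then, from the workstation, retry the call that was hanging:
openstack --insecure network list
```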
In the neutron logs, I'm seeing:

    2020-02-26 21:45:08.531 36 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for AllServicesNeutronWorker with retry
    2020-02-26 21:45:08.534 36 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused
    2020-02-26 21:45:08.752 26 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for WorkerService with retry
    2020-02-26 21:45:08.755 26 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused
    2020-02-26 21:45:08.756 27 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for WorkerService with retry
    2020-02-26 21:45:08.759 27 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused
    2020-02-26 21:45:08.820 29 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for WorkerService with retry
    2020-02-26 21:45:08.823 29 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused
    2020-02-26 21:45:08.827 34 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for RpcReportsWorker with retry
    2020-02-26 21:45:08.828 28 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for WorkerService with retry
    2020-02-26 21:45:08.830 34 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused
    2020-02-26 21:45:08.829 35 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for MaintenanceWorker with retry
    2020-02-26 21:45:08.831 28 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused
    2020-02-26 21:45:08.832 35 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused
    2020-02-26 21:45:08.894 32 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for RpcWorker with retry
    2020-02-26 21:45:08.897 32 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused
    2020-02-26 21:45:08.902 30 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for RpcWorker with retry
    2020-02-26 21:45:08.905 30 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused
    2020-02-26 21:45:08.932 31 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for RpcWorker with retry
    2020-02-26 21:45:08.935 31 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused
    2020-02-26 21:45:08.996 33 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for RpcWorker with retry
    2020-02-26 21:45:09.000 33 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused

A few additional things of note:

1) ovn_north_db_server and ovn_south_db_server are logging inside their containers and not to the /var/log/containers/openvswitch path. Logs are in /var/log/kolla/openvswitch in the containers.
2) The neutron api just hangs.

The north and south db servers are not being started with a listening address for some reason.
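A rough way to confirm what the "Connection refused" errors above point at, i.e. whether the OVN northbound/southbound ovsdb-servers are actually listening on the control-plane IP, could look like this. The container name is an assumption (on a pacemaker-managed deployment it is typically something like ovn-dbs-bundle-podman-0) and may differ:

```bash
# Check for listeners on the OVN northbound (6641) and southbound (6642)
# ports on the all-in-one node.
ss -lntp | grep -E '6641|6642' || echo "no OVN DB listeners found"

# Ask the NB/SB ovsdb-servers which connection target they were configured
# with. The container name below is an assumption, not taken from this report.
sudo podman exec ovn-dbs-bundle-podman-0 ovn-nbctl get-connection
sudo podman exec ovn-dbs-bundle-podman-0 ovn-sbctl get-connection
```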
I took the liberty of hardcoding the IP into the start scripts of the nb and sb db servers and restarting them, and now it seems to work. I'll loop some of the OVN devs in.

Without having looked into this AIO model (do we use ovn-dbs managed by pacemaker even if it's just one node?), could this be a dup of https://bugzilla.redhat.com/show_bug.cgi?id=1807826 ?

Yes, it is pacemaker by default because the downstream forces pacemaker on by default.

*** Bug 1823212 has been marked as a duplicate of this bug. ***
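For what it's worth, a hedged sketch of a workaround along the lines of the "hardcoding the IP into the start scripts" comment above, using ovn-nbctl/ovn-sbctl set-connection instead of editing the scripts. This is not the exact change made in this report, and the container names are assumptions:

```bash
# Hedged workaround sketch, not the exact change from the comment above
# (which edited the NB/SB DB start scripts). Point the running ovsdb-servers
# at an explicit passive-TCP listener on the control-plane IP from this report.
sudo podman exec ovn-dbs-bundle-podman-0 ovn-nbctl set-connection ptcp:6641:192.168.25.2
sudo podman exec ovn-dbs-bundle-podman-0 ovn-sbctl set-connection ptcp:6642:192.168.25.2

# Restart the neutron API container so its workers reconnect to the OVN DBs.
# The container name (neutron_api) is also an assumption.
sudo podman restart neutron_api
```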