Bug 1806100 - OpenStack All-In-One post-install Gateway Not Found
Summary: OpenStack All-In-One post-install Gateway Not Found
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo
Version: 16.1 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: z2
Target Release: 16.0 (Train on RHEL 8.1)
Assignee: Brent Eagles
QA Contact: nlevinki
URL:
Whiteboard:
Duplicates: 1823212
Depends On:
Blocks:
 
Reported: 2020-02-22 06:15 UTC by Brian J. Atkisson
Modified: 2023-09-07 22:00 UTC
CC: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-04-30 17:18:18 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Issue Tracker OSP-3524 (last updated 2022-08-23 16:05:20 UTC)
Red Hat Knowledge Base (Solution) 4990751 (last updated 2020-04-16 00:49:34 UTC)

Description Brian J. Atkisson 2020-02-22 06:15:10 UTC
Description of problem:

Following the All-in-One OSP16 instructions at https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/quick_start_guide/index results in hung services. Horizon gives a 'Gateway Not Found' error after authentication (regardless of the user), and the OpenStack CLI commands simply hang. It seems like something is not listening on the right interface.

FWIW, the OpenStack 16 beta RPMs/containers work with exactly the same configs. Not sure what is missing in the GA bits.

[stack@aio ~]$ cat containers-prepare-parameters.yaml 


parameter_defaults:
  ContainerImagePrepare:
  - set:
      ceph_alertmanager_image: alertmanager
      ceph_alertmanager_namespace: docker.io/prom
      ceph_alertmanager_tag: v0.16.2
      ceph_grafana_image: grafana
      ceph_grafana_namespace: docker.io/grafana
      ceph_grafana_tag: 5.2.4
      ceph_image: rhceph-4.0-rhel8
      ceph_namespace: docker-registry.upshift.redhat.com/ceph
      ceph_node_exporter_image: node-exporter
      ceph_node_exporter_namespace: docker.io/prom
      ceph_node_exporter_tag: v0.17.0
      ceph_prometheus_image: prometheus
      ceph_prometheus_namespace: docker.io/prom
      ceph_prometheus_tag: v2.7.2
      ceph_tag: latest
      name_prefix: openstack-
      name_suffix: ''
      namespace: registry.redhat.io/rhosp-rhel8
      neutron_driver: ovn
      rhel_containers: false
      tag: 16.0
    tag_from_label: '{version}-{release}'
  ContainerImageRegistryLogin: true
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      'username': 'password'

[stack@aio ~]$ cat standalone_parameters.yaml

parameter_defaults:
  CloudName: 192.168.25.2
  ControlPlaneStaticRoutes: []
  Debug: true
  DeploymentUser: stack
  DnsServers:
    - 192.168.100.8
  DockerInsecureRegistryAddress:
    - 192.168.25.2:8787
  NeutronPublicInterface: enp0s8
  NeutronDnsDomain: localdomain
  NeutronBridgeMappings: datacentre:br-ctlplane
  NeutronPhysicalBridge: br-ctlplane
  StandaloneEnableRoutedNetworks: false
  StandaloneHomeDir: /home/stack
  StandaloneLocalMtu: 1500
  NtpServer: 192.168.100.8


br-ctlplane: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.25.2  netmask 255.255.255.0  broadcast 192.168.25.255
        inet6 fe80::a00:27ff:fe9a:b805  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:9a:b8:05  txqueuelen 1000  (Ethernet)
        RX packets 650  bytes 43844 (42.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 256  bytes 1418282 (1.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.100.100  netmask 255.255.255.0  broadcast 192.168.100.255
        inet6 fe80::3e35:3395:1db2:2e8e  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:4f:67:39  txqueuelen 1000  (Ethernet)
        RX packets 360  bytes 40386 (39.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 333  bytes 40902 (39.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::a00:27ff:fe9a:b805  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:9a:b8:05  txqueuelen 1000  (Ethernet)
        RX packets 650  bytes 52944 (51.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 252  bytes 1417926 (1.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Version-Release number of selected component (if applicable):

openstack-tripleo-puppet-elements-11.2.2-0.20200128210949.d668f88.el8ost.noarch
puppet-openstack_extras-15.4.1-0.20191014142330.8ba5522.el8ost.noarch
python3-openstackclient-4.0.0-0.20191025160014.aa64eb6.el8ost.noarch
ansible-role-openstack-operations-0.0.1-0.20191022044056.29cc537.el8ost.noarch
openstack-tripleo-image-elements-10.6.1-0.20191022065313.7338463.el8ost.noarch
openstack-tripleo-heat-templates-11.3.2-0.20200131125640.cc909b6.el8ost.noarch
python3-openstacksdk-0.36.0-0.20191004153514.8b85e8c.el8ost.noarch
openstack-heat-common-13.0.1-0.20191127204014.0703ca7.el8ost.noarch
openstack-tripleo-common-containers-11.3.3-0.20200121231250.3c68b48.el8ost.noarch
openstack-heat-api-13.0.1-0.20191127204014.0703ca7.el8ost.noarch
python-openstackclient-lang-4.0.0-0.20191025160014.aa64eb6.el8ost.noarch
openstack-heat-monolith-13.0.1-0.20191127204014.0703ca7.el8ost.noarch
puppet-openstacklib-15.4.1-0.20191014170135.94b2016.el8ost.noarch
openstack-ironic-python-agent-builder-1.1.1-0.20191203040321.a34dfda.el8ost.noarch
openstack-tripleo-common-11.3.3-0.20200121231250.3c68b48.el8ost.noarch
openstack-selinux-0.8.20-0.20191202205815.09846a2.el8ost.noarch
openstack-heat-agents-1.10.1-0.20191022061131.96b819c.el8ost.noarch
openstack-heat-engine-13.0.1-0.20191127204014.0703ca7.el8ost.noarch
openstack-tripleo-validations-11.3.1-0.20191126041901.2bba53a.el8ost.noarch

registry.redhat.io/rhosp-rhel8/openstack-neutron-server-ovn           16.0-75      41fd05bdc190   3 weeks ago   1.06 GB
registry.redhat.io/rhosp-rhel8/openstack-cinder-api                   16.0-76      4cc60446f48b   3 weeks ago   1.18 GB
registry.redhat.io/rhosp-rhel8/openstack-nova-compute                 16.0-80      cc913a1cd6a7   3 weeks ago   2.02 GB
registry.redhat.io/rhosp-rhel8/openstack-swift-object                 16.0-79      29c86cc29283   3 weeks ago   771 MB
registry.redhat.io/rhosp-rhel8/openstack-nova-scheduler               16.0-79      ca2a27d1ff2c   3 weeks ago   1.22 GB
registry.redhat.io/rhosp-rhel8/openstack-neutron-metadata-agent-ovn   16.0-78      280f8cc281a4   3 weeks ago   1.06 GB
registry.redhat.io/rhosp-rhel8/openstack-nova-api                     16.0-80      3a079325cab4   3 weeks ago   1.13 GB
registry.redhat.io/rhosp-rhel8/openstack-keystone                     16.0-79      6024c1bdc12a   3 weeks ago   769 MB
registry.redhat.io/rhosp-rhel8/openstack-swift-proxy-server           16.0-75      f4209dcb788b   3 weeks ago   818 MB
registry.redhat.io/rhosp-rhel8/openstack-nova-novncproxy              16.0-78      a6c46612c22a   3 weeks ago   1.12 GB
cluster.common.tag/openstack-cinder-volume                            pcmklatest   ca5b1736c010   3 weeks ago   1.24 GB
registry.redhat.io/rhosp-rhel8/openstack-cinder-volume                16.0-77      ca5b1736c010   3 weeks ago   1.24 GB
registry.redhat.io/rhosp-rhel8/openstack-cinder-scheduler             16.0-78      e803ca191946   3 weeks ago   1.1 GB
registry.redhat.io/rhosp-rhel8/openstack-glance-api                   16.0-77      dd7482af139b   3 weeks ago   1.02 GB
registry.redhat.io/rhosp-rhel8/openstack-ovn-nb-db-server             16.0-78      137dc9323f20   3 weeks ago   603 MB
registry.redhat.io/rhosp-rhel8/openstack-ovn-controller               16.0-76      cfc8fb7bfdfd   3 weeks ago   603 MB
registry.redhat.io/rhosp-rhel8/openstack-swift-account                16.0-79      1e64a47e507e   3 weeks ago   771 MB
registry.redhat.io/rhosp-rhel8/openstack-swift-container              16.0-79      319fd5dd4660   3 weeks ago   771 MB
registry.redhat.io/rhosp-rhel8/openstack-ovn-northd                   16.0-81      1c9e95ddf286   3 weeks ago   736 MB
registry.redhat.io/rhosp-rhel8/openstack-nova-conductor               16.0-79      e134257fdb59   3 weeks ago   1.03 GB
registry.redhat.io/rhosp-rhel8/openstack-ovn-sb-db-server             16.0-80      1b3410c477af   3 weeks ago   603 MB
registry.redhat.io/rhosp-rhel8/openstack-horizon                      16.0-81      59c0315a4508   3 weeks ago   872 MB
registry.redhat.io/rhosp-rhel8/openstack-placement-api                16.0-79      78b86b2fb1fd   3 weeks ago   642 MB
registry.redhat.io/rhosp-rhel8/openstack-nova-libvirt                 16.0-86      a75a6c08c35a   3 weeks ago   2.03 GB
cluster.common.tag/openstack-rabbitmq                                 pcmklatest   613e83616b29   3 weeks ago   594 MB
registry.redhat.io/rhosp-rhel8/openstack-rabbitmq                     16.0-86      613e83616b29   3 weeks ago   594 MB
registry.redhat.io/rhosp-rhel8/openstack-cron                         16.0-82      2d40b517db90   3 weeks ago   413 MB
registry.redhat.io/rhosp-rhel8/openstack-memcached                    16.0-85      c897693a8bc8   3 weeks ago   434 MB
registry.redhat.io/rhosp-rhel8/openstack-iscsid                       16.0-84      c68d23ae62b1   3 weeks ago   433 MB
cluster.common.tag/openstack-mariadb                                  pcmklatest   b9c6851d7eed   3 weeks ago   766 MB
registry.redhat.io/rhosp-rhel8/openstack-mariadb                      16.0-87      b9c6851d7eed   3 weeks ago   766 MB




How reproducible:

Always. I've tried using the other interface on the VM, with no luck.


Steps to Reproduce:
1. Save containers-prepare-parameters.yaml and standalone_parameters.yaml above
2. sudo openstack tripleo deploy \
     --templates \
     --local-ip=$IP/$NETMASK \
     -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml \
     -r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml \
     -e $HOME/containers-prepare-parameters.yaml \
     -e $HOME/standalone_parameters.yaml \
     --output-dir $HOME \
     --standalone
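
Note: $IP and $NETMASK are assumed to be exported beforehand, per the quick start guide; values matching the CloudName above would be, for example:

$ export IP=192.168.25.2
$ export NETMASK=24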

Actual results:

http://192.168.25.2/dashboard/project/

Gateway Timeout

The gateway did not receive a timely response from the upstream server or application.


Expected results:

Horizon to load correctly



Additional info:

Comment 1 Alex Schultz 2020-02-25 19:36:20 UTC
This is likely a docs bug. It's timing out because you probably don't have access to the network you created. Since ControlPlaneStaticRoutes: [] is empty, the 192.168.25.x network is likely not routable from the external host you are using to reach Horizon. If you use something like sshuttle to provide 192.168.25.x to your host, does it work?
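
For example, a minimal sshuttle invocation (assuming the SSH-reachable address 192.168.100.100 from the ifconfig output above and a login as stack) would be:

$ sudo sshuttle -r stack@192.168.100.100 192.168.25.0/24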

Comment 2 Brian J. Atkisson 2020-02-25 19:59:19 UTC
Hrm, the machine I'm running the openstack CLI on and loading the Horizon web UI from is on the same subnet: 192.168.25.0/24.


 14:54:36  [aioadmin]  seraph  ~ 
$ traceroute 192.168.25.2                  
traceroute to 192.168.25.2 (192.168.25.2), 64 hops max, 52 byte packets
 1  192.168.25.2 (192.168.25.2)  0.435 ms  0.240 ms  0.202 ms

 14:55:15  [aioadmin]  seraph  ✘  ~ 
$ ping -c 2 192.168.25.2
PING 192.168.25.2 (192.168.25.2): 56 data bytes
64 bytes from 192.168.25.2: icmp_seq=0 ttl=64 time=0.431 ms
64 bytes from 192.168.25.2: icmp_seq=1 ttl=64 time=0.198 ms


 14:56:57  []  seraph  ✘  ~ 
$ nc -d -v 192.168.25.2 9696
Connection to 192.168.25.2 port 9696 [tcp/*] succeeded!


 14:53:00  [aioadmin]  seraph  ✘  ~ 
$ openstack --debug --insecure network list

[...]
Network client initialized using OpenStack SDK: <openstack.network.v2._proxy.Proxy object at 0x10d862450>
Instantiating identity client: <class 'keystoneclient.v3.client.Client'>
REQ: curl -g -i --insecure -X GET http://192.168.25.2:9696/v2.0/networks -H "Accept: application/json" -H "User-Agent: openstacksdk/0.39.0 keystoneauth1/3.18.0 python-requests/2.22.0 CPython/3.7.6" -H "X-Auth-Token: {SHA256}f5a1197a339fe12d2da4697d56518621777444f321d7cfbc4bb2a90587b2b30f"
Starting new HTTP connection (1): 192.168.25.2:9696


The port is clearly open and reachable, but the connection just hangs. The strange part is that this exact same config worked fine with the beta, which is why I suspect a bug here.
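
One way to demonstrate the hang without the OpenStack CLI is to hit the Neutron endpoint directly with an explicit timeout; a sketch, assuming $TOKEN holds a valid Keystone token (e.g. from 'openstack token issue'):

$ curl -m 10 -s -o /dev/null -w '%{http_code}\n' \
    -H "X-Auth-Token: $TOKEN" http://192.168.25.2:9696/v2.0/networks
# -m 10 aborts after 10 seconds instead of hanging indefinitely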

Comment 3 Alex Schultz 2020-02-26 21:49:09 UTC
I was able to reproduce it. It seems to be related to the Neutron API, as other calls like 'openstack endpoint list' work just fine.

In the neutron logs, I'm seeing:

2020-02-26 21:45:08.531 36 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for AllServicesNeutronWorker with retry                                                                     
2020-02-26 21:45:08.534 36 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused                                            
2020-02-26 21:45:08.752 26 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for WorkerService with retry                                                                                
2020-02-26 21:45:08.755 26 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused                                            
2020-02-26 21:45:08.756 27 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for WorkerService with retry                                                                                
2020-02-26 21:45:08.759 27 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused                                            
2020-02-26 21:45:08.820 29 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for WorkerService with retry                                                                                
2020-02-26 21:45:08.823 29 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused                                            
2020-02-26 21:45:08.827 34 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for RpcReportsWorker with retry                                                                             
2020-02-26 21:45:08.828 28 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for WorkerService with retry
2020-02-26 21:45:08.830 34 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused
2020-02-26 21:45:08.829 35 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for MaintenanceWorker with retry
2020-02-26 21:45:08.831 28 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused
2020-02-26 21:45:08.832 35 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused
2020-02-26 21:45:08.894 32 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for RpcWorker with retry
2020-02-26 21:45:08.897 32 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused
2020-02-26 21:45:08.902 30 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for RpcWorker with retry
2020-02-26 21:45:08.905 30 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused
2020-02-26 21:45:08.932 31 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for RpcWorker with retry
2020-02-26 21:45:08.935 31 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused
2020-02-26 21:45:08.996 33 INFO networking_ovn.ovsdb.impl_idl_ovn [-] Getting OvsdbNbOvnIdl for RpcWorker with retry
2020-02-26 21:45:09.000 33 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:192.168.25.2:6641 to retrieve schema: Connection refused
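
The refused connections suggest nothing is listening on the OVN NB database port. A quick check on the host (6641/6642 are the default OVN NB/SB DB ports):

$ sudo ss -tlnp | grep -E ':664[12]'
# no output here would confirm the OVN DB servers are not listening on TCP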

Comment 4 Alex Schultz 2020-02-26 22:10:03 UTC
A few additional things of note:

1) ovn_north_db_server and ovn_south_db_server are logging inside their containers, not to the /var/log/containers/openvswitch path on the host; the logs are in /var/log/kolla/openvswitch inside the containers (see the commands below)
2) the Neutron API just hangs
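
To pull those logs out of the containers, something like the following should work (a sketch; OSP 16 uses podman, but the exact log file names inside /var/log/kolla/openvswitch may vary):

$ sudo podman exec ovn_north_db_server ls /var/log/kolla/openvswitch
$ sudo podman exec ovn_south_db_server ls /var/log/kolla/openvswitch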

Comment 5 Brent Eagles 2020-02-26 22:59:15 UTC
The north and south DB servers are not being started with a listening address for some reason. I took the liberty of hardcoding the IP into the start scripts of the NB and SB DB servers and restarting them, and now it seems to work. I'll loop some of the OVN devs in.
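
A sketch of the restart step (container names from comment 4; since the downstream runs ovn-dbs under pacemaker, restarting the pacemaker resource may be needed instead of a plain podman restart, and ovn-dbs-bundle is the usual resource name in TripleO deployments):

$ sudo podman restart ovn_north_db_server ovn_south_db_server
# or, if pacemaker-managed:
$ sudo pcs resource restart ovn-dbs-bundle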

Comment 8 Daniel Alvarez Sanchez 2020-03-03 15:10:36 UTC
Without having looked into this AIO model (do we use ovn-dbs managed by pacemaker even if it's just one node?), could this be a dup of https://bugzilla.redhat.com/show_bug.cgi?id=1807826 ?

Comment 9 Alex Schultz 2020-03-03 15:20:45 UTC
Yes, it is pacemaker, because the downstream forces pacemaker on by default.
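
To confirm on a deployed node, pcs should show whether ovn-dbs is running under pacemaker, e.g.:

$ sudo pcs status | grep -i ovn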

Comment 11 David Hill 2020-04-15 21:03:07 UTC
*** Bug 1823212 has been marked as a duplicate of this bug. ***

