Description of problem:
I am using the Beta bits since I need to support partners.

config 1 - baremetal, 1 controller, 2 computes, no isolated nets, vxlan
config 2 - baremetal, 1 controller, 2 computes, 1 ceph, no isolated nets, vxlan

The issue is that my neutron networking has never worked when deployed with Ceph. Without Ceph, neutron has been working. Joe Talerico looked at a config that included Ceph and got it to work by making several changes to the config, so I think this is a director problem, not a neutron problem. I saved some info from the 2 configs here:
http://refarch.cloud.lab.eng.bos.redhat.com/pub/tmp/sprospd/

I have since deployed an HA config (3 controllers), which required Ceph. So I have 3 controllers, 1 compute, 1 Ceph; no network isolation; vxlan; and neutron is not working. I will gather sosreports from this config and put them under a separate directory at the above URL.

Version-Release number of selected component (if applicable):
[stack@ospha-inst ~]$ yum list installed | grep -i -e director -e tripleo -e heat
*Note* Spacewalk repositories are not listed below. You must run this command as root to access Spacewalk repositories.
Repo rhel-7-server-extras-rpms forced skip_if_unavailable=True due to: /etc/pki/entitlement/4418785739260824032-key.pem
Repo rhel-7-server-rh-common-rpms forced skip_if_unavailable=True due to: /etc/pki/entitlement/4418785739260824032-key.pem
Repo rhel-7-server-rpms forced skip_if_unavailable=True due to: /etc/pki/entitlement/4418785739260824032-key.pem
Repo rhel-7-server-optional-rpms forced skip_if_unavailable=True due to: /etc/pki/entitlement/4418785739260824032-key.pem
ahc-tools.noarch                          0.1.1-5.el7ost                 @RH7-RHOS-7.0-director
fio.x86_64                                2.2.8-1.el7ost                 @RH7-RHOS-7.0-director
hdf5.x86_64                               1.8.13-7.el7ost                @RH7-RHOS-7.0-director
instack.noarch                            0.0.7-1.el7ost                 @RH7-RHOS-7.0-director
instack-undercloud.noarch                 2.1.2-6.el7ost                 @RH7-RHOS-7.0-director
openstack-heat-api.noarch                 2015.1.0-4.el7ost              @RH7-RHOS-7.0
openstack-heat-api-cfn.noarch             2015.1.0-4.el7ost              @RH7-RHOS-7.0
openstack-heat-api-cloudwatch.noarch
openstack-heat-common.noarch              2015.1.0-4.el7ost              @RH7-RHOS-7.0
openstack-heat-engine.noarch              2015.1.0-4.el7ost              @RH7-RHOS-7.0
openstack-heat-templates.noarch           0-0.6.20150605git.el7ost
openstack-ironic-discoverd.noarch         1.1.0-4.el7ost                 @RH7-RHOS-7.0-director
openstack-puppet-modules.noarch           2015.1.7-5.el7ost              @RH7-RHOS-7.0-director
openstack-tripleo.noarch                  0.0.7-0.1.1664e566.el7ost      @RH7-RHOS-7.0-director
openstack-tripleo-common.noarch           0.0.1.dev6-0.git49b57eb.el7ost @RH7-RHOS-7.0-director
openstack-tripleo-heat-templates.noarch   0.8.6-22.el7ost                @RH7-RHOS-7.0-director
openstack-tripleo-image-elements.noarch   0.9.6-4.el7ost                 @RH7-RHOS-7.0-director
openstack-tripleo-puppet-elements.noarch  0.0.1-2.el7ost                 @RH7-RHOS-7.0-director
openstack-tuskar.noarch                   0.4.18-3.el7ost                @RH7-RHOS-7.0-director
openstack-tuskar-ui.noarch                0.3.0-6.el7ost                 @RH7-RHOS-7.0-director
openstack-tuskar-ui-extras.noarch         0.0.4-1.el7ost                 @RH7-RHOS-7.0-director
openwsman-python.x86_64                   2.3.6-13.el7                   @RH7-RHOS-7.0-director
os-apply-config.noarch                    0.1.31-1.el7ost                @RH7-RHOS-7.0-director
os-cloud-config.noarch                    0.2.8-4.el7ost                 @RH7-RHOS-7.0-director
os-collect-config.noarch                  0.1.35-2.el7ost                @RH7-RHOS-7.0-director
os-net-config.noarch                      0.1.4-2.el7ost                 @RH7-RHOS-7.0-director
os-refresh-config.noarch                  0.1.10-1.el7ost                @RH7-RHOS-7.0-director
python-Bottleneck.x86_64                  0.6.0-4.el7ost                 @RH7-RHOS-7.0-director
python-flask-babel.noarch                 0.9-1.el7ost                   @RH7-RHOS-7.0-director
python-hardware.noarch                    0.14-6.el7ost                  @RH7-RHOS-7.0-director
python-heatclient.noarch                  0.6.0-1.el7ost                 @RH7-RHOS-7.0
python-ironic-discoverd.noarch            1.1.0-4.el7ost                 @RH7-RHOS-7.0-director
python-numexpr.x86_64                     2.3-4.el7ost                   @RH7-RHOS-7.0-director
python-pandas.x86_64                      0.16.0-2.el7ost                @RH7-RHOS-7.0-director
python-ptyprocess.noarch                  0.4-1.el7ost                   @RH7-RHOS-7.0-director
                                          0.0.8-13.el7ost                @RH7-RHOS-7.0-director
python-tables.x86_64                      3.1.1-2.el7ost                 @RH7-RHOS-7.0-director
python-tuskarclient.noarch                0.1.18-2.el7ost                @RH7-RHOS-7.0-director
sysbench.x86_64                           0.4.12-12.el7ost               @RH7-RHOS-7.0-director
[stack@ospha-inst ~]$

How reproducible:
Every time.

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
I might have a similar issue. What is the specific problem/error you are seeing? I can't access my instances via a floating IP address when I have a Ceph OSD server (I am using GRE). It may be related to this bug.
We need to try to reproduce this on the current puddle. We haven't seen any problems with neutron and Ceph on that code that I'm aware of (it's passing CI and tempest is running successfully).
Unsure if this is related, but notice the routing on the nodes looks like it would cause problems: the compute has two defaults with different paths (unsure why other nets are also duplicated, but at least those have the same path).

[root@overcloud-compute-0 ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         192.0.2.1       0.0.0.0         UG    100    0        0 em1
default         10.19.143.254   0.0.0.0         UG    101    0        0 em2
10.19.136.0     0.0.0.0         255.255.248.0   U     0      0        0 em2
10.19.136.0     0.0.0.0         255.255.248.0   U     100    0        0 em2
169.254.169.254 192.0.2.1       255.255.255.255 UGH   100    0        0 em1
192.0.2.0       0.0.0.0         255.255.255.0   U     0      0        0 em1
192.0.2.0       0.0.0.0         255.255.255.0   U     100    0        0 em1
[root@overcloud-compute-0 ~]#

[root@overcloud-controller-0 ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.19.143.254   0.0.0.0         UG    0      0        0 br-ex
default         192.0.2.1       0.0.0.0         UG    100    0        0 em1
10.19.136.0     0.0.0.0         255.255.248.0   U     0      0        0 br-ex
169.254.169.254 192.0.2.1       255.255.255.255 UGH   100    0        0 em1
192.0.2.0       0.0.0.0         255.255.255.0   U     0      0        0 em1
192.0.2.0       0.0.0.0         255.255.255.0   U     100    0        0 em1
[root@overcloud-controller-0 ~]#
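For anyone checking other nodes for the same symptom, here is a minimal sketch (a helper of my own, not part of director) that flags a node with more than one default route. It parses `ip route show` style output; the sample lines mirror overcloud-compute-0's table above.

```shell
# Hypothetical check: more than one "default" line means the node has two
# default gateways, and outbound traffic can take either path.
count_defaults() {
    grep -c '^default'
}

# Sample mirroring the compute node's table; on a live node you would run:
#   ip route show | count_defaults
sample='default via 192.0.2.1 dev em1
default via 10.19.143.254 dev em2
10.19.136.0/21 dev em2'

printf '%s\n' "$sample" | count_defaults   # prints 2 -> misconfigured
```

A count greater than 1 can be worked around by hand while debugging, e.g. `ip route del default via 10.19.143.254 dev em2` (gateway/device taken from the table above), though the root fix would be in the generated network config.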
I got a successful deployment on baremetal with 3 controllers, 1 compute and 1 ceph node, just the provisioning network deployed, with vxlan networks. I then created a tenant network, external network and router and could successfully reach the l3 agent from outside networks.

openstack overcloud deploy --control-scale 3 --compute-scale 1 --ceph-storage-scale 1 --plan overcloud --neutron-tunnel-types vxlan --neutron-network-type vxlan --neutron-public-interface eth1 --control-flavor control --compute-flavor compute --ceph-storage-flavor storage

neutron net-create ext-net --router:external --provider:physical_network datacentre --provider:network_type flat
neutron subnet-create ext-net 10.3.58.0/24 --name ext-subnet --allocation-pool start=10.3.58.190,end=10.3.58.200 --disable-dhcp --gateway 10.3.58.254
neutron net-create tenant-net
neutron subnet-create tenant-net 192.168.0.0/24 --name tenant-subnet --gateway 192.168.0.1
neutron router-create tenant-router
neutron router-interface-add tenant-router tenant-subnet
neutron router-gateway-set tenant-router ext-net

[stack@bldr16cc09 ~]$ neutron l3-agent-list-hosting-router tenant-router
+--------------------------------------+------------------------------------+----------------+-------+----------+
| id                                   | host                               | admin_state_up | alive | ha_state |
+--------------------------------------+------------------------------------+----------------+-------+----------+
| 5fa480fe-6889-46fe-9a1a-efc9ca5e38b3 | overcloud-controller-2.localdomain | True           | :-)   | standby  |
| c7a07819-9171-4590-9424-89fed1a7e7bd | overcloud-controller-0.localdomain | True           | :-)   | active   |
| 5150d444-57ad-43da-a91d-b0bc943e0c77 | overcloud-controller-1.localdomain | True           | :-)   | standby  |
+--------------------------------------+------------------------------------+----------------+-------+----------+

[root@overcloud-controller-0 ~]# ip netns exec qrouter-59792c55-2eb1-4477-81fc-8bed488e1acf ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=48 time=23.5 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=48 time=23.2 ms
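If others want to repeat the HA check above, a minimal sketch (the function name is my own, not a director or neutron tool) that confirms exactly one l3 agent reports ha_state "active" by parsing the table output:

```shell
# Hypothetical helper: count rows whose ha_state column is "active" in the
# output of `neutron l3-agent-list-hosting-router <router>`.
count_active() {
    grep -c '| active' || true
}

# Sample rows from the table above (IDs shortened for readability):
sample='| 5fa480fe | overcloud-controller-2.localdomain | True | :-) | standby |
| c7a07819 | overcloud-controller-0.localdomain | True | :-) | active  |
| 5150d444 | overcloud-controller-1.localdomain | True | :-) | standby |'

printf '%s\n' "$sample" | count_active   # prints 1: exactly one active agent
```

A count other than 1 (no active agent, or a split-brain with two) would point at a keepalived/HA problem rather than the Ceph interaction reported here.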
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2015:1549