Bug 1311985 - force_metadata = True: VMs fail to receive metadata (no interface with the 169.254.169.254 IP in the qdhcp namespace)
Status: POST
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 7.0 (Kilo)
Hardware: x86_64 Linux
Priority: medium  Severity: medium
Target Milestone: ---
Target Release: 9.0 (Mitaka)
Assigned To: Assaf Muller
QA Contact: Toni Freger
Keywords: Triaged, ZStream
Depends On:
Blocks:
Reported: 2016-02-25 07:57 EST by Alexander Stafeyev
Modified: 2017-07-14 17:46 EDT (History)
CC: 7 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments


External Trackers
Tracker ID Priority Status Summary Last Updated
Launchpad 1549793 None None None 2016-02-25 07:57 EST
OpenStack gerrit 286345 None None None 2016-03-01 05:01 EST
OpenStack gerrit 305615 None None None 2016-06-04 16:49 EDT
OpenStack gerrit 336872 None None None 2017-01-16 15:38 EST

Description Alexander Stafeyev 2016-02-25 07:57:16 EST
Description of problem:
[root@overcloud-controller-0 ~]# cat /etc/neutron/dhcp_agent.ini | grep metadata | grep -v "#"
force_metadata = True
enable_isolated_metadata = False
enable_metadata_network = False

[stack@undercloud ~]$ neutron net-list
+--------------------------------------+---------+-----------------------------------------------------+
| id                                   | name    | subnets                                             |
+--------------------------------------+---------+-----------------------------------------------------+
| d7ebddcd-9989-4068-a8d9-66381e83d1f5 | int_net | 739b813d-4863-44e3-acd5-0bf6c3aaec76 192.168.3.0/24 |
+--------------------------------------+---------+-----------------------------------------------------+

[root@overcloud-controller-0 ~]# ip netns exec qdhcp-d7ebddcd-9989-4068-a8d9-66381e83d1f5 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
36: tap7002581e-a4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:3b:e9:ae brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.3/24 brd 192.168.3.255 scope global tap7002581e-a4
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe3b:e9ae/64 scope link
       valid_lft forever preferred_lft forever

We should have an interface in the qdhcp namespace with the 169.254.169.254 IP address for metadata when "force_metadata = True" is set in /etc/neutron/dhcp_agent.ini.

VMs are not receiving metadata in this scenario.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Set force_metadata = True in /etc/neutron/dhcp_agent.ini and restart the DHCP agent.
2. Create a network and subnet.
3. ip netns exec qdhcp-... ip a
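The check in step 3 can be sketched as a grep for the metadata address in the namespace's `ip a` output. The sketch below runs against the output captured in this report (the temp file path and messages are illustrative; on a live controller you would pipe `ip netns exec qdhcp-<net-id> ip a` straight into the grep):

```shell
# Sample `ip a` output from the qdhcp namespace, as captured in this report
cat <<'EOF' > /tmp/qdhcp_ip_a.txt
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    inet 127.0.0.1/8 scope host lo
36: tap7002581e-a4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    inet 192.168.3.3/24 brd 192.168.3.255 scope global tap7002581e-a4
EOF

# With force_metadata = True, an inet line carrying 169.254.169.254 is
# expected in this namespace; its absence reproduces the bug.
if grep -q 'inet 169.254.169.254' /tmp/qdhcp_ip_a.txt; then
    echo "metadata address present"
else
    echo "metadata address missing"   # what this report observes
fi
```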


Actual results:
No interface with the 169.254.169.254 IP address exists, and VMs fail to receive metadata.

Expected results:
An interface with the IP address 169.254.169.254 should exist so that VMs receive metadata.

Additional info:
[root@overcloud-controller-0 ~]# rpm -qa | grep neutron
openstack-neutron-bigswitch-lldp-2015.1.38-1.el7ost.noarch
openstack-neutron-ml2-2015.1.2-9.el7ost.noarch
python-neutronclient-2.4.0-2.el7ost.noarch
python-neutron-2015.1.2-9.el7ost.noarch
openstack-neutron-2015.1.2-9.el7ost.noarch
openstack-neutron-lbaas-2015.1.2-1.el7ost.noarch
python-neutron-lbaas-2015.1.2-1.el7ost.noarch
openstack-neutron-common-2015.1.2-9.el7ost.noarch
openstack-neutron-openvswitch-2015.1.2-9.el7ost.noarch
openstack-neutron-metering-agent-2015.1.2-9.el7ost.noarch

[root@overcloud-controller-0 ~]# rpm -qa | grep meta
yum-metadata-parser-1.1.4-10.el7.x86_64
Comment 2 Phil Sutter 2016-04-15 08:28:55 EDT
Hi,

I just tripped over this while playing with an OSP7 all-in-one setup. Contrary to what was suggested above, I think the IP address should be added to the qrouter namespace. My rationale:

- The metadata proxy runs in the qrouter namespace, not qdhcp:

# ip netns list
qdhcp-8e62c61b-bbb8-4d39-aaf3-c192345bfbed
qrouter-3054833f-f130-4d8f-ab3f-7ce54af27ec7
# ip netns pids qrouter-3054833f-f130-4d8f-ab3f-7ce54af27ec7
4247
# ps ax | grep 4247
 4247 ?        S      0:01 /usr/bin/python2 /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/3054833f-f130-4d8f-ab3f-7ce54af27ec7.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=3054833f-f130-4d8f-ab3f-7ce54af27ec7 --state_path=/var/lib/neutron --metadata_port=9697 --metadata_proxy_user=991 --metadata_proxy_group=988 --verbose --log-file=neutron-ns-metadata-proxy-3054833f-f130-4d8f-ab3f-7ce54af27ec7.log --log-dir=/var/log/neutron

- The qrouter namespace has the relevant iptables rule:

# ip netns exec qrouter-3054833f-f130-4d8f-ab3f-7ce54af27ec7 iptables -t nat -nL | grep 169
REDIRECT   tcp  --  0.0.0.0/0            169.254.169.254      tcp dpt:80 redir ports 9697

IIRC, I had a similar problem when playing with the same setup using packages from RDO when NetworkManager was still active.

Have you already analyzed the cause of this issue, and can you provide further information on how it will be solved?

Thanks, Phil
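The REDIRECT rule quoted in the comment above rewrites metadata traffic (destination 169.254.169.254, TCP port 80) to the proxy's local port 9697. A hypothetical reconstruction of the iptables invocation that would create such a rule (shown only as a printed command, since applying it requires root inside the qrouter namespace, and the `<router-id>` placeholder is illustrative):

```shell
# Reconstruction of the NAT rule from the listing above (not applied here).
RULE='-t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 -j REDIRECT --to-ports 9697'

# On a live controller this would run inside the router namespace:
echo "ip netns exec qrouter-<router-id> iptables ${RULE}"
```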
Comment 3 John Schwarz 2016-07-03 08:09:38 EDT
Please note that the upstream patch [1] that was written (not by me) to fix this bug was recently merged. I was waiting for it to merge before addressing this bug again, so whoever picks this up should be aware of said patch :)

John.

[1]: https://review.openstack.org/#/c/305615/
Comment 4 Assaf Muller 2017-01-16 15:38:10 EST
Patch 336872, a backport to Mitaka, was merged. It is not available in OSP 9 yet; it will be included in the next rebase.
