Bug 1003820

Summary: Quantum DHCP fails to provide a private IP for a starting VM (OpenStack installed via RDO on F19)
Product: Fedora
Reporter: Boris Derzhavets <bderzhavets>
Component: openstack-quantum
Assignee: lpeer <lpeer>
Status: CLOSED EOL
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: high
Priority: unspecified
Version: 19
CC: apevec, bderzhavets, breu, chrisw, gkotton, itamar, Jan.van.Eldik, jose.castro.leon, lpeer, markmc, maurizio.antillon, mmagr, twilson
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Doc Type: Bug Fix
Type: Bug
Last Closed: 2015-02-17 17:02:18 UTC
Attachments: /var/log/quantum/dhcp-agent.log

Description Boris Derzhavets 2013-09-03 09:47:16 UTC
Description of problem:

2013-09-03 11:58:00     INFO [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP server on 192.168.1.52:5672
2013-09-03 11:58:00     INFO [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP server on 192.168.1.52:5672
2013-09-03 11:58:00     INFO [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP server on 192.168.1.52:5672
2013-09-03 11:58:00     INFO [quantum.agent.dhcp_agent] DHCP agent started
2013-09-03 11:59:00    ERROR [quantum.openstack.common.rpc.amqp] Timed out waiting for RPC response.
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/quantum/openstack/common/rpc/amqp.py", line 495, in __iter__
    data = self._dataqueue.get(timeout=self._timeout)
  File "/usr/lib/python2.7/site-packages/eventlet/queue.py", line 298, in get
    return waiter.wait()
  File "/usr/lib/python2.7/site-packages/eventlet/queue.py", line 129, in wait
    return get_hub().switch()
  File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 187, in switch
    return self.greenlet.switch()
Empty
2013-09-03 11:59:00    ERROR [quantum.agent.dhcp_agent] Failed reporting state!
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/quantum/agent/dhcp_agent.py", line 702, in _report_state
    self.agent_state)
  File "/usr/lib/python2.7/site-packages/quantum/agent/rpc.py", line 66, in report_state
    topic=self.topic)
  File "/usr/lib/python2.7/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call
    return rpc.call(context, self._get_topic(topic), msg, timeout)
  File "/usr/lib/python2.7/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call
    return _get_impl().call(CONF, context, topic, msg, timeout)
  File "/usr/lib/python2.7/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call
    rpc_amqp.get_connection_pool(conf, Connection))
  File "/usr/lib/python2.7/site-packages/quantum/openstack/common/rpc/amqp.py", line 614, in call
    rv = list(rv)
  File "/usr/lib/python2.7/site-packages/quantum/openstack/common/rpc/amqp.py", line 500, in __iter__
    raise rpc_common.Timeout()
Timeout: Timeout while waiting on RPC response.
2013-09-03 11:59:00  WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 56.13429 sec
2013-09-03 11:59:00     INFO [quantum.agent.dhcp_agent] Synchronizing state
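The traceback above is the standard eventlet reply-wait path: the DHCP agent's report_state RPC call blocks on a reply queue, and when no reply arrives within the timeout, the underlying queue's Empty is surfaced as an RPC Timeout. A minimal self-contained sketch of that pattern (the names here are illustrative stand-ins, not the actual quantum code):

```python
import queue


class Timeout(Exception):
    """Stand-in for quantum's rpc_common.Timeout."""


def wait_for_reply(reply_queue, timeout):
    """Block on the reply queue; map a queue timeout to an RPC Timeout.

    Mirrors the amqp.py __iter__ logic in the traceback: get() with a
    timeout, and re-raise queue.Empty as the RPC-level Timeout.
    """
    try:
        return reply_queue.get(timeout=timeout)
    except queue.Empty:
        raise Timeout("Timed out waiting for RPC response.")


# No reply ever arrives -- as when qpidd silently drops the call.
replies = queue.Queue()
try:
    wait_for_reply(replies, timeout=0.1)
except Timeout as exc:
    print(exc)  # -> Timed out waiting for RPC response.
```

This is why the log shows both the low-level "Timed out waiting for RPC response." and the agent-level "Failed reporting state!": the same missing AMQP reply propagates up through both layers.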

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. sudo yum update
2. sudo yum install -y http://rdo.fedorapeople.org/openstack/openstack-grizzly/rdo-release-grizzly.rpm
   sudo yum install -y openstack-packstack
   packstack --allinone
3. Recreate the external and private networks via the command line.
4. Create a router with an external gateway interface and an internal interface to the private network.
5. Attempt to start an F19 VM. The network service fails.
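Steps 3 and 4 above can be sketched with the Grizzly-era quantum CLI. The network names and CIDRs below are illustrative assumptions, not values taken from this report:

```shell
# Illustrative recreation of the networks (names/CIDRs are assumptions)
source /root/keystonerc_admin

# External (provider) network and its subnet, DHCP disabled
quantum net-create ext-net --router:external=True
quantum subnet-create ext-net 172.24.4.224/28 --disable-dhcp --name ext-subnet

# Private tenant network with DHCP enabled (the default)
quantum net-create private
quantum subnet-create private 10.0.0.0/24 --name private-subnet

# Router with an external gateway and an interface on the private subnet
quantum router-create router1
quantum router-gateway-set router1 ext-net
quantum router-interface-add router1 private-subnet
```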

Actual results:

The VM runs, but its network is down: no private IP is assigned.

Expected results:

The VM obtains a private IP via the quantum dhcp-agent.

Additional info:
[root@localhost ~(keystone_admin)]# source /root/keystonerc_admin
[root@localhost ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api:           active
openstack-nova-cert:          active
openstack-nova-compute:       active
openstack-nova-network:       inactive (disabled on boot)
openstack-nova-scheduler:     active
openstack-nova-volume:        inactive (disabled on boot)
openstack-nova-conductor:     active
== Glance services ==
openstack-glance-api:         active
openstack-glance-registry:    active
== Keystone service ==
openstack-keystone:           active
== Horizon service ==
openstack-dashboard:          active
== Quantum services ==
quantum-server:               active
quantum-dhcp-agent:           active
quantum-l3-agent:             active
quantum-linuxbridge-agent:    inactive (disabled on boot)
quantum-openvswitch-agent:    active
openvswitch:                  active
== Swift services ==
openstack-swift-proxy:        active
openstack-swift-account:      active
openstack-swift-container:    active
openstack-swift-object:       active
== Cinder services ==
openstack-cinder-api:         active
openstack-cinder-scheduler:   active
openstack-cinder-volume:      active
== Support services ==
mysqld:                       active
httpd:                        active
libvirtd:                     active
tgtd:                         active
qpidd:                        active
memcached:                    active
== Keystone users ==
+----------------------------------+----------+---------+-------------------+
|                id                |   name   | enabled |       email       |
+----------------------------------+----------+---------+-------------------+
| 122c102187b64a83aeeb72d98d2f1381 |  admin   |   True  |   test   |
| 76ceb3b842ba4a1983b8be6117c585df | alt_demo |   True  |                   |
| ccd3c3aab39946f296a0e59f8f306cb6 |  cinder  |   True  |  cinder@localhost |
| 5b8b797d9a7d402fbc5fe57401194f14 |   demo   |   True  |                   |
| b8278ed069294627b48da5b3777f1f81 |  glance  |   True  |  glance@localhost |
| 7c11994e57cf46de9486d622ba6b1b5a |   nova   |   True  |   nova@localhost  |
| 51fe985f09154284bb08100de9a45f13 | quantum  |   True  | quantum@localhost |
| b13549cb8b67423c8ab12a9d0fc500d8 |  swift   |   True  |  swift@localhost  |
+----------------------------------+----------+---------+-------------------+
The install was interrupted at:

[boris@localhost ~]$ sudo packstack --answer-file=/home/boris/packstack-answers-20130903-114312.txt
Welcome to Installer setup utility
Packstack changed given value  to required value /root/.ssh/id_rsa.pub

Installing:
Clean Up...                                            [ DONE ]
Adding pre install manifest entries...                 [ DONE ]
Setting up ssh keys...                                 [ DONE ]
Adding MySQL manifest entries...                       [ DONE ]
Adding QPID manifest entries...                        [ DONE ]
Adding Keystone manifest entries...                    [ DONE ]
Adding Glance Keystone manifest entries...             [ DONE ]
Adding Glance manifest entries...                      [ DONE ]
Adding Cinder Keystone manifest entries...             [ DONE ]
Installing dependencies for Cinder...                  [ DONE ]
Checking if the Cinder server has a cinder-volumes vg...[ DONE ]
Adding Cinder manifest entries...                      [ DONE ]
Adding Nova API manifest entries...                    [ DONE ]
Adding Nova Keystone manifest entries...               [ DONE ]
Adding Nova Cert manifest entries...                   [ DONE ]
Adding Nova Conductor manifest entries...              [ DONE ]
Adding Nova Compute manifest entries...                [ DONE ]
Adding Nova Scheduler manifest entries...              [ DONE ]
Adding Nova VNC Proxy manifest entries...              [ DONE ]
Adding Nova Common manifest entries...                 [ DONE ]
Adding Openstack Network-related Nova manifest entries...[ DONE ]
Adding Quantum API manifest entries...                 [ DONE ]
Adding Quantum Keystone manifest entries...            [ DONE ]
Adding Quantum L3 manifest entries...                  [ DONE ]
Adding Quantum L2 Agent manifest entries...            [ DONE ]
Adding Quantum DHCP Agent manifest entries...          [ DONE ]
Adding Quantum Metadata Agent manifest entries...      [ DONE ]
Adding OpenStack Client manifest entries...            [ DONE ]
Adding Horizon manifest entries...                     [ DONE ]
Adding Swift Keystone manifest entries...              [ DONE ]
Adding Swift builder manifest entries...               [ DONE ]
Adding Swift proxy manifest entries...                 [ DONE ]
Adding Swift storage manifest entries...               [ DONE ]
Adding Swift common manifest entries...                [ DONE ]
Adding Provisioning manifest entries...                [ DONE ]
Preparing servers...                                   [ DONE ]
Adding Nagios server manifest entries...               [ DONE ]
Adding Nagios host manifest entries...                 [ DONE ]
Adding post install manifest entries...                [ DONE ]
Installing Dependencies...                             [ DONE ]
Copying Puppet modules and manifests...                [ DONE ]
Applying Puppet manifests...
Applying 192.168.1.52_prescript.pp
192.168.1.52_prescript.pp :                                          [ DONE ]
Applying 192.168.1.52_mysql.pp
Applying 192.168.1.52_qpid.pp
192.168.1.52_mysql.pp :                                              [ DONE ]
192.168.1.52_qpid.pp :                                               [ DONE ]
Applying 192.168.1.52_keystone.pp
Applying 192.168.1.52_glance.pp
Applying 192.168.1.52_cinder.pp
192.168.1.52_keystone.pp :                                           [ DONE ]
192.168.1.52_glance.pp :                                             [ DONE ]
192.168.1.52_cinder.pp :                                             [ DONE ]
Applying 192.168.1.52_api_nova.pp
192.168.1.52_api_nova.pp :                                           [ DONE ]
Applying 192.168.1.52_nova.pp
192.168.1.52_nova.pp :                                               [ DONE ]
Applying 192.168.1.52_quantum.pp
192.168.1.52_quantum.pp :                                            [ DONE ]
Applying 192.168.1.52_osclient.pp
Applying 192.168.1.52_horizon.pp
192.168.1.52_osclient.pp :                                           [ DONE ]
192.168.1.52_horizon.pp :                                            [ DONE ]
Applying 192.168.1.52_ring_swift.pp
192.168.1.52_ring_swift.pp :                                         [ DONE ]
Applying 192.168.1.52_swift.pp
Applying 192.168.1.52_provision.pp
Applying 192.168.1.52_nagios.pp
Applying 192.168.1.52_nagios_nrpe.pp
192.168.1.52_swift.pp :                                              [ DONE ]
192.168.1.52_provision.pp :                                          [ DONE ]
                                                                                               [ ERROR ]

ERROR : Error during puppet run : Error: Could not start Service[nagios]: Execution of '/sbin/service nagios start' returned 1: 
Please check log file /var/tmp/packstack/20130903-121154-PooWqA/openstack-setup.log for more information

Comment 1 Boris Derzhavets 2013-09-03 09:58:16 UTC
Created attachment 793096 [details]
/var/log/quantum/dhcp-agent.log

Comment 2 Boris Derzhavets 2013-09-03 10:57:51 UTC
Version-Release number of selected component (if applicable):

[boris@localhost ~]$ rpm -qa|grep openstack-quantum
openstack-quantum-2013.1.2-2.fc19.noarch
openstack-quantum-openvswitch-2013.1.2-2.fc19.noarch

Comment 3 Boris Derzhavets 2013-09-11 15:53:06 UTC
A subsequent install from scratch, per http://openstack.redhat.com/Neutron-Quickstart, appears to have fixed the issue on F19.

[boris@localhost Downloads]$ rpm -qa|grep openstack-quantum
openstack-quantum-openvswitch-2013.1.3-1.fc19.noarch
openstack-quantum-2013.1.3-1.fc19.noarch


Inside the F19 instance:

[boris@localhost Downloads]$ ssh -l fedora -i key2.pem 172.24.4.228
Last login: Wed Sep 11 15:37:30 2013 from 172.24.4.225
[fedora@vf19s ~]$ sudo su -
[root@vf19s ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.2  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::f816:3eff:fecb:db7d  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:cb:db:7d  txqueuelen 1000  (Ethernet)
        RX packets 642  bytes 63194 (61.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 553  bytes 62331 (60.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@vf19s ~]# ping 172.24.4.225 
PING 172.24.4.225 (172.24.4.225) 56(84) bytes of data.
64 bytes from 172.24.4.225: icmp_seq=1 ttl=63 time=0.289 ms
64 bytes from 172.24.4.225: icmp_seq=2 ttl=63 time=0.133 ms
64 bytes from 172.24.4.225: icmp_seq=3 ttl=63 time=0.133 ms
64 bytes from 172.24.4.225: icmp_seq=4 ttl=63 time=0.139 ms
64 bytes from 172.24.4.225: icmp_seq=5 ttl=63 time=0.155 ms
64 bytes from 172.24.4.225: icmp_seq=6 ttl=63 time=0.089 ms
64 bytes from 172.24.4.225: icmp_seq=7 ttl=63 time=0.132 ms
64 bytes from 172.24.4.225: icmp_seq=8 ttl=63 time=0.135 ms
64 bytes from 172.24.4.225: icmp_seq=9 ttl=63 time=0.168 ms
64 bytes from 172.24.4.225: icmp_seq=10 ttl=63 time=0.141 ms
^C
--- 172.24.4.225 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9000ms
rtt min/avg/max/mdev = 0.089/0.151/0.289/0.050 ms

Comment 5 Fedora End Of Life 2015-01-09 19:42:06 UTC
This message is a notice that Fedora 19 is now at end of life. Fedora 
has stopped maintaining and issuing updates for Fedora 19. It is 
Fedora's policy to close all bug reports from releases that are no 
longer maintained. Approximately 4 (four) weeks from now this bug will
be closed as EOL if it remains open with a Fedora 'version' of '19'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not 
able to fix it before Fedora 19 reached end of life. If you would still like 
to see this bug fixed and are able to reproduce it against a later version 
of Fedora, you are encouraged to change the 'version' to a later Fedora 
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 6 Fedora End Of Life 2015-02-17 17:02:18 UTC
Fedora 19 changed to end-of-life (EOL) status on 2015-01-06. Fedora 19 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.