Bug 998388 - Instance stays in Spawning state forever
Status: CLOSED INSUFFICIENT_DATA
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-packstack
Version: unspecified
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.0
Assigned To: Martin Magr
QA Contact: Nir Magnezi
Reported: 2013-08-19 04:16 EDT by Hangbin Liu
Modified: 2013-09-01 11:52 EDT
CC: 5 users

Doc Type: Bug Fix
Last Closed: 2013-09-01 11:52:57 EDT
Type: Bug


Attachments
packstack.cfg (10.25 KB, text/plain)
2013-08-19 04:16 EDT, Hangbin Liu
Description Hangbin Liu 2013-08-19 04:16:26 EDT
Created attachment 787926
packstack.cfg

Description of problem:
After setting up an OpenStack environment with packstack and launching an instance, the instance stays in the Spawning state and never finishes building.

Version-Release number of selected component (if applicable):
openstack-packstack-2013.1.1-0.27.dev660.el6ost.noarch

How reproducible:
every time

Steps to Reproduce:
1. packstack --answer-file=packstack.cfg
2. source keystonerc_admin
3. glance image-create --name RHEL6.4 --is-public true --disk-format qcow2 --container-format bare --file /tmp/images/RHEL-Server-6.4-64-virtio.qcow2
4. quantum net-create Network_Vlan_10 --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 10 --shared
5. quantum subnet-create --gateway 192.168.10.254 --allocation-pool start=192.168.10.2,end=192.168.10.253 Network_Vlan_10 192.168.10.0/24
6. nova keypair-add --pub-key ~/.ssh/network_rsa.pub network-qe
7. nova boot --flavor 1 --key_name network-qe --image e207fdd9-e7a8-4747-9d18-ecaf780f7bd7 Test_1 --nic net-id=1a4188cd-4892-4171-b836-47d023beaab5
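Rather than waiting on the hung boot indefinitely, the instance can be watched with a bounded poll. Below is a minimal sketch; `get_status` is a hypothetical helper standing in for a status query (with the Grizzly-era CLI it would be something like `nova show "$1" | awk '/ status /{print $4}'`):

```shell
# Sketch: poll an instance until it leaves BUILD or a timeout expires,
# so a hung spawn is reported instead of waiting forever.
# get_status is a hypothetical helper; with the nova CLI it would be
# roughly: nova show "$1" | awk '/ status /{print $4}'
wait_for_boot() {
    instance="$1"
    timeout="${2:-300}"   # seconds to wait before giving up
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        status=$(get_status "$instance")
        if [ "$status" != "BUILD" ]; then
            # Instance left BUILD (e.g. ACTIVE or ERROR); report and stop.
            echo "$status"
            return 0
        fi
        sleep 5
        elapsed=$((elapsed + 5))
    done
    echo "TIMEOUT"
    return 1
}
```

A timeout like this would have surfaced the stuck-in-Spawning condition reported here as an explicit failure rather than an indefinite wait.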

Actual results:
The instance stays in the Spawning state forever.

Expected results:
The instance builds successfully.


Additional info:

# rpm -qa | grep openstack
openstack-nova-common-2013.1.3-1.el6ost.noarch
openstack-nova-console-2013.1.3-1.el6ost.noarch
python-django-openstack-auth-1.0.6-2.el6ost.noarch
openstack-glance-2013.1.3-1.el6ost.noarch
openstack-nova-novncproxy-0.4-6.el6ost.noarch
openstack-packstack-2013.1.1-0.27.dev660.el6ost.noarch
openstack-utils-2013.1-8.1.el6ost.noarch
kernel-2.6.32-358.118.1.openstack.el6.x86_64
openstack-nova-conductor-2013.1.3-1.el6ost.noarch
openstack-quantum-2013.1.3-1.el6ost.noarch
openstack-dashboard-2013.1.3-1.el6ost.noarch
openstack-nova-scheduler-2013.1.3-1.el6ost.noarch
openstack-quantum-openvswitch-2013.1.3-1.el6ost.noarch
openstack-selinux-0.1.2-10.el6ost.noarch
openstack-keystone-2013.1.3-1.el6ost.noarch
openstack-nova-compute-2013.1.3-1.el6ost.noarch
openstack-cinder-2013.1.3-2.el6ost.noarch
openstack-nova-api-2013.1.3-1.el6ost.noarch
openstack-nova-cert-2013.1.3-1.el6ost.noarch

# ovs-vsctl show
d153bd29-59a9-43c0-9881-ea3e85b2efa1
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "tap199d24f2-a1"
            tag: 1
            Interface "tap199d24f2-a1"
        Port "int-br-link1"
            Interface "int-br-link1"
    Bridge "br-link1"
        Port "eth4"
            Interface "eth4"
        Port "br-link1"
            Interface "br-link1"
                type: internal
        Port "phy-br-link1"
            Interface "phy-br-link1"
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "1.9.0"


# ls /var/lib/nova/images/ | wc -l
# ls /var/lib/nova/instances/8701a4db-474e-46f8-b2b1-8ae69b52abc1/
console.log  disk

# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 2c:76:8a:53:e2:30 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 2c:76:8a:53:e2:31 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 2c:76:8a:53:e2:32 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 2c:76:8a:53:e2:33 brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:10:18:e4:10:e4 brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:10:18:e4:10:e5 brd ff:ff:ff:ff:ff:ff
8: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:a4:28:6e brd ff:ff:ff:ff:ff:ff
9: virbr0-nic: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether 52:54:00:a4:28:6e brd ff:ff:ff:ff:ff:ff
11: br-link1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:10:18:e4:10:e4 brd ff:ff:ff:ff:ff:ff
12: br-int: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether e6:17:4a:30:ab:4e brd ff:ff:ff:ff:ff:ff
13: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 86:8a:a3:c7:ff:4d brd ff:ff:ff:ff:ff:ff
14: phy-br-link1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 92:71:27:f2:0b:57 brd ff:ff:ff:ff:ff:ff
15: int-br-link1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 9e:2f:1c:c6:4c:88 brd ff:ff:ff:ff:ff:ff
18: tap7edd6363-d9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 2e:e2:53:1f:e2:78 brd ff:ff:ff:ff:ff:ff

# openstack-status
== Nova services ==
openstack-nova-api:           active
openstack-nova-cert:          active
openstack-nova-compute:       active
openstack-nova-network:       dead (disabled on boot)
openstack-nova-scheduler:     active
openstack-nova-volume:        dead (disabled on boot)
openstack-nova-conductor:     active
== Glance services ==
openstack-glance-api:         active
openstack-glance-registry:    active
== Keystone service ==
openstack-keystone:           active
== Horizon service ==
openstack-dashboard:          active
== Quantum services ==
quantum-server:               active
quantum-dhcp-agent:           active
quantum-l3-agent:             active
quantum-linuxbridge-agent:    dead (disabled on boot)
quantum-openvswitch-agent:    active
openvswitch:                  active
== Cinder services ==
openstack-cinder-api:         active
openstack-cinder-scheduler:   active
openstack-cinder-volume:      active
== Support services ==
mysqld:                       active
libvirtd:                     active
tgtd:                         active
qpidd:                        active
memcached:                    active
== Keystone users ==
+----------------------------------+---------+---------+-------------------+
|                id                |   name  | enabled |       email       |
+----------------------------------+---------+---------+-------------------+
| 9aaeaf36e00643d0ba4da848aecb4dbd |  admin  |   True  |   test@test.com   |
| 5c82735001534a748534ceafb4b08989 |  cinder |   True  |  cinder@localhost |
| 824322fe97dd4465876961546a4b220d |  glance |   True  |  glance@localhost |
| 0650c90f77924cafa1d8dea20e6603ca |   nova  |   True  |   nova@localhost  |
| 9f9278b32c5c4ef6b19278481cefbe99 | quantum |   True  | quantum@localhost |
+----------------------------------+---------+---------+-------------------+
== Glance images ==
ID                                   Name                           Disk Format          Container Format     Size
------------------------------------ ------------------------------ -------------------- -------------------- --------------
59fa243e-45eb-420f-ba26-bfb27ca30559 RHEL6.4                        qcow2                bare                     3636002816
== Nova instance flavors ==
m1.medium: Memory: 4096MB, VCPUS: 2, Root: 40GB, Ephemeral: 0Gb, FlavorID: 3, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.large: Memory: 8192MB, VCPUS: 4, Root: 80GB, Ephemeral: 0Gb, FlavorID: 4, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.tiny: Memory: 512MB, VCPUS: 1, Root: 0GB, Ephemeral: 0Gb, FlavorID: 1, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.xlarge: Memory: 16384MB, VCPUS: 8, Root: 160GB, Ephemeral: 0Gb, FlavorID: 5, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
m1.small: Memory: 2048MB, VCPUS: 1, Root: 20GB, Ephemeral: 0Gb, FlavorID: 2, Swap: 0MB, RXTX Factor: 1.0, public, ExtraSpecs {}
== Nova instances ==
+--------------------------------------+--------+--------+------------------------------+
| ID                                   | Name   | Status | Networks                     |
+--------------------------------------+--------+--------+------------------------------+
| 8701a4db-474e-46f8-b2b1-8ae69b52abc1 | Test_1 | BUILD  | Network_Vlan_10=192.168.10.2 |
+--------------------------------------+--------+--------+------------------------------+
Comment 3 Martin Magr 2013-08-28 11:25:32 EDT
I used your answer file and the same packstack version and followed the same steps, but I cannot reproduce it. What's in your /var/log/nova/compute.log?
Comment 4 Hangbin Liu 2013-08-30 12:22:48 EDT
(In reply to Martin Magr from comment #3)
> I used your answer file and the same packstack version and followed the
> same steps, but I cannot reproduce it. What's in your /var/log/nova/compute.log?

Since my testing machine had been reinstalled, I tried again on a new host but couldn't reproduce it. I suspect some packages were not new enough last time. I will retest this weekend; if everything works fine, we can close this bug.

Thanks
Hangbin Liu
Comment 5 Hangbin Liu 2013-09-01 11:52:57 EDT
I retested it but still couldn't reproduce it :( Maybe I didn't update some utilities last time, but I'm not sure. Closing this bug as insufficient data. Sorry for the noise.
