Bug 974156 - Fail scheduling network
Status: CLOSED EOL
Product: Fedora EPEL
Classification: Fedora
Component: openstack-quantum
Version: el6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: lpeer
QA Contact: Fedora Extras Quality Assurance
Reported: 2013-06-13 10:25 EDT by Mahmoud Alkelany
Modified: 2017-08-15 14:05 EDT
CC: 9 users

Doc Type: Bug Fix
Type: Bug
Last Closed: 2017-08-15 14:05:10 EDT

Attachments: None
Description Mahmoud Alkelany 2013-06-13 10:25:58 EDT
Description of problem:


I have a Grizzly setup (EPEL 2013.1-3.el6.noarch, CentOS 6.4) with one controller and four compute nodes; the controller also provides networking. I followed the documentation to create the VLAN OVS bridge br-int on all the boxes. Instances spawn successfully, but no network is configured inside the instance, even though the dashboard shows an IP as allocated.
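
For reference, the usual per-node Grizzly OVS VLAN wiring that the docs describe looks roughly like the sketch below. Only br-int and the priv_net physical network label appear in this report; the second bridge (br-priv), the eth1 interface, and the VLAN range are placeholders:

  # run on every node (controller and computes); br-priv and eth1 are placeholder names
  ovs-vsctl add-br br-int            # integration bridge used by the OVS agent
  ovs-vsctl add-br br-priv           # bridge mapped to the physical network
  ovs-vsctl add-port br-priv eth1    # NIC carrying the tenant VLANs

and something like the following in /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini:

  [OVS]
  tenant_network_type = vlan
  network_vlan_ranges = priv_net:1:100     # range is a placeholder
  bridge_mappings = priv_net:br-priv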



Steps to Reproduce:
1. Launch an instance from the dashboard.

2. The instance spawns successfully; here is the relevant nova-compute.log (lines are cut off at the terminal width, hence the trailing $):
2013-06-13 14:37:20.477 AUDIT nova.compute.manager [req-5867836d-a2e2-4de9-9d79-6e89f8c15c63 1474ab8fb66745ee91eb792e560947e6 7f01e5fecd0e47d58084136469b1e1fa] [instan$
2013-06-13 14:37:29.879 AUDIT nova.compute.claims [req-5867836d-a2e2-4de9-9d79-6e89f8c15c63 1474ab8fb66745ee91eb792e560947e6 7f01e5fecd0e47d58084136469b1e1fa] [instanc$
2013-06-13 14:37:29.879 AUDIT nova.compute.claims [req-5867836d-a2e2-4de9-9d79-6e89f8c15c63 1474ab8fb66745ee91eb792e560947e6 7f01e5fecd0e47d58084136469b1e1fa] [instanc$
2013-06-13 14:37:29.880 AUDIT nova.compute.claims [req-5867836d-a2e2-4de9-9d79-6e89f8c15c63 1474ab8fb66745ee91eb792e560947e6 7f01e5fecd0e47d58084136469b1e1fa] [instanc$
2013-06-13 14:37:29.880 AUDIT nova.compute.claims [req-5867836d-a2e2-4de9-9d79-6e89f8c15c63 1474ab8fb66745ee91eb792e560947e6 7f01e5fecd0e47d58084136469b1e1fa] [instanc$
2013-06-13 14:37:29.880 AUDIT nova.compute.claims [req-5867836d-a2e2-4de9-9d79-6e89f8c15c63 1474ab8fb66745ee91eb792e560947e6 7f01e5fecd0e47d58084136469b1e1fa] [instanc$
2013-06-13 14:37:29.881 AUDIT nova.compute.claims [req-5867836d-a2e2-4de9-9d79-6e89f8c15c63 1474ab8fb66745ee91eb792e560947e6 7f01e5fecd0e47d58084136469b1e1fa] [instanc$
2013-06-13 14:37:29.881 AUDIT nova.compute.claims [req-5867836d-a2e2-4de9-9d79-6e89f8c15c63 1474ab8fb66745ee91eb792e560947e6 7f01e5fecd0e47d58084136469b1e1fa] [instanc$
2013-06-13 14:37:29.882 AUDIT nova.compute.claims [req-5867836d-a2e2-4de9-9d79-6e89f8c15c63 1474ab8fb66745ee91eb792e560947e6 7f01e5fecd0e47d58084136469b1e1fa] [instanc$
2013-06-13 14:38:05.670 INFO nova.virt.libvirt.driver [req-5867836d-a2e2-4de9-9d79-6e89f8c15c63 1474ab8fb66745ee91eb792e560947e6 7f01e5fecd0e47d58084136469b1e1fa] [ins$
2013-06-13 14:38:06.151 INFO nova.virt.libvirt.driver [req-5867836d-a2e2-4de9-9d79-6e89f8c15c63 1474ab8fb66745ee91eb792e560947e6 7f01e5fecd0e47d58084136469b1e1fa] [ins$
2013-06-13 14:38:22.895 10090 INFO nova.compute.manager [-] Lifecycle event 0 on VM 353efd97-5acf-41a0-81ef-7394b06835d4
2013-06-13 14:38:24.470 10090 INFO nova.virt.libvirt.driver [-] [instance: 353efd97-5acf-41a0-81ef-7394b06835d4] Instance spawned successfully.

3. No network configuration appears on the virtual machine, and the only related error in the quantum log is:
2013-06-13 14:37:44  WARNING [quantum.db.agentschedulers_db] Fail scheduling network {'status': u'ACTIVE', 'subnets': [u'515bf1a4-9f9a-40cd-b6d0-cbe70557be00'], 'name': u'priv_net', 'provider:physical_network': u'priv_net', 'admin_state_up': True, 'tenant_id': u'7f01e5fecd0e47d58084136469b1e1fa', 'provider:network_type': u'vlan', 'router:external': False, 'shared': True, 'id': u'0a4c56bb-e5a9-479e-af40-8df4db4fac80', 'provider:segmentation_id': 1L}
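
For what it's worth, this warning is logged by the agentschedulers_db module that handles DHCP agent scheduling, and it usually indicates that no alive DHCP agent was available to host the network, which would explain why the VM never gets its address configured. A minimal check, assuming the stock EL6 service names:

  quantum agent-list                          # DHCP and Open vSwitch agents should show alive ":-)"
  service quantum-dhcp-agent status           # on the controller / network node
  service quantum-openvswitch-agent status    # on every node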

Thanks.
