Bug 1294085 - Creating an instance on RDO overcloud, errors out
Status: CLOSED WORKSFORME
Product: RDO
Classification: Community
Component: rdo-manager
Version: Liberty
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: Kilo
Assigned To: Hugh Brock
QA Contact: yeylon@redhat.com
Keywords: Automation, AutomationBlocker
Depends On:
Blocks:
Reported: 2015-12-24 09:33 EST by Ronelle Landy
Modified: 2016-04-18 03:12 EDT
CC: 5 users

Last Closed: 2016-02-24 15:32:19 EST
Doc Type: Bug Fix
Type: Bug

Attachments: None
Description Ronelle Landy 2015-12-24 09:33:18 EST
Description of problem:

Found while running automated jobs; see the job log at:
https://ci.centos.org/view/rdo/job/rdo_manager-liberty-scale_out-feature-minimal_scale_compute/48/consoleFull

RDO is installed on CentOS 7.0 and an overcloud with one Controller, one Compute and one Ceph node is deployed.
Tempest smoke is run.
Then the validate test is run and it fails when creating a test instance on the overcloud. The operation times out:

****************

22:26:22 failed: [undercloud] => {"failed": true}
22:26:22 msg: Timeouted when waiting for the server to come up:
22:26:22 {u'OS-EXT-STS:task_state': u'spawning', u'addresses': {u'private_2vm9f-rhos-ci-48-rmgr-rdo-': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:c9:d2:8e', u'version': 4, u'addr': u'xxxx', u'OS-EXT-IPS:type': u'fixed'}]}, u'links': [{u'href': u'http://192.0.2.6:8774/v2/6818120743584c48a10bc8e67af23945/servers/77007813-78d1-46ba-bbf7-41f11f873313', u'rel': u'self'}, {u'href': u'http://192.0.2.6:8774/6818120743584c48a10bc8e67af23945/servers/77007813-78d1-46ba-bbf7-41f11f873313', u'rel': u'bookmark'}], u'image': {u'id': u'89a4ac09-251d-4b36-8b1b-cefd9daf0c6e', u'links': [{u'href': u'http://192.0.2.6:8774/6818120743584c48a10bc8e67af23945/images/89a4ac09-251d-4b36-8b1b-cefd9daf0c6e', u'rel': u'bookmark'}]}, u'OS-EXT-STS:vm_state': u'building', u'OS-EXT-SRV-ATTR:instance_name': u'instance-0000000d', u'OS-SRV-USG:launched_at': None, u'flavor': {u'id': u'2', u'links': [{u'href': u'http://192.0.2.6:8774/6818120743584c48a10bc8e67af23945/flavors/2', u'rel': u'bookmark'}]}, u'id': u'77007813-78d1-46ba-bbf7-41f11f873313', u'security_groups': [{u'name': u'default'}], u'user_id': u'8f3a03cfe3da4b8bba704c7a1789ca73', u'OS-DCF:diskConfig': u'MANUAL', u'accessIPv4': u'', u'accessIPv6': u'', u'progress': 0, u'OS-EXT-STS:power_state': 0, u'OS-EXT-AZ:availability_zone': u'nova', u'config_drive': u'', u'status': u'BUILD', u'updated': u'2015-12-23T22:21:23Z', u'hostId': u'8d68a2745fd721df82cab355509bd2732f3b902b1a9d94e7e906aa8d', u'OS-EXT-SRV-ATTR:host': u'overcloud-novacompute-0', u'OS-SRV-USG:terminated_at': None, u'key_name': u'instance-key-2vm9f-rhos-ci-48-rmgr-rdo-', u'OS-EXT-SRV-ATTR:hypervisor_hostname': u'overcloud-novacompute-0.localdomain', u'name': u'khaleesi', u'created': u'2015-12-23T22:21:22Z', u'tenant_id': u'6818120743584c48a10bc8e67af23945', u'os-extended-volumes:volumes_attached': [], u'metadata': {}}

****************

The suspicion is that the issue is not related to Tempest.
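
For triage, the boot-and-wait check that times out here can be approximated
outside of the CI job with python-novaclient. A minimal sketch (the
credentials, endpoint, image, flavor and timeout below are placeholders,
not the values the job actually uses):

# Sketch only: approximates the validate task's boot-and-wait check.
import time
from novaclient import client

# Placeholder credentials/endpoint for the overcloud (192.0.2.6 is the
# endpoint host seen in the server links above).
nova = client.Client('2', 'admin', 'PASSWORD', 'admin',
                     'http://192.0.2.6:5000/v2.0')

server = nova.servers.create(
    name='validate-test',
    image=nova.images.find(name='cirros'),      # placeholder image
    flavor=nova.flavors.find(name='m1.small'),  # placeholder flavor
)

deadline = time.time() + 300
while time.time() < deadline:
    server = nova.servers.get(server.id)
    if server.status == 'ACTIVE':
        break
    if server.status == 'ERROR':
        raise RuntimeError('instance went to ERROR')
    time.sleep(5)
else:
    # This is the state reported above: status stays BUILD and
    # OS-EXT-STS:task_state stays 'spawning' until the wait gives up.
    raise RuntimeError('Timed out waiting for the server to come up')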

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Install RDO on CentOS 7.0
2. Deploy the overcloud
3. Run the validate test or create an instance

Actual results:

Creating the instance times out (stack trace to be added)

Expected results:

Instance should be created

Additional info:

Note the job name mentions 'scale out', but the validate test fails before any scaling happens.
Comment 1 mathieu bultel 2015-12-24 10:40:11 EST
The validate Ansible tasks also failed right after the RDO deployment.
Unable to boot an instance on rdo-manager.
The compute log contains these errors:

d: 6f381def-1f6a-4c25-bd01-d62a1864a5f0, exception: Requested operation is not valid: cpu affinity is not supported
2015-12-23 18:07:20.553 7218 WARNING nova.virt.libvirt.driver [req-962fb69e-779b-4e49-b320-ecc57d3ec4dc - - - - -] couldn't obtain the vpu count from domain id: 6f381def-1f6a-4c25-bd01-d62a1864a5f0, exception: Requested operation is not valid: cpu affinity is not supported
2015-12-23 18:08:21.390 7218 WARNING nova.virt.libvirt.driver [req-962fb69e-779b-4e49-b320-ecc57d3ec4dc - - - - -] couldn't obtain the vpu count from domain id: 6f381def-1f6a-4c25-bd01-d62a1864a5f0, exception: Requested operation is not valid: cpu affinity is not supported
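
These warnings come from nova's libvirt driver failing to read per-vCPU
info for the domain. Assuming the libvirt Python bindings are available on
overcloud-novacompute-0, the underlying libvirt call can be exercised
directly to confirm (a sketch; the UUID is the domain id from the log):

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByUUIDString('6f381def-1f6a-4c25-bd01-d62a1864a5f0')
try:
    # nova's driver reads vCPU info through this call
    print(dom.vcpus())
except libvirt.libvirtError as e:
    # Expected failure here:
    # "Requested operation is not valid: cpu affinity is not supported"
    print('vcpus() failed: %s' % e)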

And 
2015-12-23 18:08:36.153 7218 WARNING nova.virt.libvirt.driver [req-dbaddfdb-3131-429e-9a56-0fabbefcdf43 a9b61210bbfc435282b08ef8b386cc7b 6999507a62ae40029facf8dfd60a3f97 - - -] [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0] Timeout waiting for vif plugging callback for instance 6f381def-1f6a-4c25-bd01-d62a1864a5f0
6999507a62ae40029facf8dfd60a3f97 - - -] [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0] Instance failed to spawn
2015-12-23 18:08:42.698 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0] Traceback (most recent call last):
2015-12-23 18:08:42.698 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2155, in _build_resources
2015-12-23 18:08:42.698 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]     yield resources
2015-12-23 18:08:42.698 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2009, in _build_and_run_instance
2015-12-23 18:08:42.698 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]     block_device_info=block_device_info)
2015-12-23 18:08:42.698 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2444, in spawn
2015-12-23 18:08:42.698 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]     block_device_info=block_device_info)
2015-12-23 18:08:42.698 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4541, in _create_domain_and_network
2015-12-23 18:08:42.698 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]     raise exception.VirtualInterfaceCreateException()
2015-12-23 18:08:42.698 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0] VirtualInterfaceCreateException: Virtual Interface creation failed
2015-12-23 18:08:42.698 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0] 
2015-12-23 18:08:44.094 7218 ERROR nova.compute.manager [req-dbaddfdb-3131-429e-9a56-0fabbefcdf43 a9b61210bbfc435282b08ef8b386cc7b 6999507a62ae40029facf8dfd60a3f97 - - -] [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0] Failed to allocate network(s)
2015-12-23 18:08:44.094 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0] Traceback (most recent call last):
2015-12-23 18:08:44.094 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2009, in _build_and_run_instance
2015-12-23 18:08:44.094 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]     block_device_info=block_device_info)
2015-12-23 18:08:44.094 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2444, in spawn
2015-12-23 18:08:44.094 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]     block_device_info=block_device_info)
2015-12-23 18:08:44.094 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4541, in _create_domain_and_network
2015-12-23 18:08:44.094 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]     raise exception.VirtualInterfaceCreateException()
2015-12-23 18:08:44.094 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0] VirtualInterfaceCreateException: Virtual Interface creation failed
2015-12-23 18:08:44.094 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0] 
2015-12-23 18:08:44.105 7218 ERROR nova.compute.manager [req-dbaddfdb-3131-429e-9a56-0fabbefcdf43 a9b61210bbfc435282b08ef8b386cc7b 6999507a62ae40029facf8dfd60a3f97 - - -] [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0] Build of instance 6f381def-1f6a-4c25-bd01-d62a1864a5f0 aborted: Failed to allocate the network(s), not rescheduling.
2015-12-23 18:08:44.105 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0] Traceback (most recent call last):
2015-12-23 18:08:44.105 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1905, in _do_build_and_run_instance
2015-12-23 18:08:44.105 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]     filter_properties)
2015-12-23 18:08:44.105 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2044, in _build_and_run_instance
2015-12-23 18:08:44.105 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]     reason=msg)
2015-12-23 18:08:44.105 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0] BuildAbortException: Build of instance 6f381def-1f6a-4c25-bd01-d62a1864a5f0 aborted: Failed to allocate the network(s), not rescheduling.
2015-12-23 18:08:44.105 7218 ERROR nova.compute.manager [instance: 6f381def-1f6a-4c25-bd01-d62a1864a5f0]
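
The "Timeout waiting for vif plugging callback" warning means nova never
received the network-vif-plugged event from Neutron, and with the default
settings that becomes the VirtualInterfaceCreateException above. The
relevant knobs in nova.conf on the compute node are the following
(defaults shown; raising the timeout would only mask the Neutron-side
problem):

[DEFAULT]
# How long nova waits for Neutron's network-vif-plugged event (seconds).
vif_plugging_timeout = 300
# If True (the default), a missed event fails the boot with
# VirtualInterfaceCreateException instead of continuing anyway.
vif_plugging_is_fatal = True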
