Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
RDO tickets are now tracked in Jira: https://issues.redhat.com/projects/RDO/issues/

Bug 1062293

Summary: horizon: conflicting messages in horizon on launch of instance
Product: [Community] RDO
Component: python-django-horizon
Version: unspecified
Status: CLOSED UPSTREAM
Severity: medium
Priority: unspecified
Reporter: Dafna Ron <dron>
Assignee: Julie Pichon <jpichon>
QA Contact: Ami Jeain <ajeain>
CC: aortega, athomas, dron, jpichon, mrunge, yeylon
Keywords: Triaged
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard: storage
Doc Type: Bug Fix
Type: Bug
Last Closed: 2014-06-18 08:39:28 UTC
Regression: ---
Documentation: ---
Story Points: ---
Attachments:
logs and screenshots (no flags)

Description Dafna Ron 2014-02-06 15:26:14 UTC
Created attachment 860237 [details]
logs and screenshots

Description of problem:

I launched an instance with the "create new volume" option and got the following error in the compute log:

2014-02-06 16:44:20.546 32633 TRACE nova.compute.manager [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631] VolumeNotCreated: Volume 4904a723-1111-4fed-b723-5af77be02696 did not finish being created even after we waited 70 seconds or 60 attempts.

Looking in Horizon, I see two messages:
Success: launched instance name dafna2
Error: failed to launch instance name dafna2

Version-Release number of selected component (if applicable):

python-django-horizon-2014.1-0.2b2.fc21.noarch

How reproducible:

100%

Steps to Reproduce:
1. Launch an instance with the "create new volume" option.
2.
3.

Actual results:

Horizon reports both success and error for the same action.

Expected results:

We should not report success until the action has actually completed successfully.

Additional info:


2014-02-06 16:44:20.536 32633 DEBUG nova.openstack.common.rpc.amqp [req-99ab1f26-402b-4b98-b190-e951c44ccd39 17c7730a55644cb68fd1029ed986c4fb c77ac799f3154c95913bb66cc8638bef] UNIQUE_ID is 1284d4da9e634f038dc46049db6df65c. _add_unique_id /usr/lib/python2.7/site-packages/nova/openstack/common/rpc/amqp.py:341
2014-02-06 16:44:20.546 32633 ERROR nova.compute.manager [req-99ab1f26-402b-4b98-b190-e951c44ccd39 17c7730a55644cb68fd1029ed986c4fb c77ac799f3154c95913bb66cc8638bef] [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631] Error: Volume 4904a723-1111-4fed-b723-5af77be02696 did not finish being created even after we waited 70 seconds or 60 attempts.
2014-02-06 16:44:20.546 32633 TRACE nova.compute.manager [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631] Traceback (most recent call last):
2014-02-06 16:44:20.546 32633 TRACE nova.compute.manager [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1045, in _build_instance
2014-02-06 16:44:20.546 32633 TRACE nova.compute.manager [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631]     context, instance, bdms)
2014-02-06 16:44:20.546 32633 TRACE nova.compute.manager [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1445, in _prep_block_device
2014-02-06 16:44:20.546 32633 TRACE nova.compute.manager [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631]     instance=instance)
2014-02-06 16:44:20.546 32633 TRACE nova.compute.manager [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631]   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-02-06 16:44:20.546 32633 TRACE nova.compute.manager [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631]     six.reraise(self.type_, self.value, self.tb)
2014-02-06 16:44:20.546 32633 TRACE nova.compute.manager [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1428, in _prep_block_device
2014-02-06 16:44:20.546 32633 TRACE nova.compute.manager [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631]     self._await_block_device_map_created))
2014-02-06 16:44:20.546 32633 TRACE nova.compute.manager [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 286, in attach_block_devices
2014-02-06 16:44:20.546 32633 TRACE nova.compute.manager [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631]     block_device_mapping)
2014-02-06 16:44:20.546 32633 TRACE nova.compute.manager [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631]   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 241, in attach
2014-02-06 16:44:20.546 32633 TRACE nova.compute.manager [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631]     wait_func(context, vol['id'])
2014-02-06 16:44:20.546 32633 TRACE nova.compute.manager [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 916, in _await_block_device_map_created
2014-02-06 16:44:20.546 32633 TRACE nova.compute.manager [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631]     attempts=attempts)
2014-02-06 16:44:20.546 32633 TRACE nova.compute.manager [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631] VolumeNotCreated: Volume 4904a723-1111-4fed-b723-5af77be02696 did not finish being created even after we waited 70 seconds or 60 attempts.
2014-02-06 16:44:20.546 32633 TRACE nova.compute.manager [instance: a61b5d99-0ad5-4f18-850a-5ebd93783631] 
2014-02-06 16:44:20.548 32633 DEBUG nova.openstack.common.lockutils [req-99ab1f26-402b-4b98-b190-e951c4

Comment 1 Matthias Runge 2014-03-03 08:29:23 UTC
Could you please provide the cinder log as well, since the issue occurred in cinder?

Comment 2 Matthias Runge 2014-03-03 08:30:07 UTC
Is there a way to reproduce this?

Comment 3 Dafna Ron 2014-03-03 10:03:13 UTC
This is not a cinder issue; it's actually a nova issue, and there is nothing in the cinder logs since the volume is created successfully.
The problem is 100% nova: nova hits a timeout waiting for the volume to be created.
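The timeout described here follows a plain poll-until-available pattern. A minimal sketch (not nova's actual code; `get_status` and `await_volume_created` are hypothetical stand-ins for the cinder client call and nova's `_await_block_device_map_created`) looks like:

```python
import time


class VolumeNotCreated(Exception):
    """Raised when a volume never reaches 'available' within the allotted attempts."""


def await_volume_created(get_status, volume_id, max_attempts=60, wait_seconds=1.0):
    # get_status(volume_id) is a hypothetical stand-in for querying cinder
    # for the volume's current status string.
    for attempt in range(1, max_attempts + 1):
        status = get_status(volume_id)
        if status == 'available':
            return attempt  # number of polls it took
        if status == 'error':
            raise VolumeNotCreated(volume_id)  # cinder gave up; no point waiting
        time.sleep(wait_seconds)
    # Exhausted all attempts: this is the VolumeNotCreated path in the traceback.
    raise VolumeNotCreated(volume_id)
```

The bug surfaces because Horizon posts its success message as soon as the launch request is accepted, before this wait loop has finished (or failed) on the compute node.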

I put the steps to reproduce in the report when I opened the bug:

Steps to Reproduce:
1. launch an instance with create new volume option
2.
3.

Comment 4 Julie Pichon 2014-06-18 08:39:28 UTC
There is a bug upstream to change the default message type to "INFO" when starting an asynchronous operation, to avoid this kind of confusion. I think it will cover this case as well.
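The idea behind that upstream change can be sketched as follows (an illustrative helper, not Horizon's actual code; the function and message strings are hypothetical): while an asynchronous operation is still in flight, report INFO rather than SUCCESS, so a later failure message does not contradict an earlier one.

```python
from enum import Enum


class Level(Enum):
    INFO = "info"
    SUCCESS = "success"
    ERROR = "error"


def launch_notification(accepted, finished, failed):
    """Pick a single message level for an instance-launch notification.

    Hypothetical sketch: only a finished operation earns SUCCESS; an
    accepted-but-pending async launch is reported as INFO.
    """
    if failed:
        return (Level.ERROR, "Failed to launch instance.")
    if finished:
        return (Level.SUCCESS, "Launched instance.")
    if accepted:
        return (Level.INFO, "Launch request submitted; instance is building.")
    return (Level.ERROR, "Launch request was rejected.")
```

Under this scheme, the scenario in this bug would show an INFO message at submission time followed by a single ERROR when the volume wait times out, instead of the contradictory Success/Error pair.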