Bug 693211 - VDSM - VM stays in "Powering Down" for a while then returns to UP state (when there is no OS installed on VM)
Keywords:
Status: CLOSED DUPLICATE of bug 538442
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: vdsm22
Version: 5.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Assignee: Dan Kenigsberg
QA Contact: yeylon@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-04-03 15:00 UTC by Ortal
Modified: 2016-04-18 06:39 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-04-05 13:07:55 UTC
Target Upstream Version:


Attachments
RHEVM.log (11.27 KB, text/plain), 2011-04-03 15:00 UTC, Ortal
VDSM.log.1.gz (967.76 KB, application/x-gzip), 2011-04-03 15:01 UTC, Ortal
VDSM.log (5.49 MB, text/x-log), 2011-04-03 15:01 UTC, Ortal
RHEVM Screen Shot (197.74 KB, image/jpeg), 2011-04-03 15:02 UTC, Ortal

Description Ortal 2011-04-03 15:00:43 UTC
Created attachment 489660 [details]
RHEVM.log

Description of problem:

VDSM - VM stays in "Powering Down" for a while then returns to UP state (when there is no OS installed on VM)

Version-Release number of selected component (if applicable):
RHEVM version: ic108
VDSM Version: vdsm22-4.5-63.23.el5_6

How reproducible:
Always

Steps to Reproduce:
1. Create a new VM, but do not install an OS on it (for example, Windows 2003 32-bit)
2. Start the VM
3. When the VM is UP, click the Stop button
 
Actual results:

1. The backend sends ShutDownVmCommand, but VDSM does not reply to it.
2. The VM remains in the "Powering Down" status for a while and then returns to the UP state.

Running 'vdsClient -s 0 list table' shows that the VM is still running.
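
For reference, a minimal sketch of how the VM state can be watched from the host after the Stop request. This is illustrative only, assuming vdsClient is installed on the hypervisor; the VM name is hypothetical and the column layout of the 'list table' output is an assumption, so adjust the field index if it differs.

#!/usr/bin/env python
# Illustrative sketch only: poll the host's VM list after the Stop request and
# report whether the VM ever leaves the "Up"/"Powering down" states.
# Assumptions: vdsClient is available, VM_NAME is hypothetical, and the
# 'list table' column layout is assumed -- adjust the field index if needed.
import subprocess
import time

VM_NAME = "test-vm"      # hypothetical VM name
POLL_INTERVAL = 5        # seconds between polls
POLL_TIMEOUT = 120       # stop watching after two minutes

def vm_status(name):
    """Return the status column for the named VM, or None if it is not listed."""
    out = subprocess.check_output(["vdsClient", "-s", "0", "list", "table"])
    for line in out.decode("utf-8", "replace").splitlines():
        fields = line.split()
        # assumed layout: <vmId> <pid> <name> <status> ...
        if len(fields) >= 4 and fields[2] == name:
            return fields[3]
    return None

if __name__ == "__main__":
    deadline = time.time() + POLL_TIMEOUT
    while time.time() < deadline:
        status = vm_status(VM_NAME)
        print("%s: VM %r status: %s" % (time.ctime(), VM_NAME, status))
        if status is None or status == "Down":
            break
        time.sleep(POLL_INTERVAL)

In the scenario described above, such a loop would keep reporting "Up" (briefly "Powering down"), matching what the engine shows once the "Powering Down" state times out.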

VDSM log:

Thread-320778::INFO::2011-04-03 14:50:47,947::dispatcher::95::irs::Run and protect: repoStats, args: ()
Thread-320778::DEBUG::2011-04-03 14:50:47,948::task::577::irs::Task b78f68a4-297a-4421-9225-237f2869b87d: moving from state init -> state preparing
Thread-320778::DEBUG::2011-04-03 14:50:47,948::resource::298::irs::Resource Storage/0cf1d732-6f1e-42b4-ba9e-0f5ad0eed6d9: b78f68a4-297a-4421-9225-237f2869b87d acquire shared (120000)
Thread-320778::DEBUG::2011-04-03 14:50:47,948::resource::304::irs::Resource Storage/0cf1d732-6f1e-42b4-ba9e-0f5ad0eed6d9 - lockstate free
Thread-320778::DEBUG::2011-04-03 14:50:47,948::resource::239::irs::Resource 'Storage/0cf1d732-6f1e-42b4-ba9e-0f5ad0eed6d9': __granted shared to 'b78f68a4-297a-4421-9225-237f2869b87d'
Thread-320778::DEBUG::2011-04-03 14:50:47,949::resource::524::irs::Owner b78f68a4-297a-4421-9225-237f2869b87d: _acquired Storage/0cf1d732-6f1e-42b4-ba9e-0f5ad0eed6d9
Thread-320778::DEBUG::2011-04-03 14:50:47,949::task::577::irs::Task b78f68a4-297a-4421-9225-237f2869b87d: _resourcesAcquired: Storage.0cf1d732-6f1e-42b4-ba9e-0f5ad0eed6d9 (shared)
Thread-320778::DEBUG::2011-04-03 14:50:47,949::task::577::irs::Task b78f68a4-297a-4421-9225-237f2869b87d: ref 1 aborting False
Thread-320778::DEBUG::2011-04-03 14:50:47,949::task::577::irs::Task b78f68a4-297a-4421-9225-237f2869b87d: finished: {'bf79e924-91f1-487d-b367-f7a2d9faee11': {'delay': '0.00640416145325', 'lastCheck': 1301842243.5007961, 'valid': True, 'code': 0}, '15e4fdf5-4ae4-46f5-94d6-4c5a50434c82': {'delay': '0.00605797767639', 'lastCheck': 1301842243.5302579, 'valid': True, 'code': 0}, '749a7b52-4342-47c9-948a-a6d6cca08f37': {'delay': '0.0477368831635', 'lastCheck': 1301842246.937156, 'valid': True, 'code': 0}, 'e150d0ee-c110-4223-8d78-d111b3ad339e': {'delay': '0.0407378673553', 'lastCheck': 1301842246.995472, 'valid': True, 'code': 0}}
Thread-320778::DEBUG::2011-04-03 14:50:47,950::task::577::irs::Task b78f68a4-297a-4421-9225-237f2869b87d: moving from state preparing -> state finished
Thread-320778::DEBUG::2011-04-03 14:50:47,950::resource::670::irs::Owner.releaseAll requests [] resources [<storage.resource.Resource object at 0x2aaaac058050>]
Thread-320778::DEBUG::2011-04-03 14:50:47,950::resource::341::irs::Resource Storage/0cf1d732-6f1e-42b4-ba9e-0f5ad0eed6d9: b78f68a4-297a-4421-9225-237f2869b87d releasing
Thread-320778::DEBUG::2011-04-03 14:50:47,950::resource::348::irs::Resource Storage/0cf1d732-6f1e-42b4-ba9e-0f5ad0eed6d9: owners after release []
Thread-320778::DEBUG::2011-04-03 14:50:47,951::resource::351::irs::Resource Storage/0cf1d732-6f1e-42b4-ba9e-0f5ad0eed6d9: requests after release []
Thread-320778::DEBUG::2011-04-03 14:50:47,951::resource::370::irs::Resource Storage/0cf1d732-6f1e-42b4-ba9e-0f5ad0eed6d9: free lock
Thread-320778::DEBUG::2011-04-03 14:50:47,951::resource::503::irs::Owner b78f68a4-297a-4421-9225-237f2869b87d: _released Storage/0cf1d732-6f1e-42b4-ba9e-0f5ad0eed6d9/<storage.resource.Resource object at 0x2aaaac058050> (['Storage/0cf1d732-6f1e-42b4-ba9e-0f5ad0eed6d9/<storage.resource.Resource object at 0x2aaaac058050>'])
Thread-320778::DEBUG::2011-04-03 14:50:47,951::task::577::irs::Task b78f68a4-297a-4421-9225-237f2869b87d: resourceReleased: Storage.0cf1d732-6f1e-42b4-ba9e-0f5ad0eed6d9
Thread-320778::DEBUG::2011-04-03 14:50:47,951::resource::176::irs::resource Storage/0cf1d732-6f1e-42b4-ba9e-0f5ad0eed6d9 after decref ref 0
Thread-320778::DEBUG::2011-04-03 14:50:47,952::task::577::irs::Task b78f68a4-297a-4421-9225-237f2869b87d: ref 0 aborting False
Thread-320778::INFO::2011-04-03 14:50:47,952::dispatcher::101::irs::Run and protect: repoStats, Return response: {'status': {'message': 'OK', 'code': 0}, '15e4fdf5-4ae4-46f5-94d6-4c5a50434c82': {'delay': '0.00605797767639', 'lastCheck': 1301842243.5302579, 'valid': True, 'code': 0}, 'bf79e924-91f1-487d-b367-f7a2d9faee11': {'delay': '0.00640416145325', 'lastCheck': 1301842243.5007961, 'valid': True, 'code': 0}, '749a7b52-4342-47c9-948a-a6d6cca08f37': {'delay': '0.0477368831635', 'lastCheck': 1301842246.937156, 'valid': True, 'code': 0}, 'e150d0ee-c110-4223-8d78-d111b3ad339e': {'delay': '0.0407378673553', 'lastCheck': 1301842246.995472, 'valid': True, 'code': 0}}




Expected results:

I think we should get an error saying that this action cannot be performed, since there is no OS installed on the VM.

Additional info:

** Please see attached Screen Shot
** Please see attached VDSM.log
** Please see attached RHEVM.log

Comment 1 Ortal 2011-04-03 15:01:26 UTC
Created attachment 489661 [details]
VDSM.log.1.gz

Comment 2 Ortal 2011-04-03 15:01:47 UTC
Created attachment 489662 [details]
VDSM.log

Comment 3 Ortal 2011-04-03 15:02:24 UTC
Created attachment 489663 [details]
RHEVM Screen Shot

Comment 4 Dan Kenigsberg 2011-04-05 13:07:55 UTC
I am afraid this is the expected behavior when no guest agent is installed. When bug 538442 is addressed, this will be clearer to the end user.

*** This bug has been marked as a duplicate of bug 538442 ***
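
For context on the comment above: a graceful shutdown is only a request to the guest. The engine asks VDSM to shut the VM down (via an ACPI event or the guest agent), and when nothing inside the guest reacts, the VM keeps running and the "Powering Down" state eventually falls back to UP. The following is a purely illustrative sketch of that timeout behaviour, not VDSM or RHEV-M code; the names and the grace-period value are assumptions.

# Purely illustrative model of the observed behaviour, not VDSM/RHEV-M code.
# A graceful shutdown is only a request; with no OS or guest agent installed,
# nothing inside the guest acts on it, so after a grace period the reported
# state falls back from "Powering Down" to "Up".
import time

GRACE_PERIOD = 30  # seconds; the real engine timeout is an assumption here

def request_graceful_shutdown(guest_handles_shutdown):
    """Return the final reported VM state after a graceful shutdown request."""
    deadline = time.time() + GRACE_PERIOD
    while time.time() < deadline:
        if guest_handles_shutdown:   # an installed OS or guest agent reacts
            return "Down"            # shutdown completed within the grace period
        time.sleep(1)                # still "Powering Down" while we wait
    # Nothing in the guest acknowledged the request: the VM never stopped.
    return "Up"

if __name__ == "__main__":
    # A VM with no OS installed ignores the shutdown request entirely.
    print(request_graceful_shutdown(guest_handles_shutdown=False))  # prints "Up"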

