Bug 904724 - Get exception on getSpmStatus command during Reconstruct Master Domain for Data Center scenario
Summary: Get exception on getSpmStatus command during Reconstruct Master Domain for Data Center scenario
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.2.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.2.0
Assignee: Liron Aravot
QA Contact: vvyazmin@redhat.com
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2013-01-27 10:06 UTC by vvyazmin@redhat.com
Modified: 2016-02-10 18:20 UTC
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-02-06 10:48:29 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:


Attachments
## Logs vdsm, rhevm (1.21 MB, application/x-gzip)
2013-01-27 10:06 UTC, vvyazmin@redhat.com

Description vvyazmin@redhat.com 2013-01-27 10:06:54 UTC
Created attachment 688354 [details]
## Logs vdsm, rhevm

Description of problem:
An exception is raised by the getSpmStatus command during the Reconstruct Master Domain for Data Center scenario.

Version-Release number of selected component (if applicable):
RHEVM 3.2 - SF04 environment:

RHEVM: rhevm-3.2.0-5.el6ev.noarch
VDSM: vdsm-4.10.2-4.0.el6ev.x86_64
LIBVIRT: libvirt-0.10.2-16.el6.x86_64
QEMU & KVM: qemu-kvm-rhev-0.12.1.2-2.348.el6.x86_64
SANLOCK: sanlock-2.6-2.el6.x86_64

How reproducible:
100%

The scenario was performed on both iSCSI and FC environments.
It builds on the scenario from BZ902408.

Steps to Reproduce:
1. Create an (iSCSI or FC) DC environment with two hosts and multiple SDs
2. Create a new SD that is not attached to any DC (DC == none)
3. Move the DC to Maintenance (by moving all SDs in the DC to Maintenance)
4. Perform Re-Initialize Data Center with the new SD created in step 2
5. Re-Initialize Data Center succeeds

Actual results:
An exception is raised during Re-Initialize Data Center

Expected results:
No exception should be raised

Additional info:

/var/log/ovirt-engine/engine.log

/var/log/vdsm/vdsm.log

Thread-147966::DEBUG::2013-01-27 13:47:59,464::BindingXMLRPC::161::vds::(wrapper) [10.35.97.56]
Thread-147966::DEBUG::2013-01-27 13:47:59,465::task::568::TaskManager.Task::(_updateState) Task=`7840ea73-a304-41cc-8c20-adb02d008397`::moving from state init -> state preparing
Thread-147966::INFO::2013-01-27 13:47:59,465::logUtils::37::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='3617d44f-b1d6-4867-a311-eb8da016722c', options=None)
Thread-147966::ERROR::2013-01-27 13:47:59,465::task::833::TaskManager.Task::(_setError) Task=`7840ea73-a304-41cc-8c20-adb02d008397`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 840, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 603, in getSpmStatus
    pool = self.getPool(spUUID)
  File "/usr/share/vdsm/storage/hsm.py", line 313, in getPool
    raise se.StoragePoolUnknown(spUUID)
StoragePoolUnknown: Unknown pool id, pool not connected: ('3617d44f-b1d6-4867-a311-eb8da016722c',)
Thread-147966::DEBUG::2013-01-27 13:47:59,467::task::852::TaskManager.Task::(_run) Task=`7840ea73-a304-41cc-8c20-adb02d008397`::Task._run: 7840ea73-a304-41cc-8c20-adb02d008397 ('3617d44f-b1d6-4867-a311-eb8da016722c',) {} failed - stopping task
Thread-147966::DEBUG::2013-01-27 13:47:59,467::task::1177::TaskManager.Task::(stop) Task=`7840ea73-a304-41cc-8c20-adb02d008397`::stopping in state preparing (force False)
Thread-147966::DEBUG::2013-01-27 13:47:59,468::task::957::TaskManager.Task::(_decref) Task=`7840ea73-a304-41cc-8c20-adb02d008397`::ref 1 aborting True
Thread-147966::INFO::2013-01-27 13:47:59,468::task::1134::TaskManager.Task::(prepare) Task=`7840ea73-a304-41cc-8c20-adb02d008397`::aborting: Task is aborted: 'Unknown pool id, pool not connected' - code 309
Thread-147966::DEBUG::2013-01-27 13:47:59,468::task::1139::TaskManager.Task::(prepare) Task=`7840ea73-a304-41cc-8c20-adb02d008397`::Prepare: aborted: Unknown pool id, pool not connected
Thread-147966::DEBUG::2013-01-27 13:47:59,468::task::957::TaskManager.Task::(_decref) Task=`7840ea73-a304-41cc-8c20-adb02d008397`::ref 0 aborting True
Thread-147966::DEBUG::2013-01-27 13:47:59,468::task::892::TaskManager.Task::(_doAbort) Task=`7840ea73-a304-41cc-8c20-adb02d008397`::Task._doAbort: force False
Thread-147966::DEBUG::2013-01-27 13:47:59,469::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-147966::DEBUG::2013-01-27 13:47:59,469::task::568::TaskManager.Task::(_updateState) Task=`7840ea73-a304-41cc-8c20-adb02d008397`::moving from state preparing -> state aborting
Thread-147966::DEBUG::2013-01-27 13:47:59,469::task::523::TaskManager.Task::(__state_aborting) Task=`7840ea73-a304-41cc-8c20-adb02d008397`::_aborting: recover policy none
Thread-147966::DEBUG::2013-01-27 13:47:59,469::task::568::TaskManager.Task::(_updateState) Task=`7840ea73-a304-41cc-8c20-adb02d008397`::moving from state aborting -> state failed
Thread-147966::DEBUG::2013-01-27 13:47:59,469::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-147966::DEBUG::2013-01-27 13:47:59,469::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-147966::ERROR::2013-01-27 13:47:59,470::dispatcher::66::Storage.Dispatcher.Protect::(run) {'status': {'message': "Unknown pool id, pool not connected: ('3617d44f-b1d6-4867-a311-eb8da016722c',)", 'code': 309}}
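The traceback shows the failure path: getSpmStatus looks up the pool via getPool, which raises StoragePoolUnknown (error code 309) because the host has not (yet) connected the storage pool. A minimal Python sketch of that pattern follows; the class and method names are borrowed from the traceback, but this is an illustrative reconstruction, not the actual code in /usr/share/vdsm/storage/hsm.py:

```python
class StoragePoolUnknown(Exception):
    # Error code 309, as seen in the dispatcher log line above
    code = 309

    def __init__(self, spUUID):
        # vdsm formats the pool UUID as a one-element tuple in the message
        super().__init__("Unknown pool id, pool not connected: (%r,)" % spUUID)


class HSM:
    def __init__(self):
        # spUUID -> pool object; an entry exists only after the host has
        # connected the storage pool
        self.pools = {}

    def getPool(self, spUUID):
        if spUUID not in self.pools:
            raise StoragePoolUnknown(spUUID)
        return self.pools[spUUID]

    def getSpmStatus(self, spUUID):
        # During reconstruct/re-initialize, the engine may poll SPM status
        # before the host has reconnected the pool, hitting this path
        pool = self.getPool(spUUID)
        return {"spmStatus": pool.spm_status}


hsm = HSM()
try:
    hsm.getSpmStatus("3617d44f-b1d6-4867-a311-eb8da016722c")
except StoragePoolUnknown as e:
    print(e.code)
```

As Comment 2 concludes, the exception is expected in this window: the engine polls a pool that is legitimately not connected on the host, so the error is transient and benign.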

Comment 2 vvyazmin@redhat.com 2013-02-06 10:47:54 UTC
In this flow, the scenario is the same as creating the first new SD in a DC, so this is a valid error.

