Bug 852728 - 3.1 - [vdsm] repoStats reports domain inaccessible even though it is valid (as a result host moves to Non-operational by engine)
Summary: 3.1 - [vdsm] repoStats reports domain inaccessible even though it is valid (as a result host moves to Non-operational by engine)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.3
Hardware: All
OS: All
Priority: unspecified
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Federico Simoncelli
QA Contact: Haim
URL:
Whiteboard: storage
Duplicates: 848439
Depends On:
Blocks:
 
Reported: 2012-08-29 12:34 UTC by Gadi Ickowicz
Modified: 2014-08-22 01:41 UTC
CC List: 9 users

Fixed In Version: vdsm-4.9.6-32.0.el6_3
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-12-05 07:39:55 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
vdsm + engine logs (1.60 MB, application/x-gzip), 2012-08-29 12:34 UTC, Gadi Ickowicz

Description Gadi Ickowicz 2012-08-29 12:34:43 UTC
Created attachment 607893 [details]
vdsm + engine logs

Description of problem:
After successfully connecting a storage domain (connectStorageServer and connectStoragePool), repoStats reports the domain as inaccessible.

Version-Release number of selected component (if applicable):
vdsm-4.9.6-30.0.el6_3.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Set up 2 hosts and 1 storage domain
2. Create a new storage domain
3. Block the connection from the HSM host to the domain and wait for the host status to reflect the change
4. Unblock the connection from the HSM host to the domain
5. Reactivate the domain (connectStorageServer + connectStoragePool) successfully
6. repoStats reports the domain with valid=false, causing the host to switch back to Non-Operational
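Steps 3 and 4 can be scripted. A minimal sketch, assuming the domain is iSCSI-backed (TCP port 3260), that the host uses iptables, and with a placeholder storage server address (none of these specifics are stated in this bug):

```shell
# Hypothetical helper for steps 3-4 above; STORAGE_IP is a placeholder.
STORAGE_IP=10.0.0.5

# Step 3: block the HSM host's traffic to the storage server
iptables -A OUTPUT -d "$STORAGE_IP" -p tcp --dport 3260 -j DROP

# ...wait for the engine to move the host to Non-Operational...

# Step 4: remove the blocking rule again
iptables -D OUTPUT -d "$STORAGE_IP" -p tcp --dport 3260 -j DROP
```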

  
Actual results:
On the host the VG is visible; however, repoStats continues to report the domain as valid=false:

[root@green-vdsa ~]# vgs
  VG                                   #PV #LV #SN Attr   VSize  VFree
  cc57a4f6-62c5-456a-bf19-c1d52757bb19   1   6   0 wz--n- 49.62g 45.75g
  cfee089f-0bc5-4d39-8c47-6516d7fa014e   1   6   0 wz--n-  9.62g  5.75g
  vg0                                    1   3   0 wz--n- 67.77g     0
[root@green-vdsa ~]# vdsClient -s 0 repoStats
Domain cfee089f-0bc5-4d39-8c47-6516d7fa014e {'delay': '0', 'lastCheck': 1346234205.352031, 'code': 2001, 'valid': False}

[root@green-vdsa ~]# vgs
  VG                                   #PV #LV #SN Attr   VSize  VFree
  cc57a4f6-62c5-456a-bf19-c1d52757bb19   1   6   0 wz--n- 49.62g 45.75g
  cfee089f-0bc5-4d39-8c47-6516d7fa014e   1   6   0 wz--n-  9.62g  5.75g
  vg0                                    1   3   0 wz--n- 67.77g     0


Expected results:
repoStats should report the correct status of the storage domain

Additional info:
This was reproduced several times by automated tests running this scenario.
Furthermore, later adding another domain and simply connecting it to the host (without running the block/unblock scenario) still had repoStats report it as invalid after several seconds, even though the VG was visible.


ConnectStoragePool+failed repostats:

Thread-34251::INFO::2012-08-29 09:44:22,895::logUtils::37::dispatcher::(wrapper) Run and protect: connectStoragePool(spUUID='ce0bad90-8681-418d-b1ee-d15bf46c2ad2', hostID=1, scsiKey='ce0bad90-8681-418d-b1ee-d15bf4
6c2ad2', msdUUID='cc57a4f6-62c5-456a-bf19-c1d52757bb19', masterVersion=1, options=None)
Thread-34251::DEBUG::2012-08-29 09:44:22,896::resourceManager::175::ResourceManager.Request::(__init__) ResName=`Storage.ce0bad90-8681-418d-b1ee-d15bf46c2ad2`ReqID=`099119f5-7e35-459b-a304-d61a1f620680`::Request w
as made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at 'registerResource'
Thread-34251::DEBUG::2012-08-29 09:44:22,896::resourceManager::486::ResourceManager::(registerResource) Trying to register resource 'Storage.ce0bad90-8681-418d-b1ee-d15bf46c2ad2' for lock type 'shared'
Thread-34251::DEBUG::2012-08-29 09:44:22,896::resourceManager::528::ResourceManager::(registerResource) Resource 'Storage.ce0bad90-8681-418d-b1ee-d15bf46c2ad2' is free. Now locking as 'shared' (1 active user)
Thread-34251::DEBUG::2012-08-29 09:44:22,897::resourceManager::212::ResourceManager.Request::(grant) ResName=`Storage.ce0bad90-8681-418d-b1ee-d15bf46c2ad2`ReqID=`099119f5-7e35-459b-a304-d61a1f620680`::Granted request
Thread-34251::DEBUG::2012-08-29 09:44:22,897::misc::1080::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-34251::DEBUG::2012-08-29 09:44:22,897::misc::1082::SamplingMethod::(__call__) Got in to sampling method
Thread-34251::DEBUG::2012-08-29 09:44:22,897::misc::1080::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-34251::DEBUG::2012-08-29 09:44:22,898::misc::1082::SamplingMethod::(__call__) Got in to sampling method
Thread-34251::DEBUG::2012-08-29 09:44:22,898::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
Thread-34251::DEBUG::2012-08-29 09:44:22,936::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = ''; <rc> = 0
Thread-34251::DEBUG::2012-08-29 09:44:22,937::misc::1090::SamplingMethod::(__call__) Returning last result
Thread-34251::DEBUG::2012-08-29 09:44:23,333::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/multipath' (cwd None)
Thread-34251::DEBUG::2012-08-29 09:44:23,452::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = ''; <rc> = 0
Thread-34251::DEBUG::2012-08-29 09:44:23,453::lvm::460::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-34251::DEBUG::2012-08-29 09:44:23,453::lvm::462::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-34251::DEBUG::2012-08-29 09:44:23,453::lvm::472::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-34251::DEBUG::2012-08-29 09:44:23,454::lvm::474::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-34251::DEBUG::2012-08-29 09:44:23,454::lvm::493::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-34251::DEBUG::2012-08-29 09:44:23,454::lvm::495::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-34251::DEBUG::2012-08-29 09:44:23,455::misc::1090::SamplingMethod::(__call__) Returning last result
Thread-34251::DEBUG::2012-08-29 09:44:23,455::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-34251::DEBUG::2012-08-29 09:44:23,458::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1gickowic-lun113459821|1gickowic-lun213459821|3514f0c5695800001|3514f0c5695800002|3514f0c5695800003|3514f0c5695800004|3514f0c5695800005|360a98000572d45366b4a6c684b324c6c|360a98000572d45366b4a6c684b327234|360a98000572d45366b4a6c684b334551|360a98000572d45366b4a6c684b34786b%\\", \\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free cc57a4f6-62c5-456a-bf19-c1d52757bb19' (cwd None)
Thread-34251::DEBUG::2012-08-29 09:44:23,672::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = ''; <rc> = 0
Thread-34251::DEBUG::2012-08-29 09:44:23,674::lvm::379::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-34251::DEBUG::2012-08-29 09:44:23,674::persistentDict::185::Storage.PersistentDict::(__init__) Created a persistant dict with LvMetadataRW backend
Thread-34251::DEBUG::2012-08-29 09:44:23,675::__init__::1164::Storage.Misc.excCmd::(_log) '/bin/dd iflag=direct skip=0 bs=2048 if=/dev/cc57a4f6-62c5-456a-bf19-c1d52757bb19/metadata count=1' (cwd None)
Thread-34251::DEBUG::2012-08-29 09:44:23,686::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = '1+0 records in\n1+0 records out\n2048 bytes (2.0 kB) copied, 0.000515587 s, 4.0 MB/s\n'; <rc> = 0
Thread-34251::DEBUG::2012-08-29 09:44:23,686::misc::334::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '2048 bytes (2.0 kB) copied, 0.000515587 s, 4.0 MB/s'], size: 2048
Thread-34251::DEBUG::2012-08-29 09:44:23,687::persistentDict::226::Storage.PersistentDict::(refresh) read lines (LvMetadataRW)=[]
Thread-34251::WARNING::2012-08-29 09:44:23,687::persistentDict::248::Storage.PersistentDict::(refresh) data has no embedded checksum - trust it as it is
Thread-34251::DEBUG::2012-08-29 09:44:23,687::persistentDict::185::Storage.PersistentDict::(__init__) Created a persistant dict with VGTagMetadataRW backend
Thread-34251::DEBUG::2012-08-29 09:44:23,688::lvm::467::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-34251::DEBUG::2012-08-29 09:44:23,688::lvm::469::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-34251::DEBUG::2012-08-29 09:44:23,688::lvm::478::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-34251::DEBUG::2012-08-29 09:44:23,689::lvm::490::OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-34251::DEBUG::2012-08-29 09:44:23,689::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex

Thread-34251::DEBUG::2012-08-29 09:44:23,689::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1gickowic-lun113459821|1gickowic-lun213459821|3514f0c5695800001|3514f0c5695800002|3514f0c5695800003|3514f0c5695800004|3514f0c5695800005|360a98000572d45366b4a6c684b324c6c|360a98000572d45366b4a6c684b327234|360a98000572d45366b4a6c684b334551|360a98000572d45366b4a6c684b34786b%\\", \\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free cc57a4f6-62c5-456a-bf19-c1d52757bb19' (cwd None)
Thread-34251::DEBUG::2012-08-29 09:44:23,877::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = ''; <rc> = 0
Thread-34251::DEBUG::2012-08-29 09:44:23,879::lvm::379::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-34251::DEBUG::2012-08-29 09:44:23,879::persistentDict::226::Storage.PersistentDict::(refresh) read lines (VGTagMetadataRW)=['DESCRIPTION=ISCSIDataDomain', 'CLASS=Data', 'VERSION=3', 'TYPE=ISCSI', 'VGUUID=mKRkiH-GnLV-QiRd-VIRw-GK5k-9oJO-dfJxQl', 'LOGBLKSIZE=512', 'LEASERETRIES=3', 'LOCKRENEWALINTERVALSEC=5', 'LOCKPOLICY=', 'PHYBLKSIZE=512', 'SDUUID=cc57a4f6-62c5-456a-bf19-c1d52757bb19', u'PV0=pv:3514f0c5695800004,uuid:mTFKQR-wLLo-UjVK-J8sw-jv2i-r8di-9OqdM8,pestart:0,pecount:397,mapoffset:0', 'LEASETIMESEC=60', 'IOOPTIMEOUTSEC=10', 'MASTER_VERSION=1', 'ROLE=Master', 'POOL_DOMAINS=cc57a4f6-62c5-456a-bf19-c1d52757bb19:Active', 'POOL_DESCRIPTION=TestDataCenter', 'POOL_UUID=ce0bad90-8681-418d-b1ee-d15bf46c2ad2', '_SHA_CKSUM=60dc066507a02edb775829ebf8c68ef2d2c0b431', 'POOL_SPM_ID=2', 'POOL_SPM_LVER=1']
Thread-34251::DEBUG::2012-08-29 09:44:23,880::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
Thread-34251::DEBUG::2012-08-29 09:44:23,881::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1gickowic-lun113459821|1gickowic-lun213459821|3514f0c5695800001|3514f0c5695800002|3514f0c5695800003|3514f0c5695800004|3514f0c5695800005|360a98000572d45366b4a6c684b324c6c|360a98000572d45366b4a6c684b327234|360a98000572d45366b4a6c684b334551|360a98000572d45366b4a6c684b34786b%\\", \\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
Thread-34251::DEBUG::2012-08-29 09:44:24,075::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> = ''; <rc> = 0
Thread-34251::DEBUG::2012-08-29 09:44:24,076::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
Thread-34251::WARNING::2012-08-29 09:44:24,076::sd::317::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace cc57a4f6-62c5-456a-bf19-c1d52757bb19_imageNS already registered
Thread-34251::WARNING::2012-08-29 09:44:24,076::sd::323::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace cc57a4f6-62c5-456a-bf19-c1d52757bb19_volumeNS already registered
Thread-34251::WARNING::2012-08-29 09:44:24,077::blockSD::335::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace cc57a4f6-62c5-456a-bf19-c1d52757bb19_lvmActivationNS already registered
Thread-34251::DEBUG::2012-08-29 09:44:24,077::sp::1528::Storage.StoragePool::(getMasterDomain) Master domain cc57a4f6-62c5-456a-bf19-c1d52757bb19 verified, version 1
Thread-34251::DEBUG::2012-08-29 09:44:24,077::resourceManager::538::ResourceManager::(releaseResource) Trying to release resource 'Storage.ce0bad90-8681-418d-b1ee-d15bf46c2ad2'
Thread-34251::DEBUG::2012-08-29 09:44:24,078::resourceManager::553::ResourceManager::(releaseResource) Released resource 'Storage.ce0bad90-8681-418d-b1ee-d15bf46c2ad2' (0 active users)
Thread-34251::DEBUG::2012-08-29 09:44:24,078::resourceManager::558::ResourceManager::(releaseResource) Resource 'Storage.ce0bad90-8681-418d-b1ee-d15bf46c2ad2' is free, finding out if anyone is waiting for it.
Thread-34251::DEBUG::2012-08-29 09:44:24,078::resourceManager::565::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.ce0bad90-8681-418d-b1ee-d15bf46c2ad2', Clearing records.
Thread-34251::INFO::2012-08-29 09:44:24,079::logUtils::39::dispatcher::(wrapper) Run and protect: connectStoragePool, Return response: None

Thread-34251::DEBUG::2012-08-29 09:44:24,079::task::1172::TaskManager.Task::(prepare) Task=`717afc31-7b61-4431-8db8-15c3276ea3dc`::finished: None
Thread-34251::DEBUG::2012-08-29 09:44:24,079::task::588::TaskManager.Task::(_updateState) Task=`717afc31-7b61-4431-8db8-15c3276ea3dc`::moving from state preparing -> state finished
Thread-34251::DEBUG::2012-08-29 09:44:24,079::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-34251::DEBUG::2012-08-29 09:44:24,080::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-34251::DEBUG::2012-08-29 09:44:24,080::task::978::TaskManager.Task::(_decref) Task=`717afc31-7b61-4431-8db8-15c3276ea3dc`::ref 0 aborting False
Thread-34254::DEBUG::2012-08-29 09:44:30,208::task::588::TaskManager.Task::(_updateState) Task=`bf677435-7c29-488b-8e58-4e0b1970bdd3`::moving from state init -> state preparing
Thread-34254::INFO::2012-08-29 09:44:30,209::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-34254::INFO::2012-08-29 09:44:30,209::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'cc57a4f6-62c5-456a-bf19-c1d52757bb19': {'delay': '0', 'lastCheck': 1346222324.7464459, 'code': 2001, 'valid': False}}
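Note the `lastCheck` value in the final repoStats entry: 1346222324.74 is roughly 06:38 UTC, about three hours before the 09:44:30 reconnect, which suggests the domain monitor never re-checked the domain after it came back. A minimal sketch of that staleness check (`is_stale` is a hypothetical helper, not part of vdsm):

```python
# Hypothetical helper (not part of vdsm): flag a repoStats entry whose
# lastCheck timestamp is far older than "now", i.e. the domain monitor
# has not actually re-checked the domain recently.

def is_stale(entry, now, max_age_sec=300):
    """Return True if the monitor has not checked the domain recently."""
    return (now - float(entry['lastCheck'])) > max_age_sec

entry = {'delay': '0', 'lastCheck': 1346222324.7464459,
         'code': 2001, 'valid': False}
now = 1346233470.0  # 2012-08-29 09:44:30 UTC, the repoStats log timestamp
print(is_stale(entry, now))  # True: the last check is hours old
```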

Comment 5 vvyazmin@redhat.com 2012-09-02 15:13:25 UTC
*** Bug 848439 has been marked as a duplicate of this bug. ***

Comment 6 vvyazmin@redhat.com 2012-09-02 15:15:40 UTC
On the QA verification step, please run the scenario from BZ 848439:

https://bugzilla.redhat.com/show_bug.cgi?id=848439

Comment 7 Federico Simoncelli 2012-09-04 14:49:02 UTC
Fixed in vdsm-4.9.6-32.0.el6_3, most likely by:

BZ#846376 Produce the domain in the domain monitor thread
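The fix named above moves the "produce" step into the monitor thread itself. A toy model (not actual vdsm code; all names here are invented for illustration) of the bug class: a monitor that caches a domain object produced once keeps reporting the pre-reconnect state, while one that re-produces the domain on every pass picks up the recovered state.

```python
# Toy model (not vdsm code) of the bug class fixed by BZ#846376.

class DomainCache:
    """Stand-in for vdsm's storage domain cache."""
    def __init__(self):
        self._reachable = False

    def reconnect(self):
        self._reachable = True

    def produce(self):
        # Returns a fresh snapshot of the domain's current state.
        return {'valid': self._reachable,
                'code': 0 if self._reachable else 2001}

def monitor_buggy(cache):
    domain = cache.produce()        # produced once, then reused forever
    return lambda: domain

def monitor_fixed(cache):
    return lambda: cache.produce()  # re-produced on every monitoring pass

cache = DomainCache()
buggy, fixed = monitor_buggy(cache), monitor_fixed(cache)
cache.reconnect()                   # storage becomes reachable again
print(buggy()['valid'])             # False: stale snapshot, still code 2001
print(fixed()['valid'])             # True: fresh state after the reconnect
```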

Comment 8 Gadi Ickowicz 2012-09-05 13:38:10 UTC
This scenario runs successfully on vdsm-4.9.6-32.0.el6_3.x86_64.

