Bug 1203296 - VDSM throws errors instead of info while adding a new NFS Storage Domain
Summary: VDSM throws errors instead of info while adding a new NFS Storage Domain
Keywords:
Status: CLOSED DUPLICATE of bug 1101009
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm-jsonrpc-java
Version: 3.5.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ovirt-3.6.3
Target Release: 3.6.0
Assignee: Allon Mureinik
QA Contact: Pavel Stehlik
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2015-03-18 14:20 UTC by Yuri Obshansky
Modified: 2016-03-10 06:24 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-03-19 14:56:42 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:


Attachments: none

Description Yuri Obshansky 2015-03-18 14:20:32 UTC
Description of problem:
VDSM logs ERROR messages instead of INFO while adding a new NFS Storage Domain.
Developers clarified that this is a legitimate verification step performed before creating a new Storage Domain, but logging it at ERROR level will confuse everybody.
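
For illustration, the flow in question looks roughly like this (a minimal sketch with simplified names, not the actual vdsm code; the real entry points appear in the log and traceback below):

import logging

log = logging.getLogger("Storage.StorageDomainCache")

class StorageDomainDoesNotExist(Exception):
    pass

def create_storage_domain(find_domain, do_create, sd_uuid):
    # Before creating a domain, vdsm first verifies that no domain with
    # this UUID exists yet. For a brand-new domain the lookup miss below
    # is the expected, "legal" outcome -- yet it is reported at ERROR level.
    try:
        find_domain(sd_uuid)
    except StorageDomainDoesNotExist:
        log.debug("domain %s not found, proceeding with creation", sd_uuid)
        return do_create(sd_uuid)
    raise RuntimeError("domain %s already exists" % sd_uuid)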
Thread-63::DEBUG::2015-03-18 10:39:43,379::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StorageDomain.create' in bridge with {'name': 'SD-1', 'domainType': 1, 'domainClass': 1, 'typeArgs': 'netapp.qa.lab.tlv.redhat.com:/vol/vol_rhev_stress/sd_1', 'version': '3', 'storagedomainID': 'b2008b5a-920a-4de0-810e-fc5c79d7f869'}
Thread-63::DEBUG::2015-03-18 10:39:43,382::task::595::Storage.TaskManager.Task::(_updateState) Task=`b909ce0f-226d-4046-9921-e23a7db1a00a`::moving from state init -> state preparing
Thread-63::INFO::2015-03-18 10:39:43,382::logUtils::44::dispatcher::(wrapper) Run and protect: createStorageDomain(storageType=1, sdUUID='b2008b5a-920a-4de0-810e-fc5c79d7f869', domainName='SD-1', typeSpecificArg='netapp.qa.lab.tlv.redhat.com:/vol/vol_rhev_stress/sd_1', domClass=1, domVersion='3', options=None)
Thread-63::DEBUG::2015-03-18 10:39:43,382::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-63::DEBUG::2015-03-18 10:39:43,382::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-63::DEBUG::2015-03-18 10:39:43,382::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-63::DEBUG::2015-03-18 10:39:43,383::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-63::DEBUG::2015-03-18 10:39:43,383::iscsi::433::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds
Thread-63::DEBUG::2015-03-18 10:39:43,383::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
Thread-63::DEBUG::2015-03-18 10:39:43,450::misc::751::Storage.SamplingMethod::(__call__) Returning last result
Thread-63::DEBUG::2015-03-18 10:39:43,451::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan)
Thread-63::DEBUG::2015-03-18 10:39:43,451::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-63::DEBUG::2015-03-18 10:39:43,451::hba::53::Storage.HBA::(rescan) Starting scan
Thread-63::DEBUG::2015-03-18 10:39:43,451::utils::739::Storage.HBA::(execCmd) /usr/bin/sudo -n /usr/libexec/vdsm/fc-scan (cwd None)
Thread-63::DEBUG::2015-03-18 10:39:43,516::hba::66::Storage.HBA::(rescan) Scan finished
Thread-63::DEBUG::2015-03-18 10:39:43,517::misc::751::Storage.SamplingMethod::(__call__) Returning last result
Thread-63::DEBUG::2015-03-18 10:39:43,517::multipath::128::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /sbin/multipath (cwd None)
Thread-63::DEBUG::2015-03-18 10:39:43,702::multipath::128::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
Thread-63::DEBUG::2015-03-18 10:39:43,703::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-63::DEBUG::2015-03-18 10:39:43,704::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-63::DEBUG::2015-03-18 10:39:43,704::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-63::DEBUG::2015-03-18 10:39:43,704::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-63::DEBUG::2015-03-18 10:39:43,704::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-63::DEBUG::2015-03-18 10:39:43,705::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-63::DEBUG::2015-03-18 10:39:43,705::misc::751::Storage.SamplingMethod::(__call__) Returning last result
Thread-63::ERROR::2015-03-18 10:39:43,705::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain b2008b5a-920a-4de0-810e-fc5c79d7f869
Thread-63::ERROR::2015-03-18 10:39:43,705::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain b2008b5a-920a-4de0-810e-fc5c79d7f869
Thread-63::DEBUG::2015-03-18 10:39:43,705::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-63::DEBUG::2015-03-18 10:39:43,708::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name b2008b5a-920a-4de0-810e-fc5c79d7f869 (cwd None)
Thread-63::DEBUG::2015-03-18 10:39:43,959::lvm::291::Storage.Misc.excCmd::(cmd) FAILED: <err> = '  Volume group "b2008b5a-920a-4de0-810e-fc5c79d7f869" not found\n  Skipping volume group b2008b5a-920a-4de0-810e-fc5c79d7f869\n'; <rc> = 5
Thread-63::WARNING::2015-03-18 10:39:43,963::lvm::376::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['  Volume group "b2008b5a-920a-4de0-810e-fc5c79d7f869" not found', '  Skipping volume group b2008b5a-920a-4de0-810e-fc5c79d7f869']
Thread-63::DEBUG::2015-03-18 10:39:43,964::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-64::DEBUG::2015-03-18 10:39:43,981::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-65::DEBUG::2015-03-18 10:39:44,016::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-66::DEBUG::2015-03-18 10:39:44,048::__init__::298::IOProcessClient::(_run) Starting IOProcess...
JsonRpc (StompReactor)::DEBUG::2015-03-18 10:39:44,081::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
Thread-67::DEBUG::2015-03-18 10:39:44,083::__init__::298::IOProcessClient::(_run) Starting IOProcess...
JsonRpcServer::DEBUG::2015-03-18 10:39:44,112::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-68::DEBUG::2015-03-18 10:39:44,127::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-69::DEBUG::2015-03-18 10:39:44,165::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-70::DEBUG::2015-03-18 10:39:44,202::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-71::DEBUG::2015-03-18 10:39:44,238::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-72::DEBUG::2015-03-18 10:39:44,277::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-73::DEBUG::2015-03-18 10:39:44,324::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-76::DEBUG::2015-03-18 10:39:44,360::task::595::Storage.TaskManager.Task::(_updateState) Task=`e725a3ac-87a1-4e5b-b062-6dda3014b5c4`::moving from state init -> state preparing
Thread-74::DEBUG::2015-03-18 10:39:44,372::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-76::INFO::2015-03-18 10:39:44,376::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-76::INFO::2015-03-18 10:39:44,400::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-75::DEBUG::2015-03-18 10:39:44,417::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-76::DEBUG::2015-03-18 10:39:44,418::task::1191::Storage.TaskManager.Task::(prepare) Task=`e725a3ac-87a1-4e5b-b062-6dda3014b5c4`::finished: {}
Thread-76::DEBUG::2015-03-18 10:39:44,440::task::595::Storage.TaskManager.Task::(_updateState) Task=`e725a3ac-87a1-4e5b-b062-6dda3014b5c4`::moving from state preparing -> state finished
Thread-76::DEBUG::2015-03-18 10:39:44,453::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-77::DEBUG::2015-03-18 10:39:44,461::__init__::298::IOProcessClient::(_run) Starting IOProcess...

Moreover, an error with a full traceback was detected in /usr/share/vdsm/storage/sdc.py:
 
Thread-63::ERROR::2015-03-18 10:39:48,110::sdc::143::Storage.StorageDomainCache::(_findDomain) domain b2008b5a-920a-4de0-810e-fc5c79d7f869 not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: ('b2008b5a-920a-4de0-810e-fc5c79d7f869',)
Thread-63::INFO::2015-03-18 10:39:48,117::nfsSD::69::Storage.StorageDomain::(create) sdUUID=b2008b5a-920a-4de0-810e-fc5c79d7f869 domainName=SD-1 remotePath=netapp.qa.lab.tlv.redhat.com:/vol/vol_rhev_stress/sd_1 domClass=1
Thread-63::DEBUG::2015-03-18 10:39:48,132::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-63::DEBUG::2015-03-18 10:39:48,192::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend
Thread-63::DEBUG::2015-03-18 10:39:48,193::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=[]
Thread-63::DEBUG::2015-03-18 10:39:48,193::persistentDict::252::Storage.PersistentDict::(refresh) Empty metadata
Thread-63::DEBUG::2015-03-18 10:39:48,194::persistentDict::167::Storage.PersistentDict::(transaction) Starting transaction
Thread-63::DEBUG::2015-03-18 10:39:48,194::persistentDict::173::Storage.PersistentDict::(transaction) Flushing changes
Thread-63::DEBUG::2015-03-18 10:39:48,194::persistentDict::305::Storage.PersistentDict::(flush) about to write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=SD-1', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=netapp.qa.lab.tlv.redhat.com:/vol/vol_rhev_stress/sd_1', 'ROLE=Regular', 'SDUUID=b2008b5a-920a-4de0-810e-fc5c79d7f869', 'TYPE=NFS', 'VERSION=3', '_SHA_CKSUM=5058a5cc7eae2bfd848ef9625924c7e8b2ae12ea']
Thread-63::WARNING::2015-03-18 10:39:48,196::outOfProcess::252::Storage.oop::(rename) renaming a directory is not an atomic operation
Thread-63::DEBUG::2015-03-18 10:39:48,197::persistentDict::175::Storage.PersistentDict::(transaction) Finished transaction
Thread-63::DEBUG::2015-03-18 10:39:48,203::fileSD::152::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/netapp.qa.lab.tlv.redhat.com:_vol_vol__rhev__stress_sd__1/b2008b5a-920a-4de0-810e-fc5c79d7f869
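
The fix requested here is essentially a log-level change in the lookup helpers named in the traceback. A hypothetical sketch of that kind of change (illustrative only, not the actual patch, which was handled via bug 1101009):

import logging

log = logging.getLogger("Storage.StorageDomainCache")

class StorageDomainDoesNotExist(Exception):
    pass

def find_unfetched_domain(find_methods, sd_uuid):
    # Hypothetical sketch of the requested change: the lookup logic stays
    # as-is, only the messages seen at sdc.py:137/154 are demoted from
    # ERROR to DEBUG, because a miss here is a normal precondition of
    # domain creation rather than a failure.
    log.debug("looking for domain %s", sd_uuid)
    for find_method in find_methods:
        try:
            return find_method(sd_uuid)
        except StorageDomainDoesNotExist:
            continue
    raise StorageDomainDoesNotExist(sd_uuid)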


Version-Release number of selected component (if applicable):
RHEV-M - 3.5.1-0.1 (build vt14)
VDSM - vdsm-4.16.12-2.el6ev

How reproducible:


Steps to Reproduce:
1. Create a new NFS Storage Domain
2. Open vdsm.log
3. Search it for ERROR entries (see the helper sketch after this list)
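
A quick helper for step 3 (a minimal sketch; the path below is an assumption, the default vdsm log location, and the ::ERROR:: field format matches the excerpt above):

# Print the ERROR entries from vdsm.log. The path is assumed to be the
# default vdsm log location, not taken from this report.
with open("/var/log/vdsm/vdsm.log") as log_file:
    for line in log_file:
        if "::ERROR::" in line:
            print(line.rstrip())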

Actual results:
ERROR entries appear in vdsm.log even though the Storage Domain is created successfully.

Expected results:
No ERROR entries; the expected lookup miss should be logged at INFO or DEBUG level.

Additional info:

Comment 1 Allon Mureinik 2015-03-19 14:56:42 UTC

*** This bug has been marked as a duplicate of bug 1101009 ***

