Red Hat Bugzilla – Attachment 858010 Details for Bug 1058300: VMs do not resume after paused state and storage connection to a gluster domain (they will also fail to be manually resumed)
Description: vdsm.log
Filename: file_1058300.txt
MIME Type: text/plain
Creator: Andrew Lau
Created: 2014-02-01 01:06:02 UTC
Size: 86.56 KB
Thread-66::DEBUG::2014-02-01 11:58:12,437::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.1.5:_VM-DATA/ff3a0c94-11dc-446d-b093-5f68ad81520d/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-66::DEBUG::2014-02-01 11:58:12,458::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n505 bytes (505 B) copied, 0.00081027 s, 623 kB/s\n'; <rc> = 0
Thread-36468::DEBUG::2014-02-01 11:58:12,792::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10]
Thread-36468::INFO::2014-02-01 11:58:12,793::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None)
Thread-36468::INFO::2014-02-01 11:58:12,804::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 35L}}
Thread-36469::DEBUG::2014-02-01 11:58:12,811::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10]
Thread-36469::INFO::2014-02-01 11:58:12,812::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None)
Thread-36469::INFO::2014-02-01 11:58:12,819::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'name': 'NextDC_M1', 'isoprefix': '/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 35, 'domains': 'ff3a0c94-11dc-446d-b093-5f68ad81520d:Active,436c344c-bc57-441b-b311-c9595c6039e1:Active', 'master_uuid': 'ff3a0c94-11dc-446d-b093-5f68ad81520d', 'version': '3', 'spm_id': 1, 'type': 'GLUSTERFS', 'master_ver': 1}, 'dominfo': {'ff3a0c94-11dc-446d-b093-5f68ad81520d': {'status': 'Active', 'diskfree': '916172963840', 'isoprefix': '', 'alerts': [], 'disktotal': '982907879424', 'version': 3}, '436c344c-bc57-441b-b311-c9595c6039e1': {'status': 'Active', 'diskfree': '21406154752', 'isoprefix': '/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '25323634688', 'version': 0}}}
Thread-36470::DEBUG::2014-02-01 11:58:14,527::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {}
Thread-36470::DEBUG::2014-02-01 11:58:14,566::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}}
Thread-36473::DEBUG::2014-02-01 11:58:19,710::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {}
Thread-67::DEBUG::2014-02-01 11:58:21,080::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-67::DEBUG::2014-02-01 11:58:21,101::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n364 bytes (364 B) copied, 0.000514731 s, 707 kB/s\n'; <rc> = 0
Thread-36473::DEBUG::2014-02-01 11:58:21,151::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'OFFLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}}
Thread-66::ERROR::2014-02-01 11:58:22,471::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain ff3a0c94-11dc-446d-b093-5f68ad81520d monitoring information
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/domainMonitor.py", line 215, in _monitorDomain
    self.domain.selftest()
  File "/usr/share/vdsm/storage/nfsSD.py", line 113, in selftest
    fileSD.FileStorageDomain.selftest(self)
  File "/usr/share/vdsm/storage/fileSD.py", line 552, in selftest
    self.oop.os.statvfs(self.domaindir)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 297, in callCrabRPCFunction
    *args, **kwargs)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 199, in callCrabRPCFunction
    raise err
OSError: [Errno 2] No such file or directory: '/rhev/data-center/mnt/glusterSD/172.16.1.5:_VM-DATA/ff3a0c94-11dc-446d-b093-5f68ad81520d'
Thread-66::DEBUG::2014-02-01 11:58:22,472::domainMonitor::247::Storage.DomainMonitorThread::(_monitorDomain) Domain ff3a0c94-11dc-446d-b093-5f68ad81520d changed its status to Invalid
Thread-36475::DEBUG::2014-02-01 11:58:22,474::misc::884::Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Emitting event
Thread-36475::DEBUG::2014-02-01 11:58:22,475::misc::894::Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Calling registered method `contEIOVms`
Thread-36475::DEBUG::2014-02-01 11:58:22,476::misc::904::Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Event emitted
Thread-36477::DEBUG::2014-02-01 11:58:22,869::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10]
Thread-36477::INFO::2014-02-01 11:58:22,870::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None)
Thread-36477::ERROR::2014-02-01 11:58:22,874::dispatcher::70::Storage.Dispatcher.Protect::(run) (2, 'Sanlock resource read failure', 'No such file or directory')
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/dispatcher.py", line 62, in run
    result = ctask.prepare(self.func, *args, **kwargs)
  File "/usr/share/vdsm/storage/task.py", line 1159, in prepare
    raise self.error
SanlockException: (2, 'Sanlock resource read failure', 'No such file or directory')
libvirtEventLoop::INFO::2014-02-01 11:58:23,445::vm::4507::vm.Vm::(_onAbnormalStop) vmId=`01d705e3-6b62-4796-b9e4-4de1c477401a`::abnormal vm stop device virtio-disk0 error eother
libvirtEventLoop::DEBUG::2014-02-01 11:58:23,455::vm::5098::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`01d705e3-6b62-4796-b9e4-4de1c477401a`::event Suspended detail 2 opaque None
Thread-36478::INFO::2014-02-01 11:58:23,968::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-36478::INFO::2014-02-01 11:58:23,968::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'ff3a0c94-11dc-446d-b093-5f68ad81520d': {'code': 200, 'version': -1, 'acquired': False, 'delay': '0', 'lastCheck': '1.5', 'valid': False}, '436c344c-bc57-441b-b311-c9595c6039e1': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000514731', 'lastCheck': '2.9', 'valid': True}}
Thread-36478::ERROR::2014-02-01 11:58:23,971::API::1244::vds::(getStats) failed to retrieve Hosted Engine HA score
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1242, in getStats
    stats['haScore'] = haClient.HAClient().get_local_host_score()
  File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/client/client.py", line 204, in get_local_host_score
    path.get_metadata_path(self._config),
  File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/env/path.py", line 47, in get_metadata_path
    return os.path.join(get_domain_path(config_),
  File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/env/path.py", line 40, in get_domain_path
    .format(sd_uuid, parent))
Exception: path to storage domain 562f0160-7b80-42e2-b248-5754455c40fc not found in /rhev/data-center/mnt
Thread-36480::DEBUG::2014-02-01 11:58:24,339::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10] flowID [1e41babe]
Thread-36480::INFO::2014-02-01 11:58:24,339::logUtils::44::dispatcher::(wrapper) Run and protect: getAllTasksStatuses(spUUID=None, options=None)
Thread-36480::INFO::2014-02-01 11:58:24,340::logUtils::47::dispatcher::(wrapper) Run and protect: getAllTasksStatuses, Return response: {'allTasksStatus': {}}
Thread-36481::DEBUG::2014-02-01 11:58:24,422::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10] flowID [1e41babe]
Thread-36481::INFO::2014-02-01 11:58:24,423::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None)
Thread-36481::ERROR::2014-02-01 11:58:24,425::dispatcher::70::Storage.Dispatcher.Protect::(run) (2, 'Sanlock resource read failure', 'No such file or directory')
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/dispatcher.py", line 62, in run
    result = ctask.prepare(self.func, *args, **kwargs)
  File "/usr/share/vdsm/storage/task.py", line 1159, in prepare
    raise self.error
SanlockException: (2, 'Sanlock resource read failure', 'No such file or directory')
Thread-36482::DEBUG::2014-02-01 11:58:26,300::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {}
Thread-36482::DEBUG::2014-02-01 11:58:26,349::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'OFFLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}}
Thread-67::DEBUG::2014-02-01 11:58:31,116::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-67::DEBUG::2014-02-01 11:58:31,138::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n364 bytes (364 B) copied, 0.000715841 s, 508 kB/s\n'; <rc> = 0
Thread-36485::DEBUG::2014-02-01 11:58:31,494::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {}
Thread-36485::DEBUG::2014-02-01 11:58:31,546::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'OFFLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}}
Thread-66::ERROR::2014-02-01 11:58:32,487::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain ff3a0c94-11dc-446d-b093-5f68ad81520d monitoring information
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/domainMonitor.py", line 215, in _monitorDomain
    self.domain.selftest()
  File "/usr/share/vdsm/storage/nfsSD.py", line 113, in selftest
    fileSD.FileStorageDomain.selftest(self)
  File "/usr/share/vdsm/storage/fileSD.py", line 552, in selftest
    self.oop.os.statvfs(self.domaindir)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 297, in callCrabRPCFunction
    *args, **kwargs)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 199, in callCrabRPCFunction
    raise err
OSError: [Errno 2] No such file or directory: '/rhev/data-center/mnt/glusterSD/172.16.1.5:_VM-DATA/ff3a0c94-11dc-446d-b093-5f68ad81520d'
Thread-36487::DEBUG::2014-02-01 11:58:34,512::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10]
Thread-36487::INFO::2014-02-01 11:58:34,513::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None)
Thread-36487::ERROR::2014-02-01 11:58:34,515::dispatcher::70::Storage.Dispatcher.Protect::(run) (2, 'Sanlock resource read failure', 'No such file or directory')
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/dispatcher.py", line 62, in run
    result = ctask.prepare(self.func, *args, **kwargs)
  File "/usr/share/vdsm/storage/task.py", line 1159, in prepare
    raise self.error
SanlockException: (2, 'Sanlock resource read failure', 'No such file or directory')
Thread-36489::DEBUG::2014-02-01 11:58:36,693::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {}
Thread-36489::DEBUG::2014-02-01 11:58:36,729::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'OFFLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}}
Thread-36490::INFO::2014-02-01 11:58:39,578::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-36490::INFO::2014-02-01 11:58:39,578::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'ff3a0c94-11dc-446d-b093-5f68ad81520d': {'code': 200, 'version': -1, 'acquired': False, 'delay': '0', 'lastCheck': '7.1', 'valid': False}, '436c344c-bc57-441b-b311-c9595c6039e1': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000715841', 'lastCheck': '8.4', 'valid': True}}
Thread-36490::ERROR::2014-02-01 11:58:39,582::API::1244::vds::(getStats) failed to retrieve Hosted Engine HA score
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1242, in getStats
    stats['haScore'] = haClient.HAClient().get_local_host_score()
  File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/client/client.py", line 204, in get_local_host_score
    path.get_metadata_path(self._config),
  File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/env/path.py", line 47, in get_metadata_path
    return os.path.join(get_domain_path(config_),
  File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/env/path.py", line 40, in get_domain_path
    .format(sd_uuid, parent))
Exception: path to storage domain 562f0160-7b80-42e2-b248-5754455c40fc not found in /rhev/data-center/mnt
Thread-67::DEBUG::2014-02-01 11:58:41,153::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-67::DEBUG::2014-02-01 11:58:41,175::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n364 bytes (364 B) copied, 0.000561439 s, 648 kB/s\n'; <rc> = 0
Thread-36492::DEBUG::2014-02-01 11:58:41,877::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {}
Thread-36492::DEBUG::2014-02-01 11:58:41,914::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'OFFLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}}
Thread-66::ERROR::2014-02-01 11:58:42,495::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain ff3a0c94-11dc-446d-b093-5f68ad81520d monitoring information
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/domainMonitor.py", line 215, in _monitorDomain
    self.domain.selftest()
  File "/usr/share/vdsm/storage/nfsSD.py", line 113, in selftest
    fileSD.FileStorageDomain.selftest(self)
  File "/usr/share/vdsm/storage/fileSD.py", line 552, in selftest
    self.oop.os.statvfs(self.domaindir)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 297, in callCrabRPCFunction
    *args, **kwargs)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 199, in callCrabRPCFunction
    raise err
OSError: [Errno 2] No such file or directory: '/rhev/data-center/mnt/glusterSD/172.16.1.5:_VM-DATA/ff3a0c94-11dc-446d-b093-5f68ad81520d'
Thread-36494::DEBUG::2014-02-01 11:58:44,603::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10]
Thread-36494::INFO::2014-02-01 11:58:44,604::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None)
Thread-36494::INFO::2014-02-01 11:58:44,652::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 35L}}
VM Channels Listener::DEBUG::2014-02-01 11:58:49,528::vmChannels::91::vds::(_handle_timeouts) Timeout on fileno 38.
Thread-67::DEBUG::2014-02-01 11:58:51,191::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-67::DEBUG::2014-02-01 11:58:51,214::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n364 bytes (364 B) copied, 0.000677361 s, 537 kB/s\n'; <rc> = 0
Thread-66::DEBUG::2014-02-01 11:58:52,519::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.1.5:_VM-DATA/ff3a0c94-11dc-446d-b093-5f68ad81520d/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-66::DEBUG::2014-02-01 11:58:52,543::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n505 bytes (505 B) copied, 0.000980518 s, 515 kB/s\n'; <rc> = 0
Thread-66::DEBUG::2014-02-01 11:58:52,553::domainMonitor::247::Storage.DomainMonitorThread::(_monitorDomain) Domain ff3a0c94-11dc-446d-b093-5f68ad81520d changed its status to Valid
Thread-36498::DEBUG::2014-02-01 11:58:52,555::misc::884::Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Emitting event
Thread-36498::DEBUG::2014-02-01 11:58:52,556::misc::894::Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Calling registered method `contEIOVms`
Thread-36498::DEBUG::2014-02-01 11:58:52,557::misc::904::Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Event emitted
Thread-36499::INFO::2014-02-01 11:58:52,560::clientIF::126::vds::(contEIOVms) vmContainerLock acquired
Thread-36499::INFO::2014-02-01 11:58:52,562::clientIF::133::vds::(contEIOVms) Cont vm 01d705e3-6b62-4796-b9e4-4de1c477401a in EIO
libvirtEventLoop::DEBUG::2014-02-01 11:58:52,614::vm::5098::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`01d705e3-6b62-4796-b9e4-4de1c477401a`::event Resumed detail 0 opaque None
libvirtEventLoop::DEBUG::2014-02-01 11:58:52,617::vm::5098::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`01d705e3-6b62-4796-b9e4-4de1c477401a`::event Resumed detail 0 opaque None
libvirtEventLoop::INFO::2014-02-01 11:58:52,619::vm::4507::vm.Vm::(_onAbnormalStop) vmId=`01d705e3-6b62-4796-b9e4-4de1c477401a`::abnormal vm stop device virtio-disk0 error eother
libvirtEventLoop::DEBUG::2014-02-01 11:58:52,619::vm::5098::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`01d705e3-6b62-4796-b9e4-4de1c477401a`::event Suspended detail 2 opaque None
Thread-3062::INFO::2014-02-01 11:58:54,554::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='ff3a0c94-11dc-446d-b093-5f68ad81520d', spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', imgUUID='a746f514-51eb-4926-80d5-545108438f01', volUUID='ba8685ec-668a-4780-88ba-0b114e1a7e7d', options=None)
Thread-36500::INFO::2014-02-01 11:58:54,888::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-36500::INFO::2014-02-01 11:58:54,889::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'ff3a0c94-11dc-446d-b093-5f68ad81520d': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000980518', 'lastCheck': '2.3', 'valid': True}, '436c344c-bc57-441b-b311-c9595c6039e1': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000677361', 'lastCheck': '3.7', 'valid': True}}
Thread-36500::ERROR::2014-02-01 11:58:54,891::API::1244::vds::(getStats) failed to retrieve Hosted Engine HA score
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1242, in getStats
    stats['haScore'] = haClient.HAClient().get_local_host_score()
  File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/client/client.py", line 204, in get_local_host_score
    path.get_metadata_path(self._config),
  File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/env/path.py", line 47, in get_metadata_path
    return os.path.join(get_domain_path(config_),
  File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/env/path.py", line 40, in get_domain_path
    .format(sd_uuid, parent))
Exception: path to storage domain 562f0160-7b80-42e2-b248-5754455c40fc not found in /rhev/data-center/mnt
Thread-67::DEBUG::2014-02-01 11:59:01,230::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-67::DEBUG::2014-02-01 11:59:01,254::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n364 bytes (364 B) copied, 0.00064947 s, 560 kB/s\n'; <rc> = 0
Thread-66::DEBUG::2014-02-01 11:59:02,588::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.1.5:_VM-DATA/ff3a0c94-11dc-446d-b093-5f68ad81520d/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-66::DEBUG::2014-02-01 11:59:02,665::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n505 bytes (505 B) copied, 0.000847941 s, 596 kB/s\n'; <rc> = 0
Thread-67::DEBUG::2014-02-01 11:59:11,270::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-67::DEBUG::2014-02-01 11:59:11,295::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n364 bytes (364 B) copied, 0.000565064 s, 644 kB/s\n'; <rc> = 0
Thread-66::DEBUG::2014-02-01 11:59:12,689::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.1.5:_VM-DATA/ff3a0c94-11dc-446d-b093-5f68ad81520d/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-66::DEBUG::2014-02-01 11:59:12,713::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n505 bytes (505 B) copied, 0.000763805 s, 661 kB/s\n'; <rc> = 0
VM Channels Listener::DEBUG::2014-02-01 11:59:19,570::vmChannels::91::vds::(_handle_timeouts) Timeout on fileno 38.
Thread-67::DEBUG::2014-02-01 11:59:21,312::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-67::DEBUG::2014-02-01 11:59:21,336::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n364 bytes (364 B) copied, 0.000624142 s, 583 kB/s\n'; <rc> = 0
Thread-66::DEBUG::2014-02-01 11:59:22,740::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.1.5:_VM-DATA/ff3a0c94-11dc-446d-b093-5f68ad81520d/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-66::DEBUG::2014-02-01 11:59:22,763::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n505 bytes (505 B) copied, 0.000799563 s, 632 kB/s\n'; <rc> = 0
Thread-67::DEBUG::2014-02-01 11:59:31,352::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-67::DEBUG::2014-02-01 11:59:31,376::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n364 bytes (364 B) copied, 0.000581444 s, 626 kB/s\n'; <rc> = 0
Thread-66::DEBUG::2014-02-01 11:59:32,786::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.1.5:_VM-DATA/ff3a0c94-11dc-446d-b093-5f68ad81520d/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-66::DEBUG::2014-02-01 11:59:32,810::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n505 bytes (505 B) copied, 0.00119589 s, 422 kB/s\n'; <rc> = 0
Thread-67::DEBUG::2014-02-01 11:59:41,393::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-67::DEBUG::2014-02-01 11:59:41,416::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n364 bytes (364 B) copied, 0.000622734 s, 585 kB/s\n'; <rc> = 0
Thread-66::DEBUG::2014-02-01 11:59:42,838::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.1.5:_VM-DATA/ff3a0c94-11dc-446d-b093-5f68ad81520d/dom_md/metadata bs=4096 count=1' (cwd None)
Thread-66::DEBUG::2014-02-01 11:59:42,862::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n505 bytes (505 B) copied, 0.00113122 s, 446 kB/s\n'; <rc> = 0
Thread-36502::DEBUG::2014-02-01 11:59:44,663::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {}
Thread-36502::DEBUG::2014-02-01 11:59:44,720::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}}
Thread-36503::DEBUG::2014-02-01 11:59:44,950::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10]
Thread-36503::INFO::2014-02-01 11:59:44,951::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None)
Thread-36503::INFO::2014-02-01 11:59:44,957::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'name': 'NextDC_M1', 'isoprefix': '/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 35, 'domains': 'ff3a0c94-11dc-446d-b093-5f68ad81520d:Active,436c344c-bc57-441b-b311-c9595c6039e1:Active', 'master_uuid': 'ff3a0c94-11dc-446d-b093-5f68ad81520d', 'version': '3', 
'spm_id': 1, 'type': 'GLUSTERFS', 'master_ver': 1}, 'dominfo': {'ff3a0c94-11dc-446d-b093-5f68ad81520d': {'status': 'Active', 'diskfree': '916172963840', 'isoprefix': '', 'alerts': [], 'disktotal': '982907879424', 'version': 3}, '436c344c-bc57-441b-b311-c9595c6039e1': {'status': 'Active', 'diskfree': '21406154752', 'isoprefix': '/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '25323634688', 'version': 0}}} >Thread-36504::DEBUG::2014-02-01 11:59:45,396::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10] >Thread-36504::INFO::2014-02-01 11:59:45,396::logUtils::44::dispatcher::(wrapper) Run and protect: getAllTasksInfo(spUUID=None, options=None) >Thread-36504::INFO::2014-02-01 11:59:45,397::logUtils::47::dispatcher::(wrapper) Run and protect: getAllTasksInfo, Return response: {'allTasksInfo': {}} >VM Channels Listener::DEBUG::2014-02-01 11:59:49,605::vmChannels::91::vds::(_handle_timeouts) Timeout on fileno 38. 
>Thread-36506::DEBUG::2014-02-01 11:59:50,166::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {} >Thread-36506::DEBUG::2014-02-01 11:59:50,205::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}} >Thread-67::DEBUG::2014-02-01 11:59:51,433::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct 
if=/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/dom_md/metadata bs=4096 count=1' (cwd None) >Thread-67::DEBUG::2014-02-01 11:59:51,525::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n364 bytes (364 B) copied, 0.000532603 s, 683 kB/s\n'; <rc> = 0 >Thread-66::DEBUG::2014-02-01 11:59:52,893::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.1.5:_VM-DATA/ff3a0c94-11dc-446d-b093-5f68ad81520d/dom_md/metadata bs=4096 count=1' (cwd None) >Thread-66::DEBUG::2014-02-01 11:59:52,959::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n505 bytes (505 B) copied, 0.000754688 s, 669 kB/s\n'; <rc> = 0 >Thread-3062::ERROR::2014-02-01 11:59:54,556::dispatcher::70::Storage.Dispatcher.Protect::(run) >Traceback (most recent call last): > File "/usr/share/vdsm/storage/dispatcher.py", line 62, in run > result = ctask.prepare(self.func, *args, **kwargs) > File "/usr/share/vdsm/storage/task.py", line 1159, in prepare > raise self.error >Timeout >Thread-3062::ERROR::2014-02-01 11:59:54,557::vm::3781::vm.Vm::(updateDriveVolume) vmId=`01d705e3-6b62-4796-b9e4-4de1c477401a`::Unable to update the volume ba8685ec-668a-4780-88ba-0b114e1a7e7d (domain: ff3a0c94-11dc-446d-b093-5f68ad81520d image: a746f514-51eb-4926-80d5-545108438f01) for the drive vda >Thread-36509::DEBUG::2014-02-01 11:59:55,354::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10] >Thread-36509::INFO::2014-02-01 11:59:55,355::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) >Thread-36510::DEBUG::2014-02-01 11:59:55,371::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {} >Thread-36509::INFO::2014-02-01 11:59:55,372::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: 
{'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 35L}} >Thread-36511::DEBUG::2014-02-01 11:59:55,392::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10] >Thread-36511::INFO::2014-02-01 11:59:55,393::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) >Thread-36511::INFO::2014-02-01 11:59:55,399::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'name': 'NextDC_M1', 'isoprefix': '/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 35, 'domains': 'ff3a0c94-11dc-446d-b093-5f68ad81520d:Active,436c344c-bc57-441b-b311-c9595c6039e1:Active', 'master_uuid': 'ff3a0c94-11dc-446d-b093-5f68ad81520d', 'version': '3', 'spm_id': 1, 'type': 'GLUSTERFS', 'master_ver': 1}, 'dominfo': {'ff3a0c94-11dc-446d-b093-5f68ad81520d': {'status': 'Active', 'diskfree': '916172963840', 'isoprefix': '', 'alerts': [], 'disktotal': '982907879424', 'version': 3}, '436c344c-bc57-441b-b311-c9595c6039e1': {'status': 'Active', 'diskfree': '21406154752', 'isoprefix': '/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '25323634688', 'version': 0}}} >Thread-36510::DEBUG::2014-02-01 11:59:55,412::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': 
{'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}} >Thread-36513::INFO::2014-02-01 12:00:00,361::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) >Thread-36513::INFO::2014-02-01 12:00:00,362::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'ff3a0c94-11dc-446d-b093-5f68ad81520d': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000754688', 'lastCheck': '7.4', 'valid': True}, '436c344c-bc57-441b-b311-c9595c6039e1': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000532603', 'lastCheck': '8.8', 'valid': True}} >Thread-36513::ERROR::2014-02-01 12:00:00,365::API::1244::vds::(getStats) failed to retrieve Hosted Engine HA score >Traceback (most recent call last): > File "/usr/share/vdsm/API.py", line 1242, in getStats > stats['haScore'] = haClient.HAClient().get_local_host_score() > File 
"/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/client/client.py", line 204, in get_local_host_score > path.get_metadata_path(self._config), > File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/env/path.py", line 47, in get_metadata_path > return os.path.join(get_domain_path(config_), > File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/env/path.py", line 40, in get_domain_path > .format(sd_uuid, parent)) >Exception: path to storage domain 562f0160-7b80-42e2-b248-5754455c40fc not found in /rhev/data-center/mnt >Thread-36515::DEBUG::2014-02-01 12:00:00,716::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {} >Thread-36515::DEBUG::2014-02-01 12:00:00,755::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 
'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}} >Thread-67::DEBUG::2014-02-01 12:00:01,541::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/dom_md/metadata bs=4096 count=1' (cwd None) >Thread-67::DEBUG::2014-02-01 12:00:01,650::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n364 bytes (364 B) copied, 0.000657188 s, 554 kB/s\n'; <rc> = 0 >Thread-66::DEBUG::2014-02-01 12:00:03,520::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.1.5:_VM-DATA/ff3a0c94-11dc-446d-b093-5f68ad81520d/dom_md/metadata bs=4096 count=1' (cwd None) >Thread-66::DEBUG::2014-02-01 12:00:03,544::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n505 bytes (505 B) copied, 0.00111769 s, 452 kB/s\n'; <rc> = 0 >Thread-36517::DEBUG::2014-02-01 12:00:05,473::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10] >Thread-36517::INFO::2014-02-01 12:00:05,474::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) >Thread-36517::INFO::2014-02-01 12:00:05,488::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 35L}} >Thread-36518::DEBUG::2014-02-01 12:00:05,496::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10] >Thread-36518::INFO::2014-02-01 12:00:05,497::logUtils::44::dispatcher::(wrapper) Run and protect: 
getStoragePoolInfo(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) >Thread-36518::INFO::2014-02-01 12:00:05,504::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'name': 'NextDC_M1', 'isoprefix': '/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 35, 'domains': 'ff3a0c94-11dc-446d-b093-5f68ad81520d:Active,436c344c-bc57-441b-b311-c9595c6039e1:Active', 'master_uuid': 'ff3a0c94-11dc-446d-b093-5f68ad81520d', 'version': '3', 'spm_id': 1, 'type': 'GLUSTERFS', 'master_ver': 1}, 'dominfo': {'ff3a0c94-11dc-446d-b093-5f68ad81520d': {'status': 'Active', 'diskfree': '916172963840', 'isoprefix': '', 'alerts': [], 'disktotal': '982907879424', 'version': 3}, '436c344c-bc57-441b-b311-c9595c6039e1': {'status': 'Active', 'diskfree': '21406154752', 'isoprefix': '/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '25323634688', 'version': 0}}} >Thread-36519::DEBUG::2014-02-01 12:00:05,949::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {} >Thread-36519::DEBUG::2014-02-01 12:00:05,987::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 
'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}} >Thread-36522::DEBUG::2014-02-01 12:00:11,141::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {} >Thread-36522::DEBUG::2014-02-01 12:00:11,253::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 
'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}} >Thread-67::DEBUG::2014-02-01 12:00:11,676::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/dom_md/metadata bs=4096 count=1' (cwd None) >Thread-67::DEBUG::2014-02-01 12:00:11,928::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n364 bytes (364 B) copied, 0.000645698 s, 564 kB/s\n'; <rc> = 0 >Thread-66::DEBUG::2014-02-01 12:00:13,585::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.1.5:_VM-DATA/ff3a0c94-11dc-446d-b093-5f68ad81520d/dom_md/metadata bs=4096 count=1' (cwd None) >Thread-66::DEBUG::2014-02-01 12:00:13,609::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n505 bytes (505 B) copied, 0.000749734 s, 674 kB/s\n'; <rc> = 0 >Thread-36524::DEBUG::2014-02-01 12:00:15,714::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10] >Thread-36524::INFO::2014-02-01 12:00:15,716::logUtils::44::dispatcher::(wrapper) Run and protect: 
getSpmStatus(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) >Thread-36524::INFO::2014-02-01 12:00:15,730::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 35L}} >Thread-36525::DEBUG::2014-02-01 12:00:15,740::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10] >Thread-36525::INFO::2014-02-01 12:00:15,740::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) >Thread-36525::INFO::2014-02-01 12:00:15,748::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'name': 'NextDC_M1', 'isoprefix': '/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 35, 'domains': 'ff3a0c94-11dc-446d-b093-5f68ad81520d:Active,436c344c-bc57-441b-b311-c9595c6039e1:Active', 'master_uuid': 'ff3a0c94-11dc-446d-b093-5f68ad81520d', 'version': '3', 'spm_id': 1, 'type': 'GLUSTERFS', 'master_ver': 1}, 'dominfo': {'ff3a0c94-11dc-446d-b093-5f68ad81520d': {'status': 'Active', 'diskfree': '916172963840', 'isoprefix': '', 'alerts': [], 'disktotal': '982907879424', 'version': 3}, '436c344c-bc57-441b-b311-c9595c6039e1': {'status': 'Active', 'diskfree': '21406154752', 'isoprefix': '/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '25323634688', 'version': 0}}} >Thread-36526::INFO::2014-02-01 12:00:16,011::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) >Thread-36526::INFO::2014-02-01 12:00:16,011::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'ff3a0c94-11dc-446d-b093-5f68ad81520d': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000749734', 
'lastCheck': '2.4', 'valid': True}, '436c344c-bc57-441b-b311-c9595c6039e1': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000645698', 'lastCheck': '4.1', 'valid': True}} >Thread-36526::ERROR::2014-02-01 12:00:16,035::API::1244::vds::(getStats) failed to retrieve Hosted Engine HA score >Traceback (most recent call last): > File "/usr/share/vdsm/API.py", line 1242, in getStats > stats['haScore'] = haClient.HAClient().get_local_host_score() > File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/client/client.py", line 204, in get_local_host_score > path.get_metadata_path(self._config), > File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/env/path.py", line 47, in get_metadata_path > return os.path.join(get_domain_path(config_), > File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/env/path.py", line 40, in get_domain_path > .format(sd_uuid, parent)) >Exception: path to storage domain 562f0160-7b80-42e2-b248-5754455c40fc not found in /rhev/data-center/mnt >Thread-36528::DEBUG::2014-02-01 12:00:16,406::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {} >Thread-36528::DEBUG::2014-02-01 12:00:16,449::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 
'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}} >VM Channels Listener::DEBUG::2014-02-01 12:00:19,666::vmChannels::91::vds::(_handle_timeouts) Timeout on fileno 38. >Thread-36530::DEBUG::2014-02-01 12:00:21,598::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {} flowID [1dd7b905] >Thread-36530::DEBUG::2014-02-01 12:00:21,637::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': 
'36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}} >Thread-67::DEBUG::2014-02-01 12:00:21,945::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/dom_md/metadata bs=4096 count=1' (cwd None) >Thread-67::DEBUG::2014-02-01 12:00:22,019::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n364 bytes (364 B) copied, 0.000643879 s, 565 kB/s\n'; <rc> = 0 >Thread-66::DEBUG::2014-02-01 12:00:23,637::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.1.5:_VM-DATA/ff3a0c94-11dc-446d-b093-5f68ad81520d/dom_md/metadata bs=4096 count=1' (cwd None) >Thread-66::DEBUG::2014-02-01 12:00:23,660::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n505 bytes (505 B) copied, 0.000713826 s, 707 kB/s\n'; <rc> = 0 >Thread-36533::DEBUG::2014-02-01 12:00:25,797::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10] >Thread-36533::INFO::2014-02-01 12:00:25,798::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) >Thread-36533::INFO::2014-02-01 
12:00:25,811::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 35L}} >Thread-36534::DEBUG::2014-02-01 12:00:25,819::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10] >Thread-36534::INFO::2014-02-01 12:00:25,820::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) >Thread-36534::INFO::2014-02-01 12:00:25,827::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'name': 'NextDC_M1', 'isoprefix': '/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 35, 'domains': 'ff3a0c94-11dc-446d-b093-5f68ad81520d:Active,436c344c-bc57-441b-b311-c9595c6039e1:Active', 'master_uuid': 'ff3a0c94-11dc-446d-b093-5f68ad81520d', 'version': '3', 'spm_id': 1, 'type': 'GLUSTERFS', 'master_ver': 1}, 'dominfo': {'ff3a0c94-11dc-446d-b093-5f68ad81520d': {'status': 'Active', 'diskfree': '916172963840', 'isoprefix': '', 'alerts': [], 'disktotal': '982907879424', 'version': 3}, '436c344c-bc57-441b-b311-c9595c6039e1': {'status': 'Active', 'diskfree': '21406154752', 'isoprefix': '/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '25323634688', 'version': 0}}} >Thread-36535::DEBUG::2014-02-01 12:00:26,791::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {} >Thread-36535::DEBUG::2014-02-01 12:00:26,833::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 
'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}} >Thread-36537::INFO::2014-02-01 12:00:31,336::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) >Thread-36537::INFO::2014-02-01 12:00:31,336::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'ff3a0c94-11dc-446d-b093-5f68ad81520d': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000713826', 'lastCheck': '7.7', 'valid': True}, '436c344c-bc57-441b-b311-c9595c6039e1': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000643879', 'lastCheck': '9.3', 'valid': True}} >Thread-36537::ERROR::2014-02-01 12:00:31,340::API::1244::vds::(getStats) failed to retrieve 
Hosted Engine HA score >Traceback (most recent call last): > File "/usr/share/vdsm/API.py", line 1242, in getStats > stats['haScore'] = haClient.HAClient().get_local_host_score() > File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/client/client.py", line 204, in get_local_host_score > path.get_metadata_path(self._config), > File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/env/path.py", line 47, in get_metadata_path > return os.path.join(get_domain_path(config_), > File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/env/path.py", line 40, in get_domain_path > .format(sd_uuid, parent)) >Exception: path to storage domain 562f0160-7b80-42e2-b248-5754455c40fc not found in /rhev/data-center/mnt >Thread-36539::DEBUG::2014-02-01 12:00:31,988::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {} >Thread-67::DEBUG::2014-02-01 12:00:32,023::domainMonitor::192::Storage.DomainMonitorThread::(_monitorDomain) Refreshing domain 436c344c-bc57-441b-b311-c9595c6039e1 >Thread-36539::DEBUG::2014-02-01 12:00:32,028::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 
'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}} >Thread-67::DEBUG::2014-02-01 12:00:32,034::fileSD::140::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1 >Thread-67::DEBUG::2014-02-01 12:00:32,036::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend >Thread-67::DEBUG::2014-02-01 12:00:32,047::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Iso', 'DESCRIPTION=ISOS', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=0', 'POOL_UUID=5849b030-626e-47cb-ad90-3ce782d831b3', 'REMOTE_PATH=engine.melb.example.net:/var/lib/exports/iso', 'ROLE=Regular', 'SDUUID=436c344c-bc57-441b-b311-c9595c6039e1', 'TYPE=NFS', 'VERSION=0', '_SHA_CKSUM=565367baaccc061ba62498d9ef0510a4acd42623'] >Thread-67::DEBUG::2014-02-01 12:00:32,050::fileSD::575::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images [] >Thread-67::INFO::2014-02-01 12:00:32,050::sd::374::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 436c344c-bc57-441b-b311-c9595c6039e1_imageNS already registered >Thread-67::INFO::2014-02-01 
12:00:32,051::sd::382::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 436c344c-bc57-441b-b311-c9595c6039e1_volumeNS already registered >Thread-67::DEBUG::2014-02-01 12:00:32,062::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/dom_md/metadata bs=4096 count=1' (cwd None) >Thread-67::DEBUG::2014-02-01 12:00:32,092::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n364 bytes (364 B) copied, 0.000619694 s, 587 kB/s\n'; <rc> = 0 >Thread-36542::DEBUG::2014-02-01 12:00:33,485::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call vmCont with ('01d705e3-6b62-4796-b9e4-4de1c477401a',) {} flowID [3c100dfb] >libvirtEventLoop::DEBUG::2014-02-01 12:00:33,519::vm::5098::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`01d705e3-6b62-4796-b9e4-4de1c477401a`::event Resumed detail 0 opaque None >libvirtEventLoop::DEBUG::2014-02-01 12:00:33,525::vm::5098::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`01d705e3-6b62-4796-b9e4-4de1c477401a`::event Resumed detail 0 opaque None >Thread-36542::DEBUG::2014-02-01 12:00:33,526::BindingXMLRPC::977::vds::(wrapper) return vmCont with {'status': {'message': 'Done', 'code': 0}, 'output': ['']} >libvirtEventLoop::INFO::2014-02-01 12:00:33,528::vm::4507::vm.Vm::(_onAbnormalStop) vmId=`01d705e3-6b62-4796-b9e4-4de1c477401a`::abnormal vm stop device virtio-disk0 error eother >libvirtEventLoop::DEBUG::2014-02-01 12:00:33,528::vm::5098::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`01d705e3-6b62-4796-b9e4-4de1c477401a`::event Suspended detail 2 opaque None >Thread-66::DEBUG::2014-02-01 12:00:33,684::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.1.5:_VM-DATA/ff3a0c94-11dc-446d-b093-5f68ad81520d/dom_md/metadata bs=4096 count=1' (cwd None) >Thread-66::DEBUG::2014-02-01 
12:00:33,805::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n505 bytes (505 B) copied, 0.000790498 s, 639 kB/s\n'; <rc> = 0 >Thread-36544::DEBUG::2014-02-01 12:00:35,880::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10] >Thread-36544::INFO::2014-02-01 12:00:35,882::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) >Thread-36544::INFO::2014-02-01 12:00:35,897::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 35L}} >Thread-36545::DEBUG::2014-02-01 12:00:35,906::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10] >Thread-36545::INFO::2014-02-01 12:00:35,907::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) >Thread-36545::INFO::2014-02-01 12:00:35,914::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'name': 'NextDC_M1', 'isoprefix': '/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 35, 'domains': 'ff3a0c94-11dc-446d-b093-5f68ad81520d:Active,436c344c-bc57-441b-b311-c9595c6039e1:Active', 'master_uuid': 'ff3a0c94-11dc-446d-b093-5f68ad81520d', 'version': '3', 'spm_id': 1, 'type': 'GLUSTERFS', 'master_ver': 1}, 'dominfo': {'ff3a0c94-11dc-446d-b093-5f68ad81520d': {'status': 'Active', 'diskfree': '916172963840', 'isoprefix': '', 'alerts': [], 'disktotal': '982907879424', 'version': 3}, '436c344c-bc57-441b-b311-c9595c6039e1': {'status': 'Active', 'diskfree': '21406154752', 'isoprefix': '/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '25323634688', 
'version': 0}}} >Thread-36546::DEBUG::2014-02-01 12:00:37,181::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {} flowID [43a03935] >Thread-36546::DEBUG::2014-02-01 12:00:37,221::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}} >Thread-67::DEBUG::2014-02-01 12:00:42,108::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct 
if=/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/dom_md/metadata bs=4096 count=1' (cwd None) >Thread-67::DEBUG::2014-02-01 12:00:42,201::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n364 bytes (364 B) copied, 0.000530856 s, 686 kB/s\n'; <rc> = 0 >Thread-36549::DEBUG::2014-02-01 12:00:42,373::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {} >Thread-36549::DEBUG::2014-02-01 12:00:42,411::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 
'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}} >Thread-66::DEBUG::2014-02-01 12:00:43,835::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.1.5:_VM-DATA/ff3a0c94-11dc-446d-b093-5f68ad81520d/dom_md/metadata bs=4096 count=1' (cwd None) >Thread-66::DEBUG::2014-02-01 12:00:44,303::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n505 bytes (505 B) copied, 0.000822357 s, 614 kB/s\n'; <rc> = 0 >Thread-36551::DEBUG::2014-02-01 12:00:44,838::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call tasksList with () {} >Thread-36551::ERROR::2014-02-01 12:00:44,872::BindingXMLRPC::986::vds::(wrapper) vdsm exception occured >Traceback (most recent call last): > File "/usr/share/vdsm/BindingXMLRPC.py", line 973, in wrapper > res = f(*args, **kwargs) > File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper > rv = func(*args, **kwargs) > File "/usr/share/vdsm/gluster/api.py", line 306, in tasksList > status = self.svdsmProxy.glusterTasksList(taskIds) > File "/usr/share/vdsm/supervdsm.py", line 50, in __call__ > return callMethod() > File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda> > **kwargs) > File "<string>", line 2, in glusterTasksList > File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod > raise convert_to_error(kind, result) >GlusterCmdExecFailedException: Command execution failed >error: tasks is not a valid status option >Usage: volume status [all | <VOLNAME> [nfs|shd|<BRICK>]] [detail|clients|mem|inode|fd|callpool] >return code: 1 >Thread-36552::DEBUG::2014-02-01 12:00:45,992::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10] >Thread-36552::INFO::2014-02-01 12:00:45,993::logUtils::44::dispatcher::(wrapper) Run and protect: 
getSpmStatus(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) >Thread-36552::INFO::2014-02-01 12:00:46,010::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 35L}} >Thread-36553::DEBUG::2014-02-01 12:00:46,018::BindingXMLRPC::159::vds::(wrapper) client [172.16.0.10] >Thread-36553::INFO::2014-02-01 12:00:46,019::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', options=None) >Thread-36553::INFO::2014-02-01 12:00:46,026::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'name': 'NextDC_M1', 'isoprefix': '/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 35, 'domains': 'ff3a0c94-11dc-446d-b093-5f68ad81520d:Active,436c344c-bc57-441b-b311-c9595c6039e1:Active', 'master_uuid': 'ff3a0c94-11dc-446d-b093-5f68ad81520d', 'version': '3', 'spm_id': 1, 'type': 'GLUSTERFS', 'master_ver': 1}, 'dominfo': {'ff3a0c94-11dc-446d-b093-5f68ad81520d': {'status': 'Active', 'diskfree': '916172963840', 'isoprefix': '', 'alerts': [], 'disktotal': '982907879424', 'version': 3}, '436c344c-bc57-441b-b311-c9595c6039e1': {'status': 'Active', 'diskfree': '21406154752', 'isoprefix': '/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '25323634688', 'version': 0}}} >Thread-36554::INFO::2014-02-01 12:00:46,642::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) >Thread-36554::INFO::2014-02-01 12:00:46,642::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'ff3a0c94-11dc-446d-b093-5f68ad81520d': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000822357', 
'lastCheck': '2.3', 'valid': True}, '436c344c-bc57-441b-b311-c9595c6039e1': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000530856', 'lastCheck': '4.4', 'valid': True}} >Thread-36554::ERROR::2014-02-01 12:00:46,646::API::1244::vds::(getStats) failed to retrieve Hosted Engine HA score >Traceback (most recent call last): > File "/usr/share/vdsm/API.py", line 1242, in getStats > stats['haScore'] = haClient.HAClient().get_local_host_score() > File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/client/client.py", line 204, in get_local_host_score > path.get_metadata_path(self._config), > File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/env/path.py", line 47, in get_metadata_path > return os.path.join(get_domain_path(config_), > File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/env/path.py", line 40, in get_domain_path > .format(sd_uuid, parent)) >Exception: path to storage domain 562f0160-7b80-42e2-b248-5754455c40fc not found in /rhev/data-center/mnt >Thread-36556::DEBUG::2014-02-01 12:00:47,584::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {} >Thread-36556::DEBUG::2014-02-01 12:00:47,623::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 
'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}} >VM Channels Listener::DEBUG::2014-02-01 12:00:49,821::vmChannels::91::vds::(_handle_timeouts) Timeout on fileno 38. >Thread-67::DEBUG::2014-02-01 12:00:52,218::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/engine.melb.example.net:_var_lib_exports_iso/436c344c-bc57-441b-b311-c9595c6039e1/dom_md/metadata bs=4096 count=1' (cwd None) >Thread-67::DEBUG::2014-02-01 12:00:52,241::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n364 bytes (364 B) copied, 0.000648526 s, 561 kB/s\n'; <rc> = 0 >Thread-36558::DEBUG::2014-02-01 12:00:52,793::BindingXMLRPC::970::vds::(wrapper) client [172.16.0.10]::call volumesList with () {} >Thread-36558::DEBUG::2014-02-01 12:00:53,036::BindingXMLRPC::977::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'HOSTED-ENGINE': {'transportType': ['TCP'], 'uuid': '224dc5cc-3ce9-4db2-84dc-4694c8bd6759', 'bricks': ['gs01.melb.example.net:/data1/hosted-engine', 'gs02.melb.example.net:/data1/hosted-engine'], 'volumeName': 'HOSTED-ENGINE', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 
'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}, 'VM-DATA': {'transportType': ['TCP'], 'uuid': 'f8e51af1-a0ca-411b-9738-c3e673d6c4e3', 'bricks': ['gs01.melb.example.net:/data1/vm-data', 'gs02.melb.example.net:/data1/vm-data'], 'volumeName': 'VM-DATA', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '172.16.*.*', 'performance.cache-size': '1GB', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}} >Thread-66::DEBUG::2014-02-01 12:00:54,326::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/rhev/data-center/mnt/glusterSD/172.16.1.5:_VM-DATA/ff3a0c94-11dc-446d-b093-5f68ad81520d/dom_md/metadata bs=4096 count=1' (cwd None) >Thread-66::DEBUG::2014-02-01 12:00:54,347::fileSD::225::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n505 bytes (505 B) copied, 0.000917136 s, 551 kB/s\n'; <rc> = 0 >Thread-3062::INFO::2014-02-01 12:00:54,668::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='ff3a0c94-11dc-446d-b093-5f68ad81520d', spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', imgUUID='a746f514-51eb-4926-80d5-545108438f01', volUUID='ba8685ec-668a-4780-88ba-0b114e1a7e7d', options=None) 
>Thread-3062::INFO::2014-02-01 12:00:54,674::logUtils::47::dispatcher::(wrapper) Run and protect: getVolumeSize, Return response: {'truesize': '2795319296', 'apparentsize': '53687091200'}