Bug 985556 - vdsm: failure to move disk with 'truesize' error in vdsm will show the same exit message in event log
Summary: vdsm: failure to move disk with 'truesize' error in vdsm will show the same exit message in event log
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.1.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.2.2
Assignee: Federico Simoncelli
QA Contact: Aharon Canan
URL:
Whiteboard: storage
Depends On: 883858
Blocks:
 
Reported: 2013-07-17 18:40 UTC by Idith Tal-Kohen
Modified: 2018-12-02 17:32 UTC
CC List: 21 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, failure to move a disk produced a 'truesize' exit message, which was not informative. Now, failure to move a disk produces a more helpful error message explaining that the volume is corrupted or missing.
Clone Of: 883858
Environment:
Last Closed: 2013-08-13 16:18:26 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHSA-2013:1155 (SHIPPED_LIVE): Moderate: rhev 3.2.2 - vdsm security and bug fix update (last updated 2013-08-21 21:07:13 UTC)
oVirt gerrit 13529

Comment 3 Aharon Canan 2013-08-01 13:17:35 UTC
Verified using sf19 (although 'truesize' still appears - see vdsm.log below):
Volume 1f761569-4135-439d-9fb8-0f7f7b19181a is corrupted or missing

verification steps - 
====================
1. create DC based on NFS
2. create VM without starting it.

On the NFS folder (see the sketch after these steps) -
  3. mv <voluuid> <voluuid>.bak
  4. mv <voluuid>.meta <voluuid>.meta.bak
  5. mv <voluuid>.lease <voluuid>.lease.bak
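
A minimal Python sketch of steps 3-5; the image directory path and the volume UUID below are hypothetical placeholders, not values taken from this bug:

  # Sketch only: move the volume, its metadata and its lease out of the way,
  # mirroring the mv commands above. IMG_DIR and VOL_UUID are placeholders.
  import os

  IMG_DIR = "/path/to/nfs/export/<sduuid>/images/<imguuid>"  # placeholder
  VOL_UUID = "<voluuid>"                                      # placeholder

  for suffix in ("", ".meta", ".lease"):
      src = os.path.join(IMG_DIR, VOL_UUID + suffix)
      os.rename(src, src + ".bak")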

from vdsm.log
=============
Thread-8281::ERROR::2013-08-01 16:07:24,910::dispatcher::69::Storage.Dispatcher.Protect::(run) [Errno 2] No such file or directory: '/rhev/data-center/8ce555d1-6142-472a-8379-0daadcb11a86/58d36645-018d-44cf-904b-949ccc63ad0f/images/773e001b-8b5e-4457-80d5-ebed15313658/1f761569-4135-439d-9fb8-0f7f7b19181a'
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/dispatcher.py", line 61, in run
    result = ctask.prepare(self.func, *args, **kwargs)
  File "/usr/share/vdsm/storage/task.py", line 1159, in prepare
    raise self.error
OSError: [Errno 2] No such file or directory: '/rhev/data-center/8ce555d1-6142-472a-8379-0daadcb11a86/58d36645-018d-44cf-904b-949ccc63ad0f/images/773e001b-8b5e-4457-80d5-ebed15313658/1f761569-4135-439d-9fb8-0f7f7b19181a'
Thread-8281::ERROR::2013-08-01 16:07:24,916::vm::412::vm.Vm::(_normalizeVdsmImg) vmId=`2a192a7f-5ea6-4a28-b085-8e4d12d8594e`::Unable to get volume size for 1f761569-4135-439d-9fb8-0f7f7b19181a
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 408, in _normalizeVdsmImg
    drv['truesize'] = res['truesize']
KeyError: 'truesize'
Thread-8281::DEBUG::2013-08-01 16:07:24,996::vm::684::vm.Vm::(_startUnderlyingVm) vmId=`2a192a7f-5ea6-4a28-b085-8e4d12d8594e`::_ongoingCreations released
Thread-8281::ERROR::2013-08-01 16:07:24,996::vm::710::vm.Vm::(_startUnderlyingVm) vmId=`2a192a7f-5ea6-4a28-b085-8e4d12d8594e`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 670, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 1457, in _run
    devices = self.buildConfDevices()
  File "/usr/share/vdsm/vm.py", line 507, in buildConfDevices
    self._normalizeVdsmImg(drv)
  File "/usr/share/vdsm/vm.py", line 414, in _normalizeVdsmImg
    drv['volumeID'])
RuntimeError: Volume 1f761569-4135-439d-9fb8-0f7f7b19181a is corrupted or missing
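
The traceback above shows the pattern the fix introduces: the missing 'truesize' key no longer escapes as a bare KeyError; a RuntimeError with a readable message is raised instead. A minimal sketch of that pattern, assuming a hypothetical get_volume_size() helper rather than vdsm's actual API:

  # Sketch of the error-handling pattern only, not vdsm's actual code.
  def normalize_drive(drv, get_volume_size):
      # Ask storage for the volume size; on a corrupted or missing volume
      # the result will not contain a 'truesize' entry.
      res = get_volume_size(drv['domainID'], drv['poolID'],
                            drv['imageID'], drv['volumeID'])
      try:
          drv['truesize'] = res['truesize']
      except KeyError:
          # Surface a readable message instead of KeyError: 'truesize'.
          raise RuntimeError("Volume %s is corrupted or missing"
                             % drv['volumeID'])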

Comment 5 errata-xmlrpc 2013-08-13 16:18:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-1155.html

