Bug 1308948 - Recovery of a vm with floppy disk payload reports failure
Status: CLOSED WONTFIX
Product: vdsm
Classification: oVirt
Component: Core
Version: 4.17.19
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ovirt-4.2.0
Target Release: ---
Assigned To: bugs@ovirt.org
QA Contact: meital avital
Depends On:
Blocks:
Reported: 2016-02-16 09:29 EST by sefi litmanovich
Modified: 2017-07-14 08:43 EDT
CC: 3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-07-14 08:43:27 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Virt
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
rule-engine: ovirt-4.2?
rule-engine: planning_ack?
rule-engine: devel_ack?
rule-engine: testing_ack?


Attachments

  None
Description sefi litmanovich 2016-02-16 09:29:35 EST
Engine: rhevm-3.6.3-0.1.el6.noarch
Host: vdsm-4.17.19-0.el7ev.noarch

Steps to reproduce:
1. Start 2 Windows VMs with a floppy disk payload containing sysprep.
2. After the VMs start and reach the Up state, restart the vdsmd service on the host.

Actual Result:

Recovery is reported as a failure, and the logs reveal the reason: a failure to make the floppy filesystem. In practice the VM is recovered and there is no problem with the floppy disk (it is accessible inside the machine and contains the sysprep unattend XML).

[root@{hostname}]# tail -F /var/log/vdsm/vdsm.log | egrep -i --color "recovery"
clientIFinit::DEBUG::2016-02-10 14:11:42,613::clientIF::462::vds::(_recoverExistingVms) recovery: started
clientIFinit::INFO::2016-02-10 14:11:42,675::clientIF::479::vds::(_recoverExistingVms) recovery [1:1/2]: recovered domain 0a0bfec9-713c-4b6a-bdab-bb1b762ed432 from libvirt
clientIFinit::INFO::2016-02-10 14:11:42,679::clientIF::479::vds::(_recoverExistingVms) recovery [1:2/2]: recovered domain 7d8429f5-5482-4a64-bb18-1c300ca1bf5c from libvirt
clientIFinit::INFO::2016-02-10 14:11:42,679::clientIF::521::vds::(_recoverExistingVms) recovery: waiting for 2 domains to go up
clientIFinit::INFO::2016-02-10 14:11:43,681::clientIF::532::vds::(_recoverExistingVms) recovery: waiting for storage pool to go up
clientIFinit::INFO::2016-02-10 14:11:48,684::clientIF::532::vds::(_recoverExistingVms) recovery: waiting for storage pool to go up
clientIFinit::INFO::2016-02-10 14:11:53,687::clientIF::544::vds::(_recoverExistingVms) recovery [1/2]: preparing paths for domain 7d8429f5-5482-4a64-bb18-1c300ca1bf5c
clientIFinit::ERROR::2016-02-10 14:11:53,705::clientIF::550::vds::(_recoverExistingVms) recovery [1/2]: failed for vm 7d8429f5-5482-4a64-bb18-1c300ca1bf5c
clientIFinit::INFO::2016-02-10 14:11:53,706::clientIF::544::vds::(_recoverExistingVms) recovery [2/2]: preparing paths for domain 0a0bfec9-713c-4b6a-bdab-bb1b762ed432
clientIFinit::ERROR::2016-02-10 14:11:53,711::clientIF::550::vds::(_recoverExistingVms) recovery [2/2]: failed for vm 0a0bfec9-713c-4b6a-bdab-bb1b762ed432
clientIFinit::INFO::2016-02-10 14:11:53,711::clientIF::553::vds::(_recoverExistingVms) recovery: completed in 11s

The full trace of the error:

clientIFinit::DEBUG::2016-02-10 14:11:53,708::task::993::Storage.TaskManager.Task::(_decref) Task=`585a4c11-59ee-455f-923b-877cf38cd5fc`::ref 0 aborting False
clientIFinit::ERROR::2016-02-10 14:11:53,711::clientIF::550::vds::(_recoverExistingVms) recovery [2/2]: failed for vm 0a0bfec9-713c-4b6a-bdab-bb1b762ed432
Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 546, in _recoverExistingVms
    vm_obj.devSpecMapFromConf()[hwclass.DISK])
  File "/usr/share/vdsm/virt/vm.py", line 780, in preparePaths
    drive['path'] = self.cif.prepareVolumePath(drive, self.id)
  File "/usr/share/vdsm/clientIF.py", line 349, in prepareVolumePath
    vmId, device, params['vmPayload'])
  File "/usr/share/vdsm/clientIF.py", line 393, in _prepareVolumePathFromPayload
    return func(vmId, payload['file'], payload.get('volId'))
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in mkFloppyFs
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
OSError: [Errno 5] could not create floppy file: code 1, out mkfs.fat 3.0.20 (12 Jun 2013)

err mkfs.msdos: unable to create /var/run/vdsm/payload/0a0bfec9-713c-4b6a-bdab-bb1b762ed432.9bc736a4fe83370f41b3048eb6e0456b.img
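The mkfs error is consistent with the payload image being created exclusively: the image file under /var/run/vdsm/payload/ already exists from before the restart (the running VM is still using it), so recreating it during recovery fails even though the VM and its floppy are fine. A minimal sketch of that behaviour, assuming exclusive creation of the image file (hypothetical helper name, not vdsm's actual code):

```python
import errno
import os


def make_payload_image(path):
    """Hypothetical sketch of the payload-image creation step.

    The image file is opened with O_EXCL, so a stale file left over
    from before the vdsm restart makes the call fail with a
    "could not create floppy file" error, even though the running VM
    still has a working floppy.
    """
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o640)
    except OSError as e:
        if e.errno == errno.EEXIST:
            # Mirrors the "could not create floppy file" error in the trace.
            raise OSError(errno.EIO, "could not create floppy file: %s" % path)
        raise
    os.close(fd)
    return path
```

Under this assumption, preparing paths for an already-running VM fails until the stale image is removed, which matches the "recovery [N/2]: failed" lines above while the VM itself keeps running.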

Expected result:
The VM is recovered successfully and no failure is reported in the engine log.
Comment 1 Michal Skrivanek 2016-12-21 04:09:13 EST
The bug was not addressed in time for 4.1. Postponing to 4.2.
Comment 2 Tomas Jelinek 2017-07-14 08:43:27 EDT
It works properly; there is only a nasty error message in the logs.

Closing due to capacity.

Please feel free to reopen if it is an important issue for your use case.
