Red Hat Bugzilla – Bug 1257203
vdsm fails to create symbolic links
Last modified: 2016-02-10 15:11:50 EST
Description of problem:
The logical volumes for particular VM disk images exist, but vdsm does not create the symlinks to them.
Version-Release number of selected component (if applicable):
Red Hat Enterprise Virtualization Hypervisor release 6.6 (20150603.0.el6ev)
Steps to Reproduce:
1. Recover VM from a broken snapshot state
2. Attempt to check images with qemu-img
Actual results:
qemu-img check failed because the symlinks to the images were not created under /rhev/data-center (a sketch of the failing check follows below).

Expected results:
All images in storage domain accessible via symlinks
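For illustration, the failing check looked roughly like this; every UUID below is a placeholder, not one of the real IDs from this case:

  # Expected link path on a block storage domain (placeholder IDs):
  ls -l /rhev/data-center/<spUUID>/<sdUUID>/images/<imgUUID>/<volUUID>
  # -> "No such file or directory", because the link was never created
  qemu-img check /rhev/data-center/<spUUID>/<sdUUID>/images/<imgUUID>/<volUUID>
  # -> also fails to open the path, even though the backing LV exists:
  lvs <sdUUID>/<volUUID>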
(In reply to Allan Voss from comment #0)
> Steps to Reproduce:
> 1. Recover VM from a broken snapshot state
We need more details - what is a broken snapshot state, and how did you
get into this state?
> 2. Attempt to check images with qemu-img
> Expected results:
> All images in storage domain accessible via symlinks
Symlinks are created when preparing an image and removed when tearing down
an image, so they are not expected to exist for volumes that are not in use.
Your expected result is incorrect.
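As a rough sketch of that flow (all UUIDs are placeholders, and the exact
argument order should be confirmed against the vdsClient help on your vdsm
version):

  # prepareImage activates the LV and creates the links under /rhev/data-center:
  vdsClient -s 0 prepareImage <sdUUID> <spUUID> <imgUUID> <volUUID>
  # teardownImage deactivates the LV and removes the links again:
  vdsClient -s 0 teardownImage <sdUUID> <spUUID> <imgUUID> <volUUID>

So missing links for a VM that is not running are the normal state, not a bug
by themselves.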
Can you attach vdsm log showing the flow until you got this error?
Also, does it happen on RHEV-H only or can it be reproduced on RHEL-H as well?
The customer doesn't have any RHEL hypervisors, and doesn't have the resources to attempt to build one at the moment. Such a test would have to wait.
The snapshot was in BROKEN state because the symlinks weren't there. The engine was marking the snapshot as BROKEN in the database because it was getting 'bad volume specification' errors when attempting to start the VM.
I was attempting to check the images with qemu-img manually to find out if the problem was image corruption, and this is what led me to discover that there were no symlinks.
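Roughly, the direct check looked like this (placeholder names; on a block storage domain the VG name is the storage domain UUID and the LV name is the volume UUID):

  # Activate the LV so the device node exists (placeholder IDs):
  lvchange -ay <sdUUID>/<volUUID>
  # Check the image directly on the device node, bypassing the missing symlink:
  qemu-img check /dev/<sdUUID>/<volUUID>
  # Deactivate it again when finished:
  lvchange -an <sdUUID>/<volUUID>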
This is an automated message. oVirt 3.6.0 RC3 has been released, and GA is targeted for next week, Nov 4th 2015.
Please review this bug and if not a blocker, please postpone to a later release.
All bugs not postponed by the GA release will be automatically re-targeted to:
- 3.6.1 if severity >= high
- 4.0 if severity < high
I'm closing this bug because we don't have enough data and the customer case is closed as well.
Please re-open if it becomes necessary.