Description of problem:
Failed to add a disk to a virtual machine as the link does not exist on the host
Version-Release number of selected component (if applicable):
How reproducible:
Reproduced this 3 times in succession
Steps to Reproduce:
1. Via the RHEV-M manager - Virtual Machines - Disks - Add new disk
2. Size 50G, preallocated, NFS storage domain - OK
3. Disk creation fails
Actual results:
The disk is not created; the link to the mount point on the host does not exist.

Expected results:
The disk should be created.

Engine log excerpt:
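As a quick sanity check on the host, the presence of the storage-domain links can be verified programmatically. This is a minimal illustrative sketch, not vdsm code; the helper name and the fake layout in the demo are assumptions, and real hosts would use the /rhev/data-center tree:

```python
import os

def missing_domain_links(sp_uuid, sd_uuids, root="/rhev/data-center"):
    """Return the storage-domain UUIDs whose symlink is absent under the
    data-center directory on this host (illustrative helper, not vdsm code)."""
    pool_dir = os.path.join(root, sp_uuid)
    return [sd for sd in sd_uuids
            if not os.path.islink(os.path.join(pool_dir, sd))]

if __name__ == "__main__":
    # Demo against a fake layout so it runs anywhere (real hosts: /rhev/data-center)
    import tempfile
    root = tempfile.mkdtemp()
    sp = "pool-uuid"
    os.makedirs(os.path.join(root, sp))
    target = tempfile.mkdtemp()
    os.symlink(target, os.path.join(root, sp, "domain-present"))
    print(missing_domain_links(sp, ["domain-present", "domain-missing"], root=root))
    # prints ['domain-missing']
```

An empty result means every expected domain link is in place for the pool.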
2014-04-24 18:21:49,112 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (DefaultQuartzScheduler_Worker-13) CommandAsyncTask::EndActionIfNecessary: All tasks of command 6a6ce499-9b58-40cb-a0fc-d75a1e5bff77 has ended -> executing endAction
2014-04-24 18:21:49,112 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (DefaultQuartzScheduler_Worker-13) CommandAsyncTask::endAction: Ending action for 1 tasks (command ID: 6a6ce499-9b58-40cb-a0fc-d75a1e5bff77): calling endAction .
2014-04-24 18:21:49,112 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (org.ovirt.thread.pool-4-thread-23) CommandAsyncTask::EndCommandAction [within thread] context: Attempting to endAction AddDisk, executionIndex: 0
2014-04-24 18:21:49,122 ERROR [org.ovirt.engine.core.bll.AddDiskCommand] (org.ovirt.thread.pool-4-thread-23) [1a306cc3] Ending command with failure
2014-04-24 18:21:49,127 ERROR [org.ovirt.engine.core.bll.AddImageFromScratchCommand] (org.ovirt.thread.pool-4-thread-23) [2f3f15e6] Ending command with failure: org.ovirt.engine.core.bll.AddImageFromScratchCommand
2014-04-24 18:21:49,343 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-4-thread-23) Correlation ID: 1a306cc3, Job ID: e1a04e0f-ac43-45cc-a01d-64d010c03626, Call Stack: null, Custom Event ID: -1, Message: Operation Add-Disk failed to complete.
Created attachment 890176 [details]
vdsm and engine logs
Kevin, the correlation ID in the snippet you provided is missing from the VDSM log.
Can you please provide the full log?
Fede, we're getting too many bugs about missing links (e.g. bug 1069772).
Can you please take a look?
(In reply to Allon Mureinik from comment #3)
> Fede, we're getting too many bugs about missing links (e.g. bug 1069772).
More appropriately, this seems like a dup of bug 1086210?
Created attachment 890241 [details]
full vdsm log as requested
Added the full vdsm log that was requested.
It is very important to know if this is a regression or not. Can you please reproduce this on 3.3? Please contact me so that we can verify this together. Thanks.
Probably a duplicate of bug 1086210
(In reply to Federico Simoncelli from comment #7)
> Probably a duplicate of bug 1086210
Moving to MODIFIED based on this statement.
I was not able to reproduce this defect in 3.3. This does not mean that the defect doesn't exist in 3.3, as I do not have the specific environment in which this problem occurs.
Verified according to steps from https://bugzilla.redhat.com/show_bug.cgi?id=1086210 :
2 NFS domains in DC:
1) Created a VM with a disk located on the master domain, started it
2) Blocked connectivity from the SPM to the master domain, waited for reconstruct to take place
3) Once the other domain took master, destroyed the VM
4) Resumed connectivity to the first domain
5) Started the VM
The VM started normally, and the link to the storage domain's mount re-appeared under /rhev/data-center/<SPUUID>. Tested 3 times.
Thread-220::INFO::2014-05-18 15:22:27,387::sp::1113::Storage.StoragePool::(_linkStorageDomain) Linking /rhev/data-center/mnt/lion.qa.lab.tlv.redhat.com:_export_elad_6/4b73f56c-a54a-4f81-b9b2-010cc1b5904e to /rhev/data-center/4aa2760a-c779-4b5c-93aa-8aafd334aeb1/4b73f56c-a54a-4f81-b9b2-010cc1b5904e
As this scenario didn't cause the issue of missing links to reproduce, I'm moving this bug to VERIFIED.
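For context, the "Linking ... to ..." log line above corresponds to vdsm recreating the storage-domain symlink during pool connection. A rough sketch of that operation follows; this is a simplification written for illustration, not the actual vdsm implementation, and the function name is assumed:

```python
import errno
import os

def link_storage_domain(mount_dir, pool_dir, sd_uuid):
    """Point <pool_dir>/<sd_uuid> at the domain's mount directory,
    replacing any stale link (simplified sketch, not vdsm's code)."""
    link = os.path.join(pool_dir, sd_uuid)
    try:
        os.unlink(link)  # remove a stale or dangling link if one exists
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise
    os.symlink(mount_dir, link)
    return link
```

Replacing any existing link before symlinking is what lets the link "re-appear" correctly after a reconstruct, even if a dangling link was left behind.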
Verified using av9.1
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.