Bug 1497117 - live storage migration - Failed to MergeVDS, error = General Exception: ("('Unable to find matching XML for device %s', 'virtio-disk0')
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.2.0
Target Release: ---
Assignee: Ala Hino
QA Contact: Avihai
URL:
Whiteboard:
Depends On: 1497170
Blocks:
 
Reported: 2017-09-29 08:55 UTC by Avihai
Modified: 2017-12-20 11:32 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-20 11:32:02 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-4.2+


Attachments
engine, vdsm log (8.98 MB, application/x-gzip)
2017-09-29 08:55 UTC, Avihai

Description Avihai 2017-09-29 08:55:02 UTC
Created attachment 1332287
engine, vdsm log

Description of problem:
While running live storage migration, I encounter the error: Failed to MergeVDS, error = General Exception: ("('Unable to find matching XML for device %s', 'virtio-disk0')").

Live migration finishes, but trying to start a preview (after powering down the VM) fails with "related operation is currently in progress".


Version-Release number of selected component (if applicable):
ovirt-engine-4.2.0-0.0.master.20170917124606.gita804ef7.el7.centos.noarch, vdsm-4.20.3-55.git5d02f64.el7 (see comment 1)

How reproducible:
Twice so far (looks very reproducible)


Steps to Reproduce:
    1. Create DC + cluster on v3 + 2 new storage domains
    2. Create a VM with thin disk and create 2 snapshots
    3. Upgrade the cluster+DC from v3 to v4
    4. Verify that the snapshot images are version 0.10
    5. Start the VM previously created
    6. Move all the disks of the VM to Version 4 Domain
    7. Verify that the snapshot images have been upgraded to version 1.1
    8. Power off the VM and preview the snapshot (see the SDK sketch of step 6 below)
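For reference, step 6 (the live disk move that triggers the failure) can be driven through ovirt-engine-sdk-python 4 roughly as follows. This is a hedged sketch, not taken from the automation code: the engine URL, credentials, VM name and target domain name are placeholders.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholders: engine URL, credentials, VM and target domain names.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=test_vm')[0]  # VM from step 2, running per step 5
disks_service = connection.system_service().disks_service()
attachments = vms_service.vm_service(vm.id).disk_attachments_service().list()
for attachment in attachments:
    # Live storage migration: move each attached disk to the V4 domain.
    disks_service.disk_service(attachment.disk.id).move(
        storage_domain=types.StorageDomain(name='v4_domain'),
    )
connection.close()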

Actual results:
The issue occurs during step 6; when we then get to the preview (step 8), it fails with "related operation is currently in progress".

Expected results:
Live storage migration completes without the merge failure, and previewing the snapshot after powering off the VM succeeds.

Additional info:

Engine log:
2017-09-28 19:21:13,285+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-3) [disks_syncAction_823b3361-3d5b-4778] HostName = host_mixed_3
2017-09-28 19:21:13,285+03 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-3) [disks_syncAction_823b3361-3d5b-4778] Command 'MergeVDSCommand(HostName = host_mixed_3, MergeVDSCommandParameters:{hostId='3b62341a-7b87-47ea-a164-a215cf635d88', vmId='45e6ab9f-ecde-46b4-8e8c-1587529ba715', storagePoolId='9e3c17a2-c76b-4858-82b0-ae2091bf28da', storageDomainId='547223bf-43bc-49d6-9bc7-3607c37238fc', imageGroupId='9a9bce4c-8786-497e-accf-c36fd2b5b039', imageId='ec9ac87a-eb3e-4eb1-b721-67f419d1cf72', baseImageId='15ce4b13-13ab-4ae9-b7b5-d7fd0fe7068d', topImageId='ec9ac87a-eb3e-4eb1-b721-67f419d1cf72', bandwidth='0'})' execution failed: VDSGenericException: VDSErrorException: Failed to MergeVDS, error = General Exception: ("('Unable to find matching XML for device %s', 'virtio-disk0')",), code = 100
2017-09-28 19:21:13,285+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-3) [disks_syncAction_823b3361-3d5b-4778] FINISH, MergeVDSCommand, log id: 268a9fa5
2017-09-28 19:21:13,285+03 ERROR [org.ovirt.engine.core.bll.MergeCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-3) [disks_syncAction_823b3361-3d5b-4778] Engine exception thrown while sending merge command: org.ovirt.engine.core.common.errors.EngineException: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to MergeVDS, error = General Exception: ("('Unable to find matching XML for device %s', 'virtio-disk0')",), code = 100 (Failed with error GeneralException and code 100)



Preview failed with "related operation is currently in progress.":
2017-09-28 19:21:14,672+03 ERROR [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-43) [disks_syncAction_823b3361-3d5b-4778] Merging of snapshot 'd24c8092-91e9-46ac-8e3a-99b1d8fb7b1f' images '15ce4b13-13ab-4ae9-b7b5-d7fd0fe7068d'..'ec9ac87a-eb3e-4eb1-b721-67f419d1cf72' failed. Images have been marked illegal and can no longer be previewed or reverted to. Please retry Live Merge on the snapshot to complete the operation.
2017-09-28 19:21:14,691+03 ERROR [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-43) [disks_syncAction_823b3361-3d5b-4778] Ending command 'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand' with failure.
2017-09-28 19:21:15,166+03 INFO  [org.ovirt.engine.core.bll.snapshots.TryBackToAllSnapshotsOfVmCommand] (default task-16) [vms_syncAction_c2f4950c-6795-4bd2] Failed to Acquire Lock to object 'EngineLock:{exclusiveLocks='[45e6ab9f-ecde-46b4-8e8c-1587529ba715=VM]', sharedLocks=''}'
2017-09-28 19:21:15,166+03 WARN  [org.ovirt.engine.core.bll.snapshots.TryBackToAllSnapshotsOfVmCommand] (default task-16) [vms_syncAction_c2f4950c-6795-4bd2] Validation of action 'TryBackToAllSnapshotsOfVm' failed for user admin@internal-authz. Reasons: VAR__ACTION__PREVIEW,VAR__TYPE__SNAPSHOT,ACTION_TYPE_FAILED_OBJECT_LOCKED
2017-09-28 19:21:15,179+03 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-16) [] Operation Failed: [Cannot preview Snapshot. Related operation is currently in progress. Please try again later.]
2017-09-28 19:21:15,741+03 INFO  [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-9) [disks_syncAction_823b3361-3d5b-4778] Command 'RemoveSnapshot' id: '2c7ca86e-693c-4f0d-92f2-b1be2584c9ca' child commands '[26d74043-c43a-4944-b1fa-fcee3ecb2a16]' executions were completed, status 'FAILED'

2017-09-28 19:21:15,917+03 INFO  [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-9) [disks_syncAction_823b3361-3d5b-4778] Command 'LiveMigrateVmDisks' (id: 'c7e10826-fd12-4f22-bb25-6fc2c680d629') waiting on child command id: '2c7ca86e-693c-4f0d-92f2-b1be2584c9ca' type:'RemoveSnapshot' to complete
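For reference, the failing preview (step 8, TryBackToAllSnapshotsOfVmCommand above) corresponds roughly to the following ovirt-engine-sdk-python 4 call. Again a hedged sketch with placeholder names, mirroring the connection setup shown under the reproduction steps:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal', password='password', insecure=True,
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=test_vm')[0]  # placeholder VM name
vm_service = vms_service.vm_service(vm.id)
vm_service.stop()  # step 8 previews from a powered-off VM
snap = next(s for s in vm_service.snapshots_service().list()
            if s.description == 'snap1')  # placeholder snapshot name
# While the failed RemoveSnapshot/LiveMigrateVmDisks flow still holds the
# VM lock, this raises an error carrying "Cannot preview Snapshot.
# Related operation is currently in progress."
vm_service.preview_snapshot(snapshot=types.Snapshot(id=snap.id),
                            restore_memory=False)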


VDSM Error:
2017-09-28 19:21:12,630+0300 INFO  (jsonrpc/1) [vdsm.api] START merge(driveSpec={'poolID': '9e3c17a2-c76b-4858-82b0-ae2091bf28da', 'volumeID': 'ec9ac87a-eb3e-4eb1-b721-67f419d1cf72', 'domainID': '547223bf-43bc-49d6-9bc7-3607c37238fc', 'imageID': '9a9bce4c-8786-497e-accf-c36fd2b5b039'}, baseVolUUID='15ce4b13-13ab-4ae9-b7b5-d7fd0fe7068d', topVolUUID='ec9ac87a-eb3e-4eb1-b721-67f419d1cf72', bandwidth='0', jobUUID='3d5c79c6-db72-43fd-87bf-43810e4734c3') from=::ffff:10.35.161.118,41022, flow_id=disks_syncAction_823b3361-3d5b-4778 (api:46)
2017-09-28 19:21:12,984+0300 WARN  (vdsm.Scheduler) [Executor] executor state: count=5 workers=set([<Worker name=periodic/1 waiting task#=161269 at 0x3c49b90>, <Worker name=periodic/4 waiting task#=0 at 0x491c7d0>, <Worker name=periodic/0 waiting task#=161554 at 0x3c49950>, <Worker name=periodic/3 running <Task discardable <Operation action=<VmDispatcher operation=<class 'vdsm.virt.periodic.DriveWatermarkMonitor'> at 0x3c53890> at 0x3c53910> timeout=1.0, duration=1 at 0x3c38810> discarded task#=161470 at 0x3c53210>, <Worker name=periodic/2 waiting task#=161904 at 0x3c49ed0>]) (executor:213)
2017-09-28 19:21:12,986+0300 INFO  (vdsm.Scheduler) [Executor] Worker discarded: <Worker name=periodic/3 running <Task discardable <Operation action=<VmDispatcher operation=<class 'vdsm.virt.periodic.DriveWatermarkMonitor'> at 0x3c53890> at 0x3c53910> timeout=1.0, duration=1 at 0x3c38810> discarded task#=161470 at 0x3c53210> (executor:355)
2017-09-28 19:21:13,196+0300 DEBUG (jsonrpc/0) [storage.TaskManager.Task] (Task='fa8807fa-6baf-42d6-8f67-ca50169c51d0') moving from state init -> state preparing (task:599)
2017-09-28 19:21:13,197+0300 INFO  (jsonrpc/0) [vdsm.api] START teardownImage(sdUUID='547223bf-43bc-49d6-9bc7-3607c37238fc', spUUID='9e3c17a2-c76b-4858-82b0-ae2091bf28da', imgUUID='9a9bce4c-8786-497e-accf-c36fd2b5b039', volUUID=None) from=::ffff:10.35.161.118,41022, flow_id=vms_syncAction_26fa4b04-2d8a-4f1b, task_id=fa8807fa-6baf-42d6-8f67-ca50169c51d0 (api:46)
2017-09-28 19:21:13,198+0300 DEBUG (jsonrpc/0) [storage.ResourceManager] Trying to register resource '00_storage.547223bf-43bc-49d6-9bc7-3607c37238fc' for lock type 'shared' (resourceManager:495)
2017-09-28 19:21:13,199+0300 DEBUG (jsonrpc/0) [storage.ResourceManager] Resource '00_storage.547223bf-43bc-49d6-9bc7-3607c37238fc' is free. Now locking as 'shared' (1 active user) (resourceManager:552)
2017-09-28 19:21:13,200+0300 DEBUG (jsonrpc/0) [storage.ResourceManager.Request] (ResName='00_storage.547223bf-43bc-49d6-9bc7-3607c37238fc', ReqID='3b02da83-e81e-4080-a793-c5a33097a6d1') Granted request (resourceManager:222)
2017-09-28 19:21:13,201+0300 DEBUG (jsonrpc/0) [storage.TaskManager.Task] (Task='fa8807fa-6baf-42d6-8f67-ca50169c51d0') _resourcesAcquired: 00_storage.547223bf-43bc-49d6-9bc7-3607c37238fc (shared) (task:831)
2017-09-28 19:21:13,201+0300 DEBUG (jsonrpc/0) [storage.TaskManager.Task] (Task='fa8807fa-6baf-42d6-8f67-ca50169c51d0') ref 1 aborting False (task:999)
2017-09-28 19:21:13,202+0300 DEBUG (jsonrpc/0) [storage.fileUtils] Removing directory: /var/run/vdsm/storage/547223bf-43bc-49d6-9bc7-3607c37238fc/9a9bce4c-8786-497e-accf-c36fd2b5b039 (fileUtils:178)
2017-09-28 19:21:13,204+0300 DEBUG (jsonrpc/0) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-0 /usr/bin/sudo -n /usr/sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/3514f0c5a51600269|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags 547223bf-43bc-49d6-9bc7-3607c37238fc (cwd None) (commands:70)
2017-09-28 19:21:13,221+0300 INFO  (libvirt/events) [virt.vm] (vmId='45e6ab9f-ecde-46b4-8e8c-1587529ba715') underlying process disconnected (vm:991)
2017-09-28 19:21:13,260+0300 INFO  (jsonrpc/1) [vdsm.api] FINISH merge error=('Unable to find matching XML for device %s', 'virtio-disk0') from=::ffff:10.35.161.118,41022, flow_id=disks_syncAction_823b3361-3d5b-4778 (api:50)
2017-09-28 19:21:13,262+0300 ERROR (jsonrpc/1) [api] FINISH merge error=('Unable to find matching XML for device %s', 'virtio-disk0') (api:127)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 117, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 666, in merge
    drive, baseVolUUID, topVolUUID, bandwidth, jobUUID)
  File "<string>", line 2, in merge
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5364, in merge
    chains = self._driveGetActualVolumeChain([drive])
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5506, in _driveGetActualVolumeChain
    diskXML = lookupDeviceXMLByAlias(self._domain.xml, alias)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5500, in lookupDeviceXMLByAlias
    targetAlias)
LookupError: ('Unable to find matching XML for device %s', 'virtio-disk0')
2017-09-28 19:21:13,267+0300 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call VM.merge failed (error 100) in 0.64 seconds (__init__:630)
2017-09-28 19:21:13,384+0300 DEBUG (jsonrpc/0) [storage.Misc.excCmd] SUCCESS: <err> = ''; <rc> = 0 (commands:94)
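
The traceback shows the merge verb resolving the drive's XML by its libvirt alias ('virtio-disk0') against the cached domain XML and failing to find it; note the "underlying process disconnected" event logged just before the failure. A minimal sketch of what such a lookup does (names follow the traceback in vdsm/virt/vm.py, but the body is illustrative, not vdsm's exact code):

from xml.etree import ElementTree

def lookupDeviceXMLByAlias(domXML, targetAlias):
    # Scan the <devices> children of the cached libvirt domain XML for
    # one whose <alias name='...'/> matches, e.g. 'virtio-disk0'.
    for deviceXML in ElementTree.fromstring(domXML).findall('./devices/*'):
        alias = deviceXML.find('./alias')
        if alias is not None and alias.get('name') == targetAlias:
            return ElementTree.tostring(deviceXML)
    # This is the failure seen above: the drive's alias is missing from
    # the cached domain XML at merge time.
    raise LookupError('Unable to find matching XML for device %s', targetAlias)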

Comment 1 Avihai 2017-09-29 08:56:18 UTC
Engine:
ovirt-engine-4.2.0-0.0.master.20170917124606.gita804ef7.el7.centos.noarch

VDSM:
4.20.3-55.git5d02f64.el7

Comment 2 Nir Soffer 2017-10-30 07:06:09 UTC
Avihay, is this a new test? Can you tell us which was the latest version
that passed this test?

Comment 3 Ala Hino 2017-10-30 10:50:53 UTC
(In reply to Nir Soffer from comment #2)
> Avihay, is this a new test? can you tell us which was the latest version that
> passed this test?

In addition, is the test marked as broken, or is it executed?

Comment 4 Avihai 2017-10-31 12:13:50 UTC
(In reply to Ala Hino from comment #3)
> (In reply to Nir Soffer from comment #2)
> > Avihay, is this a new test?
No, this is not a new test; it is an automation test (TestCase18343) that runs in each Tier2 test cycle.
 
>can you tell us which was the latest version that
> > passed this test?
Well, I checked back for some time to find when it last ran before this failure and went all the way back to July...

In short, this test case most probably ran for the first time on 4.2 when this bug was reported.

Due to blocker bugs 1489005/1449944/1456268, the test was skipped throughout most of 4.2.

The last blocker bug, 1489005, was closed on 28/9, one day before this bug was opened.

> In addition, is the test marked as broken or it is executed?
The test is marked as broken/failed as it fails in the preview step.

Comment 5 Ala Hino 2017-11-01 18:34:25 UTC
(In reply to Avihai from comment #4)
> (In reply to Ala Hino from comment #3)
> > (In reply to Nir Soffer from comment #2)
> > > Avihay, is this a new test?
> No this is not a new test but an automation test (TestCase18343) that runs
> in each Tier2 test cycle.
>  
> >can you tell us which was the latest version that
> > > passed this test?
> Well, I checked for some time to find when it ran before this failure & went
> all the way to July...
> 
> In short, this test case most probably run for the first time on 4.2 when
> this bug was reported.
> 
> This is due to blocker bugs 1489005/1449944/1456268 the test was skipped
> throughout most of 4.2 .
> 
> last blocker bug 1489005 was closed at 28/9 which one day before this bug
> was opened.
> 
> > In addition, is the test marked as broken or it is executed?
> The test is marked as broken/failed as it fails in the preview step.

So, all the relevant bugs are either closed or verified.
Is there any reason not to run the test now? If not, can you please re-run the test and check whether it still fails?

Comment 6 Avihai 2017-11-02 08:06:13 UTC
(In reply to Ala Hino from comment #5)
> (In reply to Avihai from comment #4)
> > (In reply to Ala Hino from comment #3)
> > > (In reply to Nir Soffer from comment #2)
> > > > Avihay, is this a new test?
> > No this is not a new test but an automation test (TestCase18343) that runs
> > in each Tier2 test cycle.
> >  
> > >can you tell us which was the latest version that
> > > > passed this test?
> > Well, I checked for some time to find when it ran before this failure & went
> > all the way to July...
> > 
> > In short, this test case most probably run for the first time on 4.2 when
> > this bug was reported.
> > 
> > This is due to blocker bugs 1489005/1449944/1456268 the test was skipped
> > throughout most of 4.2 .
> > 
> > last blocker bug 1489005 was closed at 28/9 which one day before this bug
> > was opened.
> > 
> > > In addition, is the test marked as broken or it is executed?
> > The test is marked as broken/failed as it fails in the preview step.
> 
> So, all the relevant bugs are either closed or verified.
> Is there any reason not to run the test now? If no, can you please re-run
> the test and check whether still fails?

The test failed with this issue after all these blocker bugs were resolved.
Why do you need another rerun? Are the logs not sufficient?

Comment 7 Ala Hino 2017-11-05 10:04:50 UTC
(In reply to Avihai from comment #6)
> (In reply to Ala Hino from comment #5)
> > (In reply to Avihai from comment #4)
> > > (In reply to Ala Hino from comment #3)
> > > > (In reply to Nir Soffer from comment #2)
> > > > > Avihay, is this a new test?
> > > No this is not a new test but an automation test (TestCase18343) that runs
> > > in each Tier2 test cycle.
> > >  
> > > >can you tell us which was the latest version that
> > > > > passed this test?
> > > Well, I checked for some time to find when it ran before this failure & went
> > > all the way to July...
> > > 
> > > In short, this test case most probably run for the first time on 4.2 when
> > > this bug was reported.
> > > 
> > > This is due to blocker bugs 1489005/1449944/1456268 the test was skipped
> > > throughout most of 4.2 .
> > > 
> > > last blocker bug 1489005 was closed at 28/9 which one day before this bug
> > > was opened.
> > > 
> > > > In addition, is the test marked as broken or it is executed?
> > > The test is marked as broken/failed as it fails in the preview step.
> > 
> > So, all the relevant bugs are either closed or verified.
> > Is there any reason not to run the test now? If no, can you please re-run
> > the test and check whether still fails?
> 
> The test failed with this issue after all these blocker bugs were resolved.
> Why do you need another rerun, are the logs no sufficient?

We made quite a few changes in the live merge area. It would be great if you could unmark the test as broken. If it still fails, we will at least have the latest logs to analyze.

Comment 8 Avihai 2017-11-12 14:35:06 UTC
(In reply to Ala Hino from comment #7)
> (In reply to Avihai from comment #6)
> > (In reply to Ala Hino from comment #5)
> > > (In reply to Avihai from comment #4)
> > > > (In reply to Ala Hino from comment #3)
> > > > > (In reply to Nir Soffer from comment #2)
> > > > > > Avihay, is this a new test?
> > > > No this is not a new test but an automation test (TestCase18343) that runs
> > > > in each Tier2 test cycle.
> > > >  
> > > > >can you tell us which was the latest version that
> > > > > > passed this test?
> > > > Well, I checked for some time to find when it ran before this failure & went
> > > > all the way to July...
> > > > 
> > > > In short, this test case most probably run for the first time on 4.2 when
> > > > this bug was reported.
> > > > 
> > > > This is due to blocker bugs 1489005/1449944/1456268 the test was skipped
> > > > throughout most of 4.2 .
> > > > 
> > > > last blocker bug 1489005 was closed at 28/9 which one day before this bug
> > > > was opened.
> > > > 
> > > > > In addition, is the test marked as broken or it is executed?
> > > > The test is marked as broken/failed as it fails in the preview step.
> > > 
> > > So, all the relevant bugs are either closed or verified.
> > > Is there any reason not to run the test now? If no, can you please re-run
> > > the test and check whether still fails?
> > 
> > The test failed with this issue after all these blocker bugs were resolved.
> > Why do you need another rerun, are the logs no sufficient?
> 
> We made quite few changes in the live merge area. It would be great if you
> can unmark the test as broken. If still fails, we at least will have the
> latest logs to analyze.

I ran the same TestCase18343 and it does not fail.
How do you want to proceed?

Builds:
VDSM: 4.20.6-33.git54a784e.el7
Engine: ovirt-engine-4.2.0-0.0.master.20171106202508.gitf5140b9

Comment 9 Allon Mureinik 2017-11-12 14:40:15 UTC
(In reply to Avihai from comment #8)
> (In reply to Ala Hino from comment #7)
> > (In reply to Avihai from comment #6)
> > > (In reply to Ala Hino from comment #5)
> > > > (In reply to Avihai from comment #4)
> > > > > (In reply to Ala Hino from comment #3)
> > > > > > (In reply to Nir Soffer from comment #2)
> > > > > > > Avihay, is this a new test?
> > > > > No this is not a new test but an automation test (TestCase18343) that runs
> > > > > in each Tier2 test cycle.
> > > > >  
> > > > > >can you tell us which was the latest version that
> > > > > > > passed this test?
> > > > > Well, I checked for some time to find when it ran before this failure & went
> > > > > all the way to July...
> > > > > 
> > > > > In short, this test case most probably run for the first time on 4.2 when
> > > > > this bug was reported.
> > > > > 
> > > > > This is due to blocker bugs 1489005/1449944/1456268 the test was skipped
> > > > > throughout most of 4.2 .
> > > > > 
> > > > > last blocker bug 1489005 was closed at 28/9 which one day before this bug
> > > > > was opened.
> > > > > 
> > > > > > In addition, is the test marked as broken or it is executed?
> > > > > The test is marked as broken/failed as it fails in the preview step.
> > > > 
> > > > So, all the relevant bugs are either closed or verified.
> > > > Is there any reason not to run the test now? If no, can you please re-run
> > > > the test and check whether still fails?
> > > 
> > > The test failed with this issue after all these blocker bugs were resolved.
> > > Why do you need another rerun, are the logs no sufficient?
> > 
> > We made quite few changes in the live merge area. It would be great if you
> > can unmark the test as broken. If still fails, we at least will have the
> > latest logs to analyze.
> 
> I ran the same TestCase18343 & it does not fail .
> How do you want to proceed ?
> 
> builds:
> VDSM 4.20.6-33.git54a784e.el7 
> Engine :ovirt-engine-4.2.0-0.0.master.20171106202508.gitf5140b9

Sounds good to me.
Moved to VERIFIED.

Comment 10 Sandro Bonazzola 2017-12-20 11:32:02 UTC
This bugzilla is included in the oVirt 4.2.0 release, published on Dec 20th 2017.

Since the problem described in this bug report should be resolved in that release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

