Bug 1585950 - [downstream clone - 4.2.8] Live Merge failed on engine with "still in volume chain", but merge on host was successful
[downstream clone - 4.2.8] Live Merge failed on engine with "still in volume ...
Status: MODIFIED
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.1.9
Hardware: Unspecified
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ovirt-4.2.8
Target Release: ---
Assigned To: Eyal Shenitzky
QA Contact: Elad
Keywords: Reopened, ZStream
Depends On: 1554369
Blocks:
Reported: 2018-06-05 03:38 EDT by RHV Bugzilla Automation and Verification Bot
Modified: 2018-11-05 18:09 EST
CC: 17 users

See Also:
Fixed In Version: ovirt-engine-4.2.4.4
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1554369
Environment:
Last Closed: 2018-06-27 06:02:42 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments:


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 3378361 None None None 2018-06-05 03:40 EDT
oVirt gerrit 91841 master MERGED core: Improve MergeStatusCommand 2018-06-05 03:40 EDT
oVirt gerrit 91930 ovirt-engine-4.2 MERGED core: Improve MergeStatusCommand 2018-06-05 03:40 EDT
oVirt gerrit 95085 ovirt-engine-4.2 MERGED core: fix race in VmJobs monitoring 2018-10-24 08:02 EDT
Red Hat Product Errata RHSA-2018:2071 None None None 2018-06-27 06:03 EDT

Description RHV Bugzilla Automation and Verification Bot 2018-06-05 03:38:01 EDT
+++ This bug is a downstream clone. The original bug is: +++
+++   bug 1554369 +++
======================================================================

Description of problem:

An internal Live Merge was performed on a disk. The merge on the host appeared to have been successful: the top volume was unlinked, so the qemu image chain and the volume metadata chain matched and no longer contained the volume in question. The XML and the open files of the 'qemu-kvm' process also no longer contained this volume. So, on the host and storage side, the top volume had been merged back and was just waiting to be removed.

However, the engine reported that the top volume was still in the volume chain and the merge operation terminated.

All subsequent snapshot deletions for this disk failed due to the failure above.
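For reference, the host-side state described above (the volume gone from the qemu chain and from the domain XML) can be confirmed by listing the backing chain of the disk in the live libvirt domain XML and checking whether the supposedly merged volume still appears. The following is only a minimal diagnostic sketch in Python, not engine or VDSM code; the domain name, disk target and volume UUID are illustrative placeholders.

# Diagnostic sketch (not engine/VDSM code): list the image paths in a disk's
# backing chain as reported by the live libvirt domain XML and check whether
# a given (supposedly merged) volume UUID still appears among them.
import subprocess
import xml.etree.ElementTree as ET

def chain_paths(domain, disk_target="vda"):
    """Return every image path in the qemu backing chain of one disk."""
    xml_text = subprocess.check_output(["virsh", "-r", "dumpxml", domain]).decode()
    root = ET.fromstring(xml_text)
    paths = []
    for disk in root.findall("./devices/disk"):
        target = disk.find("target")
        if target is None or target.get("dev") != disk_target:
            continue
        node = disk
        while node is not None:
            source = node.find("source")
            if source is not None and source.get("file"):
                paths.append(source.get("file"))
            node = node.find("backingStore")
    return paths

if __name__ == "__main__":
    top_volume = "TOP-VOLUME-UUID"      # placeholder: the volume the engine complains about
    chain = chain_paths("example-vm")   # placeholder domain name
    print("\n".join(chain))
    print("top volume still in chain:", any(top_volume in p for p in chain))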



Version-Release number of selected component (if applicable):

RHV 4.1.9
RHEL 7.4 host;
  vdsm-4.19.43-3.el7ev.x86_64 

How reproducible:

Not reproducible.


Steps to Reproduce:
1.
2.
3.

Actual results:

The live merge failed and terminated, resulting in subsequent live merge failures.


Expected results:

The merge should have succeeded on the engine.

Additional info:

(Originally by Gordon Watson)
Comment 6 RHV Bugzilla Automation and Verification Bot 2018-06-05 03:38:32 EDT
Ala - please take a look.

I'm tentatively targeting 4.2.2, just because it's quite late to get anything into 4.1.10.
If we *do* find a quick [and safe!] fix, this should definitely be a candidate for 4.1.z.

(Originally by amureini)
Comment 14 RHV Bugzilla Automation and Verification Bot 2018-06-05 03:39:18 EDT
Moving to 4.2.4 for now, as the issue isn't reproducible and needs more analysis.

(Originally by Ala Hino)
Comment 17 RHV Bugzilla Automation and Verification Bot 2018-06-05 03:39:34 EDT
Evelina, please take a look

(Originally by Elad Ben Aharon)
Comment 18 RHV Bugzilla Automation and Verification Bot 2018-06-05 03:39:39 EDT
Waiting for customer's response.

(Originally by Evelina Shames)
Comment 19 RHV Bugzilla Automation and Verification Bot 2018-06-05 03:39:44 EDT
(In reply to Evelina Shames from comment #17)
> Waiting for customer's response.

What response are we waiting for?
The customer provided the script they use, and the request was to attempt to reproduce the bug on a 4.2 env.

What am I missing here?

(Originally by amureini)
Comment 20 RHV Bugzilla Automation and Verification Bot 2018-06-05 03:39:49 EDT
Ala and I had a few questions about the script; Gordon sent them to the customer.

(Originally by Evelina Shames)
Comment 22 RHV Bugzilla Automation and Verification Bot 2018-06-05 03:39:59 EDT
Hi Evelina,

Let's try to come up with a script on our side that simulates what the customer is trying to do (see the pseudo code and the sketch below):

1. Create a VM
2. Create a snapshot
3. Delete up to N snapshots. When deleting a snapshot, we have to check that the status of all snapshots is OK before proceeding to the next deletion

Pseudo code:

vm = _create_vm()
_create_snapshot(vm)
# if there are X snapshots and X > N, _list_vm_snapshots returns X-N snapshots
snapshots = _list_vm_snapshots(vm.id)
for s in snapshots:
    _delete_snapshot(s)
    # poll until the deletion completes before moving on to the next snapshot
    _del_completed = _check_snapshot_status(vm)
    while not _del_completed:
        _del_completed = _check_snapshot_status(vm)
    
As a reference please see listSnapsToDelete and checkSnapStateOK in the customer script.

Let me know if you need any help with the script.
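A fleshed-out version of the pseudo code above, written against the Python SDK (ovirtsdk4), might look roughly like the sketch below. The connection details, the VM name and the "keep the newest N snapshots" policy are assumptions for illustration and are not taken from the customer's script.

# Rough sketch of the pseudo code above using the oVirt/RHV Python SDK (ovirtsdk4).
# URL, credentials, VM name and N are illustrative placeholders.
import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

N = 3  # number of snapshots to keep (assumption for illustration)

def wait_until_no_locked_snapshots(snapshots_service, poll=10):
    """Block until no snapshot of the VM is in the LOCKED state."""
    while any(s.snapshot_status == types.SnapshotStatus.LOCKED
              for s in snapshots_service.list()):
        time.sleep(poll)

connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",
    username="admin@internal",
    password="password",
    ca_file="ca.pem",
)
try:
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search="name=vm1")[0]
    snapshots_service = vms_service.vm_service(vm.id).snapshots_service()

    snapshots_service.add(types.Snapshot(description="backup",
                                         persist_memorystate=False))
    # Wait for the new snapshot to finish before starting deletions.
    wait_until_no_locked_snapshots(snapshots_service)

    # Delete the oldest snapshots, keeping the active layer and the newest N.
    snaps = sorted((s for s in snapshots_service.list()
                    if s.snapshot_type != types.SnapshotType.ACTIVE),
                   key=lambda s: s.date)
    for snap in snaps[:-N]:
        snapshots_service.snapshot_service(snap.id).remove()
        # Wait until the deletion completes before continuing, mirroring
        # checkSnapStateOK in the customer's script.
        wait_until_no_locked_snapshots(snapshots_service)
finally:
    connection.close()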

(Originally by Ala Hino)
Comment 23 RHV Bugzilla Automation and Verification Bot 2018-06-05 03:40:05 EDT
This is happening very frequently when the customer uses Commvault.

Just reviewed the 3 cases attached above; they happened with:
rhvm-4.2.3.5-0.1.el7.noarch
vdsm-4.19.50-1.el7ev.x86_64

The engine saw the volume as still in the chain, but everything was fine on the host.

A retry fixed the issue.

(Originally by Germano Veit Michel)
Comment 29 Elad 2018-06-20 06:04:38 EDT
Examined our latest automation executions for the 4.2.4-5 build. Snapshot removal tests passed.


Used:
rhvm-4.2.4.4-0.1.el7_3.noarch
vdsm-4.20.31-1.el7ev.x86_64
libvirt-3.9.0-14.el7_5.6.x86_64
qemu-kvm-rhev-2.10.0-21.el7_5.4.x86_64
Comment 32 errata-xmlrpc 2018-06-27 06:02:42 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2071
Comment 45 Eyal Shenitzky 2018-10-11 05:38:03 EDT
Managed to reproduce this bug with the following steps:

On a 4.3 engine with a cluster compatibility version < 4.2:

1) Create vm1 with a disk
2) Create backup_vm with a disk
3) Create a snapshot ('snap1') of vm1
4) Run vm1 and backup_vm
5) Attach snap1 to backup_vm
6) Power-off backup_vm
7) Remove backup_vm
8) Remove snap1

Live merge failed with the following error - 
2018-10-11 11:56:48,883+03 ERROR [org.ovirt.engine.core.bll.MergeStatusCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-10) [c1dce6ca-df36-4e6f-a5f1-e3cc063ccf6a] Failed to live merge. Top volume e9ffc75b-caca-4188-aa3f-a983ae2554d1 is still in qemu chain [ca9223cb-960a-4da7-8c6e-6166b269c813, e9ffc75b-caca-4188-aa3f-a983ae2554d1]
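When the engine logs a "still in qemu chain" error like the one above, the on-disk view on the host can be cross-checked with qemu-img. The snippet below is a diagnostic sketch only, in Python; the image path and the volume UUID are placeholders, and the path must point at the top (active) volume of the disk on the host.

# Diagnostic sketch: cross-check the on-disk backing chain reported by qemu-img
# against the volume the engine claims is still in the chain.
# The image path and volume UUID are placeholders.
# Note: on a running VM, recent qemu-img versions may need -U/--force-share
# to read an image that qemu holds locked.
import json
import subprocess

def qemu_chain(active_image_path):
    """Return the file name of every image in the backing chain."""
    out = subprocess.check_output(
        ["qemu-img", "info", "--backing-chain", "--output=json", active_image_path]
    )
    return [entry["filename"] for entry in json.loads(out)]

if __name__ == "__main__":
    top_volume = "TOP-VOLUME-UUID"                             # volume reported by the engine
    chain = qemu_chain("/rhev/data-center/.../ACTIVE-VOLUME")  # placeholder path
    print("\n".join(chain))
    print("still in qemu chain:", any(top_volume in name for name in chain))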
