Description of problem:
A live merge operation can fail for several reasons, for example:
- The user starts a live merge but vdsm fails to receive the command
- The user starts a live merge but the engine fails to receive the vdsm response
- The engine goes down while the live merge is running
- vdsm goes down while the live merge is running

These failures leave the volume in the ILLEGAL state. To recover from these failures, we ask the user to retry the live merge. The recovery mechanism will:
- Start a new merge job (i.e. send the merge command to vdsm) if the old one never started
- Sync the database if the merge succeeded on storage

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
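For context, a minimal sketch of the recovery decision described above, written in plain Python. This is not actual ovirt-engine or vdsm code; MergeJob, start_merge and sync_database are hypothetical names used only to illustrate the two recovery paths.

```python
from dataclasses import dataclass


@dataclass
class MergeJob:
    started_on_host: bool       # did vdsm ever receive the merge command?
    completed_on_storage: bool  # did the merge finish at the storage level?


def start_merge(job: MergeJob) -> str:
    # Placeholder for re-sending the merge command to vdsm.
    return "starting a new merge job"


def sync_database(job: MergeJob) -> str:
    # Placeholder for updating the engine database to match storage.
    return "merge already finished on storage; syncing the database"


def recover_live_merge(job: MergeJob) -> str:
    """Decide how to recover when the user retries a failed live merge."""
    if not job.started_on_host:
        # The original merge command never reached vdsm: start over.
        return start_merge(job)
    if job.completed_on_storage:
        # The merge succeeded on storage; only the database is out of sync.
        return sync_database(job)
    # Otherwise the previous merge may still be running on the host
    # (handling that case is outside this sketch).
    return "merge still in progress; wait and retry later"


print(recover_live_merge(MergeJob(started_on_host=False, completed_on_storage=False)))
print(recover_live_merge(MergeJob(started_on_host=True, completed_on_storage=True)))
```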
Not sure what this BZ is about - fixing something in the flow, or documenting the behavior?
Fixing something in the flow
(In reply to Ala Hino from comment #2)
> Fixing something in the flow

My bad - this is for doc purposes
(In reply to Ala Hino from comment #3)
> (In reply to Ala Hino from comment #2)
> > Fixing something in the flow
>
> My bad - this is for doc purposes

Ack. Moving to the proper team and resetting the assignee to the default assignee.

Ala, please provide the relevant information so one of the technical writers can pick it up and update the documentation correctly.
virt is not the correct team;-)
(In reply to Michal Skrivanek from comment #5)
> virt is not the correct team;-)

PgDn instead of PgUp ;-) My apologies.
In addition to the Doc Text, the following info might be helpful:

The guiding principle here is simple. Assume you have a chain A<-B<-C, where A is the base and C is the active volume, and assume you want to remove snapshot A. This means you'll be pushing data from B into A, and eventually removing B (yes, this can be counter-intuitive at first, but remember that removing A actually means that you are willing to lose the ability to revert back in time to the A snapshot).

At the moment you begin this merge operation, A will be illegal, as it will no longer represent a consistent point in time, but the chain, as a whole, is still completely intact. In other words, you can safely run a VM on it, but you cannot revert back to A's state.

Under this logic, it does not matter how many times you stop and restart the live merge process; it can pick up where it left off (more or less, minus some overhead) and complete eventually - and this is also the guideline for the field in 3.6.z. Have a failed live merge? Fix the underlying problem (e.g., a failed host or an inaccessible storage device) and retry.
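To make the A<-B<-C example above concrete, here is a tiny, purely illustrative Python model. It is not vdsm code; the Volume class and merge_base function are hypothetical, meant only to show how removing snapshot A pushes B's data into A and drops B while the chain as a whole stays intact.

```python
from dataclasses import dataclass, field


@dataclass
class Volume:
    name: str
    data: set = field(default_factory=set)
    legal: bool = True


def merge_base(chain):
    """Remove the bottom snapshot: push the data of the volume above
    the base into the base, then drop that volume from the chain."""
    base, above, *rest = chain
    base.legal = False        # base no longer represents a consistent point in time
    base.data |= above.data   # push data from the volume above into the base
    base.legal = True         # merge finished; the combined volume is valid again
    return [base] + rest      # the volume above the base (B) is removed


# Chain A <- B <- C: A is the base, C is the active volume.
a, b, c = Volume("A", {"a-data"}), Volume("B", {"b-data"}), Volume("C", {"c-data"})
new_chain = merge_base([a, b, c])
print([v.name for v in new_chain])  # ['A', 'C'] -- B is gone, its data now lives in A
```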
oVirt 4.0 Alpha has been released, moving to oVirt 4.0 Beta target.
*** Bug 1342681 has been marked as a duplicate of this bug. ***
Moving back to NEW to be reassigned as resources allow.
Assigning to Emma for review.

Emma, looks like we just need to add the note suggested in comment 20. We don't really refer to 'Live Merge' as a concept, because it's an underlying operation when you delete a snapshot on a running virtual machine.

I think the following two locations would be the most suitable for this information:

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html-single/technical_reference/#Snapshot_Deletion

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html-single/virtual_machine_management_guide/#Deleting_a_snapshot
The updated documentation is available on the Customer Portal:

Technical Reference: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html-single/technical_reference/#Snapshot_Deletion

Virtual Machine Management Guide: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/virtual_machine_management_guide/sect-snapshots