Red Hat Bugzilla – Full Text Bug Listing
Summary: RFE: virt-clone --move option for relocating guests across connections
Product: [Community] Virtualization Tools
Component: virt-manager
Status: CLOSED WONTFIX
Version: unspecified
Reporter: jbrackinshaw
Assignee: Cole Robinson <crobinso>
CC: acathrow, berrange, clalance, crobinso, xen-maint
Doc Type: Bug Fix
Last Closed: 2014-02-08 19:37:21 EST
Description jbrackinshaw 2009-10-02 09:00:17 EDT
During testing, a user may use a slow LUN (or a local disk) and then switch to a faster LUN for production. Later, the guests using the local disk (or slow LUN) need to be moved to faster shared storage. It would be helpful to allow a migration to different storage, whether the guest stays on the same host or moves to another host.
Comment 1 jbrackinshaw 2009-10-02 09:06:48 EDT
*** Bug 526904 has been marked as a duplicate of this bug. ***
Comment 2 Chris Lalancette 2009-10-02 09:10:13 EDT
This isn't really possible with the underlying technology today. That is, in order for this to work, you would need to copy the disk data along with the memory data to the destination. There have been some patches proposed in qemu upstream to do this, but:

a) They aren't ready yet.
b) It will significantly slow down your migration, since you'll have to copy all of your storage over (there are tricks using snapshots that help here, but to go from slow storage to fast storage, you would have to copy the whole thing).

While it's a feature that might be useful in some limited circumstances, the right solution is to use shared storage.

Chris Lalancette
Comment 3 jbrackinshaw 2009-10-02 09:21:39 EDT
So in our case, we have lots of local guests and want to move them to shared storage. The solution was to stop the guest, move the disk file, dump the XML file, undefine the guest, edit the XML file to point at the new location, import the XML file, and start the guest. (I don't know if this is possible with a suspend in place of a stop; it didn't work for us.) This seems like something that could work without any patches, though obviously not online, and it's very useful when migrating from a test setup to a live setup. I guess it would also work with "virsh edit" instead of dump/undefine/edit/import/define. I will try it.
Comment 4 jbrackinshaw 2009-10-02 09:23:20 EDT
Yep worked fine. I was paranoid :)
Comment 5 jbrackinshaw 2009-10-02 09:32:15 EDT
Just to be clear: I'm not requesting that this work online, only offline.
Comment 6 Chris Lalancette 2009-10-02 09:34:38 EDT
(In reply to comment #3)
> So in our case, we have lots of local guests and want to move it to the
> storage.

Ah, I see. You don't really want "migration", as it is traditionally used in virtualization. You want more like "move my guests to a new piece of storage".

> The solution was to stop the guest, move the disk file, dump the xml file,
> undefine the guest, edit the xml file to show the new location, import the xml
> file and start the guest. (I don't know if this is possible with a suspend in
> place of a stop, it didn't work for us).

No, it won't work with a suspend; the reason is that editing the guest XML of a running domain only edits it for the next time you stop/start it.

> This seems like something that could work without any patches, but obviously
> not online, and it's something that is very useful in migrating from a test
> setup to a live setup.
>
> I guess it would also work with "virsh edit" instead of
> dump/undefine/edit/import/define. I will try it.

As you found out, this does work. So your steps are now simplified down to:

1) Stop the guest.
2) Copy the guest disk over.
3) virsh edit, point at the new storage.

I could see room for automation in this case. You actually might be able to do something with virt-clone; even though it is the same guest, you could "clone" the storage to the new place, and possibly just give it a new UUID and such. That might help you out. If virt-clone doesn't do exactly what you want, maybe it can be expanded to fill your use case.

Chris Lalancette
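The "edit the XML to point at new storage" step above can be sketched in Python. This is only an illustrative sketch, assuming a standard libvirt domain XML layout with `<disk><source file='…'/>` elements; the `retarget_disk` function and the paths are hypothetical, not part of virt-manager or libvirt:

```python
import xml.etree.ElementTree as ET

def retarget_disk(domain_xml: str, old_path: str, new_path: str) -> str:
    """Rewrite every <source file=...> that points at old_path.

    This mirrors step 3 ("virsh edit, point at the new storage"); in
    practice the edited XML would be fed back via `virsh define`.
    """
    root = ET.fromstring(domain_xml)
    for source in root.iter("source"):
        if source.get("file") == old_path:
            source.set("file", new_path)
    return ET.tostring(root, encoding="unicode")

# Minimal domain XML fragment, for illustration only.
xml = """<domain type='kvm'>
  <name>testguest</name>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/slow/guest.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>"""

edited = retarget_disk(xml,
                       "/var/lib/libvirt/images/slow/guest.img",
                       "/fast/pool/guest.img")
print("/fast/pool/guest.img" in edited)  # True
```

In real use, `virsh dumpxml` would supply the input and the guest must be stopped first, since edits to a running domain's XML only take effect on the next start.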
Comment 7 jbrackinshaw 2009-10-02 09:44:22 EDT
I can see that virt-clone is similar, but I want to destroy the original disk file; i.e., virt-clone is a copier, and I want a mover. It's important for us that we don't have disk files hanging around, making us wonder what they are doing there :) Not sure why the new UUID would help here though; it's the same guest, just on different storage.
Comment 8 Chris Lalancette 2009-10-02 10:46:20 EDT
(In reply to comment #7)
> I can see that virt-clone is similar, but I want to destroy the original disk
> file, i.e. virt-clone is a copier, I want a mover. It's important for us that
> we don't have disk files hanging around us wondering that they are doing there
> :)
>
> Not sure why the new uuid would help here though, it's the same guest, just on
> different storage.

virt-clone probably won't let you define a guest that has the same UUID as an existing guest, so you would probably need a new UUID. As mentioned, it looks like virt-clone is really what you want, with a "--move" option. I'm going to move this over to that component and fix up the subject.

Chris Lalancette
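The "mover, not copier" behaviour a hypothetical --move option would need can be sketched like this. This is an illustrative offline sketch assuming local paths on a single host; the `move_guest_disk` helper and directory names are made up for the example:

```python
import os
import shutil
import tempfile

def move_guest_disk(disk_path: str, dest_dir: str) -> str:
    """Relocate a guest disk image and return its new path.

    shutil.move renames within a filesystem and falls back to
    copy-then-unlink across filesystems, so the original file never
    lingers. The caller is expected to have stopped the guest first
    and to repoint the domain XML (e.g. via `virsh edit`) afterwards.
    """
    os.makedirs(dest_dir, exist_ok=True)
    new_path = os.path.join(dest_dir, os.path.basename(disk_path))
    shutil.move(disk_path, new_path)
    return new_path

# Demonstration with a throwaway file standing in for a disk image.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "guest.img")
    with open(src, "wb") as f:
        f.write(b"\0" * 1024)  # stand-in for the real image
    moved = move_guest_disk(src, os.path.join(tmp, "fast-pool"))
    print(os.path.exists(src), os.path.exists(moved))  # False True
```

A real --move would also need to handle storage-pool volumes rather than bare paths, which is where the libvirt-level support discussed below comes in.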
Comment 9 Cole Robinson 2009-10-02 11:49:45 EDT
The way to do this would be to extend the CreateVolFrom libvirt API to work across connections (possibly using the new streaming APIs). Danpb wrote about it here: https://www.redhat.com/archives/libvir-list/2009-September/msg00472.html
Comment 10 Cole Robinson 2013-04-21 15:12:09 EDT
virtinst has been merged into virt-manager.git. Moving all virtinst bugs to the virt-manager component.
Comment 11 Cole Robinson 2014-02-08 19:37:21 EST
While we could implement this as part of virt-clone, I think this makes more sense with proper libvirt support as 'offline migration with non-shared storage', since libvirt already has most of the pieces in place for that. In particular, I don't see this being implemented in virt-clone any time soon, so keeping this RFE open isn't very useful. Closing as WONTFIX.