Bug 1402581 - 'virsh snapshot-create' should have way to default to external snapshots
Summary: 'virsh snapshot-create' should have way to default to external snapshots
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: yisun
URL:
Whiteboard:
Depends On:
Blocks: 1519002 1214187 1342543 1403951 1431852 1519003
 
Reported: 2016-12-07 21:33 UTC by Eric Blake
Modified: 2021-06-07 13:22 UTC (History)
15 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1403951 1519002 (view as bug list)
Environment:
Last Closed: 2019-04-24 12:29:11 UTC
Target Upstream Version:



Description Eric Blake 2016-12-07 21:33:59 UTC
Description of problem:
For historical reasons, 'virsh snapshot-create' defaults to internal snapshots, because external snapshots were not implemented for several years.  However, these days the qemu team recommends the use of external snapshots, and upper levels like RHEV prefer external snapshots.  It would be nice if there were a way to set up configuration defaults for virsh, so that typing 'virsh snapshot-create' could consult the configuration file on whether to default to internal or external, instead of forcing the user to remember to pass extra flags to get an external snapshot.
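For reference, today the external form must be requested explicitly; a minimal sketch of the difference (the domain name `rhel7` and file paths are made up for illustration):

```shell
# Current default: internal snapshot (qcow2 images only; state stored in-image)
virsh snapshot-create-as rhel7 snap1

# External disk-only snapshot has to be asked for with extra flags
virsh snapshot-create-as rhel7 snap1 --disk-only \
    --diskspec vda,snapshot=external,file=/var/lib/libvirt/images/rhel7.snap1.qcow2
```

The RFE asks for a virsh configuration default so that the second form could be obtained without remembering the extra flags.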

Version-Release number of selected component (if applicable):
libvirt-2.5.0-1.el7

How reproducible:
100%

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 5 Peter Krempa 2017-01-17 08:59:12 UTC
External snapshots are not on par feature-wise in libvirt, thus we can't switch the default yet.

Comment 6 Laszlo Ersek 2017-02-24 23:50:15 UTC
When designing this feature for virsh (and virt-manager), please don't forget about the UEFI varstore files (under /var/lib/libvirt/qemu/nvram), which are technically raw drives, but have very different storage properties from normal drives. They don't live in storage pools, they are not on shared storage, and also not subject to storage migration. QEMU automatically writes them out fully on the target host after migration finishes.

I'm unsure how this interacts with external snapshots, but I figure I'd better raise it. Thanks.

Comment 7 Niccolò Belli 2018-09-24 19:16:11 UTC
Is there currently any way to take a snapshot of a running VM (and restore it) *including* its ram state?
If you don't save the ram state it's basically like pulling the power cord...

Comment 8 Peter Krempa 2018-09-25 08:44:48 UTC
Taking the ram state is possible when you use the --memspec argument of virsh snapshot-create-as (or create the XML with the corresponding element for the memory snapshot target).
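A sketch of the --memspec form mentioned above (domain name and paths are hypothetical):

```shell
# External snapshot including guest RAM: the memory state goes to a separate
# file, and the disk gets an external qcow2 overlay
virsh snapshot-create-as rhel7 snap1 \
    --memspec file=/var/lib/libvirt/images/rhel7.snap1.mem,snapshot=external \
    --diskspec vda,snapshot=external
```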

Restoring external snapshots is currently possible only manually, and the steps depend on what you want to achieve: a plain revert without creating a second set of overlay disks will invalidate the most recent state. To restore a snapshot that includes memory, the 'virsh restore' command can be used with the memory state image, but special care is needed with the configuration if you want to preserve the most recent state: updated XML needs to be passed via the --xml option, and new overlay disk images need to be created.
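The manual memory-restore procedure described above might look like this (all file names are hypothetical; the overlays must be created and referenced from the updated XML before restoring, or the snapshotted state gets overwritten):

```shell
# Create a fresh overlay on top of the snapshot's disk image so the
# snapshotted state stays intact (paths are illustrative)
qemu-img create -f qcow2 \
    -b /var/lib/libvirt/images/rhel7.snap1.qcow2 -F qcow2 \
    /var/lib/libvirt/images/rhel7.overlay.qcow2

# Point the domain XML at the new overlay, then restore RAM and device
# state from the memory image saved at snapshot time
virsh restore /var/lib/libvirt/images/rhel7.snap1.mem --xml rhel7-updated.xml
```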

Comment 10 Jaroslav Suchanek 2019-04-24 12:29:11 UTC
This bug is going to be addressed in next major release within existing cloned bug.

Comment 11 Jonathan Watt 2021-06-05 18:32:01 UTC
Jaroslav, can you clarify comment 10? Neither of the marked clones of this bug seem to be the one you referred to. Which bug needs to be followed to track progress on this?

Comment 12 Jaroslav Suchanek 2021-06-07 12:19:56 UTC
(In reply to Jonathan Watt from comment #11)
> Jaroslav, can you clarify comment 10? Neither of the marked clones of this
> bug seem to be the one you referred to. Which bug needs to be followed to
> track progress on this?

It should be bug 1519002 .

Comment 13 Jonathan Watt 2021-06-07 12:29:33 UTC
Thanks for the reply. I assumed given that bug 1519002 is locked it must be a security bug rather than a feature bug. It's a bit unfortunate that it being locked prevents many folks from tracking its progress, but I guess there are reasons.

Comment 14 Jaroslav Suchanek 2021-06-07 12:57:32 UTC
(In reply to Jonathan Watt from comment #13)
> Thanks for the reply. I assumed given that bug 1519002 is locked it must be
> a security bug rather than a feature bug. It's a bit unfortunate that it
> being locked prevents many folks from tracking its progress, but I guess
> there are reasons.

Ah, my apologies, I didn't realize that bug 1519002 is not public. Anyway, I made it public, so you should be able to see it. Thanks.

Comment 15 Jonathan Watt 2021-06-07 13:22:28 UTC
Awesome, thank you!

