Description of problem:
For historical reasons, 'virsh snapshot-create' defaults to internal snapshots, because external snapshots were not implemented for several years. However, these days the qemu team recommends the use of external snapshots, and upper levels like RHEV prefer external snapshots. It would be nice if there were a way to set up configuration defaults for virsh, so that typing 'virsh snapshot-create' could consult the configuration file on whether to default to internal or external, instead of forcing the user to remember to pass extra flags to get an external snapshot.
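To illustrate the current situation, here is a minimal sketch of the two invocations as they stand today (the domain name 'demo' is hypothetical; the flags shown are the existing virsh options, not the proposed configuration mechanism):

```shell
# Today's default: an *internal* snapshot, stored inside the qcow2 image.
virsh snapshot-create-as demo snap-internal

# External snapshot: the extra flags the user currently has to remember.
# --disk-only creates external overlay files alongside the original images.
virsh snapshot-create-as demo snap-external --disk-only --atomic
```

The RFE is essentially that a virsh configuration file could flip which of these the bare 'snapshot-create' command performs.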
Version-Release number of selected component (if applicable):
libvirt-2.5.0-1.el7
How reproducible:
100%
Additional info:
When designing this feature for virsh (and virt-manager), please don't forget about the UEFI varstore files (under /var/lib/libvirt/qemu/nvram), which are technically raw drives, but have very different storage properties from normal drives. They don't live in storage pools, they are not on shared storage, and also not subject to storage migration. QEMU automatically writes them out fully on the target host after migration finishes.
I'm unsure how this interacts with external snapshots, but I figure I'd better raise it. Thanks.
Is there currently any way to take a snapshot of a running VM (and restore it) *including* its RAM state?
If you don't save the RAM state, it's basically like pulling the power cord...
Capturing the RAM state is possible with the --memspec argument of 'virsh snapshot-create-as' (or by creating the snapshot XML with the corresponding element describing the memory snapshot target).
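As a sketch of the XML alternative mentioned above, a snapshot definition with an external memory image and an external disk overlay could look like the following (file paths and the disk target 'vda' are hypothetical examples):

```
<domainsnapshot>
  <name>snap1</name>
  <!-- external memory image: the guest's RAM state is written to this file -->
  <memory snapshot='external' file='/var/lib/libvirt/images/snap1.mem'/>
  <disks>
    <disk name='vda' snapshot='external'>
      <source file='/var/lib/libvirt/images/vda.snap1.qcow2'/>
    </disk>
  </disks>
</domainsnapshot>
```

Such a file can be passed to 'virsh snapshot-create' via its --xmlfile argument instead of spelling everything out with --memspec/--diskspec on the command line.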
Restoring external snapshots is currently possible only manually, and the exact steps depend on what you want to achieve: a plain revert, without creating a second set of overlay disks, will invalidate the most recent state. To restore a snapshot that includes memory, the 'virsh restore' command can be used with the memory state image, but special care is needed with the configuration if you want to preserve the most recent state: an updated domain XML needs to be passed via the --xml option, and new overlay disk images need to be created.
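A minimal sketch of that manual procedure, assuming the snapshot from the earlier example and hypothetical file paths (the updated domain XML 'demo-updated.xml' must be prepared by hand to point at the new overlay):

```shell
# 1. Create a fresh overlay on top of the snapshot's disk image, so the
#    current (post-snapshot) state in the old overlay is not clobbered.
qemu-img create -f qcow2 \
    -b /var/lib/libvirt/images/vda.snap1.qcow2 -F qcow2 \
    /var/lib/libvirt/images/vda.revert.qcow2

# 2. Restore the saved memory image together with the updated XML that
#    references the new overlay disk.
virsh restore /var/lib/libvirt/images/snap1.mem --xml demo-updated.xml
```

This is only an outline of the steps described above, not a supported one-command revert; getting the XML wrong can invalidate the backing chain.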
Comment 10, Jaroslav Suchanek, 2019-04-24 12:29:11 UTC
This bug is going to be addressed in the next major release via an existing cloned bug.
Jaroslav, can you clarify comment 10? Neither of the marked clones of this bug seem to be the one you referred to. Which bug needs to be followed to track progress on this?
Comment 12, Jaroslav Suchanek, 2021-06-07 12:19:56 UTC
(In reply to Jonathan Watt from comment #11)
> Jaroslav, can you clarify comment 10? Neither of the marked clones of this
> bug seem to be the one you referred to. Which bug needs to be followed to
> track progress on this?
It should be bug 1519002.
Thanks for the reply. Given that bug 1519002 is locked, I assumed it must be a security bug rather than a feature bug. It's a bit unfortunate that it being locked prevents many folks from tracking its progress, but I guess there are reasons.
Comment 14, Jaroslav Suchanek, 2021-06-07 12:57:32 UTC
(In reply to Jonathan Watt from comment #13)
> Thanks for the reply. I assumed given that bug 1519002 is locked it must be
> a security bug rather than a feature bug. It's a bit unfortunate that it
> being locked prevents many folks from tracking its progress, but I guess
> there are reasons.
Ah, my apologies, I didn't realize that bug 1519002 was not public. Anyway, I have made it public, so you should be able to see it. Thanks.