Bug 1662588 - host-model CPU changes to custom CPU in an inactive config after reverting to an active snapshot
Summary: host-model CPU changes to custom CPU in an inactive config after reverting to an active snapshot
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libvirt
Version: 8.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 8.2
Assignee: Jiri Denemark
QA Contact: Meina Li
URL:
Whiteboard:
Depends On: 1494471
Blocks: 1711971
 
Reported: 2018-12-30 13:36 UTC by jiyan
Modified: 2020-05-05 09:47 UTC (History)
11 users

Fixed In Version: libvirt-5.9.0-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1494471
Environment:
Last Closed: 2020-05-05 09:45:09 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
max-rhel-8 log (4.39 KB, application/octet-stream), 2019-12-03 18:20 UTC, IBM Bug Proxy
max-rhel-8 xml (1.12 KB, application/octet-stream), 2019-12-03 18:21 UTC, IBM Bug Proxy
vm xml, scripts (1.49 KB, application/gzip), 2020-02-10 08:12 UTC, Han Han


Links
System ID Private Priority Status Summary Last Updated
IBM Linux Technology Center 174306 0 None None None 2019-08-06 01:36:11 UTC
Red Hat Product Errata RHBA-2020:2017 0 None None None 2020-05-05 09:47:03 UTC

Description jiyan 2018-12-30 13:36:18 UTC
+++ This bug was initially created as a clone of Bug #1494471 +++

This bug was reproduced in RHEL-8; the detailed steps can be seen at:
https://bugzilla.redhat.com/show_bug.cgi?id=1494471#c5

Description of problem:

When a domain with a host-model CPU is started, its CPU is expanded into a custom one, and this custom CPU is stored within a snapshot taken while the domain is running. The inactive config still contains the host-model CPU. But once we revert to the active snapshot, even the CPU in the inactive config changes to the custom one, which is wrong.

Version-Release number of selected component (if applicable):

libvirt-3.7.0

How reproducible:

100%

Steps to Reproduce:
1. define a domain with <cpu mode='host-model'/>
2. start the domain: virsh start $DOM
3. create a snapshot: virsh snapshot-create-as $DOM snap
4. revert to the new snapshot: virsh snapshot-revert $DOM snap
5. check inactive domain XML: virsh dumpxml --inactive $DOM
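The regression in step 5 can be detected mechanically by parsing the inactive XML. A minimal sketch (using hypothetical sample XML strings rather than output from a real host, so it is not tied to a libvirt installation):

```python
import xml.etree.ElementTree as ET

def cpu_mode(domain_xml: str) -> str:
    """Return the mode attribute of the <cpu> element in a domain XML."""
    cpu = ET.fromstring(domain_xml).find("cpu")
    return cpu.get("mode") if cpu is not None else ""

# Inactive XML as it should look at every step (host-model preserved).
before = "<domain><cpu mode='host-model' check='partial'/></domain>"
# Inactive XML as it wrongly looks after step 4 (expanded custom model).
after = ("<domain><cpu mode='custom' match='exact' check='full'>"
         "<model fallback='forbid'>Opteron_G5</model></cpu></domain>")

assert cpu_mode(before) == "host-model"
assert cpu_mode(after) == "custom"   # the bug: mode flipped after revert
```

In a real test, the `before`/`after` strings would come from `virsh dumpxml --inactive $DOM` captured before and after the revert.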

Actual results:

After each of the first three steps the inactive domain XML correctly contains the host-model CPU, but after step 4 the CPU in the inactive domain XML changes to the custom one from the snapshot.

Expected results:

The inactive XML should contain the host-model CPU in all steps, i.e., even after reverting the domain to the snapshot.

Additional info:

Do not confuse this issue with bug 1485022, which tracks an issue with an offline, i.e., inactive, snapshot.

--- Additional comment from Jiri Denemark on 2017-11-22 13:31:39 UTC ---



--- Additional comment from IBM Bug Proxy on 2017-11-22 13:38:07 UTC ---

------- Comment From sthoufee.com 2017-11-03 03:20 EDT-------
https://www.redhat.com/archives/libvir-list/2017-October/msg01333.html

--- Additional comment from IBM Bug Proxy on 2017-12-07 11:01:17 UTC ---



--- Additional comment from jiyan on 2018-12-30 10:37:51 UTC ---

Reproduced this bug with the following components:

Version:
libvirt-3.7.0-1.el7.x86_64
qemu-kvm-rhev-2.9.0-16.el7_4.18.x86_64
kernel-3.10.0-693.el7.x86_64

Steps:
1. prepare a shutdown VM with 'host-model' CPU configuration
# virsh domstate test1
shut off

# virsh dumpxml test1 --inactive | grep "<cpu" -A2
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
  </cpu>

2. Start VM and check cpu configuration in inactive dumpxml
# virsh start test1
Domain test1 started

# virsh dumpxml test1 --inactive|grep "<cpu" -A2
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
  </cpu>

3. Create internal snapshot for VM and check cpu configuration in inactive dumpxml
# virsh snapshot-create-as test1 snap1
Domain snapshot snap1 created

# virsh dumpxml test1 --inactive|grep "<cpu" -A2
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
  </cpu>

4. Revert VM from internal snapshot and check cpu configuration in inactive dumpxml
# virsh snapshot-revert test1 snap1

# virsh dumpxml test1 --inactive|grep "<cpu" -A2
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>Opteron_G5</model>
    <vendor>AMD</vendor>
    ...

As step-4 shows, the CPU configuration in inactive dumpxml changed after reverting from internal snapshot.

--- Additional comment from jiyan on 2018-12-30 10:45:23 UTC ---

This issue can also be reproduced in RHEL-8 (slow train and fast train).

#################### RHEL-8 slow train

Version:
libvirt-4.5.0-16.module+el8+2586+bf759444.x86_64
qemu-kvm-2.12.0-50.module+el8+2596+0a642e54.x86_64
kernel-4.18.0-57.el8.x86_64

Steps:
# virsh domstate test1
shut off

# virsh dumpxml test1 |grep "<cpu" -A2
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
  </cpu>

# virsh start test1
Domain test1 started

# virsh dumpxml test1 --inactive|grep "<cpu" -A2
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
  </cpu>

# virsh snapshot-create-as test1 snap1
Domain snapshot snap1 created

# virsh dumpxml test1 --inactive|grep "<cpu" -A2
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
  </cpu>

# virsh snapshot-revert test1 snap1

# virsh dumpxml test1 --inactive|grep "<cpu" -A2
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>IvyBridge-IBRS</model>
    <vendor>Intel</vendor>
    ...

#################### RHEL-8 fast train

Version:
libvirt-4.10.0-1.module+el8+2317+367e35b5.x86_64
kernel-4.18.0-57.el8.x86_64
qemu-kvm-3.1.0-1.module+el8+2538+1516be75.x86_64

Steps:
# virsh domstate fast1
shut off

# virsh dumpxml fast1 |grep "<cpu" -A2
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
  </cpu>

# virsh start fast1
Domain fast1 started

# virsh dumpxml fast1 --inactive|grep "<cpu" -A2
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
  </cpu>

# virsh snapshot-create-as fast1 snap1
Domain snapshot snap1 created

# virsh dumpxml fast1 --inactive|grep "<cpu" -A2
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
  </cpu>

# virsh snapshot-revert fast1 snap1

# virsh dumpxml fast1 --inactive|grep "<cpu" -A2
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>Haswell-noTSX-IBRS</model>
    <vendor>Intel</vendor>
    ...

Comment 1 IBM Bug Proxy 2019-01-04 12:30:18 UTC
------- Comment From viparash.com 2019-01-04 07:28 EDT-------
Hello Madhu,

Please have a look at this. This bug is RHEL8 counterpart
of Pegas bug LTC 158611 (Red Hat 1494471/1509107)

Comment 2 IBM Bug Proxy 2019-07-26 19:20:20 UTC
------- Comment From maxiwell 2019-07-26 15:17 EDT-------
I submitted a V2 patch to libvirt to save both the active and inactive domain in the snapshot XML. I am waiting for community review.

https://www.redhat.com/archives/libvir-list/2019-July/msg01357.html

Comment 3 IBM Bug Proxy 2019-08-05 21:00:19 UTC
------- Comment From maxiwell 2019-08-05 16:56 EDT-------
*** Bug 164156 has been marked as a duplicate of this bug. ***

Comment 4 Jiri Denemark 2019-09-11 11:16:06 UTC
This is fixed upstream by

commit 720d98263e2c5864efb6a6e6e68e3b9f0dd04e63
Refs: v5.7.0-101-g720d98263e
Author:     Maxiwell S. Garcia <maxiwell.com>
AuthorDate: Thu Aug 29 17:55:42 2019 -0300
Commit:     Jiri Denemark <jdenemar>
CommitDate: Wed Sep 11 13:09:45 2019 +0200

    qemu: formatting XML from domain def choosing the root name

    The function virDomainDefFormatInternal() has the predefined root name
    "domain" to format the XML. But to save both active and inactive domain
    in the snapshot XML, the new root name "inactiveDomain" was created.
    So, the new function virDomainDefFormatInternalSetRootName() allows to
    choose the root name of XML. The former function became a tiny wrapper
    to call the new function setting the correct parameters.

    Signed-off-by: Maxiwell S. Garcia <maxiwell.com>
    Reviewed-by: Daniel Henrique Barboza <danielhb413>
    Tested-by: Daniel Henrique Barboza <danielhb413>
    Reviewed-by: Jiri Denemark <jdenemar>
commit 152c165d34cb6dcd21d08427422850f406cd0643
Refs: v5.7.0-102-g152c165d34
Author:     Maxiwell S. Garcia <maxiwell.com>
AuthorDate: Thu Aug 29 17:55:43 2019 -0300
Commit:     Jiri Denemark <jdenemar>
CommitDate: Wed Sep 11 13:09:45 2019 +0200

    snapshot: Store both config and live XML in the snapshot domain

    The snapshot-create operation of running guests saves the live
    XML and uses it to replace the active and inactive domain in
    case of revert. So, the config XML is ignored by the snapshot
    process. This commit changes it and adds the config XML in the
    snapshot XML as the <inactiveDomain> entry.

    In case of offline guest, the behavior remains the same and the
    config XML is saved in the snapshot XML as <domain> entry. The
    behavior of older snapshots of running guests, that don't have
    the new <inactiveDomain>, remains the same too. The revert, in
    this case, overrides both active and inactive domain with the
    <domain> entry. So, the <inactiveDomain> in the snapshot XML is
    not required to snapshot work, but it's useful to preserve the
    config XML of running guests.

    Signed-off-by: Maxiwell S. Garcia <maxiwell.com>
    Reviewed-by: Daniel Henrique Barboza <danielhb413>
    Tested-by: Daniel Henrique Barboza <danielhb413>
    Reviewed-by: Jiri Denemark <jdenemar>
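The revert logic the second commit describes can be sketched as follows: when the snapshot XML carries an <inactiveDomain> entry it supplies the config definition, otherwise the <domain> entry overrides both active and inactive, preserving compatibility with older snapshots. The element names come from the commit message; the helper itself is illustrative, not libvirt code:

```python
import xml.etree.ElementTree as ET

def pick_revert_defs(snapshot_xml: str):
    """Return (active_def, inactive_def) elements for a snapshot revert.

    Per the commit message: <domain> always restores the live definition;
    the config comes from <inactiveDomain> when present (new snapshots of
    running guests), and falls back to <domain> otherwise.
    """
    root = ET.fromstring(snapshot_xml)
    domain = root.find("domain")
    inactive = root.find("inactiveDomain")
    return domain, inactive if inactive is not None else domain

new_style = ("<domainsnapshot>"
             "<domain><cpu mode='custom'/></domain>"
             "<inactiveDomain><cpu mode='host-model'/></inactiveDomain>"
             "</domainsnapshot>")
old_style = ("<domainsnapshot>"
             "<domain><cpu mode='custom'/></domain>"
             "</domainsnapshot>")

active, inactive = pick_revert_defs(new_style)
assert inactive.find("cpu").get("mode") == "host-model"  # config preserved
active, inactive = pick_revert_defs(old_style)
assert inactive.find("cpu").get("mode") == "custom"      # old behavior kept
```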

Comment 5 IBM Bug Proxy 2019-12-03 18:20:21 UTC
------- Comment From maxiwell 2019-12-03 13:14 EDT-------
I tried to test the fix, but I can't revert the snapshot when the VM is running.

maxiwell@ltcgen2:xmls$ sudo virsh define max-rhel-8.xml
Domain max-rhel-8 defined from max-rhel-8.xml

maxiwell@ltcgen2:xmls$ sudo virsh start max-rhel-8
Domain max-rhel-8 started

maxiwell@ltcgen2:xmls$ sudo virsh snapshot-create-as max-rhel-8 snap
Domain snapshot snap created

maxiwell@ltcgen2:xmls$ sudo virsh snapshot-revert max-rhel-8 snap
error: Disconnected from qemu:///system due to end of file
error: End of file while reading data: Input/output error

The log /var/log/libvirt/qemu/max-rhel-8.log didn't show any message about the error.

- Versions:
Red Hat Enterprise Linux release 8.2 Beta (Ootpa)

libvirt version: 5.9.0, package: 4.module+el8.2.0+4836+a8e32ad7 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2019-11-21-19:33:03, )

qemu version: 4.2.0qemu-kvm-4.2.0-1.module+el8.2.0+4793+b09dd2fb,

kernel: 4.18.0-151.el8.ppc64le, hostname: ltcgen2.aus.stglabs.ibm.com

Comment 6 IBM Bug Proxy 2019-12-03 18:20:23 UTC
Created attachment 1641745 [details]
max-rhel-8 log

Comment 7 IBM Bug Proxy 2019-12-03 18:21:46 UTC
Created attachment 1641746 [details]
max-rhel-8 xml

Comment 9 jiyan 2020-01-19 08:55:40 UTC
Hi Jiri 

While trying to reproduce and verify this issue, I found the following:
1> I can reproduce this issue with "libvirt-5.6.0-7.module+el8.2.0+4673+ff4b3b61.x86_64"

2> After updating libvirt to "libvirt-5.9.0-1.module+el8.2.0+4682+acceb91e.x86_64", every "snapshot-revert" triggers a libvirtd coredump.
I set up libvirt-5.9.0-1.module+el8.2.0+4682+acceb91e.x86_64 and the VM in a totally new environment, which also hits the libvirtd core dump.

3> After updating libvirt to "libvirt-6.0.0-1.module+el8.2.0+5453+31b2b136.x86_64", the first "snapshot-revert" triggers a qemu core dump, and the later operations succeed.
I set up libvirt-6.0.0-1.module+el8.2.0+5453+31b2b136.x86_64 and the VM in a totally new environment, which works as expected.

So the questions are:
1> Should we modify the fixed version in this bug?
2> Can I verify this bug by testing the scenario after updating libvirt? 
   2.1> If not, then testing this bug with libvirt-6.0.0-1.module+el8.2.0+5453+31b2b136.x86_64 in a pure environment is expected.
   2.2> If so, could you please check the qemu core dump in step-6?

Thank you in advance. :)

Version:
kernel-4.18.0-171.el8.x86_64
libvirt-5.6.0-7.module+el8.2.0+4673+ff4b3b61.x86_64
qemu-kvm-4.2.0-6.module+el8.2.0+5453+31b2b136.x86_64

Steps:
1. Start a VM with host-model cpu conf 
# virsh list --all
 Id   Name     State
-------------------------
 -    test82   shut off

# virsh dumpxml test82 --inactive |grep "<cpu" -A2
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
  </cpu>

# virsh start test82 
Domain test82 started

2. Create internal snapshot for the VM
# virsh snapshot-create-as test82 test82-snap-1
Domain snapshot test82-snap-1 created

# virsh snapshot-list test82 
 Name            Creation Time               State
------------------------------------------------------
 test82-snap-1   2020-01-19 02:59:08 -0500   running

3. Check internal snapshot again 
# virsh dumpxml test82 --inactive |grep "<cpu" -A2
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
  </cpu>

4. Revert snapshot for VM and check inactive dumpxml again
# virsh snapshot-revert test82 test82-snap-1 

# virsh dumpxml test82 --inactive |grep "<cpu" -A20
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>EPYC-IBPB</model>
    <vendor>AMD</vendor>
    <feature policy='require' name='x2apic'/>
    <feature policy='require' name='tsc-deadline'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='tsc_adjust'/>
    <feature policy='require' name='arch-capabilities'/>
    <feature policy='require' name='ssbd'/>
    <feature policy='require' name='cmp_legacy'/>
    <feature policy='require' name='perfctr_core'/>
    <feature policy='require' name='amd-ssbd'/>
    <feature policy='require' name='virt-ssbd'/>
    <feature policy='require' name='rdctl-no'/>
    <feature policy='require' name='skip-l1dfl-vmentry'/>
    <feature policy='require' name='mds-no'/>
    <feature policy='disable' name='monitor'/>
    <feature policy='disable' name='svm'/>
    <feature policy='require' name='topoext'/>
  </cpu>

5. Update libvirt to the fixing version of this bug, and do the operation again ==> libvirt coredump
# yum update libvirt* -y

# systemctl restart libvirtd

# rpm -qa libvirt
libvirt-5.9.0-1.module+el8.2.0+4682+acceb91e.x86_64

# virsh snapshot-revert test82 test82-snap-1 
error: Disconnected from qemu:///system due to end of file
error: Cannot recv data: Connection reset by peer

(gdb) c
Continuing.

Thread 3 "libvirtd" received signal SIGABRT, Aborted.
[Switching to Thread 0x7f80cae51700 (LWP 3979)]
0x00007f80d18b670f in raise () from /lib64/libc.so.6
(gdb) bt
#0  0x00007f80d18b670f in raise () from /lib64/libc.so.6
#1  0x00007f80d18a0b25 in abort () from /lib64/libc.so.6
#2  0x00007f80d18f9897 in __libc_message () from /lib64/libc.so.6
#3  0x00007f80d18fffdc in malloc_printerr () from /lib64/libc.so.6
#4  0x00007f80d190028c in munmap_chunk () from /lib64/libc.so.6
#5  0x00007f80d27072b2 in g_free () from /lib64/libglib-2.0.so.0
#6  0x00007f80d547028f in virFree () from /lib64/libvirt.so.0
#7  0x00007f808bc41b31 in qemuMonitorTextLoadSnapshot () from /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so
#8  0x00007f808bc784ca in qemuDomainRevertToSnapshot () from /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so
#9  0x00007f80d56a9aaf in virDomainRevertToSnapshot () from /lib64/libvirt.so.0
#10 0x000055b671cbfc62 in remoteDispatchDomainRevertToSnapshotHelper ()
#11 0x00007f80d55cf319 in virNetServerProgramDispatch () from /lib64/libvirt.so.0
#12 0x00007f80d55d44bc in virNetServerHandleJob () from /lib64/libvirt.so.0
#13 0x00007f80d54f3210 in virThreadPoolWorker () from /lib64/libvirt.so.0
#14 0x00007f80d54f259c in virThreadHelper () from /lib64/libvirt.so.0
#15 0x00007f80d1c492de in start_thread () from /lib64/libpthread.so.0
#16 0x00007f80d197ae83 in clone () from /lib64/libc.so.6

6. Update libvirt to the newest version; the first "snapshot-revert" operation sees a qemu coredump, and the later operations work as expected.
# yum update libvirt* -y

# systemctl restart libvirtd

# rpm -qa libvirt
libvirt-6.0.0-1.module+el8.2.0+5453+31b2b136.x86_64

# virsh snapshot-revert test82 test82-snap-01 
error: Unable to read from monitor: Connection reset by peer

# tail -n1 /var/log/libvirt/qemu/test82.log 
2020-01-19 08:44:18.448+0000: shutting down, reason=crashed

# virsh snapshot-revert test82 test82-snap-01 

# virsh dumpxml test82 --inactive |grep "<cpu" -A2
  <cpu mode='host-model' check='partial'/>

# virsh snapshot-revert test82 test82-snap-01 

# virsh dumpxml test82 --inactive |grep "<cpu" -A2
  <cpu mode='host-model' check='partial'/>

Comment 10 Jiri Denemark 2020-02-05 14:22:22 UTC
(In reply to jiyan from comment #9)
> 1> Should we modify the fixed version in this bug?

Not really. There's just another bug in that version which blocks verification
of this bug in some cases.

> 2> Can I verify this bug with testing the scenario after updating the
> libvirt? 

Please, check the behavior in both cases: after upgrade from 8.1.1 to 8.2.0
and with 8.2.0 only.

> # rpm -qa libvirt
> libvirt-5.9.0-1.module+el8.2.0+4682+acceb91e.x86_64
> 
> # virsh snapshot-revert test82 test82-snap-1 
> error: Disconnected from qemu:///system due to end of file
> error: Cannot recv data: Connection reset by peer

This was caused by a bug present in libvirt 5.{8,9,10}.0. It was fixed by

    commit 4c53267b70fc5c548b6530113c3f96870d8d7fc1
    Refs: v5.10.0-67-g4c53267b70
    Author:     Michal Prívozník <mprivozn>
    AuthorDate: Fri Dec 6 10:27:08 2019 +0100
    Commit:     Michal Prívozník <mprivozn>
    CommitDate: Fri Dec 6 10:29:46 2019 +0100

        qemu_monitor_text: Drop unused variable and avoid crash

        In v5.8.0-rc1~122 we've removed the only use of @safename in
        qemuMonitorTextLoadSnapshot(). What we are left with is an
        declared but not initialized variable that is passed to
        VIR_FREE().

        Caught by libvirt-php test suite.

        Signed-off-by: Michal Privoznik <mprivozn>


> 6. Update libvirt to the newest version of libvirt, the first time of
> "snapshot-revert" operation will see qemu coredump, and the later operation
> will work as expected.
> # yum update libvirt* -y
> 
> # systemctl restart libvirtd
> 
> # rpm -qa libvirt
> libvirt-6.0.0-1.module+el8.2.0+5453+31b2b136.x86_64
> 
> # virsh snapshot-revert test82 test82-snap-01 
> error: Unable to read from monitor: Connection reset by peer
> 
> # tail -n1 /var/log/libvirt/qemu/test82.log 
> 2020-01-19 08:44:18.448+0000: shutting down, reason=crashed

Reverting a snapshot of a running domain tries to reuse existing QEMU process
and the error indicates QEMU crashed while loading the snapshot. Thus the next
attempt to revert to the snapshot does not have any QEMU process it could
reuse and starts a new process telling it to load the snapshot, which
succeeds. If you can reproduce it during upgrade from RHEL-AV-8.1.1 to
RHEL-AV-8.2.0 (i.e., creating a snapshot on 8.1.1, keeping the domain running,
upgrading to 8.2.0 and reverting to that snapshot), file a new bug, please.

Comment 11 Han Han 2020-02-10 08:12:56 UTC
Created attachment 1662088 [details]
vm xml, scripts

I couldn't reproduce the qemu crash issue from comment 9. Here are my steps:
Version:
qemu-kvm-4.2.0-8.module+el8.2.0+5607+dc756904.x86_64
Libvirt before update: libvirt-5.6.0-10.module+el8.1.1+5309+6d656f05
Libvirt after update: libvirt-6.0.0-4.module+el8.2.0+5642+838f3513

Steps:
Follow the script in attachment:
# cat 1662588.sh 
#!/bin/bash
VM=nfs-rhel8
SNAP_NAME=s1

virsh create $VM.xml
virsh start $VM && sleep 30
virsh snapshot-create-as $VM $SNAP_NAME
dnf update -y
virsh snapshot-revert $VM $SNAP_NAME
if [ $? -ne 0 ]; then
    echo "BUG reproduced"
fi

My cpu info:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              1
On-line CPU(s) list: 0
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               94
Model name:          Intel(R) Xeon(R) CPU E3-1260L v5 @ 2.90GHz
Stepping:            3
CPU MHz:             2903.998
BogoMIPS:            5807.99
Virtualization:      VT-x
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            4096K
L3 cache:            16384K
NUMA node0 CPU(s):   0
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves arat umip arch_capabilities

Comment 12 Han Han 2020-02-10 08:14:32 UTC
I am not sure whether the qemu crash in comment 9 was caused by the host CPU type or the qemu version. Please reproduce it on the latest libvirt & qemu versions.

Comment 13 jiyan 2020-03-09 12:42:49 UTC
Hi Han
I think the crash should not be caused by the host CPU type, because I have tried the same scenario on two physical hosts.
Could you please follow the steps in comment 9 to see more info about the crash? Thank you. :)

Comment 14 Meina Li 2020-03-18 03:51:50 UTC
Verified Version:
libvirt-6.0.0-12.el8.x86_64
qemu-kvm-4.2.0-15.module+el8.2.0+6029+618ef2ec.x86_64
Verified Steps:
1. Start a guest with host-model cpu conf
# virsh dumpxml lmn --inactive | grep '<cpu'
  <cpu mode='host-model' check='partial'/>
2. Create the internal snapshot for the guest and revert it.
# virsh snapshot-create-as lmn s1
Domain snapshot s1 created
# virsh dumpxml lmn --inactive | grep '<cpu'
  <cpu mode='host-model' check='partial'/>
# virsh snapshot-revert lmn s1

3. Check the inactive dumpxml again.
# virsh dumpxml lmn --inactive | grep '<cpu'
  <cpu mode='host-model' check='partial'/>
# virsh dumpxml lmn | grep '<cpu'
  <cpu mode='custom' match='exact' check='full'>

Additional:
Reproduced in libvirt-5.6.0-7.module+el8.2.0+4673+ff4b3b61.x86_64
Passed in libvirt-6.0.0-12.el8.x86_64 after updating libvirt directly from libvirt-5.6.0-7.
So move this bug to verified.

Comment 16 errata-xmlrpc 2020-05-05 09:45:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2017
