Bug 1310747 - Passwords disappear from domain XML passed to virDomainRestoreFlags or virDomainSaveImageDefineXML
Summary: Passwords disappear from domain XML passed to virDomainRestoreFlags or virDomainSaveImageDefineXML
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.7
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Jiri Denemark
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1307094
Blocks:
 
Reported: 2016-02-22 15:31 UTC by Jan Kurik
Modified: 2016-03-22 15:49 UTC
CC List: 22 users

Fixed In Version: libvirt-0.10.2-54.el6_7.6
Doc Type: Bug Fix
Doc Text:
Prior to this update, the libvirt service in some cases removed the password for the SPICE client from the domain XML file after modifying the file and restoring the domain. As a consequence, anyone was able to connect to the SPICE client without password authentication. With this update, the code that updates XML configuration of a saved domain uses correct internal options to avoid removing passwords. As a result, users can change the XML file of a saved domain without the risk of losing set-up passwords.
Clone Of: 1307094
Environment:
Last Closed: 2016-03-22 15:49:02 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:0483 0 normal SHIPPED_LIVE libvirt bug fix update 2016-03-22 19:48:24 UTC

Description Jan Kurik 2016-02-22 15:31:44 UTC
This bug has been copied from bug #1307094 and has been proposed
to be backported to 6.7 z-stream (EUS).

Comment 6 Fangge Jin 2016-03-04 06:03:31 UTC
I can reproduce this bug with build libvirt-0.10.2-54.el6.x86_64.

Scenario 1:
Change the spice password after saving the guest; the password will be lost.

1) Start a guest with spice passwd
   <graphics type='spice' autoport='yes' passwd='**'/>
2) # virsh save rhel6.6 /tmp/rhel6.6.save
3) Change the spice passwd by: 
# virsh save-image-edit /tmp/rhel6.6.save
4) Check the dumpxml of the saved image; no passwd is found:
# virsh save-image-dumpxml rhel6.6 --security-info
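
For reference, what virsh save-image-edit does here at the API level is virDomainSaveImageGetXMLDesc followed by virDomainSaveImageDefineXML (the latter is named in the bug summary). A rough libvirt-python sketch of the same flow, assuming the libvirt Python bindings are installed; the guest name, file path and passwords are only illustrative placeholders:

    import libvirt

    SAVE_FILE = "/tmp/rhel6.6.save"

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("rhel6.6")

    # Save the running guest to a state file (same as "virsh save").
    dom.save(SAVE_FILE)

    # Read the saved image XML including secrets such as the spice passwd
    # (same as "virsh save-image-dumpxml --security-info").
    xml = conn.saveImageGetXMLDesc(SAVE_FILE, libvirt.VIR_DOMAIN_XML_SECURE)

    # Illustrative edit only: replace the old passwd attribute with a new one.
    new_xml = xml.replace("passwd='OLD'", "passwd='NEW'")

    # Write the edited XML back into the saved image
    # (the API that "virsh save-image-edit" calls after the editor exits).
    conn.saveImageDefineXML(SAVE_FILE, new_xml, 0)

    # On the broken build the passwd attribute is gone here; on the fixed
    # build it shows the new value.
    print(conn.saveImageGetXMLDesc(SAVE_FILE, libvirt.VIR_DOMAIN_XML_SECURE))

    conn.close()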


Scenario 2:
Restore the guest with --xml <new_file> (the new file contains an updated spice passwd for the guest); the spice passwd will be lost after restoring.

1) Start a guest with spice passwd
   <graphics type='spice' autoport='yes' passwd='**'/>
2) # virsh save rhel6.6 /tmp/rhel6.6.save
3) # virsh save-image-dumpxml /tmp/rhel6.6.save --security-info > rhel6.6-save-dump.xml
4) update spice passwd in file rhel6.6-save-dump.xml
5) # virsh restore --xml rhel6.6-save-dump.xml /tmp/rhel6.6.save
6) Check the dumpxml of rhel6.6; no passwd is found:
# virsh dumpxml --security-info rhel6.6
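
For reference, restoring with --xml maps to virDomainRestoreFlags with a replacement XML document, the other API named in the bug summary. A rough libvirt-python sketch of steps 3)-6), under the same assumptions as the sketch above (placeholder passwords, guest rhel6.6 already saved to /tmp/rhel6.6.save):

    import libvirt

    SAVE_FILE = "/tmp/rhel6.6.save"

    conn = libvirt.open("qemu:///system")

    # Dump the saved image XML with secrets included, edit the spice passwd,
    # then restore using the modified XML (same as "virsh restore --xml").
    xml = conn.saveImageGetXMLDesc(SAVE_FILE, libvirt.VIR_DOMAIN_XML_SECURE)
    new_xml = xml.replace("passwd='OLD'", "passwd='NEW'")  # illustrative edit
    conn.restoreFlags(SAVE_FILE, new_xml, 0)

    # On the fixed build the restored domain keeps the new password.
    dom = conn.lookupByName("rhel6.6")
    print(dom.XMLDesc(libvirt.VIR_DOMAIN_XML_SECURE))

    conn.close()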


Scenario 3:
For a guest with cpu mode='host-passthrough', editing the saved image with save-image-edit (e.g. changing the spice listen address) fails; the edited XML cannot be saved.

1) Start a guest with the following settings:
<cpu mode='host-passthrough'>
<graphics type='spice' autoport='yes' listen='0.0.0.0'>
2) # virsh save rhel6.6 /tmp/rhel6.6.save
3) Change spice listen address to 127.0.0.1 by save-image-edit and try to save.
# virsh save-image-edit /tmp/rhel6.6.save 
error: unsupported configuration: Target CPU model (null) does not match source Opteron_G5
Failed. Try again? [y,n,f,?]:
error: unsupported configuration: Target CPU model (null) does not match source Opteron_G5

Comment 7 Fangge Jin 2016-03-04 09:33:08 UTC
Verified on build libvirt-0.10.2-54.el6_7.4.x86_64

Scenario 1: passed
Change the spice password (using save-image-edit) after saving the guest.
Check save-image-dumpxml with --security-info; the password has been updated.
Then restore the guest. Open the spice graphics console and log in with the new passwd.

Scenario 2: passed
Restore the guest with --xml <new_file> (the new file contains an updated spice passwd for the guest).
Check the guest dumpxml with --security-info; the passwd has been updated.
Open the spice graphics console and log in with the new passwd.

Scenario 3: failed
1)Start a guest with cpu mode='host-passthrough'.

2)# virsh save rhel6.6 /tmp/rhel6.6.save

3)Change spice listen address in save image xml:
# virsh save-image-edit /tmp/rhel6.6.save
State file /tmp/rhel6.6.save edited.

4)Try to restore guest:
# virsh restore /tmp/rhel6.6.save
error: Failed to restore domain from /tmp/rhel6.6.save
error: End of file while reading data: Input/output error
error: One or more references were leaked after disconnect from the hypervisor
error: Failed to reconnect to the hypervisor

(gdb) bt
#0  0x0000003d5ff2842a in ?? ()
#1  0x00007f7147100b51 in x86ModelFind (cpu=0x7f7128005080, map=0x7f7128001cf0, policy=1) at cpu/cpu_x86.c:831
#2  x86ModelFromCPU (cpu=0x7f7128005080, map=0x7f7128001cf0, policy=1) at cpu/cpu_x86.c:850
#3  0x00007f7147102f38 in x86Compute (host=<value optimized out>, cpu=0x7f7128005080, guest=0x7f713e981f50, message=0x7f713e981f40) at cpu/cpu_x86.c:1243
#4  0x000000000048724e in qemuBuildCpuArgStr (conn=0x7f71300009c0, driver=0x7f7134029ee0, def=0x7f7128012490, monitor_chr=0x7f71280022f0, monitor_json=true, caps=0x7f7128000f20, migrateFrom=0x520a7c "stdio", 
    migrateFd=21, snapshot=0x0, vmop=VIR_NETDEV_VPORT_PROFILE_OP_RESTORE) at qemu/qemu_command.c:4516
#5  qemuBuildCommandLine (conn=0x7f71300009c0, driver=0x7f7134029ee0, def=0x7f7128012490, monitor_chr=0x7f71280022f0, monitor_json=true, caps=0x7f7128000f20, migrateFrom=0x520a7c "stdio", migrateFd=21, 
    snapshot=0x0, vmop=VIR_NETDEV_VPORT_PROFILE_OP_RESTORE) at qemu/qemu_command.c:5320
#6  0x00000000004b7224 in qemuProcessStart (conn=0x7f71300009c0, driver=0x7f7134029ee0, vm=0x7f7134104750, migrateFrom=0x520a7c "stdio", stdin_fd=21, stdin_path=0x7f7128000c00 "/tmp/rhel6.6.save", 
    snapshot=0x0, vmop=VIR_NETDEV_VPORT_PROFILE_OP_RESTORE, flags=2) at qemu/qemu_process.c:4034
#7  0x000000000046cbc6 in qemuDomainSaveImageStartVM (conn=0x7f71300009c0, driver=0x7f7134029ee0, vm=0x7f7134104750, fd=0x7f713e9829bc, header=0x7f713e9829c0, path=0x7f7128000c00 "/tmp/rhel6.6.save", 
    start_paused=false) at qemu/qemu_driver.c:5639
#8  0x000000000046dae5 in qemuDomainRestoreFlags (conn=0x7f71300009c0, path=0x7f7128000c00 "/tmp/rhel6.6.save", dxml=<value optimized out>, flags=<value optimized out>) at qemu/qemu_driver.c:5759
#9  0x00007f714712735a in virDomainRestore (conn=0x7f71300009c0, from=0x7f7128000900 "/tmp/rhel6.6.save") at libvirt.c:2741
#10 0x000000000043d176 in remoteDispatchDomainRestore (server=<value optimized out>, client=0x271fac0, msg=<value optimized out>, rerr=0x7f713e982b80, args=0x7f71280008c0, ret=<value optimized out>)
    at remote_dispatch.h:4421
#11 remoteDispatchDomainRestoreHelper (server=<value optimized out>, client=0x271fac0, msg=<value optimized out>, rerr=0x7f713e982b80, args=0x7f71280008c0, ret=<value optimized out>) at remote_dispatch.h:4403
#12 0x00007f7147178f52 in virNetServerProgramDispatchCall (prog=0x2723020, server=0x271a700, client=0x271fac0, msg=0x271e150) at rpc/virnetserverprogram.c:431
#13 virNetServerProgramDispatch (prog=0x2723020, server=0x271a700, client=0x271fac0, msg=0x271e150) at rpc/virnetserverprogram.c:304
#14 0x00007f7147175ede in virNetServerProcessMsg (srv=<value optimized out>, client=0x271fac0, prog=<value optimized out>, msg=0x271e150) at rpc/virnetserver.c:170
#15 0x00007f714717657c in virNetServerHandleJob (jobOpaque=<value optimized out>, opaque=0x271a700) at rpc/virnetserver.c:191
#16 0x00007f7147095d7c in virThreadPoolWorker (opaque=<value optimized out>) at util/threadpool.c:144
#17 0x00007f7147095669 in virThreadHelper (data=<value optimized out>) at util/threads-pthread.c:161

Comment 8 Fangge Jin 2016-03-04 09:35:17 UTC
Hello, Jiri.
  Please check comment 7 Scenario 3.

Comment 9 Jiri Denemark 2016-03-04 09:44:27 UTC
As mentioned in bug 1307094, it's an unrelated issue.

Comment 10 Jiri Denemark 2016-03-04 10:27:18 UTC
And as mentioned in the same bug, it's a regression caused by the patch for this bug. Sorry for not noticing it...

Comment 12 Fangge Jin 2016-03-07 08:31:59 UTC
Retested comment 7 scenario 3 with build libvirt-0.10.2-54.el6_7.5.x86_64 - PASSED

Scenario 3: passed
1)Start a guest with cpu mode='host-passthrough'.

2)# virsh save rhel6.6 /tmp/rhel6.6.save

3) Change something (e.g. the graphics listen address) in the save image XML:
# virsh save-image-edit /tmp/rhel6.6.save
State file /tmp/rhel6.6.save edited.

4) Check that the change in step 3) is saved in the save image successfully:
# virsh save-image-dumpxml /tmp/rhel6.6.save

5)# virsh restore /tmp/rhel6.6.save

6)Check the change in step 3) takes effect after guest restore:
# virsh dumpxml rhel6.6

7)Open guest graphic, check guest works well.

Comment 13 Luyao Huang 2016-03-07 09:20:53 UTC
Hi Jiri,

When I was doing regression testing for this bug, I found that libvirtd crashes if the CPU mode is host-model. Could you please help check whether your patch was included in libvirt-0.10.2-54.el6_7.5? Thanks a lot.

1.
# rpm -q libvirt
libvirt-0.10.2-54.el6_7.5.x86_64

2.
# virsh dumpxml test3

  <cpu mode='host-model'>
    <model fallback='allow'/>
  </cpu>

3.
# virsh save test3 test3.save

Domain test3 saved to test3.save

4. Edit the save file; just add a blank to make it really call the API:
# virsh save-image-edit test3.save
State file test3.save edited.

5.
# virsh restore test3.save
error: Failed to restore domain from test3.save
error: End of file while reading data: Input/output error
error: One or more references were leaked after disconnect from the hypervisor
error: Failed to reconnect to the hypervisor

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f7c1060c700 (LWP 27321)]
0x000000369687f8ca in __strcmp_sse2 () from /lib64/libc.so.6
(gdb) bt
#0  0x000000369687f8ca in __strcmp_sse2 () from /lib64/libc.so.6
#1  0x00007f7c1b16bb51 in x86ModelFind (cpu=0x7f7bf800ec20, map=0x7f7bf80108c0, policy=1) at cpu/cpu_x86.c:831
#2  x86ModelFromCPU (cpu=0x7f7bf800ec20, map=0x7f7bf80108c0, policy=1) at cpu/cpu_x86.c:850
#3  0x00007f7c1b16df38 in x86Compute (host=<value optimized out>, cpu=0x7f7bf800ec20, guest=0x7f7c1060af50, message=0x7f7c1060af40) at cpu/cpu_x86.c:1243
#4  0x00000000004991d6 in qemuBuildCpuArgStr (conn=0x7f7c00000bd0, driver=0x7f7c040238d0, def=0x7f7bf800fa90, monitor_chr=0x7f7bf8000c90, monitor_json=true, caps=0x7f7bf8010950, migrateFrom=0x5259ee "stdio", 
    migrateFd=21, snapshot=0x0, vmop=VIR_NETDEV_VPORT_PROFILE_OP_RESTORE) at qemu/qemu_command.c:4517
#5  qemuBuildCommandLine (conn=0x7f7c00000bd0, driver=0x7f7c040238d0, def=0x7f7bf800fa90, monitor_chr=0x7f7bf8000c90, monitor_json=true, caps=0x7f7bf8010950, migrateFrom=0x5259ee "stdio", migrateFd=21, 
    snapshot=0x0, vmop=VIR_NETDEV_VPORT_PROFILE_OP_RESTORE) at qemu/qemu_command.c:5322
#6  0x00000000004a85b4 in qemuProcessStart (conn=0x7f7c00000bd0, driver=0x7f7c040238d0, vm=0x7f7c04175fb0, migrateFrom=0x5259ee "stdio", stdin_fd=21, stdin_path=0x7f7bf8000a70 "/root/test3.save", snapshot=0x0, 
    vmop=VIR_NETDEV_VPORT_PROFILE_OP_RESTORE, flags=2) at qemu/qemu_process.c:4034
#7  0x000000000046cbc6 in qemuDomainSaveImageStartVM (conn=0x7f7c00000bd0, driver=0x7f7c040238d0, vm=0x7f7c04175fb0, fd=0x7f7c1060b9bc, header=0x7f7c1060b9c0, path=0x7f7bf8000a70 "/root/test3.save", 
    start_paused=false) at qemu/qemu_driver.c:5639
#8  0x000000000046dae5 in qemuDomainRestoreFlags (conn=0x7f7c00000bd0, path=0x7f7bf8000a70 "/root/test3.save", dxml=<value optimized out>, flags=<value optimized out>) at qemu/qemu_driver.c:5759
#9  0x00007f7c1b19235a in virDomainRestore (conn=0x7f7c00000bd0, from=0x7f7bf80008e0 "/root/test3.save") at libvirt.c:2741
#10 0x000000000043d176 in remoteDispatchDomainRestore (server=<value optimized out>, client=0x1102770, msg=<value optimized out>, rerr=0x7f7c1060bb80, args=0x7f7bf8000970, ret=<value optimized out>)
    at remote_dispatch.h:4421
#11 remoteDispatchDomainRestoreHelper (server=<value optimized out>, client=0x1102770, msg=<value optimized out>, rerr=0x7f7c1060bb80, args=0x7f7bf8000970, ret=<value optimized out>) at remote_dispatch.h:4403
#12 0x00007f7c1b1e0e12 in virNetServerProgramDispatchCall (prog=0x1104fa0, server=0x10fc450, client=0x1102770, msg=0x1103d10) at rpc/virnetserverprogram.c:431
#13 virNetServerProgramDispatch (prog=0x1104fa0, server=0x10fc450, client=0x1102770, msg=0x1103d10) at rpc/virnetserverprogram.c:304
#14 0x00007f7c1b1e468e in virNetServerProcessMsg (srv=<value optimized out>, client=0x1102770, prog=<value optimized out>, msg=0x1103d10) at rpc/virnetserver.c:170
#15 0x00007f7c1b1e4d2c in virNetServerHandleJob (jobOpaque=<value optimized out>, opaque=0x10fc450) at rpc/virnetserver.c:191
#16 0x00007f7c1b100d7c in virThreadPoolWorker (opaque=<value optimized out>) at util/threadpool.c:144
#17 0x00007f7c1b100669 in virThreadHelper (data=<value optimized out>) at util/threads-pthread.c:161
#18 0x00007f7c1a44eaa1 in start_thread () from /lib64/libpthread.so.0
#19 0x00000036968e8aad in clone () from /lib64/libc.so.6

Comment 15 Luyao Huang 2016-03-09 02:27:32 UTC
Hi Jiri,

I found that libvirtd crashes when restoring a save file created on libvirt-0.10.2-54.el6_7.3 (I chose this version since it is the previous libvirt 6.7.z that customers used). Could you please help check whether this is worth fixing?

1.
# rpm -q libvirt
libvirt-0.10.2-54.el6_7.3.x86_64

2. start a guest with host-model:

# virsh  dumpxml test3
...
  <cpu mode='host-model'>
    <model fallback='allow'/>
  </cpu>
...

3.

# virsh  start test3
Domain test3 started

3. edit it and update cpu info, maybe use save-image-define can also do this

# virsh  save-image-edit test3.save
State file test3.save edited.

4. update libvirt

# yum update /nfs/rhel6/libvirtz/0.10.2-54.el6_7.6/libvirt-*
....

5.
# service libvirtd restart
Stopping libvirtd daemon: [  OK  ]
Starting libvirtd daemon: [  OK  ]

6.
# virsh  restore test3.save
error: Failed to restore domain from test3.save
error: End of file while reading data: Input/output error
error: One or more references were leaked after disconnect from the hypervisor
error: Failed to reconnect to the hypervisor

7.
# rpm -q libvirt
libvirt-0.10.2-54.el6_7.6.x86_64


Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fc2bf424700 (LWP 2378)]
0x000000369687f8ca in __strcmp_sse2 () from /lib64/libc.so.6
(gdb) bt
#0  0x000000369687f8ca in __strcmp_sse2 () from /lib64/libc.so.6
#1  0x00007fc2c67a7b41 in x86ModelFind (cpu=0x7fc2b000e7b0, map=0x7fc2b00100b0, policy=1) at cpu/cpu_x86.c:831
#2  x86ModelFromCPU (cpu=0x7fc2b000e7b0, map=0x7fc2b00100b0, policy=1) at cpu/cpu_x86.c:850
#3  0x00007fc2c67a9f28 in x86Compute (host=<value optimized out>, cpu=0x7fc2b000e7b0, guest=0x7fc2bf422f50, message=0x7fc2bf422f40) at cpu/cpu_x86.c:1243
#4  0x000000000049c396 in qemuBuildCpuArgStr (conn=0x7fc2b40bdfa0, driver=0x7fc2b4011a00, def=0x7fc2b000f510, monitor_chr=0x7fc2b0000e80, monitor_json=true, caps=0x7fc2b0000dd0, migrateFrom=0x521da3 "stdio", 
    migrateFd=21, snapshot=0x0, vmop=VIR_NETDEV_VPORT_PROFILE_OP_RESTORE) at qemu/qemu_command.c:4517
#5  qemuBuildCommandLine (conn=0x7fc2b40bdfa0, driver=0x7fc2b4011a00, def=0x7fc2b000f510, monitor_chr=0x7fc2b0000e80, monitor_json=true, caps=0x7fc2b0000dd0, migrateFrom=0x521da3 "stdio", migrateFd=21, 
    snapshot=0x0, vmop=VIR_NETDEV_VPORT_PROFILE_OP_RESTORE) at qemu/qemu_command.c:5322
#6  0x00000000004bb914 in qemuProcessStart (conn=0x7fc2b40bdfa0, driver=0x7fc2b4011a00, vm=0x7fc2b401a950, migrateFrom=0x521da3 "stdio", stdin_fd=21, stdin_path=0x7fc2b0008fd0 "/root/test3.save", snapshot=0x0, 
    vmop=VIR_NETDEV_VPORT_PROFILE_OP_RESTORE, flags=2) at qemu/qemu_process.c:4034
#7  0x000000000046cbc6 in qemuDomainSaveImageStartVM (conn=0x7fc2b40bdfa0, driver=0x7fc2b4011a00, vm=0x7fc2b401a950, fd=0x7fc2bf4239bc, header=0x7fc2bf4239c0, path=0x7fc2b0008fd0 "/root/test3.save", 
    start_paused=false) at qemu/qemu_driver.c:5639
#8  0x000000000046dae5 in qemuDomainRestoreFlags (conn=0x7fc2b40bdfa0, path=0x7fc2b0008fd0 "/root/test3.save", dxml=<value optimized out>, flags=<value optimized out>) at qemu/qemu_driver.c:5759
#9  0x00007fc2c67ce34a in virDomainRestore (conn=0x7fc2b40bdfa0, from=0x7fc2b00324a0 "/root/test3.save") at libvirt.c:2741
#10 0x000000000043d176 in remoteDispatchDomainRestore (server=<value optimized out>, client=0x1efdf00, msg=<value optimized out>, rerr=0x7fc2bf423b80, args=0x7fc2b00328a0, ret=<value optimized out>)
    at remote_dispatch.h:4421
#11 remoteDispatchDomainRestoreHelper (server=<value optimized out>, client=0x1efdf00, msg=<value optimized out>, rerr=0x7fc2bf423b80, args=0x7fc2b00328a0, ret=<value optimized out>) at remote_dispatch.h:4403
#12 0x00007fc2c681ce02 in virNetServerProgramDispatchCall (prog=0x1f01fa0, server=0x1ef9450, client=0x1efdf00, msg=0x1ef5c00) at rpc/virnetserverprogram.c:431
#13 virNetServerProgramDispatch (prog=0x1f01fa0, server=0x1ef9450, client=0x1efdf00, msg=0x1ef5c00) at rpc/virnetserverprogram.c:304
#14 0x00007fc2c681f35e in virNetServerProcessMsg (srv=<value optimized out>, client=0x1efdf00, prog=<value optimized out>, msg=0x1ef5c00) at rpc/virnetserver.c:170
#15 0x00007fc2c681f9fc in virNetServerHandleJob (jobOpaque=<value optimized out>, opaque=0x1ef9450) at rpc/virnetserver.c:191
#16 0x00007fc2c673cd7c in virThreadPoolWorker (opaque=<value optimized out>) at util/threadpool.c:144
#17 0x00007fc2c673c669 in virThreadHelper (data=<value optimized out>) at util/threads-pthread.c:161
#18 0x0000003696c07aa1 in start_thread () from /lib64/libpthread.so.0
#19 0x00000036968e8aad in clone () from /lib64/libc.so.6

Comment 16 Luyao Huang 2016-03-09 03:06:15 UTC
And these are the test results with the 3 cpu modes:

A. custom:  PASS

1.
# virsh dumpxml test3
...
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Opteron_G3</model>
    <vendor>AMD</vendor>
    <topology sockets='1' cores='2' threads='1'/>
    <feature policy='disable' name='lahf_lm'/>
  </cpu>
...

2.
# ps aux|grep qemu
qemu      3074 83.9  1.1 1579588 187316 ?      Sl   10:38   4:43 /usr/libexec/qemu-kvm -name test3 -S -M rhel6.6.0 -cpu Opteron_G3,-lahf_lm

3.
# virsh save test3 test3.save

Domain test3 saved to test3.save

4.
# virsh save-image-edit test3.save
State file test3.save edited.

5.
# virsh save-image-dumpxml test3.save
...
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Opteron_G3</model>
    <vendor>AMD</vendor>
    <topology sockets='1' cores='2' threads='1'/>
    <feature policy='disable' name='lahf_lm'/>
  </cpu>
...

6.

# virsh restore test3.save
Domain restored from test3.save

7.
# virsh dumpxml test3
...
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Opteron_G3</model>
    <vendor>AMD</vendor>
    <topology sockets='1' cores='2' threads='1'/>
    <feature policy='disable' name='lahf_lm'/>
  </cpu>
...

8.
# ps aux|grep qemu
qemu      3204 82.1  1.1 1579016 179584 ?      Sl   10:46   1:27 /usr/libexec/qemu-kvm -name test3 -S -M rhel6.6.0 -cpu Opteron_G3,-lahf_lm ...


B. host-model:          Fail (looks like an old bug)

1.
# virsh dumpxml test3
...
  <cpu mode='host-model'>
    <model fallback='allow'>Opteron_G3</model>
    <vendor>AMD</vendor>
    <feature policy='disable' name='lahf_lm'/>
  </cpu>
...

2. The qemu command line does not match the XML:

# ps aux|grep qemu
qemu      3319 84.2  1.1 1578652 191608 ?      Sl   10:51   5:32 /usr/libexec/qemu-kvm -name test3 -S -M rhel6.6.0 -cpu Opteron_G3,+nodeid_msr,+wdt,+skinit,+ibs,+osvw,+3dnowprefetch,+cr8legacy,+extapic,+cmp_legacy,+3dnow,+3dnowext,+pdpe1gb,+fxsr_opt,+mmxext,+ht,+vme

3.

# virsh save test3 test3.save

Domain test3 saved to test3.save


4.
# virsh restore  test3.save
Domain restored from test3.save

5.
# virsh dumpxml test3
...
  <cpu mode='host-model'>
    <model fallback='allow'>Opteron_G3</model>
    <vendor>AMD</vendor>
    <feature policy='require' name='nodeid_msr'/>
    <feature policy='require' name='wdt'/>
    <feature policy='require' name='skinit'/>
    <feature policy='require' name='ibs'/>
    <feature policy='require' name='osvw'/>
    <feature policy='require' name='3dnowprefetch'/>
    <feature policy='require' name='cr8legacy'/>
    <feature policy='require' name='extapic'/>
    <feature policy='require' name='cmp_legacy'/>
    <feature policy='require' name='3dnow'/>
    <feature policy='require' name='3dnowext'/>
    <feature policy='require' name='pdpe1gb'/>
    <feature policy='require' name='fxsr_opt'/>
    <feature policy='require' name='mmxext'/>
    <feature policy='require' name='ht'/>
    <feature policy='require' name='vme'/>
  </cpu>
...

6.
# ps aux|grep qemu
qemu      3437 85.8  1.0 1578964 179176 ?      Sl   10:58   0:46 /usr/libexec/qemu-kvm -name test3 -S -M rhel6.6.0 -cpu Opteron_G3,+nodeid_msr,+wdt,+skinit,+ibs,+osvw,+3dnowprefetch,+cr8legacy,+extapic,+cmp_legacy,+3dnow,+3dnowext,+pdpe1gb,+fxsr_opt,+mmxext,+ht,+vme


C. host-passthrough:                PASS

1.
# virsh dumpxml test3

  <cpu mode='host-passthrough'>
  </cpu>

2.
# ps aux|grep qemu
qemu      3557 20.8  0.1 1577708 31360 ?       Sl   11:00   0:01 /usr/libexec/qemu-kvm -name test3 -S -M rhel6.6.0 -cpu host ...

3.
# virsh save test3 test3.save

Domain test3 saved to test3.save

4. change spice passwd:
# virsh restore test3.save
Domain restored from test3.save

5.
# virsh dumpxml test3
...
  <cpu mode='host-passthrough'>
  </cpu>
...

6.
# ps aux|grep test3
qemu      3689 88.3  1.1 1578928 179336 ?      Sl   11:03   1:36 /usr/libexec/qemu-kvm -name test3 -S -M rhel6.6.0 -cpu host

Comment 17 Jiri Denemark 2016-03-09 09:02:23 UTC
(In reply to Luyao Huang from comment #15)
> # virsh  start test3
> Domain test3 started
> 
> 3. edit it and update cpu info, maybe use save-image-define can also do this
> 
> # virsh  save-image-edit test3.save
> State file test3.save edited.

This looks like test3.save is a left-over from your previous tests and you didn't run virsh save test3 test3.save with libvirt-0.10.2-54.el6_7.3. In that version of libvirt, save-image-edit with host-model/host-passthrough does not work at all, returning

error: unsupported configuration: Target CPU model (null) does not match source ...

That is, the crash should be impossible to reproduce when updating libvirt-0.10.2-54.el6_7.3 to libvirt-0.10.2-54.el6_7.6.

Comment 18 Luyao Huang 2016-03-09 09:32:45 UTC
(In reply to Jiri Denemark from comment #17)
> (In reply to Luyao Huang from comment #15)
> > # virsh  start test3
> > Domain test3 started
> > 
> > 3. edit it and update cpu info, maybe use save-image-define can also do this
> > 
> > # virsh  save-image-edit test3.save
> > State file test3.save edited.
> 
> This looks like test3.save is a left-over from your previous tests and you
> didn't run virsh save test3 test3.save with libvirt-0.10.2-54.el6_7.3. In
> that version of libvirt save-image-edit with host-model/host-passthrough
> does not work at all returning
> 

Sorry, the save guest step must have been removed when I cleaned up the extra lines while filing this comment.

> error: unsupported configuration: Target CPU model (null) does not match
> source ...
> 

That is because libvirt shows the saved image XML without the full (migratable) cpu info; you need to paste the full cpu info when editing the save file. Since save-image-edit and save-image-define call the same API to change the XML, I will give alternative reproduction steps (see the API-level sketch after the steps):

[root@hp-dl385g7-02 ~]# ll test3.save
ls: cannot access test3.save: No such file or directory

[root@hp-dl385g7-02 ~]# rpm -q libvirt
libvirt-0.10.2-54.el6_7.3.x86_64

[root@hp-dl385g7-02 ~]# service libvirtd restart
Stopping libvirtd daemon: [  OK  ]
Starting libvirtd daemon: [  OK  ]

[root@hp-dl385g7-02 ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     test3                          shut off

[root@hp-dl385g7-02 ~]# virsh start test3
Domain test3 started

[root@hp-dl385g7-02 ~]# virsh dumpxml test3 --migratable > test3-mig.xml

[root@hp-dl385g7-02 ~]# virsh save test3 test3.save

Domain test3 saved to test3.save

[root@hp-dl385g7-02 ~]# virsh save-image-define test3.save test3-mig.xml
State file test3.save updated.

[root@hp-dl385g7-02 ~]# yum update /nfs/rhel6/libvirtz/0.10.2-54.el6_7.6/libvirt-*

[root@hp-dl385g7-02 ~]# service libvirtd restart
Stopping libvirtd daemon: [  OK  ]
Starting libvirtd daemon: [  OK  ]

[root@hp-dl385g7-02 ~]# virsh restore test3.save
error: Failed to restore domain from test3.save
error: End of file while reading data: Input/output error
error: One or more references were leaked after disconnect from the hypervisor
error: Failed to reconnect to the hypervisor

[root@hp-dl385g7-02 ~]# rpm -q libvirt
libvirt-0.10.2-54.el6_7.6.x86_64
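
For reference, a rough libvirt-python sketch of the save-image-define step used above, i.e. virDomainSaveImageDefineXML fed the --migratable dump. This is only an illustration of the API call, assuming the libvirt Python bindings and the file names from the steps above:

    import libvirt

    SAVE_FILE = "test3.save"

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("test3")

    # The migratable dump produced above with
    # "virsh dumpxml test3 --migratable > test3-mig.xml".
    with open("test3-mig.xml") as f:
        mig_xml = f.read()

    # Save the running guest, then redefine the saved image XML from the
    # migratable dump; "virsh save-image-define" calls this same API.
    dom.save(SAVE_FILE)
    conn.saveImageDefineXML(SAVE_FILE, mig_xml, 0)

    conn.close()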


> That is the crash should be impossible to reproduce when updating
> libvirt-0.10.2-54.el6_7.3 to libvirt-0.10.2-54.el6_7.6.

Strange. I guess a save file whose XML looks like this will cause libvirtd to crash:

  <cpu mode='host-model'>
    <model fallback='allow'/>
  </cpu>

and libvirt-0.10.2-54.el6_7.6 fixes the problem when creating the save file, so there is no way to hit this crash if the save file is created with libvirt-0.10.2-54.el6_7.6, but the problem can still be hit with an old save file.

Comment 19 Jiri Denemark 2016-03-09 09:43:58 UTC
The thing is that the XML stored in the saved image should never look like

  <cpu mode='host-model'>
    <model fallback='allow'/>
  </cpu>

it should always contain complete CPU specification.

But I think I see what you're saying now, you use the --migratable XML with full CPU specification for save-image-define which means the ABI check succeeds, but because of a bug in the old libvirtd version only the stripped down CPU element will be stored in the saved image. In that case, I believe even the old libvirtd would crash when trying to restore such image. And I don't think this issue is worth fixing (especially in this 6.7.z release).

Comment 20 Luyao Huang 2016-03-09 10:03:33 UTC
(In reply to Jiri Denemark from comment #19)
> The thing is that the XML stored in the saved image should never look like
> 
>   <cpu mode='host-model'>
>     <model fallback='allow'/>
>   </cpu>
> 
> it should always contain complete CPU specification.
> 
> But I think I see what you're saying now, you use the --migratable XML with
> full CPU specification for save-image-define which means the ABI check
> succeeds, but because of a bug in the old libvirtd version only the stripped
> down CPU element will be stored in the saved image. In that case, I believe
> even the old libvirtd would crash when trying to restore such image. And I
> don't think this issue is worth fixing (especially in this 6.7.z release).

Reasonable, thanks a lot for your reply !

The cpu mode part of the bug was verified with the steps in comment 16.

Comment 21 Fangge Jin 2016-03-09 10:10:36 UTC
According to comment 7 and comment 20, moving to VERIFIED.

Comment 23 errata-xmlrpc 2016-03-22 15:49:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0483.html

