Bug 1814947 - Libvirtd gets SIGSEGV when blockcopy to nvme destination
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libvirt
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.0
Assignee: Michal Privoznik
QA Contact: Meina Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-03-19 06:00 UTC by Han Han
Modified: 2020-05-05 09:59 UTC
CC: 7 users

Fixed In Version: libvirt-6.0.0-14.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-05 09:59:00 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
libvirtd log, dest disk xml, vm xml, core backtrace (101.31 KB, application/gzip)
2020-03-19 06:08 UTC, Han Han


Links
Red Hat Product Errata RHBA-2020:2017 (last updated 2020-05-05 09:59:43 UTC)

Description Han Han 2020-03-19 06:00:42 UTC

Comment 1 Han Han 2020-03-19 06:05:29 UTC
Description of problem:
libvirtd crashes with SIGSEGV when a blockcopy is started with an NVMe disk as the destination.

Version-Release number of selected component (if applicable):
libvirt-6.0.0-13.virtcov.el8.x86_64
qemu-kvm-4.2.0-15.module+el8.2.0+6029+618ef2ec.x86_64


How reproducible:
100%

Steps to Reproduce:
1. Prepare a running VM and an NVMe destination disk XML:
# cat nvme.xml
<disk type='nvme' device='disk'>
  <driver name='qemu' type='raw'/>
  <source type='pci' managed='yes' namespace='1'>
    <address domain='0x0000' bus='0x44' slot='0x00' function='0x0'/>
  </source>
  <target dev='sdb' bus='scsi'/>
</disk>


2. Do blockcopy
# virsh blockcopy new sda --xml /tmp/nvme.xml --transient-job 
error: Disconnected from qemu:///system due to end of file
error: End of file while reading data: Input/output error

Backtrace:
(gdb) bt
#0  0x00007f311ab3b48b in __strrchr_avx2 () at ../sysdeps/x86_64/multiarch/strrchr-avx2.S:54
#1  0x00007f311dcd0d1d in dm_task_set_name (dmt=dmt@entry=0x7f30ec003550, name=name@entry=0x0) at libdm-common.c:670
#2  0x00007f311ea5ca50 in virDevMapperGetTargetsImpl (path=0x0, devPaths_ret=devPaths_ret@entry=0x7f3112da7480, ttl=ttl@entry=32) at ../../src/util/virdevmapper.c:96
#3  0x00007f311ea5ce63 in virDevMapperGetTargets (path=<optimized out>, devPaths=devPaths@entry=0x7f3112da7480) at ../../src/util/virdevmapper.c:194
#4  0x00007f30e1778879 in qemuDomainNamespaceSetupDisk (vm=vm@entry=0x7f30b41a6500, src=src@entry=0x7f30ec008360) at ../../src/qemu/qemu_domain.c:15956
#5  0x00007f30e1778f7c in qemuDomainStorageSourceAccessModify
    (driver=driver@entry=0x7f30b4141d10, vm=vm@entry=0x7f30b41a6500, src=src@entry=0x7f30ec008360, flags=flags@entry=(QEMU_DOMAIN_STORAGE_SOURCE_ACCESS_CHAIN | QEMU_DOMAIN_STORAGE_SOURCE_ACCESS_CHAIN_TOP))
    at ../../src/qemu/qemu_domain.c:12051
#6  0x00007f30e17795ab in qemuDomainStorageSourceChainAccessAllow (driver=driver@entry=0x7f30b4141d10, vm=vm@entry=0x7f30b41a6500, src=src@entry=0x7f30ec008360) at ../../src/qemu/qemu_domain.c:12123
#7  0x00007f30e182fd9a in qemuDomainBlockCopyCommon
    (vm=0x7f30b41a6500, conn=<optimized out>, path=path@entry=0x7f30ec001350 "sda", mirrorsrc=0x7f30ec008360, bandwidth=bandwidth@entry=0, granularity=granularity@entry=0, buf_size=0, flags=4, keepParentLabel=false) at ../../src/qemu/qemu_driver.c:18401
#8  0x00007f30e1830e31 in qemuDomainBlockCopy
    (dom=0x7f30ec0018f0, disk=0x7f30ec001350 "sda", destxml=0x7f30ec001780 "<disk type='nvme' device='disk'>\n  <driver name='qemu' type='raw'/>\n  <source type='pci' managed='yes' namespace='1'>\n    <address domain='0x0000' bus='0x44' slot='0x00' function='0x0'/>\n  </source>\n "..., params=<optimized out>, nparams=<optimized out>, flags=4) at ../../src/qemu/qemu_driver.c:18678
#9  0x00007f311ed8a86f in virDomainBlockCopy
    (dom=dom@entry=0x7f30ec0018f0, disk=0x7f30ec001350 "sda", destxml=0x7f30ec001780 "<disk type='nvme' device='disk'>\n  <driver name='qemu' type='raw'/>\n  <source type='pci' managed='yes' namespace='1'>\n    <address domain='0x0000' bus='0x44' slot='0x00' function='0x0'/>\n  </source>\n "..., params=0x0, nparams=0, flags=4) at ../../src/libvirt-domain.c:10391
#10 0x0000564ff4ebee83 in remoteDispatchDomainBlockCopy (args=0x7f30ec001730, rerr=0x7f3112da78c0, msg=0x564ff64bc6d0, client=<optimized out>, server=0x564ff645ca90)
    at ./remote/remote_daemon_dispatch_stubs.h:3949
#11 0x0000564ff4ebee83 in remoteDispatchDomainBlockCopyHelper (server=0x564ff645ca90, client=<optimized out>, msg=0x564ff64bc6d0, rerr=0x7f3112da78c0, args=0x7f30ec001730, ret=0x0)
    at ./remote/remote_daemon_dispatch_stubs.h:3919
#12 0x00007f311ec5a9a1 in virNetServerProgramDispatchCall (msg=0x564ff64bc6d0, client=0x564ff64ac710, server=0x564ff645ca90, prog=0x564ff649aa20) at ../../src/rpc/virnetserverprogram.c:430
#13 0x00007f311ec5a9a1 in virNetServerProgramDispatch (prog=0x564ff649aa20, server=server@entry=0x564ff645ca90, client=client@entry=0x564ff64ac710, msg=msg@entry=0x564ff64bc6d0)
    at ../../src/rpc/virnetserverprogram.c:302
#14 0x00007f311ec62498 in virNetServerProcessMsg (srv=srv@entry=0x564ff645ca90, client=0x564ff64ac710, prog=<optimized out>, msg=0x564ff64bc6d0) at ../../src/rpc/virnetserver.c:136
#15 0x00007f311ec62905 in virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x564ff645ca90) at ../../src/rpc/virnetserver.c:153
#16 0x00007f311eaf6461 in virThreadPoolWorker (opaque=opaque@entry=0x564ff6441e30) at ../../src/util/virthreadpool.c:163
#17 0x00007f311eaf50bf in virThreadHelper (data=<optimized out>) at ../../src/util/virthread.c:196
#18 0x00007f311ada82de in start_thread (arg=<optimized out>) at pthread_create.c:486
#19 0x00007f311aad9e83 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
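
For context: frame #2 shows virDevMapperGetTargetsImpl() being entered with path=0x0. An NVMe <source> has no local file path, so a NULL string is handed to dm_task_set_name(), which runs strrchr() on it (frame #0). A minimal standalone sketch of that crash pattern, purely illustrative (the stub below is not the real libvirt code, just the same shape):

/* crash-pattern.c: illustrative stub, not libvirt code.  Mimics the call
 * chain above: a storage source with no local path hands NULL down to a
 * devmapper-style name lookup. */
#include <stdio.h>
#include <string.h>

static int
get_targets_stub(const char *path)
{
    /* dm_task_set_name() ends up calling strrchr(name, '/'); with
     * name == NULL this dereferences a NULL pointer -> SIGSEGV. */
    const char *base = strrchr(path, '/');   /* crashes when path == NULL */

    printf("device: %s\n", base ? base + 1 : path);
    return 0;
}

int
main(void)
{
    const char *nvme_path = NULL;   /* type='nvme' source: no local path */

    return get_targets_stub(nvme_path);
}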


Actual results:
libvirtd crashes with SIGSEGV (see the backtrace above) and the client connection is dropped.

Expected results:
No SIGSEGV; the blockcopy should start (or fail gracefully) without crashing libvirtd.

Additional info:

Comment 2 Han Han 2020-03-19 06:08:22 UTC
Created attachment 1671345 [details]
libvirtd log, dest disk xml, vm xml, core backtrace

Comment 3 Michal Privoznik 2020-03-19 16:23:19 UTC
Patch proposed upstream:

https://www.redhat.com/archives/libvir-list/2020-March/msg00683.html

Comment 4 Michal Privoznik 2020-03-19 18:34:00 UTC
Pushed upstream:

  aeb909bf9b qemu: Don't crash when getting targets for a multipath

v6.1.0-190-gaeb909bf9b
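
As far as I can tell, the fix boils down to skipping storage sources that have no local file path (NVMe, network-backed disks) before the device-mapper lookup, instead of handing a NULL path to libdevmapper. A minimal sketch of that guard, applied to the illustrative stub from comment 1 (again not the literal patch):

/* fix-pattern.c: illustrative only, not the actual upstream change. */
#include <stdio.h>
#include <string.h>

static int
get_targets_guarded(const char *path)
{
    if (!path)          /* NVMe/network source: nothing for devmapper */
        return 0;

    const char *base = strrchr(path, '/');
    printf("device: %s\n", base ? base + 1 : path);
    return 0;
}

int
main(void)
{
    /* A NULL path now returns cleanly instead of crashing. */
    return get_targets_guarded(NULL);
}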

Comment 8 Han Han 2020-03-20 06:36:31 UTC
Passed on v6.1.0-190-gaeb909bf9b.

Comment 12 Michal Privoznik 2020-03-23 16:29:17 UTC
Clearing stale needinfo.

Comment 13 Meina Li 2020-03-31 06:14:51 UTC
Verified Version:
# rpm -q libvirt qemu-kvm
libvirt-6.0.0-15.module+el8.3.0+6120+74650046.x86_64
qemu-kvm-4.2.0-16.module+el8.2.0+6092+4f2391c1.x86_64

Verified Steps:
1. Prepare an NVMe destination disk XML:
# cat nvme.xml 
<disk type='nvme' device='disk'>
  <driver name='qemu' type='raw'/>
  <source type='pci' managed='yes' namespace='1'>
    <address domain='0x0000' bus='0x44' slot='0x00' function='0x0'/>
  </source>
  <target dev='sdb' bus='scsi'/>
</disk>

2. Blockcopy to the NVMe destination with --pivot:
# virsh blockcopy new vda --xml nvme.xml --transient-job --pivot --verbose 
Block Copy: [100 %]
Successfully pivoted
# virsh dumpxml new | xmllint --xpath //disk -
**The VM disk has pivoted to the new disk source.**

3. Clean up the images created in step 2, then blockcopy to the NVMe destination with --finish:
# virsh blockcopy new vda --xml nvme.xml --transient-job --finish --verbose --wait
Block Copy: [100 %]
Successfully copied
# virsh dumpxml new | xmllint --xpath //disk -
**The VM disk remains on the original disk source.**

4. Blockcopy to the NVMe destination with --pivot --reuse-external:
# virsh blockcopy new vda --xml nvme.xml --transient-job --pivot --reuse-external --verbose --wait 
Block Copy: [100 %]
Successfully pivoted
# virsh dumpxml new | xmllint --xpath //disk -
**The VM disk has pivoted to the new disk source.**
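
For reference, the same flow as step 2 can also be driven through the public libvirt API. A minimal sketch (the domain name "new", disk "vda" and the destination XML are taken from the steps above; error reporting and job-ready detection are simplified):

/* blockcopy-nvme.c: sketch only.  Build with: gcc blockcopy-nvme.c -lvirt */
#include <stdio.h>
#include <unistd.h>
#include <libvirt/libvirt.h>

int
main(void)
{
    const char *destxml =
        "<disk type='nvme' device='disk'>"
        "  <driver name='qemu' type='raw'/>"
        "  <source type='pci' managed='yes' namespace='1'>"
        "    <address domain='0x0000' bus='0x44' slot='0x00' function='0x0'/>"
        "  </source>"
        "  <target dev='sdb' bus='scsi'/>"
        "</disk>";
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virDomainPtr dom = NULL;
    virDomainBlockJobInfo info;
    int ret = 1;

    if (!conn)
        return 1;
    if (!(dom = virDomainLookupByName(conn, "new")))
        goto cleanup;

    /* Equivalent of: virsh blockcopy new vda --xml nvme.xml --transient-job */
    if (virDomainBlockCopy(dom, "vda", destxml, NULL, 0,
                           VIR_DOMAIN_BLOCK_COPY_TRANSIENT_JOB) < 0)
        goto cleanup;

    /* Poll until the mirror has caught up, then pivot (like --pivot).
     * A real client would rather listen for the BLOCK_JOB_READY event. */
    while (virDomainGetBlockJobInfo(dom, "vda", &info, 0) == 1) {
        if (info.end > 0 && info.cur == info.end)
            break;
        sleep(1);
    }
    if (virDomainBlockJobAbort(dom, "vda",
                               VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT) < 0)
        goto cleanup;

    printf("Successfully pivoted\n");
    ret = 0;

 cleanup:
    if (dom)
        virDomainFree(dom);
    virConnectClose(conn);
    return ret;
}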

Comment 15 errata-xmlrpc 2020-05-05 09:59:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2017

