Bug 1900326 - [incremental_backup] qemu aborts if guest reboot during backup when using virtio-blk: "aio_co_schedule: Co-routine was already scheduled in 'aio_co_schedule'" [NEEDINFO]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: rc
Target Release: 8.3
Assignee: Sergio Lopez
QA Contact: Gu Nini
URL:
Whiteboard:
Depends On:
Blocks: 1892681 1897025 1918966
 
Reported: 2020-11-22 09:46 UTC by Nir Soffer
Modified: 2021-02-22 15:40 UTC
CC: 14 users

Fixed In Version: qemu-kvm-5.1.0-19.module+el8.3.1+9795+4ce2a535
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1918966
Environment:
Last Closed: 2021-02-22 15:39:41 UTC
Type: Bug
Target Upstream Version:
ngu: needinfo? (slopezpa)


Attachments
/var/log/libvirt/qemu/vm-name.log (2.39 KB, application/x-xz)
2020-11-22 09:46 UTC, Nir Soffer
no flags
libvirtd log (20.43 KB, application/x-xz)
2020-11-22 09:48 UTC, Nir Soffer
no flags
abrt crash report (17.97 MB, application/x-xz)
2020-11-22 09:51 UTC, Nir Soffer
no flags

Description Nir Soffer 2020-11-22 09:46:20 UTC
Created attachment 1732013 [details]
/var/log/libvirt/qemu/vm-name.log

Description of problem:

When performing a full or incremental backup with a virtio-blk disk ("virtio"
interface in RHV), qemu aborts with this message:

    aio_co_schedule: Co-routine was already scheduled in 'aio_co_schedule'

The same flow works when using a scsi-disk ("virtio-scsi" interface in RHV).
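
For context, the abort message comes from qemu's guard against scheduling the
same coroutine twice before it has had a chance to run. As a rough,
self-contained Python analogy of that guard (qemu's real check is in C, inside
aio_co_schedule(); this is only an illustration, not qemu code):

```python
import threading

class Coroutine:
    """Toy analogy of the per-coroutine 'scheduled' slot qemu checks.

    qemu atomically claims the slot when a coroutine is scheduled; if a
    second schedule arrives before the coroutine has run, it prints the
    message seen in the log above and calls abort().
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._scheduled = None  # name of the caller that scheduled us

    def schedule(self, caller="aio_co_schedule"):
        with self._lock:
            if self._scheduled is not None:
                # qemu aborts here; we raise instead
                raise RuntimeError(
                    "%s: Co-routine was already scheduled in '%s'"
                    % (caller, self._scheduled))
            self._scheduled = caller

    def run(self):
        with self._lock:
            self._scheduled = None  # slot is released once the coroutine runs


co = Coroutine()
co.schedule()
co.run()        # normal case: schedule, run, then schedule again is fine
co.schedule()
try:
    co.schedule()  # double schedule before run(): fatal in qemu
except RuntimeError as e:
    print(e)   # aio_co_schedule: Co-routine was already scheduled in 'aio_co_schedule'
```

The reboot during backup presumably gets the same coroutine queued twice on
the iothread, tripping this check.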

Version-Release number of selected component (if applicable):
libvirt-daemon-6.6.0-8.module+el8.3.1+8648+130818f2.x86_64
qemu-kvm-5.1.0-13.module+el8.3.0+8424+e82f331d.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Start VM (see example xmls below)
2. Start full or incremental backup in RHV
3. Reboot the VM from inside the guest

Actual results:
qemu aborts

Expected results:
Backup completes successfully (as it does with scsi-disk)
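
For reference, the same pull-mode backup RHV drives through libvirt can be
started by hand with `virsh backup-begin <domain> backup.xml`; a sketch of the
backup XML (the socket and scratch-file paths here are hypothetical, not taken
from the RHV flow above):

```xml
<!-- hypothetical backup.xml for: virsh backup-begin backup-raw backup.xml -->
<domainbackup mode='pull'>
  <server transport='unix' socket='/tmp/backup.sock'/>
  <disks>
    <disk name='vda' backup='yes' type='file'>
      <scratch file='/var/tmp/vda.scratch.qcow2'/>
    </disk>
  </disks>
</domainbackup>
```

Starting such a backup and then rebooting the guest should hit the same abort
when vda is a virtio-blk disk.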

Additional info:

Here are example VM configurations that reproduce this crash:

## VM 1 - raw disk, virtio

virsh -r dumpxml backup-raw
<domain type='kvm' id='3' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>backup-raw</name>
  <uuid>62af983c-53d1-4e1e-b78b-0de57027c192</uuid>
  <metadata xmlns:ns1="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
    <ns1:qos/>
    <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
    <ovirt-vm:balloonTarget type="int">1048576</ovirt-vm:balloonTarget>
    <ovirt-vm:clusterVersion>4.5</ovirt-vm:clusterVersion>
    <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>
    <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
    <ovirt-vm:memGuaranteedSize type="int">1024</ovirt-vm:memGuaranteedSize>
    <ovirt-vm:minGuaranteedMemoryMb type="int">1024</ovirt-vm:minGuaranteedMemoryMb>
    <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>
    <ovirt-vm:startTime type="float">1605915811.6488793</ovirt-vm:startTime>
    <ovirt-vm:device mac_address="56:6f:7a:5a:00:00">
        <ovirt-vm:network>ovirtmgmt</ovirt-vm:network>
    </ovirt-vm:device>
    <ovirt-vm:device devtype="disk" name="vda">
        <ovirt-vm:domainID>839db77f-fde3-4e13-bfb6-56be604631ed</ovirt-vm:domainID>
        <ovirt-vm:guestName>/dev/vda</ovirt-vm:guestName>
        <ovirt-vm:imageID>5eb97d51-9da4-420c-8c88-6a3691bc4c67</ovirt-vm:imageID>
        <ovirt-vm:poolID>fb7797bd-3cca-4baa-bcb3-35e8851015f7</ovirt-vm:poolID>
        <ovirt-vm:volumeID>b14b04c3-5936-4963-8ee0-c5dbda70afa3</ovirt-vm:volumeID>
        <ovirt-vm:specParams>
            <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>
        </ovirt-vm:specParams>
        <ovirt-vm:volumeChain>
            <ovirt-vm:volumeChainNode>
                <ovirt-vm:domainID>839db77f-fde3-4e13-bfb6-56be604631ed</ovirt-vm:domainID>
                <ovirt-vm:imageID>5eb97d51-9da4-420c-8c88-6a3691bc4c67</ovirt-vm:imageID>
                <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
                <ovirt-vm:leasePath>/rhev/data-center/mnt/storage:_export_1/839db77f-fde3-4e13-bfb6-56be604631ed/images/5eb97d51-9da4-420c-8c88-6a3691bc4c67/b14b04c3-5936-4963-8ee0-c5dbda70afa3.lease</ovirt-vm:leasePath>
                <ovirt-vm:path>/rhev/data-center/mnt/storage:_export_1/839db77f-fde3-4e13-bfb6-56be604631ed/images/5eb97d51-9da4-420c-8c88-6a3691bc4c67/b14b04c3-5936-4963-8ee0-c5dbda70afa3</ovirt-vm:path>
                <ovirt-vm:volumeID>b14b04c3-5936-4963-8ee0-c5dbda70afa3</ovirt-vm:volumeID>
            </ovirt-vm:volumeChainNode>
        </ovirt-vm:volumeChain>
    </ovirt-vm:device>
    <ovirt-vm:device devtype="disk" name="sdc"/>
</ovirt-vm:vm>
  </metadata>
  <maxMemory slots='16' unit='KiB'>4194304</maxMemory>
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <vcpu placement='static' current='1'>16</vcpu>
  <iothreads>1</iothreads>
  <resource>
    <partition>/machine</partition>
  </resource>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>oVirt</entry>
      <entry name='product'>RHEL</entry>
      <entry name='version'>8.3-1.0.el8</entry>
      <entry name='serial'>095bce26-9d5b-4ae2-b3bf-dddfc9cdf541</entry>
      <entry name='uuid'>62af983c-53d1-4e1e-b78b-0de57027c192</entry>
      <entry name='family'>oVirt</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-q35-rhel8.1.0'>hvm</type>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>Skylake-Client</model>
    <topology sockets='16' dies='1' cores='1' threads='1'/>
    <feature policy='disable' name='hle'/>
    <feature policy='disable' name='rtm'/>
    <feature policy='disable' name='mpx'/>
    <feature policy='require' name='hypervisor'/>
    <numa>
      <cell id='0' cpus='0-15' memory='1048576' unit='KiB'/>
    </numa>
  </cpu>
  <clock offset='variable' adjustment='0' basis='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' error_policy='report'/>
      <source startupPolicy='optional'/>
      <target dev='sdc' bus='sata'/>
      <readonly/>
      <alias name='ua-f99274cc-c8aa-441f-82eb-a7bcf1d58789'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads' iothread='1'/>
      <source file='/rhev/data-center/mnt/storage:_export_1/839db77f-fde3-4e13-bfb6-56be604631ed/images/5eb97d51-9da4-420c-8c88-6a3691bc4c67/b14b04c3-5936-4963-8ee0-c5dbda70afa3' index='1'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <serial>5eb97d51-9da4-420c-8c88-6a3691bc4c67</serial>
      <boot order='1'/>
      <alias name='ua-5eb97d51-9da4-420c-8c88-6a3691bc4c67'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </disk>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <alias name='pci.6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0x16'/>
      <alias name='pci.7'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
    </controller>
    <controller type='pci' index='8' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='8' port='0x17'/>
      <alias name='pci.8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
    </controller>
    <controller type='pci' index='9' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='9' port='0x18'/>
      <alias name='pci.9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='10' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='10' port='0x19'/>
      <alias name='pci.10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
    </controller>
    <controller type='pci' index='11' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='11' port='0x1a'/>
      <alias name='pci.11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
    </controller>
    <controller type='pci' index='12' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='12' port='0x1b'/>
      <alias name='pci.12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
    </controller>
    <controller type='pci' index='13' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='13' port='0x1c'/>
      <alias name='pci.13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
    </controller>
    <controller type='pci' index='14' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='14' port='0x1d'/>
      <alias name='pci.14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
    </controller>
    <controller type='pci' index='15' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='15' port='0x1e'/>
      <alias name='pci.15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
    </controller>
    <controller type='pci' index='16' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='16' port='0x1f'/>
      <alias name='pci.16'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='virtio-serial' index='0' ports='16'>
      <alias name='ua-20458a3e-cc4e-4eaf-bb21-93a61bd3bbe8'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='qemu-xhci' ports='8'>
      <alias name='ua-33d62616-9d86-4da5-a352-e2bf7a79d667'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <driver iothread='1'/>
      <alias name='ua-c0757cfe-203b-43ce-8e76-508cfdf05e22'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='56:6f:7a:5a:00:00'/>
      <source bridge='ovirtmgmt'/>
      <target dev='vnet2'/>
      <model type='virtio'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
      <link state='up'/>
      <mtu size='1500'/>
      <alias name='ua-a181f7b1-2838-4980-af77-cbd508172fec'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/62af983c-53d1-4e1e-b78b-0de57027c192.ovirt-guest-agent.0'/>
      <target type='virtio' name='ovirt-guest-agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/62af983c-53d1-4e1e-b78b-0de57027c192.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
      <alias name='channel2'/>
      <address type='virtio-serial' controller='0' bus='0' port='3'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <graphics type='spice' port='5900' tlsPort='5901' autoport='yes' listen='192.168.122.22' passwdValidTo='1970-01-01T00:00:01'>
      <listen type='network' address='192.168.122.22' network='vdsm-ovirtmgmt'/>
      <channel name='main' mode='secure'/>
      <channel name='display' mode='secure'/>
      <channel name='inputs' mode='secure'/>
      <channel name='cursor' mode='secure'/>
      <channel name='playback' mode='secure'/>
      <channel name='record' mode='secure'/>
      <channel name='smartcard' mode='secure'/>
      <channel name='usbredir' mode='secure'/>
    </graphics>
    <graphics type='vnc' port='5902' autoport='yes' listen='192.168.122.22' keymap='en-us' passwdValidTo='1970-01-01T00:00:01'>
      <listen type='network' address='192.168.122.22' network='vdsm-ovirtmgmt'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='8192' vgamem='16384' heads='1' primary='yes'/>
      <alias name='ua-b851f4bd-392b-448c-a08b-ab0643fb4def'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <stats period='5'/>
      <alias name='ua-c3938948-2dcb-45c0-86c5-cb7677bbe138'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <alias name='ua-0a608db4-1dea-47c2-9128-8fade85c3928'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </rng>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c896,c1004</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c896,c1004</imagelabel>
  </seclabel>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+107:+107</label>
    <imagelabel>+107:+107</imagelabel>
  </seclabel>
  <qemu:capabilities>
    <qemu:add capability='blockdev'/>
    <qemu:add capability='incremental-backup'/>
  </qemu:capabilities>
</domain>

Backup log:

$ ./backup_vm.py -c engine-dev full --backup-dir /var/tmp/backups/raw/ 62af983c-53d1-4e1e-b78b-0de57027c192
[   0.0 ] Starting full backup for VM 62af983c-53d1-4e1e-b78b-0de57027c192
[   0.7 ] Waiting until backup 85a6b40b-17d3-4867-b06b-b7ac76a2db03 is ready
[   1.8 ] Creating image transfer for disk 5eb97d51-9da4-420c-8c88-6a3691bc4c67
[   3.1 ] Image transfer 69ec0ef5-b7f2-445f-9553-255ccb5a04ba is ready
[ 100.00% ] 6.00 GiB, 29.52 seconds, 208.14 MiB/s                            
[  32.7 ] Finalizing image transfer
[  37.8 ] Creating image transfer for disk 5eb97d51-9da4-420c-8c88-6a3691bc4c67
[  38.9 ] Image transfer 283935b3-31cb-442e-9731-caa22358b683 is ready
[  23.37% ] 1.40 GiB, 2.10 seconds, 684.47 MiB/s                            
[  41.0 ] Finalizing image transfer
Traceback (most recent call last):
  File "./backup_vm.py", line 177, in cmd_full
    download_backup(connection, backup.id, args)
  File "./backup_vm.py", line 355, in download_backup
    download_disk(connection, backup_uuid, disk, disk_path, args, incremental=incremental)
  File "./backup_vm.py", line 390, in download_disk
    client.download(
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/client/_api.py", line 172, in download
    io.copy(
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/io.py", line 69, in copy
    log.debug("Executor failed")
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/io.py", line 189, in __exit__
    self.stop()
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/io.py", line 166, in stop
    raise self._errors[0]
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/io.py", line 238, in _run
    handler.copy(req)
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/io.py", line 282, in copy
    self._src.write_to(self._dst, req.length, self._buf)
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/backends/http.py", line 204, in write_to
    res = self._get(length)
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/backends/http.py", line 431, in _get
    self._reraise(res.status, res.read())
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/backends/http.py", line 585, in _reraise
    raise http.Error(status, msg)
ovirt_imageio._internal.http.Error: Server failed to perform the request, check logs

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./backup_vm.py", line 429, in <module>
    main()
  File "./backup_vm.py", line 161, in main
    args.command(args)
  File "./backup_vm.py", line 179, in cmd_full
    stop_backup(connection, backup.id, args)
  File "./backup_vm.py", line 327, in stop_backup
    backup_service.finalize()
  File "/home/nsoffer/.local/lib/python3.8/site-packages/ovirtsdk4/services.py", line 33306, in finalize
    return self._internal_action(action, 'finalize', None, headers, query, wait)
  File "/home/nsoffer/.local/lib/python3.8/site-packages/ovirtsdk4/service.py", line 299, in _internal_action
    return future.wait() if wait else future
  File "/home/nsoffer/.local/lib/python3.8/site-packages/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
  File "/home/nsoffer/.local/lib/python3.8/site-packages/ovirtsdk4/service.py", line 296, in callback
    self._check_fault(response)
  File "/home/nsoffer/.local/lib/python3.8/site-packages/ovirtsdk4/service.py", line 132, in _check_fault
    self._raise_error(response, body)
  File "/home/nsoffer/.local/lib/python3.8/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Host not found.]". HTTP response code is 400.

The second download failed mid-transfer; the error details are in the server logs.

Server log:
/var/log/ovirt-imageio/daemon.log

2020-11-21 02:28:13,717 ERROR   (Thread-123) [http] Request aborted after starting response
Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py", line 774, in __call__
    self.dispatch(req, resp)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py", line 819, in dispatch
    return method(req, resp, *match.groups())
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/cors.py", line 84, in wrapper
    return func(self, req, resp, *args)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/images.py", line 148, in get
    ticket.run(op)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/auth.py", line 174, in run
    return operation.run()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/ops.py", line 57, in run
    res = self._run()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/ops.py", line 100, in _run
    self._read_chunk()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/ops.py", line 112, in _read_chunk
    count = self._src.readinto(view)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/nbd.py", line 110, in readinto
    self._client.readinto(self._position, view)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 416, in readinto
    self._recv_reply(cmd)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 958, in _recv_reply
    if self._recv_reply_chunk(cmd):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1016, in _recv_reply_chunk
    self._handle_data_chunk(length, cmd)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1152, in _handle_data_chunk
    self._recv_into(view)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1213, in _recv_into
    .format(pos, length))
ovirt_imageio._internal.nbd.ProtocolError: Server closed the connection, read 7016420 bytes, expected 8388608 bytes

The connection to qemu was closed during the read.
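
The ProtocolError is the client's short-read guard: it insists on receiving
exactly the requested number of bytes from the NBD socket and treats an early
EOF as a closed connection. A self-contained sketch of that pattern (not the
actual ovirt_imageio code; FakeConn stands in for a socket):

```python
import io

class FakeConn:
    """Stand-in for a socket: recv_into() drains a fixed buffer, then EOF."""
    def __init__(self, data):
        self._buf = io.BytesIO(data)

    def recv_into(self, view):
        return self._buf.readinto(view)

def recv_exact(conn, length):
    """Receive exactly `length` bytes, raising on early EOF -- the same
    kind of check that produced the ProtocolError above."""
    buf = bytearray(length)
    view = memoryview(buf)
    pos = 0
    while pos < length:
        n = conn.recv_into(view[pos:])
        if n == 0:
            raise RuntimeError(
                "Server closed the connection, read {} bytes, "
                "expected {} bytes".format(pos, length))
        pos += n
    return bytes(buf)

print(len(recv_exact(FakeConn(b"x" * 8), 8)))  # full read succeeds: 8
try:
    recv_exact(FakeConn(b"x" * 5), 8)          # server "crashed" mid-read
except RuntimeError as e:
    print(e)   # Server closed the connection, read 5 bytes, expected 8 bytes
```

Here qemu aborting mid-reply is what makes recv return EOF before the 8 MiB
chunk is complete, so the client error is a symptom, not the cause.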

Libvirt qemu log:
/var/log/libvirt/qemu/backup-raw.log

2020-11-21 00:26:03.762+0000: Domain id=6 is tainted: custom-hypervisor-feature
2020-11-21T00:26:04.033478Z qemu-kvm: -numa node,nodeid=0,cpus=0-15,mem=1024: warning: Parameter -numa node,mem is deprecated, use -numa node,memdev instead
2020-11-21 00:26:22.900+0000: Domain id=6 is tainted: custom-ga-command
aio_co_schedule: Co-routine was already scheduled in 'aio_co_schedule'
2020-11-21 00:28:13.932+0000: shutting down, reason=crashed

Looking at the coredump:

Core was generated by `/usr/libexec/qemu-kvm -name guest=backup-qcow2,debug-threads=on -S -object secr'.
Program terminated with signal SIGABRT, Aborted.
#0  0x00007f37f011a7ff in ?? ()
[Current thread is 1 (LWP 121128)]
(gdb) bt
#0  0x00007f37f011a7ff in ?? ()
#1  0x0000000010002240 in ?? ()
#2  0x00000000000001ff in ?? ()
#3  0x00000076acae0070 in ?? ()
#4  0x00001d8000000000 in ?? ()
#5  0x0000563681a7f040 in ?? ()
#6  0x0000563681425680 in ?? ()
#7  0x00007ffcacae0160 in ?? ()
#8  0x0000563681425e70 in ?? ()
#9  0x0000000000040000 in ?? ()
#10 0x00005636817bceb0 in ?? ()
#11 0x00007ffcacae0118 in ?? ()
#12 0x000056367fcfc25b in ?? ()
#13 0x0000001000000000 in ?? ()
#14 0x000000000003ffff in ?? ()
#15 0x00000076acae0070 in ?? ()
#16 0x00007f37f04b78f9 in ?? ()
#17 0xfffffffe7fffffff in ?? ()
#18 0xffffffffffffffff in ?? ()
#19 0xffffffffffffffff in ?? ()
#20 0xffffffffffffffff in ?? ()
#21 0xffffffffffffffff in ?? ()
#22 0xffffffffffffffff in ?? ()
#23 0xffffffffffffffff in ?? ()
#24 0xffffffffffffffff in ?? ()
#25 0xffffffffffffffff in ?? ()
#26 0xffffffffffffffff in ?? ()
#27 0xffffffffffffffff in ?? ()
#28 0xffffffffffffffff in ?? ()
#29 0xffffffffffffffff in ?? ()
#30 0xffffffffffffffff in ?? ()
#31 0xffffffffffffffff in ?? ()
#32 0xffffffffffffffff in ?? ()
#33 0x00007f37dc01e630 in ?? ()
#34 0xf5de3206c7b5ea00 in ?? ()
#35 0x00005636826ff5b0 in ?? ()
#36 0x00007f37f0104c35 in ?? ()
#37 0x0000000000000020 in ?? ()
#38 0x0000000000000000 in ?? ()

I installed the debuginfo packages:

# rpm -qa | grep qemu | grep debug
qemu-kvm-core-debuginfo-5.1.0-13.module+el8.3.0+8424+e82f331d.x86_64
qemu-kvm-debugsource-5.1.0-13.module+el8.3.0+8424+e82f331d.x86_64
qemu-kvm-debuginfo-5.1.0-13.module+el8.3.0+8424+e82f331d.x86_64


## VM 2 - qcow2 disk, virtio

# virsh -r dumpxml backup-qcow2
<domain type='kvm' id='10' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>backup-qcow2</name>
  <uuid>4198a95d-c9be-413c-a01b-bc2fa3448e26</uuid>
  <metadata xmlns:ns1="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
    <ns1:qos/>
    <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
    <ovirt-vm:balloonTarget type="int">1048576</ovirt-vm:balloonTarget>
    <ovirt-vm:clusterVersion>4.5</ovirt-vm:clusterVersion>
    <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>
    <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
    <ovirt-vm:memGuaranteedSize type="int">1024</ovirt-vm:memGuaranteedSize>
    <ovirt-vm:minGuaranteedMemoryMb type="int">1024</ovirt-vm:minGuaranteedMemoryMb>
    <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>
    <ovirt-vm:startTime type="float">1605927260.501533</ovirt-vm:startTime>
    <ovirt-vm:device mac_address="56:6f:7a:5a:00:02">
        <ovirt-vm:network>ovirtmgmt</ovirt-vm:network>
    </ovirt-vm:device>
    <ovirt-vm:device devtype="disk" name="vda">
        <ovirt-vm:domainID>839db77f-fde3-4e13-bfb6-56be604631ed</ovirt-vm:domainID>
        <ovirt-vm:imageID>86569d03-189c-4893-b46e-286c2eba8857</ovirt-vm:imageID>
        <ovirt-vm:poolID>fb7797bd-3cca-4baa-bcb3-35e8851015f7</ovirt-vm:poolID>
        <ovirt-vm:volumeID>fa0ff024-932b-4515-8dab-d2a2ed5a7bf9</ovirt-vm:volumeID>
        <ovirt-vm:specParams>
            <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>
        </ovirt-vm:specParams>
        <ovirt-vm:volumeChain>
            <ovirt-vm:volumeChainNode>
                <ovirt-vm:domainID>839db77f-fde3-4e13-bfb6-56be604631ed</ovirt-vm:domainID>
                <ovirt-vm:imageID>86569d03-189c-4893-b46e-286c2eba8857</ovirt-vm:imageID>
                <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
                <ovirt-vm:leasePath>/rhev/data-center/mnt/storage:_export_1/839db77f-fde3-4e13-bfb6-56be604631ed/images/86569d03-189c-4893-b46e-286c2eba8857/fa0ff024-932b-4515-8dab-d2a2ed5a7bf9.lease</ovirt-vm:leasePath>
                <ovirt-vm:path>/rhev/data-center/mnt/storage:_export_1/839db77f-fde3-4e13-bfb6-56be604631ed/images/86569d03-189c-4893-b46e-286c2eba8857/fa0ff024-932b-4515-8dab-d2a2ed5a7bf9</ovirt-vm:path>
                <ovirt-vm:volumeID>fa0ff024-932b-4515-8dab-d2a2ed5a7bf9</ovirt-vm:volumeID>
            </ovirt-vm:volumeChainNode>
        </ovirt-vm:volumeChain>
    </ovirt-vm:device>
    <ovirt-vm:device devtype="disk" name="sdc"/>
</ovirt-vm:vm>
  </metadata>
  <maxMemory slots='16' unit='KiB'>4194304</maxMemory>
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <vcpu placement='static' current='1'>16</vcpu>
  <iothreads>1</iothreads>
  <resource>
    <partition>/machine</partition>
  </resource>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>oVirt</entry>
      <entry name='product'>RHEL</entry>
      <entry name='version'>8.3-1.0.el8</entry>
      <entry name='serial'>095bce26-9d5b-4ae2-b3bf-dddfc9cdf541</entry>
      <entry name='uuid'>4198a95d-c9be-413c-a01b-bc2fa3448e26</entry>
      <entry name='family'>oVirt</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-q35-rhel8.1.0'>hvm</type>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>Skylake-Client</model>
    <topology sockets='16' dies='1' cores='1' threads='1'/>
    <feature policy='disable' name='hle'/>
    <feature policy='disable' name='rtm'/>
    <feature policy='disable' name='mpx'/>
    <feature policy='require' name='hypervisor'/>
    <numa>
      <cell id='0' cpus='0-15' memory='1048576' unit='KiB'/>
    </numa>
  </cpu>
  <clock offset='variable' adjustment='0' basis='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' error_policy='report'/>
      <source startupPolicy='optional'/>
      <target dev='sdc' bus='sata'/>
      <readonly/>
      <alias name='ua-87de72de-d473-4b69-b5fc-bb8f2a65eda1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='threads' iothread='1'/>
      <source file='/rhev/data-center/mnt/storage:_export_1/839db77f-fde3-4e13-bfb6-56be604631ed/images/86569d03-189c-4893-b46e-286c2eba8857/fa0ff024-932b-4515-8dab-d2a2ed5a7bf9' index='1'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <serial>86569d03-189c-4893-b46e-286c2eba8857</serial>
      <boot order='1'/>
      <alias name='ua-86569d03-189c-4893-b46e-286c2eba8857'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </disk>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <alias name='pci.6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0x16'/>
      <alias name='pci.7'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
    </controller>
    <controller type='pci' index='8' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='8' port='0x17'/>
      <alias name='pci.8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
    </controller>
    <controller type='pci' index='9' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='9' port='0x18'/>
      <alias name='pci.9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='10' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='10' port='0x19'/>
      <alias name='pci.10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
    </controller>
    <controller type='pci' index='11' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='11' port='0x1a'/>
      <alias name='pci.11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
    </controller>
    <controller type='pci' index='12' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='12' port='0x1b'/>
      <alias name='pci.12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
    </controller>
    <controller type='pci' index='13' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='13' port='0x1c'/>
      <alias name='pci.13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
    </controller>
    <controller type='pci' index='14' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='14' port='0x1d'/>
      <alias name='pci.14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
    </controller>
    <controller type='pci' index='15' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='15' port='0x1e'/>
      <alias name='pci.15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
    </controller>
    <controller type='pci' index='16' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='16' port='0x1f'/>
      <alias name='pci.16'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='virtio-serial' index='0' ports='16'>
      <alias name='ua-6a467db1-1ffc-4f77-9b58-b746a1b048be'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <driver iothread='1'/>
      <alias name='ua-cc2f22e8-9625-4dc0-8d70-3513b45975bd'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='qemu-xhci' ports='8'>
      <alias name='ua-ec25086f-5623-4d02-b596-5b5251ccce02'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='56:6f:7a:5a:00:02'/>
      <source bridge='ovirtmgmt'/>
      <target dev='vnet9'/>
      <model type='virtio'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
      <link state='up'/>
      <mtu size='1500'/>
      <alias name='ua-d44c62e4-0a44-4a11-a3f1-31247c73f6a0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/4198a95d-c9be-413c-a01b-bc2fa3448e26.ovirt-guest-agent.0'/>
      <target type='virtio' name='ovirt-guest-agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/4198a95d-c9be-413c-a01b-bc2fa3448e26.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
      <alias name='channel2'/>
      <address type='virtio-serial' controller='0' bus='0' port='3'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <graphics type='spice' port='5900' tlsPort='5901' autoport='yes' listen='192.168.122.22' passwdValidTo='1970-01-01T00:00:01'>
      <listen type='network' address='192.168.122.22' network='vdsm-ovirtmgmt'/>
      <channel name='main' mode='secure'/>
      <channel name='display' mode='secure'/>
      <channel name='inputs' mode='secure'/>
      <channel name='cursor' mode='secure'/>
      <channel name='playback' mode='secure'/>
      <channel name='record' mode='secure'/>
      <channel name='smartcard' mode='secure'/>
      <channel name='usbredir' mode='secure'/>
    </graphics>
    <graphics type='vnc' port='5902' autoport='yes' listen='192.168.122.22' keymap='en-us' passwdValidTo='1970-01-01T00:00:01'>
      <listen type='network' address='192.168.122.22' network='vdsm-ovirtmgmt'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='8192' vgamem='16384' heads='1' primary='yes'/>
      <alias name='ua-d7761ee1-a13e-4314-979b-92d4968e2e61'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <stats period='5'/>
      <alias name='ua-96960a0a-f56f-4c4c-8b8a-bb51410db7b8'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <alias name='ua-494fbdfa-a451-4a9d-9573-f821e4726847'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </rng>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c50,c207</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c50,c207</imagelabel>
  </seclabel>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+107:+107</label>
    <imagelabel>+107:+107</imagelabel>
  </seclabel>
  <qemu:capabilities>
    <qemu:add capability='blockdev'/>
    <qemu:add capability='incremental-backup'/>
  </qemu:capabilities>
</domain>


Backup log:

$ ./backup_vm.py -c engine-dev full --backup-dir /var/tmp/backups/qcow2 4198a95d-c9be-413c-a01b-bc2fa3448e26
[   0.0 ] Starting full backup for VM 4198a95d-c9be-413c-a01b-bc2fa3448e26
[   0.6 ] Waiting until backup 3e8bae25-63da-4fc9-bddb-5dffaab67319 is ready
[   3.6 ] Created checkpoint '8d7530dd-deeb-41d3-81c4-76cafb71ec13' (to use in --from-checkpoint-uuid for the next incremental backup)
[   3.6 ] Creating image transfer for disk 86569d03-189c-4893-b46e-286c2eba8857
[   4.7 ] Image transfer e9e6b021-0410-467c-afb2-dc46445d895c is ready
[ 100.00% ] 6.00 GiB, 23.21 seconds, 264.73 MiB/s                              
[  27.9 ] Finalizing image transfer
[  29.0 ] Creating image transfer for disk 86569d03-189c-4893-b46e-286c2eba8857
[  30.1 ] Image transfer f554dd46-a58d-4193-8433-e4e1cf6a5784 is ready
[  29.78% ] 1.79 GiB, 6.37 seconds, 287.02 MiB/s                              
[  36.5 ] Finalizing image transfer
Traceback (most recent call last):
  File "./backup_vm.py", line 177, in cmd_full
    download_backup(connection, backup.id, args)
  File "./backup_vm.py", line 355, in download_backup
    download_disk(connection, backup_uuid, disk, disk_path, args, incremental=incremental)
  File "./backup_vm.py", line 390, in download_disk
    client.download(
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/client/_api.py", line 172, in download
    io.copy(
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/io.py", line 69, in copy
    log.debug("Executor failed")
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/io.py", line 189, in __exit__
    self.stop()
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/io.py", line 166, in stop
    raise self._errors[0]
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/io.py", line 238, in _run
    handler.copy(req)
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/io.py", line 282, in copy
    self._src.write_to(self._dst, req.length, self._buf)
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/backends/http.py", line 213, in write_to
    raise RuntimeError(
RuntimeError: Expected 24182784 bytes, got 16777216 bytes

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./backup_vm.py", line 429, in <module>
    main()
  File "./backup_vm.py", line 161, in main
    args.command(args)
  File "./backup_vm.py", line 179, in cmd_full
    stop_backup(connection, backup.id, args)
  File "./backup_vm.py", line 327, in stop_backup
    backup_service.finalize()
  File "/home/nsoffer/.local/lib/python3.8/site-packages/ovirtsdk4/services.py", line 33306, in finalize
    return self._internal_action(action, 'finalize', None, headers, query, wait)
  File "/home/nsoffer/.local/lib/python3.8/site-packages/ovirtsdk4/service.py", line 299, in _internal_action
    return future.wait() if wait else future
  File "/home/nsoffer/.local/lib/python3.8/site-packages/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
  File "/home/nsoffer/.local/lib/python3.8/site-packages/ovirtsdk4/service.py", line 296, in callback
    self._check_fault(response)
  File "/home/nsoffer/.local/lib/python3.8/site-packages/ovirtsdk4/service.py", line 132, in _check_fault
    self._raise_error(response, body)
  File "/home/nsoffer/.local/lib/python3.8/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Host not found.]". HTTP response code is 400.


Server log:
/var/log/ovirt-imageio/daemon.log

2020-11-21 04:53:29,238 ERROR   (Thread-263) [http] Request aborted after starting response
Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py", line 774, in __call__
    self.dispatch(req, resp)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py", line 819, in dispatch
    return method(req, resp, *match.groups())
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/cors.py", line 84, in wrapper
    return func(self, req, resp, *args)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/images.py", line 148, in get
    ticket.run(op)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/auth.py", line 174, in run
    return operation.run()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/ops.py", line 57, in run
    res = self._run()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/ops.py", line 100, in _run
    self._read_chunk()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/ops.py", line 112, in _read_chunk
    count = self._src.readinto(view)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/nbd.py", line 110, in readinto
    self._client.readinto(self._position, view)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 416, in readinto
    self._recv_reply(cmd)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 935, in _recv_reply
    magic = self._recv_fmt("!I")[0]
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1190, in _recv_fmt
    data = self._recv(s.size)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1200, in _recv
    self._recv_into(buf)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1213, in _recv_into
    .format(pos, length))
ovirt_imageio._internal.nbd.ProtocolError: Server closed the connection, read 0 bytes, expected 4 bytes


Libvirt qemu log:

2020-11-21 02:51:57.306+0000: Domain id=9 is tainted: custom-hypervisor-feature
2020-11-21T02:51:57.488883Z qemu-kvm: -numa node,nodeid=0,cpus=0-15,mem=1024: warning: Parameter -numa node,mem is deprecated, use -numa node,memdev instead
2020-11-21 02:52:20.345+0000: Domain id=9 is tainted: custom-ga-command
aio_co_schedule: Co-routine was already scheduled in 'aio_co_schedule'
2020-11-21 02:53:29.451+0000: shutting down, reason=crashed

qemu coredump:

Program terminated with signal SIGABRT, Aborted.
#0  0x00007f37f011a7ff in ?? ()


## VM 6 - qcow2 disk, virtio, enabled for incremental backup

(same xml as vm 2)

Backup log:

$ ./backup_vm.py -c engine-dev incremental --from-checkpoint-uuid 2842ca29-f527-48d2-b2b7-725ebb425260 --backup-dir /var/tmp/backups/qcow2 6ba753b5-2a04-4a6b-9874-6288131cfe64
[   0.0 ] Starting incremental backup for VM 6ba753b5-2a04-4a6b-9874-6288131cfe64
[   0.3 ] Waiting until backup 310cce5b-9868-408b-a378-370f5c8c3282 is ready
[   1.3 ] Created checkpoint '3319c16d-9af8-427c-ba44-f6d567bd176f' (to use in --from-checkpoint-uuid for the next incremental backup)
[   1.3 ] Creating image transfer for disk e494e31d-dae9-4d40-8160-d01aa6755542
[   2.4 ] Image transfer b0306363-c2ef-41cc-9b14-2a8962491ebf is ready
[ 100.00% ] 6.00 GiB, 1.92 seconds, 3.13 GiB/s                                
[   4.3 ] Finalizing image transfer
[   8.4 ] Creating image transfer for disk e494e31d-dae9-4d40-8160-d01aa6755542
[   9.5 ] Image transfer 738b5421-d71a-478e-9ab3-6e97ed424c4e is ready
[ 100.00% ] 6.00 GiB, 2.12 seconds, 2.83 GiB/s                                
[  11.6 ] Finalizing image transfer
[  15.6 ] Creating image transfer for disk e494e31d-dae9-4d40-8160-d01aa6755542
[  16.7 ] Image transfer 589a6f9e-c762-4322-b2e0-cef821a8a9fb is ready
[ 100.00% ] 6.00 GiB, 1.89 seconds, 3.17 GiB/s                                
[  18.6 ] Finalizing image transfer
[  22.6 ] Creating image transfer for disk e494e31d-dae9-4d40-8160-d01aa6755542
[  23.7 ] Image transfer 3de30c0e-6fce-4d02-80bd-bd6d9de50a13 is ready
[  43.82% ] 2.63 GiB, 0.27 seconds, 9.65 GiB/s                                
[  24.0 ] Finalizing image transfer
Traceback (most recent call last):
  File "./backup_vm.py", line 194, in cmd_incremental
    download_backup(connection, backup.id, args, incremental=True)
  File "./backup_vm.py", line 355, in download_backup
    download_disk(connection, backup_uuid, disk, disk_path, args, incremental=incremental)
  File "./backup_vm.py", line 390, in download_disk
    client.download(
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/client/_api.py", line 172, in download
    io.copy(
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/io.py", line 69, in copy
    log.debug("Executor failed")
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/io.py", line 189, in __exit__
    self.stop()
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/io.py", line 166, in stop
    raise self._errors[0]
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/io.py", line 238, in _run
    handler.copy(req)
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/io.py", line 282, in copy
    self._src.write_to(self._dst, req.length, self._buf)
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/backends/http.py", line 204, in write_to
    res = self._get(length)
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/backends/http.py", line 431, in _get
    self._reraise(res.status, res.read())
  File "/usr/lib64/python3.8/site-packages/ovirt_imageio/_internal/backends/http.py", line 585, in _reraise
    raise http.Error(status, msg)
ovirt_imageio._internal.http.Error: Server failed to perform the request, check logs

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./backup_vm.py", line 429, in <module>
    main()
  File "./backup_vm.py", line 161, in main
    args.command(args)
  File "./backup_vm.py", line 196, in cmd_incremental
    stop_backup(connection, backup.id, args)
  File "./backup_vm.py", line 327, in stop_backup
    backup_service.finalize()
  File "/home/nsoffer/.local/lib/python3.8/site-packages/ovirtsdk4/services.py", line 33306, in finalize
    return self._internal_action(action, 'finalize', None, headers, query, wait)
  File "/home/nsoffer/.local/lib/python3.8/site-packages/ovirtsdk4/service.py", line 299, in _internal_action
    return future.wait() if wait else future
  File "/home/nsoffer/.local/lib/python3.8/site-packages/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
  File "/home/nsoffer/.local/lib/python3.8/site-packages/ovirtsdk4/service.py", line 296, in callback
    self._check_fault(response)
  File "/home/nsoffer/.local/lib/python3.8/site-packages/ovirtsdk4/service.py", line 132, in _check_fault
    self._raise_error(response, body)
  File "/home/nsoffer/.local/lib/python3.8/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Host not found.]". HTTP response code is 400.

Server log:
/var/log/ovirt-imageio/daemon.log

2020-11-21 05:17:42,480 INFO    (Thread-358) [backends.nbd] Open backend address='/run/vdsm/backup/310cce5b-9868-408b-a378-370f5c8c3282' export_name='vda' sparse=True max_connections=8
2020-11-21 05:17:42,637 ERROR   (Thread-355) [http] Server error
Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py", line 774, in __call__
    self.dispatch(req, resp)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/http.py", line 819, in dispatch
    return method(req, resp, *match.groups())
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/cors.py", line 84, in wrapper
    return func(self, req, resp, *args)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/images.py", line 148, in get
    ticket.run(op)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/auth.py", line 174, in run
    return operation.run()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/ops.py", line 57, in run
    res = self._run()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/ops.py", line 100, in _run
    self._read_chunk()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/ops.py", line 112, in _read_chunk
    count = self._src.readinto(view)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/nbd.py", line 110, in readinto
    self._client.readinto(self._position, view)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 415, in readinto
    self._send_command(cmd)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 928, in _send_command
    self._send(cmd.to_bytes())
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/nbd.py", line 1196, in _send
    self._sock.sendall(data)
BrokenPipeError: [Errno 32] Broken pipe

Libvirt qemu log:
/var/log/libvirt/qemu/backup-qcow2-incremental.log

2020-11-21 03:15:13.546+0000: Domain id=12 is tainted: custom-hypervisor-feature
2020-11-21T03:15:13.924968Z qemu-kvm: -numa node,nodeid=0,cpus=0-15,mem=1024: warning: Parameter -numa node,mem is deprecated, use -numa node,memdev instead
2020-11-21 03:15:31.751+0000: Domain id=12 is tainted: custom-ga-command
aio_co_schedule: Co-routine was already scheduled in 'aio_co_schedule'
2020-11-21 03:17:42.815+0000: shutting down, reason=crashed

qemu coredump:

Program terminated with signal SIGABRT, Aborted.
#0  0x00007f37f011a7ff in ?? ()

Comment 1 Nir Soffer 2020-11-22 09:48:19 UTC
Created attachment 1732014 [details]
libvirtd log

Comment 2 Nir Soffer 2020-11-22 09:51:26 UTC
Created attachment 1732015 [details]
abrt crash report

The core dump is from the last incident, where qemu aborted during an incremental backup.

Comment 3 CongLi 2020-11-22 09:59:18 UTC
Hi Nini,

Could you please try to reproduce this?

Thanks.

Comment 10 Sergio Lopez 2020-11-27 16:04:21 UTC
I've confirmed this issue is reproducible with the latest upstream code. The root cause appears to be that the NBD server doesn't properly handle AioContext switches, such as the one triggered when the dataplane is stopped due to a guest reboot.

Next week I'll try to confirm this and propose a solution.

Sergio.
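For readers unfamiliar with the abort message in the logs: qemu's aio_co_schedule() treats scheduling a coroutine that is already queued as a fatal programming error and calls abort(). The following is an illustrative Python sketch of that guard's logic only (the names Coroutine, aio_co_schedule, and the queue are stand-ins, not qemu's actual C implementation); the AioContext switch described above can lead to two callers racing to schedule the same NBD coroutine, tripping this guard.

```python
import threading

class Coroutine:
    """Toy stand-in for a qemu coroutine carrying the 'scheduled' guard flag."""
    def __init__(self, name):
        self.name = name
        self._lock = threading.Lock()
        self.scheduled_by = None  # analogous to co->scheduled in qemu

def aio_co_schedule(run_queue, co):
    """Mimic the guard: scheduling an already-scheduled coroutine is fatal.

    qemu aborts here; this sketch raises RuntimeError with the same message
    so the behavior can be demonstrated without killing the process.
    """
    with co._lock:
        if co.scheduled_by is not None:
            raise RuntimeError(
                "aio_co_schedule: Co-routine was already scheduled in '%s'"
                % co.scheduled_by)
        co.scheduled_by = "aio_co_schedule"
    run_queue.append(co)

if __name__ == "__main__":
    queue = []
    co = Coroutine("nbd-client-coroutine")
    aio_co_schedule(queue, co)        # first schedule: accepted
    try:
        aio_co_schedule(queue, co)    # re-schedule before it ran: fatal
    except RuntimeError as err:
        print(err)
```

Under this model, the fix is to make the NBD server aware of AioContext changes so the same coroutine is never queued twice, rather than to relax the guard itself.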

Comment 11 John Ferlan 2020-12-22 17:07:14 UTC
Series currently at v2 upstream: https://lists.nongnu.org/archive/html/qemu-devel/2020-12/msg03738.html (this bug is the first 3 patches).

Comment 14 Eric Hadley 2021-01-29 16:42:43 UTC
Setting exception+ for this instead of blocker+, as we do not intend to block the 8.3.1 release on it. If this doesn't make 8.3.1 (patches in by Feb 03), the plan is to fix it in 8.4.0 and backport it in an 8.3.1.z batch update.

Comment 29 errata-xmlrpc 2021-02-22 15:39:41 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (virt:8.3 bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0639
