Description of problem:
[blockdev enablement] Setting blockcopy bandwidth fails with 'virsh blockjob'

Version-Release number of selected component (if applicable):
libvirt-5.10.0-1.module+el8.2.0+5040+bd433686.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Have a running VM:
# virsh domblklist avocado-vt-vm1
...
----------------------------------------------------------------------
vda /var/lib/avocado/data/avocado-vt/images/jeos-27-x86_64.qcow2

2. Start a blockcopy:
# virsh blockcopy avocado-vt-vm1 vda /tmp/copy.img --transient-job --reuse-external
Block Copy started

3. Try to set the job's bandwidth with --bytes or --bandwidth:
# virsh blockjob avocado-vt-vm1 vda --bytes 2
error: Requested operation is not valid: No active block job 'drive-virtio-disk0'

# virsh blockjob avocado-vt-vm1 vda --bandwidth 1
error: Requested operation is not valid: No active block job 'drive-virtio-disk0'

Actual results:
Setting the bandwidth fails.

Expected results:
Setting the bandwidth succeeds.

Additional info:
The qemu process is as follows:
/usr/libexec/qemu-kvm -name guest=avocado-vt-vm1,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-9-avocado-vt-vm1/master-key.aes -machine pc-q35-rhel8.1.0,accel=kvm,usb=off,dump-guest-core=off -cpu Skylake-Server-IBRS,ss=on,hypervisor=on,tsc-adjust=on,clflushopt=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,ibpb=on,amd-ssbd=on,skip-l1dfl-vmentry=on,vmx=off -m 1024 -overcommit mem-lock=off -smp 2,sockets=2,cores=1,threads=1 -uuid 9f672c02-487d-45df-965f-49b2870dd948 -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=38,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global ICH9-LPC.disable_s3=1 -global ICH9-LPC.disable_s4=1 -boot strict=on -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1
-device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 -device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 -device pcie-root-port,port=0x16,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6 -device qemu-xhci,p2=15,p3=15,id=usb,bus=pci.2,addr=0x0 -device virtio-serial-pci,id=virtio-serial0,bus=pci.3,addr=0x0 -blockdev {"driver":"file","filename":"/var/lib/avocado/data/avocado-vt/images/jeos-27-x86_64.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2","file":"libvirt-1-storage","backing":null} -device virtio-blk-pci,scsi=off,bus=pcie.0,addr=0x5,drive=libvirt-1-format,id=virtio-disk0,bootindex=1 -netdev tap,fd=40,id=hostnet0,vhost=on,vhostfd=41 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:ba:73:b3,bus=pci.1,addr=0x0 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,fd=42,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 127.0.0.1:1 -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1 -device virtio-balloon-pci,id=balloon0,bus=pci.5,addr=0x0 -object rng-random,id=objrng0,filename=/dev/urandom -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.6,addr=0x0 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
Please upload the libvirtd log collected with log_filters="1:qemu" so we can see what happened internally in the blockjob code. BTW, --reuse-external is not essential for reproducing this bug.
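For reference, a typical way to enable that filtering is in /etc/libvirt/libvirtd.conf (the output file path below is an example; adjust as needed and restart libvirtd afterwards):

```
# /etc/libvirt/libvirtd.conf -- capture debug-level logs from the QEMU driver
log_filters="1:qemu"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
```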
According to the error message, the job name is wrong. I probably forgot to convert this one call. I don't require additional data.
Fixed upstream:

commit d179acf4ad5001b8b02d79167e4e65a35c612c15
Author: Peter Krempa <pkrempa>
Date:   Fri Dec 6 14:06:55 2019 +0100

    qemu: driver: Use appropriate job name when setting blockjob speed

    qemuDomainBlockJobSetSpeed was not converted to get the job name from
    the block job data. This means that after enabling blockdev the API
    call would fail as we wouldn't use the appropriate name.

v5.10.0-74-gd179acf4ad
This is one of the issues blocking the virt module from passing gating without a manual waiver. Please include it in the next build ASAP.
Cases passed with the latest libvirt and qemu (-blockdev enabled by default):

# rpm -qa | egrep "^libvirt-5|^qemu-kvm-4"
libvirt-5.10.0-2.module+el8.2.0+5274+60f836b5.x86_64
qemu-kvm-4.2.0-5.module+el8.2.0+5389+367d9739.x86_64

# avocado run --vt-type libvirt virsh.blockjob.positive_test.min_bandwidth virsh.blockjob.positive_test.max_bandwidth virsh.blockjob.positive_test.bandwidth_bytes_option
JOB ID     : a40c1321f3c482a8cf736863042e13c03698544b
JOB LOG    : /root/avocado/job-results/job-2020-01-14T00.53-a40c132/job.log
 (1/3) type_specific.io-github-autotest-libvirt.virsh.blockjob.positive_test.min_bandwidth: PASS (31.57 s)
 (2/3) type_specific.io-github-autotest-libvirt.virsh.blockjob.positive_test.max_bandwidth: PASS (28.23 s)
 (3/3) type_specific.io-github-autotest-libvirt.virsh.blockjob.positive_test.bandwidth_bytes_option: PASS (27.17 s)
RESULTS    : PASS 3 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME   : 90.96 s
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2017