Bug 1867087

Summary: Forward-port 'blockdev-reopen' enablement upstream
Product: Red Hat Enterprise Linux 8
Component: qemu-kvm
Sub Component: Storage
Reporter: Peter Krempa <pkrempa>
Assignee: Kevin Wolf <kwolf>
QA Contact: aihua liang <aliang>
Status: CLOSED ERRATA
Severity: medium
Priority: medium
CC: aliang, coli, ddepaula, jferlan, jinzhao, juzhang, kchamart, kkiwi, kwolf, leidwang, ngu, virt-maint
Keywords: FutureFeature, Triaged
Target Milestone: rc
Target Release: 8.6
Hardware: Unspecified
OS: Unspecified
Fixed In Version: qemu-kvm-6.1.0-2.module+el8.6.0+12861+13975d62
Type: Feature Request
Last Closed: 2022-05-10 13:18:39 UTC
Bug Depends On: 1997410
Bug Blocks: 1985451

Description Peter Krempa 2020-08-07 10:29:17 UTC
Description of problem:
Downstream qemu carries patches that allow libvirt to use 'x-blockdev-reopen' by adding the '__com.redhat_rhel-av-8_2_0-api' feature to the command.

Upstream qemu should enable the command so that we don't have to carry downstream hacks in both qemu and libvirt.

'blockdev-reopen' is used by libvirt for some tasks related to incremental backup, and also to support the late-opening semantics used by oVirt's 'live storage migration' feature.
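
For reference, libvirt detects the downstream enablement by introspecting the QMP schema. A minimal sketch of the probe (the response excerpt is illustrative, not verbatim output):

    {"execute": "query-qmp-schema"}

The SchemaInfo entry for the command then carries the marker, roughly:

    {"name": "x-blockdev-reopen", "meta-type": "command", "features": ["__com.redhat_rhel-av-8_2_0-api"], ...}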

Comment 10 John Ferlan 2021-09-08 19:19:53 UTC
Bulk update: Move RHEL-AV bugs to RHEL8

Comment 12 aihua liang 2021-09-18 12:38:13 UTC
Tested on qemu-kvm-6.1.0-1.module+el8.6.0+12535+4e2af250; the problem has been resolved.

Test Env:
  kernel version: 4.18.0-315.el8.x86_64
  qemu-kvm version: qemu-kvm-6.1.0-1.module+el8.6.0+12535+4e2af250

Test Steps:
 1. Start the guest with the following qemu command line:
    /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server-noTSX',+kvm_pv_unhalt \
    -chardev socket,id=qmp_id_qmpmonitor1,server=on,path=/tmp/monitor-qmpmonitor1-20210913-223137-drnRdJs8,wait=off  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,id=qmp_id_catch_monitor,server=on,path=/tmp/monitor-catch_monitor-20210913-223137-drnRdJs8,wait=off  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idM2Q7IB \
    -chardev socket,id=chardev_serial0,server=on,path=/tmp/serial-serial0-20210913-223137-drnRdJs8,wait=off \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20210913-223137-drnRdJs8,path=/tmp/seabios-20210913-223137-drnRdJs8,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20210913-223137-drnRdJs8,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -object iothread,id=iothread0 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,filename=/home/kvm_autotest_root/images/rhel860-64-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,write-cache=on,bus=pcie-root-port-2,addr=0x0,iothread=iothread0 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:fb:5a:e8:4b:7b,id=idVnkhgS,netdev=id9XmH0X,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=id9XmH0X,vhost=on  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
    -monitor stdio \
    -qmp tcp:0:3000,server=on,wait=off
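
 To drive the QMP steps below by hand (an assumption about tooling, not part of the original report), connect to the TCP QMP socket and negotiate capabilities first; QMP rejects other commands until this handshake completes:

     nc localhost 3000
     {"execute": "qmp_capabilities"}
     {"return": {}}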
 
 2. Add a persistent bitmap to drive_image1:
    { "execute": "block-dirty-bitmap-add", "arguments": {"node": "drive_image1", "name": "bitmap0","persistent":true}}

 3. Create the snapshot target sn1 and snapshot drive_image1 onto it:
     {"execute": "blockdev-create", "arguments": {"options": {"driver": "file", "filename": "/root/sn1", "size": 21474836480}, "job-id": "job1"}}
     {"execute": "blockdev-add", "arguments": {"driver": "file", "node-name": "drive_sn1", "filename": "/root/sn1"}}
     {"execute": "blockdev-create", "arguments": {"options": {"driver": "qcow2", "file": "drive_sn1", "size": 21474836480}, "job-id": "job2"}}
     {"execute": "blockdev-add", "arguments": {"driver": "qcow2", "node-name": "sn1", "file": "drive_sn1"}}
     {"execute": "job-dismiss", "arguments": {"id": "job1"}}
     {"execute": "job-dismiss", "arguments": {"id": "job2"}}
     {"execute": "blockdev-snapshot", "arguments": {"node": "drive_image1", "overlay": "sn1"}}

 4. Add a persistent bitmap to sn1:
    { "execute": "block-dirty-bitmap-add", "arguments": {"node": "sn1", "name": "bitmap0","persistent":true}}

 5. Reopen the backing image read-write:
   {"execute": "blockdev-reopen", "arguments": {"options": [{"driver": "qcow2", "node-name": "drive_image1", "read-only": false, "file": "file_image1"}]}}
   {"return": {}}

 6. Remove all bitmaps in a single transaction:
    { "execute": "transaction", "arguments": { "actions": [ {"type": "block-dirty-bitmap-remove","data":{"node":"drive_image1","name":"bitmap0"}},{"type": "block-dirty-bitmap-remove","data":{"node":"sn1","name":"bitmap0"}}]}}

Comment 13 Danilo de Paula 2021-10-12 17:48:50 UTC
Hi, could you please grant qa_ack? This has been verified already.

Comment 16 errata-xmlrpc 2022-05-10 13:18:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1759