Bug 1985451

Summary: Remove downstream-only commit allowing x-blockdev-reopen for libvirt when rebasing to qemu-6.1+
Product: Red Hat Enterprise Linux 8
Component: qemu-kvm
Sub component: Incremental Live Backup
Reporter: Peter Krempa <pkrempa>
Assignee: Miroslav Rezanina <mrezanin>
QA Contact: aihua liang <aliang>
Status: CLOSED ERRATA
Severity: unspecified
Priority: unspecified
CC: aliang, coli, ddepaula, jinzhao, juzhang, ngu, virt-maint, yfu
Version: ---
Keywords: RFE, Triaged
Target Milestone: rc
Target Release: 8.6
Hardware: Unspecified
OS: Unspecified
Fixed In Version: qemu-kvm-6.1.0-1.module+el8.6.0+12535+4e2af250
Last Closed: 2022-05-10 13:20:14 UTC
Type: Bug
Bug Depends On: 1867087, 1997410

Description Peter Krempa 2021-07-23 15:10:46 UTC
Description of problem:

The downstream-only commit that adds the versioned feature flag, which libvirt uses to enable x-blockdev-reopen downstream, must be removed when rebasing to qemu-6.1, which now has full upstream support for blockdev-reopen.
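Once the downstream flag is gone, a management client should probe for which reopen command the running qemu actually offers rather than assuming either name. A minimal sketch (the helper name and the simulated replies are illustrative, not part of any real client) that picks the stable command when present:

```python
import json

def pick_reopen_command(query_commands_reply):
    """Given the parsed reply of QMP 'query-commands', return the reopen
    command this qemu offers, preferring the stable name over the x- one."""
    names = {entry["name"] for entry in query_commands_reply["return"]}
    for candidate in ("blockdev-reopen", "x-blockdev-reopen"):
        if candidate in names:
            return candidate
    return None

# Simulated reply from a qemu-6.1+ binary, where only the stable
# command exists (matching the behaviour verified later in this bug).
reply = json.loads('{"return": [{"name": "blockdev-reopen"}, {"name": "query-commands"}]}')
print(pick_reopen_command(reply))  # blockdev-reopen
```

Against a pre-6.1 downstream build the same helper would return `x-blockdev-reopen`, so the caller needs no version check of its own.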

commit 989cfded8fdd5df3b6b1f1a304ca16c128d7561b
Author: Kevin Wolf <kwolf>
Date:   Fri Mar 13 12:34:32 2020 +0000

    block: Versioned x-blockdev-reopen API with feature flag
    
    RH-Author: Kevin Wolf <kwolf>
    Message-id: <20200313123439.10548-7-kwolf>
    Patchwork-id: 94283
    O-Subject: [RHEL-AV-8.2.0 qemu-kvm PATCH v2 06/13] block: Versioned x-blockdev-reopen API with feature flag
    Bugzilla: 1790482 1805143
    RH-Acked-by: Eric Blake <eblake>
    RH-Acked-by: John Snow <jsnow>
    RH-Acked-by: Daniel P. Berrange <berrange>
    RH-Acked-by: Peter Krempa <pkrempa>
    
    x-blockdev-reopen is still considered unstable upstream. libvirt needs
    (a small subset of) it for incremental backups, though.
    
    Add a downstream-only feature flag that effectively makes this a
    versioned interface. As long as the feature is present, we promise that
    we won't change the interface incompatibly. Incompatible changes to the
    command will require us to drop the feature flag (and possibly introduce
    a new one if the new version is still not stable upstream).
    
    Signed-off-by: Kevin Wolf <kwolf>
    Signed-off-by: Danilo C. L. de Paula <ddepaula>
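The commit above exposes the promise through a QAPI feature flag, which clients discover via query-qmp-schema. A sketch of how such a check could look, assuming a parsed schema reply; the feature-flag string used here is a placeholder, not the real downstream flag name:

```python
def command_has_feature(schema_entries, command_name, feature):
    """Return True if the query-qmp-schema entry for 'command_name'
    advertises 'feature' in its 'features' list."""
    for entry in schema_entries:
        if entry.get("meta-type") == "command" and entry.get("name") == command_name:
            return feature in entry.get("features", [])
    return False

# Hypothetical schema fragment: a downstream build tags x-blockdev-reopen
# with a vendor feature flag (flag name here is illustrative only).
schema = [
    {"meta-type": "command", "name": "x-blockdev-reopen",
     "features": ["__example-vendor_versioned-api"]},
]
print(command_has_feature(schema, "x-blockdev-reopen", "__example-vendor_versioned-api"))  # True
```

As long as the flag is present, libvirt can rely on the interface not changing incompatibly; dropping the flag (as this bug requests) signals that clients must move to the stable command.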

Comment 1 Kevin Wolf 2021-07-27 09:58:00 UTC
Should this be assigned to the package maintainer who will forward-port the downstream patches (Danilo?), since it is about simply not forward-porting a patch rather than doing additional work on top of it?

Comment 2 John Ferlan 2021-07-30 10:45:30 UTC
Similar to bug 1985452, the assignment was changed as part of handling the qemu-6.1 rebase. See bug 1867087 for more details.

I've removed the needinfo too.

Comment 6 Yanan Fu 2021-09-16 07:29:22 UTC
Adding Verified: Tested, SanityOnly, as the gating test passes.

Comment 8 aihua liang 2021-09-18 12:45:11 UTC
Tested on qemu-kvm-6.1.0-1.module+el8.6.0+12535+4e2af250; the problem has been resolved.

Test Env:
  kernel version:4.18.0-315.el8.x86_64
  qemu-kvm version:qemu-kvm-6.1.0-1.module+el8.6.0+12535+4e2af250

Test Steps:
 1.Start guest with qemu cmds:
    /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server-noTSX',+kvm_pv_unhalt \
    -chardev socket,id=qmp_id_qmpmonitor1,server=on,path=/tmp/monitor-qmpmonitor1-20210913-223137-drnRdJs8,wait=off  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,id=qmp_id_catch_monitor,server=on,path=/tmp/monitor-catch_monitor-20210913-223137-drnRdJs8,wait=off  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idM2Q7IB \
    -chardev socket,id=chardev_serial0,server=on,path=/tmp/serial-serial0-20210913-223137-drnRdJs8,wait=off \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20210913-223137-drnRdJs8,path=/tmp/seabios-20210913-223137-drnRdJs8,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20210913-223137-drnRdJs8,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -object iothread,id=iothread0 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,filename=/home/kvm_autotest_root/images/rhel860-64-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,write-cache=on,bus=pcie-root-port-2,addr=0x0,iothread=iothread0 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:fb:5a:e8:4b:7b,id=idVnkhgS,netdev=id9XmH0X,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=id9XmH0X,vhost=on  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
    -monitor stdio \
    -qmp tcp:0:3000,server=on,wait=off \
 
 2. Add persistent bitmap to drive_image1
    { "execute": "block-dirty-bitmap-add", "arguments": {"node": "drive_image1", "name": "bitmap0","persistent":true}}

 3. Create snapshot target: sn1 and do snapshot from drive_image1 to sn1
     {'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn1','size':21474836480},'job-id':'job1'}}
     {'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn1','filename':'/root/sn1'}}
     {'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn1','size':21474836480},'job-id':'job2'}}
     {'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn1','file':'drive_sn1'}}
     {'execute':'job-dismiss','arguments':{'id':'job1'}}
     {'execute':'job-dismiss','arguments':{'id':'job2'}}
     {"execute":"blockdev-snapshot","arguments":{"node":"drive_image1","overlay":"sn1"}}

 4. Add persistent bitmap to sn1
    { "execute": "block-dirty-bitmap-add", "arguments": {"node": "sn1", "name": "bitmap0","persistent":true}}

 5. Reopen backing image with x-blockdev-reopen
    {'execute':'x-blockdev-reopen','arguments':{'driver':'qcow2','node-name':'drive_image1', "read-only":false,'file':'file_image1'}}
{"error": {"class": "CommandNotFound", "desc": "The command x-blockdev-reopen has not been found"}}

 6. Reopen backing image with blockdev-reopen
   {'execute':'blockdev-reopen','arguments':{'options':[{'driver':'qcow2','node-name':'drive_image1', "read-only":false,'file':'file_image1'}]}}
   {"return": {}}
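The difference between steps 5 and 6 is purely one of wrapping: the stable blockdev-reopen takes a list of node option dicts under 'options' instead of a single flat argument object. A small sketch (helper name is illustrative) of mechanically converting a legacy call to the stable form:

```python
import json

def to_stable_reopen(x_reopen_args):
    """Wrap the flat argument dict of a legacy x-blockdev-reopen call
    into the stable blockdev-reopen form, which takes a list of option
    dicts under 'options' so several nodes can be reopened atomically."""
    return {"execute": "blockdev-reopen",
            "arguments": {"options": [x_reopen_args]}}

# The exact arguments used in step 5 above:
legacy = {"driver": "qcow2", "node-name": "drive_image1",
          "read-only": False, "file": "file_image1"}
print(json.dumps(to_stable_reopen(legacy)))
```

The printed payload matches the command that succeeds in step 6.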

 7. Remove all bitmaps
    { "execute": "transaction", "arguments": { "actions": [ {"type": "block-dirty-bitmap-remove","data":{"node":"drive_image1","name":"bitmap0"}},{"type": "block-dirty-bitmap-remove","data":{"node":"sn1","name":"bitmap0"}}]}}
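The transaction in step 7 can be generated for any number of bitmaps; a minimal sketch (the helper is illustrative, not part of any test framework) building the same payload from (node, name) pairs:

```python
def bitmap_remove_transaction(targets):
    """Build a QMP 'transaction' command that removes one dirty bitmap
    per (node, name) pair, so all removals succeed or fail together."""
    return {"execute": "transaction",
            "arguments": {"actions": [
                {"type": "block-dirty-bitmap-remove",
                 "data": {"node": node, "name": name}}
                for node, name in targets]}}

# The two removals performed in step 7:
txn = bitmap_remove_transaction([("drive_image1", "bitmap0"), ("sn1", "bitmap0")])
print(len(txn["arguments"]["actions"]))  # 2
```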

Also checked the upstream qemu code: x-blockdev-reopen has been replaced by blockdev-reopen.

Comment 11 errata-xmlrpc 2022-05-10 13:20:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1759