Bug 1930033 - enable vhost-user-blk device [TechPreview]
Summary: enable vhost-user-blk device [TechPreview]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.4
Assignee: Kevin Wolf
QA Contact: qing.wang
URL:
Whiteboard:
Depends On:
Blocks: 1884659
 
Reported: 2021-02-18 09:16 UTC by Pavel Hrdina
Modified: 2021-07-27 09:25 UTC
CC List: 12 users

Fixed In Version: qemu-kvm-5.2.0-9.module+el8.4.0+10182+4161bd91
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-05-25 06:47:27 UTC
Type: Feature Request
Target Upstream Version:
Embargoed:



Description Pavel Hrdina 2021-02-18 09:16:33 UTC
Description of problem:

In BZ 1901323 we enabled QSD (the qemu-storage-daemon) in QEMU, and one of the points is that it can be used through the vhost-user-blk device, but that device is currently disabled in RHEL-AV.

There is also libvirt BZ 1884659 to add support for vhost-user-blk, so we should enable the device in QEMU.
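
A quick way to verify whether a given build enables the device (a generic check, not from this BZ) is to grep the device list:

/usr/libexec/qemu-kvm -device help 2>&1 | grep vhost-user-blk

No output means the device is still compiled out or disabled in that build.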

Comment 2 Kevin Wolf 2021-02-19 16:29:25 UTC
Patches are posted. I'm setting ITM 18, as this is the last ITM before exception+ would be needed.

In order to test, launch qemu-storage-daemon in one terminal, e.g. like this:

qemu-storage-daemon \
--blockdev file,filename=/home/kwolf/images/f31.qcow2,node-name=proto \
--blockdev qcow2,file=proto,node-name=disk \
--export vhost-user-blk,id=exp0,addr.type=unix,addr.path=/tmp/vhost.sock,node-name=disk,writable=on
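
If the daemon starts cleanly, the export socket from the example above should exist before qemu-kvm connects (a simple sanity check, not part of the original instructions):

test -S /tmp/vhost.sock && echo "export socket is ready"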

And qemu-kvm in another to connect to the export exposed by qemu-storage-daemon, e.g. like this:

qemu-kvm -enable-kvm \
-chardev socket,path=/tmp/vhost.sock,id=vhost -device vhost-user-blk-pci,chardev=vhost \
-object memory-backend-memfd,id=mem,size=4G,share=on -m 4G -M memory-backend=mem
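
The shared memory setup here is a hard requirement, not an example detail: vhost-user devices need guest RAM that the daemon process can mmap, hence memory-backend-memfd with share=on wired in via -M memory-backend=mem. An equivalent sketch using a file-backed memory object (an alternative, assuming hugetlbfs is mounted at /dev/hugepages):

-object memory-backend-file,id=mem,size=4G,mem-path=/dev/hugepages,share=on -m 4G -M memory-backend=mem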

Please provide qa_ack+.

Comment 11 qing.wang 2021-03-02 09:25:11 UTC
The 'info block' command does not list the vhost-user-blk-pci devices.

Red Hat Enterprise Linux release 8.4 Beta (Ootpa)
4.18.0-291.el8.x86_64
qemu-kvm-common-5.2.0-9.module+el8.4.0+10182+4161bd91.x86_64


Test steps
1. qemu-storage-daemon \
  --chardev socket,path=/tmp/qmp.sock,server,nowait,id=char1 \
  --monitor chardev=char1 \
  --object iothread,id=iothread0 \
  --blockdev driver=file,node-name=file1,filename=/home/kvm_autotest_root/images/disk1.qcow2 \
  --blockdev driver=qcow2,node-name=fmt1,file=file1 \
  --export type=vhost-user-blk,id=export1,addr.type=unix,addr.path=/tmp/vhost-user-blk1.sock,node-name=fmt1,writable=on,logical-block-size=512,num-queues=1,iothread=iothread0 \
  --blockdev driver=file,node-name=file2,filename=/home/kvm_autotest_root/images/disk2.qcow2 \
  --blockdev driver=qcow2,node-name=fmt2,file=file2 \
  --export type=vhost-user-blk,id=export2,addr.type=unix,addr.path=/tmp/vhost-user-blk2.sock,node-name=fmt2,writable=off,logical-block-size=1024,num-queues=2,iothread=iothread0 &
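
The daemon side can be inspected over the monitor socket set up above (the --monitor option of qemu-storage-daemon speaks QMP); fmt1 and fmt2 should appear in the reply to query-named-block-nodes, e.g.:

nc -U /tmp/qmp.sock
{"execute": "qmp_capabilities"}
{"execute": "query-named-block-nodes"}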


2. /usr/libexec/qemu-kvm -enable-kvm \
-m 4G -M memory-backend=mem \
-nodefaults \
-vga qxl \
-smp 4 \
-object memory-backend-memfd,id=mem,size=4G,share=on \
-blockdev driver=qcow2,file.driver=file,file.filename=/home/kvm_autotest_root/images/rhel840-64-virtio-scsi.qcow2,node-name=os_image1 \
-device virtio-blk-pci,id=blk_data1,drive=os_image1,bootindex=1 \
-chardev socket,path=/tmp/vhost-user-blk1.sock,id=vhost1 \
-device vhost-user-blk-pci,chardev=vhost1,id=blk1 \
-chardev socket,path=/tmp/vhost-user-blk2.sock,id=vhost2 \
-device vhost-user-blk-pci,chardev=vhost2,id=blk2,num-queues=2 \
-vnc :5 \
-monitor stdio \
-qmp tcp:0:5955,server,nowait
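
(Note: the second device passes num-queues=2 to match num-queues=2 on export2 from step 1; the device cannot use more queues than the daemon's export offers. The resulting device properties can be checked in HMP with 'info qtree'.)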

3. Log in to the guest and check the disks:
 lsblk
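
Illustrative output only (device names and sizes are assumptions; the two exports typically show up as extra virtio disks, with the writable=off one read-only):

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda  252:0    0 20G   0 disk
vdb  252:16   0  1G   0 disk
vdc  252:32   0  1G   1 disk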


4. Check block devices in the QEMU HMP monitor:
(qemu) info block
os_image1: /home/kvm_autotest_root/images/rhel840-64-virtio-scsi.qcow2 (qcow2)
    Attached to:      /machine/peripheral/blk_data1/virtio-backend
    Cache mode:       writeback

At step 4 there are no vhost-user-blk-pci disks in the command output. Is this a bug?

The same result appears with the query-block QMP command in qemu-storage-daemon. Is that a bug as well?
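
For background (reasoning not stated in the original comment): with vhost-user-blk the entire block layer runs inside qemu-storage-daemon, and qemu-kvm only passes the virtqueues through to the daemon, so the guest-side 'info block' has no block backend to report. On the daemon side, query-block is empty as well because --blockdev creates named nodes without a classic BlockBackend; those nodes are visible through query-named-block-nodes instead, as sketched after step 1 above.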

Comment 18 errata-xmlrpc 2021-05-25 06:47:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:2098

