This bug has been migrated to another issue-tracking site. It has been closed here and may no longer be monitored.

If you would like to receive updates for this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.
Bug 2180347 - [virtio-win][virtiofs] virtio-fs looks a bit bad in performance
Summary: [virtio-win][virtiofs] virtio-fs looks a bit bad in performance
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: virtio-win
Version: 9.2
Hardware: x86_64
OS: Windows
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: Yvugenfi@redhat.com
QA Contact: xiagao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-03-21 09:07 UTC by xiagao
Modified: 2023-08-16 13:37 UTC (History)
6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-08-16 13:37:29 UTC
Type: ---
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHEL-1212 0 None None None 2023-08-16 13:37:28 UTC
Red Hat Issue Tracker RHELPLAN-152536 0 None None None 2023-03-21 09:09:50 UTC

Description xiagao 2023-03-21 09:07:01 UTC
Description of problem:
Virtiofs performance in a Windows guest is about 1/3 of that in a RHEL guest.

Version-Release number of selected component (if applicable):
virtiofsd-1.5.0-1.el9.x86_64
kernel-5.14.0-284.2.1.el9_2.x86_64
qemu-kvm-7.2.0-11.el9_2.x86_64

Steps to Reproduce:
1. Start virtiofsd on the host:
/usr/libexec/virtiofsd --socket-path=/var/tmp/avocado_poxmt7pc/avocado-vt-vm1-viofs-virtiofsd.sock -o source=/root/avocado/data/avocado-vt/virtio_fs_test/ -o cache=auto
2. Boot a Win2022 or RHEL 9.2 guest with a virtiofs device.
3. Start the virtiofs service in the guest and locate the shared disk.
4. Run fio on the shared disk.
Win2022:
"C:\Program Files (x86)\fio\fio\fio.exe" --name=stress --filename=Z:/test_file --ioengine=windowsaio --rw=write/read --direct=1 --bs=4K --size=1G --iodepth=256 --numjobs=128 --runtime=1800 --thread

RHEL9:
/usr/bin/fio --name=stress --filename=/mnt/myfs/test_file --ioengine=libaio --rw=write/read --direct=1 --bs=4K --size=1G --iodepth=256 --numjobs=128 --runtime=1800
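Note that fio does not accept "write/read" as a single --rw value; the two modes above were presumably run as separate passes. A minimal sketch of the RHEL-side invocation looping over both modes (paths and parameters copied from the report; FIO defaults to echo here so the script can be dry-run on a machine without fio installed, set FIO=/usr/bin/fio to actually benchmark):

```shell
#!/bin/sh
# Sketch, not the reporter's exact script: run one fio pass per I/O mode.
# FIO defaults to echo so this is runnable without fio; override to benchmark.
FIO="${FIO:-echo}"

run_fio_suite() {
    # $1: target file on the virtiofs mount (default taken from the report)
    for mode in write read; do
        "$FIO" --name=stress --filename="${1:-/mnt/myfs/test_file}" \
            --ioengine=libaio --rw="$mode" --direct=1 --bs=4K \
            --size=1G --iodepth=256 --numjobs=128 --runtime=1800
    done
}

run_fio_suite
```

With FIO left at its echo default, the script only prints the two command lines that would be executed, which makes it easy to check the parameters before a long run.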

Actual results:
There is indeed a performance degradation compared with the Linux guest: throughput is about 1/3 of that in the RHEL 9.2 guest.

Expected results:
There should be no significant degradation compared with the Linux guest.

Additional info:

Comment 1 RHEL Program Management 2023-08-15 15:25:07 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

