Bug 1809451 - [virtio-rng] Reading from virtio-rng device is slower than expected when max-bytes/period options are default
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: rc
Target Release: 8.0
Assignee: Laurent Vivier
QA Contact: yduan
Reported: 2020-03-03 07:28 UTC by yduan
Modified: 2021-01-21 15:39 UTC
CC List: 6 users

Last Closed: 2021-01-21 15:39:50 UTC
Type: Bug



Description yduan 2020-03-03 07:28:45 UTC
Description of problem:
Reading from virtio-rng device is slower than expected when max-bytes/period options are default


Version-Release number of selected component (if applicable):
Host:
# uname -r
4.18.0-184.el8.x86_64
# rpm -q qemu-kvm-core
qemu-kvm-core-4.2.0-13.module+el8.2.0+5898+fb4bceae.x86_64
Guest:
# uname -r
4.18.0-184.el8.x86_64

How reproducible:
100%

Steps to Reproduce:
1.Boot a VM with a virtio-rng device:
"""
/usr/libexec/qemu-kvm \
...
 -object rng-random,id=obj0,filename=/dev/urandom \
 -device virtio-rng-pci,rng=obj0,id=rng0 \
"""
or
"""
/usr/libexec/qemu-kvm \
...
 -object rng-builtin,id=obj0 \
 -device virtio-rng-pci,rng=obj0,id=rng0 \
"""

2.Read from /dev/hwrng in guest:
# dd if=/dev/hwrng of=/dev/null bs=1024 count=1024000

Actual results:
Host:
# dd if=/dev/urandom of=/dev/null bs=1024 count=1024000
1024000+0 records in
1024000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 4.28159 s, 245 MB/s

Guest:
[rng-random backend]:
# dd if=/dev/urandom of=/dev/null bs=1024 count=1024000
1024000+0 records in
1024000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 5.04217 s, 208 MB/s
# dd if=/dev/hwrng of=/dev/null bs=1024 count=1024000
1024000+0 records in
1024000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 855.185 s, 1.2 MB/s
[rng-builtin backend]:
# dd if=/dev/urandom of=/dev/null bs=1024 count=1024000
1024000+0 records in
1024000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 5.03288 s, 208 MB/s
# dd if=/dev/hwrng of=/dev/null bs=1024 count=1024000
1024000+0 records in
1024000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 1002.2 s, 1.0 MB/s

Expected results:
Reading from /dev/hwrng in the guest should be much faster than ~1 MB/s, closer to the /dev/urandom throughput observed on the host and in the guest.


Additional info:
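The virtio-rng device above is left with its default rate-limit settings, i.e. no max-bytes/period options on the command line, so the QEMU-side rate limiter is not expected to be the bottleneck here. For illustration only (the values below are hypothetical and were not used in this test), an explicit rate limit would be configured like this:
"""
/usr/libexec/qemu-kvm \
...
 -object rng-random,id=obj0,filename=/dev/urandom \
 -device virtio-rng-pci,rng=obj0,max-bytes=1024,period=1000,id=rng0 \
"""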

Comment 1 Laurent Vivier 2020-03-03 14:17:17 UTC
Could you compare the guest performance with the host performance using the following program?

cat > getrandom.c <<EOF
#include <unistd.h>
#include <sys/random.h>

int main(void)
{
    ssize_t len;
    char buf[64];

    /* Pull entropy 64 bytes at a time, the same request size the
       guest hw_random driver normally uses, and stream it to stdout. */
    while (1) {
        len = getrandom(buf, sizeof(buf), 0);
        if (len < 0)
            return 1;
        write(STDOUT_FILENO, buf, len);
    }
    return 0;
}
EOF

cc -o getrandom getrandom.c

./getrandom | dd of=/dev/null bs=1024 count=1024000

I think the bandwidth is limited by the size of the buffer provided by the guest hw_random driver, which is normally 64 bytes.

To improve performance, we would need to define a bigger buffer in the virtio-rng driver.
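As a rough sanity check of that theory (assuming one virtqueue round trip per 64-byte refill): at the observed ~1.2 MB/s, the guest would be doing about 1,200,000 / 64 ≈ 19,000 refills per second, i.e. roughly 50 µs per refill. With the same per-refill cost but a one-page (4096-byte) buffer, the ceiling would be on the order of 4096 × 19,000 ≈ 75 MB/s.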

Comment 2 yduan 2020-03-04 01:36:19 UTC
Hi Laurent,

Here are the testing results:

Host:
[root@localhost home]# ./getrandom | dd of=/dev/null bs=1024 count=1024000
dd: warning: partial read (384 bytes); suggest iflag=fullblock
5+1023995 records in
5+1023995 records out
65547520 bytes (66 MB, 63 MiB) copied, 1.55385 s, 42.2 MB/s
[root@localhost home]# ./getrandom | dd of=/dev/null bs=1024 count=1024000
dd: warning: partial read (448 bytes); suggest iflag=fullblock
11+1023989 records in
11+1023989 records out
67484864 bytes (67 MB, 64 MiB) copied, 1.62145 s, 41.6 MB/s
[root@localhost home]# ./getrandom | dd of=/dev/null bs=1024 count=1024000
dd: warning: partial read (128 bytes); suggest iflag=fullblock
8+1023992 records in
8+1023992 records out
66973376 bytes (67 MB, 64 MiB) copied, 1.63235 s, 41.0 MB/s
[root@localhost home]# ./getrandom | dd of=/dev/null bs=1024 count=1024000
dd: warning: partial read (704 bytes); suggest iflag=fullblock
9+1023991 records in
9+1023991 records out
66165504 bytes (66 MB, 63 MiB) copied, 1.61861 s, 40.9 MB/s
[root@localhost home]# ./getrandom | dd of=/dev/null bs=1024 count=1024000
dd: warning: partial read (512 bytes); suggest iflag=fullblock
7+1023993 records in
7+1023993 records out
66045184 bytes (66 MB, 63 MiB) copied, 1.59835 s, 41.3 MB/s

Guest:
[root@dhcp-8-223 ~]# ./getrandom | dd of=/dev/null bs=1024 count=1024000
dd: warning: partial read (320 bytes); suggest iflag=fullblock
4842+1019158 records in
4842+1019158 records out
80330112 bytes (80 MB, 77 MiB) copied, 2.72621 s, 29.5 MB/s
[root@dhcp-8-223 ~]# ./getrandom | dd of=/dev/null bs=1024 count=1024000
dd: warning: partial read (256 bytes); suggest iflag=fullblock
2871+1021129 records in
2871+1021129 records out
78058240 bytes (78 MB, 74 MiB) copied, 2.55549 s, 30.5 MB/s
[root@dhcp-8-223 ~]# ./getrandom | dd of=/dev/null bs=1024 count=1024000
dd: warning: partial read (576 bytes); suggest iflag=fullblock
2897+1021103 records in
2897+1021103 records out
77280448 bytes (77 MB, 74 MiB) copied, 2.55466 s, 30.3 MB/s
[root@dhcp-8-223 ~]# ./getrandom | dd of=/dev/null bs=1024 count=1024000
dd: warning: partial read (448 bytes); suggest iflag=fullblock
599+1023401 records in
599+1023401 records out
73494208 bytes (73 MB, 70 MiB) copied, 2.37515 s, 30.9 MB/s
[root@dhcp-8-223 ~]# ./getrandom | dd of=/dev/null bs=1024 count=1024000
dd: warning: partial read (128 bytes); suggest iflag=fullblock
1023+1022977 records in
1023+1022977 records out
76673472 bytes (77 MB, 73 MiB) copied, 2.60088 s, 29.5 MB/s

Thanks,
yduan

Comment 6 Laurent Vivier 2020-08-07 08:08:32 UTC
Patch sent upstream:

  hwrng: core - allocate a one page buffer
  https://patchwork.kernel.org/patch/11703533/

Performance in the guest with the patch applied:

  # dd if=/dev/hwrng of=/dev/null bs=1024 count=1024000
  1048576000 bytes (1.0 GB, 1000 MiB) copied, 41.0579 s, 25.5 MB/s
  # dd if=/dev/hwrng of=/dev/null bs=4096 count=256000
  1048576000 bytes (1.0 GB, 1000 MiB) copied, 14.394 s, 72.8 MB/s
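For comparison with the estimate in comment 1: a one-page (4096-byte) buffer needs 64x fewer virtqueue refills for the same amount of data than the old 64-byte buffer, which is consistent with throughput rising from ~1.2 MB/s to the tens of MB/s measured above.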

Comment 8 yduan 2021-01-20 09:02:42 UTC
Hi Laurent,

Is there any update on this BZ?

BR,
yduan

Comment 9 Laurent Vivier 2021-01-20 10:01:49 UTC
Hi Yanbin,

As I have no time to work on this, we are thinking about closing this BZ as WONTFIX.

What do you think about that?

Thanks

Comment 10 yduan 2021-01-21 15:39:50 UTC
Agreed. As it's a corner case, I'll drop this scenario from the Polarion case.

Thank you!

