Bug 1680226 - ppc64 hw acceleration support for luks encryption in QEMU
Summary: ppc64 hw acceleration support for luks encryption in QEMU
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.0
Hardware: ppc64le
OS: Linux
Priority: medium
Severity: high
Target Milestone: rc
Target Release: 8.3
Assignee: Virtualization Maintenance
QA Contact: Zhenyu Zhang
URL:
Whiteboard:
Depends On: 1680231 1762765
Blocks: 1719252
 
Reported: 2019-02-23 07:55 UTC by Yihuang Yu
Modified: 2021-04-09 20:41 UTC
CC: 21 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1680231
Environment:
Last Closed: 2020-05-04 01:49:24 UTC
Type: Feature Request
Target Upstream Version:
Embargoed:


Attachments
iostat log (iostat 1 -x -m -p vda) (9.06 KB, application/gzip), 2019-02-23 07:55 UTC, Yihuang Yu
modified libgcrypt spec file for aes_ppc.patch (19.56 KB, patch), 2019-11-22 21:23 UTC, IBM Bug Proxy
aes ppc64le optimizations patch (95.61 KB, patch), 2019-11-22 21:23 UTC, IBM Bug Proxy
0/2: modified libgcrypt spec file for patches (12.96 KB, patch), 2020-01-31 21:00 UTC, IBM Bug Proxy
2/2: aes xts and bulk modes implementations patch (12.96 KB, patch), 2020-01-31 21:10 UTC, IBM Bug Proxy
1/2: aes ppc64le optimizations patch (12.96 KB, patch), 2020-01-31 21:10 UTC, IBM Bug Proxy
0/2: modified libgcrypt spec file for patches (12.96 KB, patch), 2020-01-31 21:10 UTC, IBM Bug Proxy
0/2: modified libgcrypt spec file for patches (19.11 KB, patch), 2020-02-02 13:35 UTC, Hanns-Joachim Uhl
1/2: aes ppc64le optimizations patch (92.43 KB, patch), 2020-02-02 13:35 UTC, Hanns-Joachim Uhl
2/2: aes xts and bulk modes implementations patch (4.96 KB, patch), 2020-02-02 13:35 UTC, Hanns-Joachim Uhl


Links
IBM Linux Technology Center 181962 (Last Updated: 2019-10-18 12:01:11 UTC)

Description Yihuang Yu 2019-02-23 07:55:42 UTC
Created attachment 1537736 [details]
iostat log (iostat 1 -x -m -p vda)

Description of problem:
Performance is severely degraded when using luks-formatted disks.

Version-Release number of selected component (if applicable):
qemu version: qemu-kvm-3.1.0-15.module+el8+2792+e33e01a0.ppc64le
host kernel version: 4.18.0-70.el8.ppc64le
guest kernel version: 4.18.0-70.el8.ppc64le

How reproducible:
100%

Steps to Reproduce:
1. Create images in different formats
# qemu-img create -f raw data.raw 100G
# qemu-img create -f qcow2 data.qcow2 100G
# qemu-img create -f luks --object secret,id=secret0,data="redhat" -o key-secret=secret0 data.luks 100G

2. Launch a guest with the different images; the luks invocation is as follows
/usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -machine pseries  \
    -nodefaults \
    -device VGA,bus=pci.0,addr=0x2  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20190221-213831-uAIBCGI0,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20190221-213831-uAIBCGI0,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control  \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/serial,server,nowait \
    -device spapr-vty,reg=0x30000000,chardev=serial_id_serial0 \
    -device qemu-xhci,id=usb1,bus=pci.0,addr=0x3 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x4 \
    -blockdev driver=file,filename=rhel80-ppc64le-virtio-scsi.qcow2,cache.direct=off,cache.no-flush=on,node-name=image1 \
    -blockdev driver=qcow2,file=image1,node-name=drive_image1 \
    -device scsi-hd,id=image1,drive=drive_image1 \
    -device virtio-net-pci,mac=9a:24:25:26:27:28,id=idGN5fuf,vectors=4,netdev=idBTV5B1,bus=pci.0,addr=0x5  \
    -netdev tap,id=idBTV5B1,vhost=on \
    -object secret,id=secret0,data="redhat" \
    -blockdev driver=file,filename=data.luks,cache.direct=off,cache.no-flush=on,node-name=image2 \
    -blockdev driver=luks,file=image2,node-name=drive_image2,key-secret=secret0 \
    -device virtio-blk-pci,id=virtio_blk_pci1,drive=drive_image2,bus=pci.0,addr=0x6 \
    -m 51200  \
    -smp 16,maxcpus=16,cores=8,threads=1,sockets=2 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=utc,clock=host  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -monitor stdio

3. Test disk performance via "dd" and "qemu-io", and compare the results.
*raw:
dd if=/dev/zero of=/dev/vda bs=1M count=20480
20480+0 records in
20480+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 7.82131 s, 2.7 GB/s

qemu-io -c 'write 0 1G' --image-opts driver=raw,file.filename=data.raw
wrote 1073741824/1073741824 bytes at offset 0
1 GiB, 1 ops; 0:00:04.92 (207.711 MiB/sec and 0.2028 ops/sec)

*qcow2:
dd if=/dev/zero of=/dev/vda bs=1M count=20480
20480+0 records in
20480+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 7.75988 s, 2.8 GB/s

qemu-io -c 'write 0 1G' --image-opts driver=qcow2,file.filename=data.qcow2
wrote 1073741824/1073741824 bytes at offset 0
1 GiB, 1 ops; 0:00:04.48 (228.175 MiB/sec and 0.2228 ops/sec)

Actual results:
*luks:
dd if=/dev/zero of=/dev/vda bs=1M count=20480
20480+0 records in
20480+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 441.068 s, 48.7 MB/s

qemu-io  --object secret,id=secret0,data="redhat" -c 'write 0 1G' --image-opts driver=luks,file.filename=data.luks,key-secret=secret0
wrote 1073741824/1073741824 bytes at offset 0
1 GiB, 1 ops; 0:00:27.54 (37.172 MiB/sec and 0.0363 ops/sec)

Expected results:
The luks format disk does not show a large performance degradation.

Additional info:
This doesn't seem to reproduce on x86; I don't see significant performance differences there.

x86_64:
raw:
dd if=/dev/zero of=/dev/vdb bs=1M count=20480
20480+0 records in
20480+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 114.943 s, 187 MB/s

qemu-io -c 'write 0 1G' --image-opts driver=raw,file.filename=data.raw
wrote 1073741824/1073741824 bytes at offset 0
1 GiB, 1 ops; 0:00:09.62 (106.356 MiB/sec and 0.1039 ops/sec)

luks:
dd if=/dev/zero of=/dev/vdb bs=1M count=20480
20480+0 records in
20480+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 126.769 s, 169 MB/s

qemu-io  --object secret,id=secret0,data="redhat" -c 'write 0 1G' --image-opts driver=luks,file.filename=data.luks,key-secret=secret0
wrote 1073741824/1073741824 bytes at offset 0
1 GiB, 1 ops; 0:00:13.32 (76.833 MiB/sec and 0.0750 ops/sec)
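The gap between the raw and luks dd figures above can be quantified with a quick awk calculation (numbers taken directly from the logs above):

```shell
# Ratio of raw to luks dd throughput, from the figures reported above.
# ppc64le: raw 2.7 GB/s vs luks 48.7 MB/s; x86_64: raw 187 MB/s vs luks 169 MB/s.
awk 'BEGIN {
    printf "ppc64le raw/luks: %.0fx\n", (2.7 * 1000) / 48.7
    printf "x86_64  raw/luks: %.1fx\n", 187 / 169
}'
# prints:
# ppc64le raw/luks: 55x
# x86_64  raw/luks: 1.1x
```

So luks costs roughly a factor of 55 on ppc64le but only about 10% on x86_64, which points at the missing hardware-accelerated AES path rather than general I/O overhead.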

Comment 1 Ademar Reis 2019-02-25 16:59:42 UTC
See also bug 1666336

Comment 4 Ademar Reis 2019-04-22 14:17:00 UTC
Looks like there's some confusion about what to track on this front, so let me clarify:

- There's a patch series that improves the performance of the AES/XTS (the default) on x86_64. The patches were merged upstream in QEMU-3.1 and are being backported to RHEL-7.7 (bug 1666336) and RHEL-8.0.1 (bug 1680231). RHEL-AV-8.0 already includes them (QEMU-3.1).

- Looks like there is no hardware accelerated impl for ppc64 of AES, so it will be significantly slower than x86_64. Given this BZ was originally open on power, I'm making this BZ track this missing feature. It's assigned to the ppc64 team.

- Finally, there's still a gap in performance when compared to the in-kernel implementation and we still have room for improvement in QEMU on x86. I'll open a new BZ to track it.

Comment 5 Ademar Reis 2019-04-22 14:25:49 UTC
(In reply to Ademar Reis from comment #4)
> - Finally, there's still a gap in performance when compared to the in-kernel
> implementation and we still have room for improvement in QEMU on x86. I'll
> open a new BZ to track it.

https://bugzilla.redhat.com/show_bug.cgi?id=1701948

Comment 6 Gu Nini 2019-04-23 02:04:02 UTC
(In reply to Ademar Reis from comment #4)
> Looks like there's some confusion about what to track on this front, so let
> me clarify:
> 
> - There's a patch series that improves the performance of the AES/XTS (the
> default) on x86_64. The patches were merged upstream in QEMU-3.1 and are
> being backported to RHEL-7.7 (bug 1666336) and RHEL-8.0.1 (bug 1680231).
> RHEL-AV-8.0 already includes them (QEMU-3.1).
> 
> - Looks like there is no hardware accelerated impl for ppc64 of AES, so it
> will be significantly slower than x86_64. Given this BZ was originally open
> on power, I'm making this BZ track this missing feature. It's assigned to
> the ppc64 team.

Ademar, compared with bz1680231, should there be a slow-train clone of this ppc64le-only bug?

> 
> - Finally, there's still a gap in performance when compared to the in-kernel
> implementation and we still have room for improvement in QEMU on x86. I'll
> open a new BZ to track it.

Comment 7 David Gibson 2019-04-23 03:36:11 UTC
Hmm.  This is well outside my area of expertise to implement from scratch for POWER.  I'm guessing it could be optimized either using one of the recent hardware accelerators, or at least using VMX/VSX instructions, but that's about the extent of it.

Without a request from IBM or customers, I don't think we can consider this high priority, dropping to medium.

Comment 8 Daniel Berrangé 2019-04-23 09:05:40 UTC
(In reply to Ademar Reis from comment #4)

> - Looks like there is no hardware accelerated impl for ppc64 of AES, so it
> will be significantly slower than x86_64. Given this BZ was originally open
> on power, I'm making this BZ track this missing feature. It's assigned to
> the ppc64 team.

The AES impl is not in QEMU itself. Don't be confused by crypto/aes.c, which we no longer compile.

Instead we've delegated to libgcrypt for the AES impl. In future we may well use GNUTLS instead. So any gap will need to be dealt with in one of those, more likely GNUTLS. In fact it's possible GNUTLS already has an optimized ppc impl; I've only looked at gcrypt.
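Since the AES work lives in the crypto library rather than in QEMU itself, one way to see which backend a given qemu-kvm binary actually pulls in is to filter its ldd output. A minimal sketch: the sample ldd lines below are hypothetical, and the /usr/libexec/qemu-kvm path is taken from the reproduction steps above.

```shell
# Illustrative only: the sample ldd output below is hypothetical.
# On a real host you would pipe the output of: ldd /usr/libexec/qemu-kvm
ldd_out='libgcrypt.so.20 => /lib64/libgcrypt.so.20
libgnutls.so.30 => /lib64/libgnutls.so.30'
# Keep only the crypto backend names, deduplicated.
printf '%s\n' "$ldd_out" | grep -oE 'libgcrypt|libgnutls|libnettle' | sort -u
```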

Comment 9 Ademar Reis 2019-04-23 15:14:03 UTC
(In reply to Gu Nini from comment #6)
> (In reply to Ademar Reis from comment #4)
> > Looks like there's some confusion about what to track on this front, so let
> > me clarify:
> > 
> > - There's a patch series that improves the performance of the AES/XTS (the
> > default) on x86_64. The patches were merged upstream in QEMU-3.1 and are
> > being backported to RHEL-7.7 (bug 1666336) and RHEL-8.0.1 (bug 1680231).
> > RHEL-AV-8.0 already includes them (QEMU-3.1).
> > 
> > - Looks like there is no hardware accelerated impl for ppc64 of AES, so it
> > will be significantly slower than x86_64. Given this BZ was originally open
> > on power, I'm making this BZ track this missing feature. It's assigned to
> > the ppc64 team.
> 
> Ademar, compared with bz1680231, should there be a slow train clone of the
> ppc64le only bug?

Right now our priority is to get new features and complex improvements in upstream and the fast train (and cascade them into the slow train over time). If IBM identifies this as a priority for power on the slow train, they'll make a request and we'll prioritize. So for now I think this feature request should stay as fast-train only, but it's up to the power team to figure that out.

Comment 11 IBM Bug Proxy 2019-11-22 21:23:53 UTC
------- Comment From johnjmar.com 2019-11-22 16:22 EDT-------
I've attached the proposed patch and the new spec file that should accompany it.
The patch was created using libgcrypt-1.8.3-2.el8.src.rpm as a starting point, and it seems to pass all tests after running rpmbuild -bb

Comment 12 IBM Bug Proxy 2019-11-22 21:23:56 UTC
Created attachment 1638933 [details]
modified libgcrypt spec file for aes_ppc.patch

Comment 13 IBM Bug Proxy 2019-11-22 21:23:58 UTC
Created attachment 1638934 [details]
aes ppc64le optimizations patch


------- Comment on attachment From johnjmar.com 2019-11-22 16:19 EDT-------


Backport from libgcrypt 1.9.

Comment 14 Daniel Berrangé 2019-11-25 10:10:32 UTC
I've updated bug 1762765 against gcrypt to make it clear that we want a backport of accelerated AES-XTS support for all architectures where it is available: x86_64, ppc64le, and aarch64

Comment 15 IBM Bug Proxy 2019-12-04 23:20:21 UTC
------- Comment From johnjmar.com 2019-12-04 18:11 EDT-------
Hello,

Just to clarify, the patch I've uploaded is part one of two; the second patch is needed for the AES bulk + XTS mode implementation.

Should I continue this route or should I assume that RH will move to libgcrypt 1.9.0, which contains optimizations for all libraries?

Comment 16 Daniel Berrangé 2019-12-05 10:47:07 UTC
The other bug 1762765 is tracking the gcrypt changes, and it is up to the gcrypt maintainer whether to backport the perf improvements or rebase to a newer gcrypt. My guess is that they'll probably backport.

Comment 17 IBM Bug Proxy 2020-01-13 07:01:04 UTC
------- Comment From iranna.ankad.com 2020-01-13 01:56 EDT-------
Will this feature be included in RHEL 8.2, now or later? If yes, our KVM FVT will plan to include this in the verification list.

Thanks!

Comment 18 David Gibson 2020-01-21 02:22:26 UTC
No, this will not be in 8.2, it's not nearly close enough to ready.

The libgcrypt bug it depends on is targeted at 8.3, so let's match that.

Comment 20 IBM Bug Proxy 2020-01-21 09:30:57 UTC
------- Comment From iranna.ankad.com 2020-01-21 04:29 EDT-------
(In reply to comment #21)
> No, this will not be in 8.2, it's not nearly close enough to ready.
> The libgcrypt bug it depends on is targeted at 8.3, so let's match that.

Sure, Thanks David!

Comment 22 IBM Bug Proxy 2020-01-31 21:00:33 UTC
Created attachment 1656793 [details]
0/2: modified libgcrypt spec file for patches


------- Comment on attachment From johnjmar.com 2020-01-31 15:58 EDT-------


modified libgcrypt spec file for aes_ppc.patch and aes_xts_bulk.patch

Comment 23 IBM Bug Proxy 2020-01-31 21:10:33 UTC
Created attachment 1656805 [details]
2/2: aes xts and bulk modes implementations patch


------- Comment on attachment From johnjmar.com 2020-01-31 16:01 EDT-------


Backport from libgcrypt 1.9.

Comment 24 IBM Bug Proxy 2020-01-31 21:10:35 UTC
Created attachment 1656806 [details]
1/2: aes ppc64le optimizations patch


------- Comment on attachment From johnjmar.com 2020-01-31 15:59 EDT-------


Backport from libgcrypt 1.9.

Comment 25 IBM Bug Proxy 2020-01-31 21:10:37 UTC
Created attachment 1656807 [details]
0/2: modified libgcrypt spec file for patches


------- Comment on attachment From johnjmar.com 2020-01-31 15:58 EDT-------


modified libgcrypt spec file for aes_ppc.patch and aes_xts_bulk.patch

Comment 26 Hanns-Joachim Uhl 2020-02-02 13:35:08 UTC
Created attachment 1657132 [details]
0/2: modified libgcrypt spec file for patches

Comment 27 Hanns-Joachim Uhl 2020-02-02 13:35:35 UTC
Created attachment 1657133 [details]
1/2: aes ppc64le optimizations patch

Comment 28 Hanns-Joachim Uhl 2020-02-02 13:35:56 UTC
Created attachment 1657134 [details]
2/2: aes xts and bulk modes implementations patch

Comment 29 Hanns-Joachim Uhl 2020-02-02 13:48:02 UTC
hmm ... from my reading the attached patches are for _libgcrypt_ and not for qemu-kvm ...
... should this Red Hat bugzilla be moved to RHEL8 libgcrypt ...? Please advise ...

Comment 30 Tomas Mraz 2020-02-02 16:40:50 UTC
(In reply to Hanns-Joachim Uhl from comment #29)
> hmm ... from my reading the attached patches are for _libgcrypt_ and not for
> qemu-kvm ...
> ... should this Red Hat bugzilla be moved to RHEL8 libgcrypt ...? Please
> advise ...

There need to be changes on both qemu and libgcrypt. The libgcrypt change is tracked at bug 1762765. No need to reattach the patches there.

Comment 31 Ademar Reis 2020-02-05 22:54:29 UTC
QEMU has been recently split into sub-components and as a one-time operation to avoid breakage of tools, we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks

Comment 32 David Gibson 2020-04-28 05:12:20 UTC
Tomas,

Now that bug 1762765 has a fix, what needs to be done here?

Comment 33 Tomas Mraz 2020-04-28 09:37:55 UTC
This is actually a question for Daniel, if there are any changes in qemu needed to take advantage of the changes in libgcrypt.

Comment 34 Daniel Berrangé 2020-04-28 09:39:26 UTC
It should be transparent from QEMU's POV, no changes required, since this is all exposed via the regular gcrypt APIs that QEMU is already using.

Comment 35 David Gibson 2020-04-29 03:03:47 UTC
Thanks Daniel,

Yihuang, can you retest with the latest gcrypt package (including the fix from bug 1762765)?

Comment 37 Yihuang Yu 2020-04-30 01:56:13 UTC
With the fixed version, the performance of the luks image improved a lot.

dd: from 110 MB/s --> 790 MB/s
qemu-io: from 64.791 MiB/sec --> 142.377 MiB/sec


Full test results:
env:
# rpm -qa | egrep "qemu-kvm-4|qemu-img|kernel-4"
kernel-4.18.0-193.13.el8.ppc64le
qemu-img-4.2.0-19.module+el8.3.0+6371+f67a7ce3.ppc64le
qemu-kvm-4.2.0-19.module+el8.3.0+6371+f67a7ce3.ppc64le


* libgcrypt-1.8.3-4.el8.ppc64le
raw:
# dd if=/dev/zero of=/dev/vda bs=1M count=20480
20480+0 records in
20480+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 3.06246 s, 7.0 GB/s

# qemu-io -c 'write 0 1G' --image-opts driver=raw,file.filename=data.raw
wrote 1073741824/1073741824 bytes at offset 0
1 GiB, 1 ops; 0:00:05.94 (172.331 MiB/sec and 0.1683 ops/sec)

qcow2:
# dd if=/dev/zero of=/dev/vda bs=1M count=20480
20480+0 records in
20480+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 3.11996 s, 6.9 GB/s

# qemu-io -c 'write 0 1G' --image-opts driver=qcow2,file.filename=data.qcow2
wrote 1073741824/1073741824 bytes at offset 0
1 GiB, 1 ops; 0:00:05.78 (177.146 MiB/sec and 0.1730 ops/sec)

luks:
# dd if=/dev/zero of=/dev/vda bs=1M count=20480
20480+0 records in
20480+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 195.529 s, 110 MB/s

# qemu-io  --object secret,id=secret0,data="redhat" -c 'write 0 1G' --image-opts driver=luks,file.filename=data.luks,key-secret=secret0
wrote 1073741824/1073741824 bytes at offset 0
1 GiB, 1 ops; 0:00:15.80 (64.791 MiB/sec and 0.0633 ops/sec)

* libgcrypt-1.8.5-3.el8.ppc64le
raw:
# dd if=/dev/zero of=/dev/vda bs=1M count=20480
20480+0 records in
20480+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 3.38454 s, 6.3 GB/s

# qemu-io -c 'write 0 1G' --image-opts driver=raw,file.filename=data.raw
wrote 1073741824/1073741824 bytes at offset 0
1 GiB, 1 ops; 0:00:05.77 (177.449 MiB/sec and 0.1733 ops/sec)

qcow2:
# dd if=/dev/zero of=/dev/vda bs=1M count=20480
20480+0 records in
20480+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 3.39842 s, 6.3 GB/s

# qemu-io -c 'write 0 1G' --image-opts driver=qcow2,file.filename=data.qcow2
wrote 1073741824/1073741824 bytes at offset 0
1 GiB, 1 ops; 0:00:05.86 (174.870 MiB/sec and 0.1708 ops/sec)

luks:
# dd if=/dev/zero of=/dev/vda bs=1M count=20480
20480+0 records in
20480+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 27.1743 s, 790 MB/s

# qemu-io  --object secret,id=secret0,data="redhat" -c 'write 0 1G' --image-opts driver=luks,file.filename=data.luks,key-secret=secret0
wrote 1073741824/1073741824 bytes at offset 0
1 GiB, 1 ops; 0:00:07.19 (142.377 MiB/sec and 0.1390 ops/sec)
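The improvement between the two libgcrypt builds above works out to roughly a 7x speedup for dd and about 2.2x for qemu-io (computed from the figures reported in this comment):

```shell
# Speedup factors computed from the results above
# (dd: 110 MB/s -> 790 MB/s, qemu-io: 64.791 -> 142.377 MiB/sec).
awk 'BEGIN {
    printf "dd speedup:      %.1fx\n", 790 / 110
    printf "qemu-io speedup: %.1fx\n", 142.377 / 64.791
}'
# prints:
# dd speedup:      7.2x
# qemu-io speedup: 2.2x
```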

Comment 38 David Gibson 2020-05-04 01:49:24 UTC
Thanks Yihuang,

It looks like with the libgcrypt package updated we don't need anything else, so we can close this bug.

Comment 39 IBM Bug Proxy 2020-07-03 11:32:02 UTC
------- Comment From kumuda.govind.com 2020-07-03 07:25 EDT-------
Compared to the RHEL 8.2 version libgcrypt-1.8.3-4.el8.ppc64le, the performance of a luks-formatted disk is better with the RHEL 8.3 libgcrypt-1.8.5-4.el8.ppc64le.

On RHEL8.2
[root@ltc-boston120 ~]# uname -a
Linux ltc-boston120.aus.stglabs.ibm.com 4.18.0-193.el8.ppc64le #1 SMP Fri Mar 27 14:40:12 UTC 2020 ppc64le ppc64le ppc64le GNU/Linux
[root@ltc-boston120 kumuda]# rpm -qa|grep libgcrypt
libgcrypt-1.8.3-4.el8.ppc64le
[root@ltc-boston120 kumuda]# qemu-img create -f luks --object secret,id=secret0,data="redhat" -o key-secret=secret0 data.luks 100G
Formatting 'data.luks', fmt=luks size=107374182400 key-secret=secret0
[root@ltc-boston120 kumuda]# qemu-io  --object secret,id=secret0,data="redhat" -c 'write 0 1G' --image-opts driver=luks,file.filename=data.luks,key-secret=secret0
1 GiB, 1 ops; 0:00:45.14 (22.685 MiB/sec and 0.0222 ops/sec)
[root@localhost ~]# dd if=/dev/zero of=/dev/vdb bs=1M count=20480
21474836480 bytes (21 GB, 20 GiB) copied, 763.378 s, 28.1 MB/s

On RHEL8.3
[root@tempbmc1-p1 ~]# uname -a
Linux tempbmc1-p1.aus.stglabs.ibm.com 4.18.0-214.el8.ppc64le #1 SMP Fri Jun 12 08:59:58 UTC 2020 ppc64le ppc64le ppc64le GNU/Linux
[root@tempbmc1-p1 kumuda]# rpm -qa|grep libgcrypt
libgcrypt-1.8.5-4.el8.ppc64le
[root@tempbmc1-p1 kumuda]# qemu-io  --object secret,id=secret0,data="redhat" -c 'write 0 1G' --image-opts driver=luks,file.filename=data.luks,key-secret=secret0
1 GiB, 1 ops; 0:00:11.15 (91.844 MiB/sec and 0.0897 ops/sec)
[root@localhost ~]# dd if=/dev/zero of=/dev/vdb bs=1M count=20480
21474836480 bytes (21 GB, 20 GiB) copied, 21.355 s, 1.0 GB/s

Comment 40 IBM Bug Proxy 2021-04-09 20:21:15 UTC
------- Comment From gcwilson.com 2021-04-09 16:12 EDT-------
ChaCha20: 557702f0d53a7ad1cf2ce0333c9df799a8abad59
CRC: 0486b85bd1fb65013e77f858cae9ea4530f868df
Poly1305: 0564757b934d24c7fef10df8594099985fbbc0ac
SHA-256: e19dc973bc8e2a0ce92dd87515df3ee338265a8d
SHA-512: 93632f1adf57f142e5d9e9653c405f2ca8c601c0

Compile arch specific GCM implementations only on target arch: 43302b960f546fd60ed7fefb2b0404ee69491e93
configure.ac: fix digest implementations going to cipher list: 8892510bb8f45438144a7449440fcb32ae4c5f7b
cipher-gcm-ppc: tweak for better performance: 760ef8baee06db5ce4da55eb5648e605aa511d2d
VPMSUMD acceleration for GCM mode on PPC: 440332532a1c107e2baeafda5464e0707f634be1
chacha20-ppc: fix 32-bit counter overflow handling: ed45eac3b721c1313902b977379fbd4886ccca7b
ppc: avoid using vec_vsx_ld/vec_vsx_st for 2x64-bit vectors: 1250a9cd859d99f487ca8d76a98d70d464324bbe
crc-ppc: fix bad register used for vector load/store assembly: b64b029318e7d0b66123015146614118f466a7a9
rinjdael-aes: use zero offset vector load/store when possible: 89776d45c824032409f581e5fd1db6bf149df57f
Add POWER9 little-endian variant of PPC AES implementation: 114bbc45e9717f9ad9641f64d8df8690db8da434
rijndael-ppc: performance improvements: 110077505acacae62cec3d09b32a084b9cee0368
rijndael-ppc: fix bad register used for vector load/store assembly: 0837d7e6be3e604c1f7b86d18c582d8aa7ed858c
Small tweak for PowerPC Chacha20-Poly1305 round loop: 96b91e164160dfbd913aefe258f472d386f5b642
Add PowerPC extra CFLAGS also for chacha20-ppc and crc-ppc: 5516072451d46be8827455afff840eb6d49155fb

------- Comment From gcwilson.com 2021-04-09 16:14 EDT-------
Previous comment was in wrong bug.

