Bug 1612973 - [GSS] TLS/SSL access of GlusterFS mounts is 50% slower than with no TLS/SSL enabled.
Summary: [GSS] TLS/SSL access of GlusterFS mounts is 50% slower than with no TLS/SSL enabled.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: core
Version: rhgs-3.3
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 4
Assignee: Mohit Agrawal
QA Contact: Vivek Das
URL:
Whiteboard:
Depends On:
Blocks: 1649191 1787325
 
Reported: 2018-08-06 16:13 UTC by Ben Turner
Modified: 2022-03-13 15:21 UTC
CC List: 15 users

Fixed In Version: glusterfs-6.0-50
Doc Type: No Doc Update
Doc Text:
Clone Of:
Cloned To: 1787325 (view as bug list)
Environment:
Last Closed: 2021-04-29 07:20:37 UTC
Embargoed:




Links:
Red Hat Product Errata RHBA-2021:1462 (last updated 2021-04-29 07:21:06 UTC)

Description Ben Turner 2018-08-06 16:13:59 UTC
Description of problem:

When we enable TLS/SSL on our data path, we see a 50% performance hit in throughput.

Version-Release number of selected component (if applicable):

All GlusterFS versions

How reproducible:

Every time.

Steps to Reproduce:
1. Run a read/write performance test (e.g. dd) and note the results.
2. Enable TLS/SSL on the volume.
3. Re-run the performance tests and compare throughput (a sketch of these steps follows below).
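
A minimal sketch of the reproducer, assuming a replica volume named testvol mounted at /mnt/testvol, with the TLS certificate, key, and CA already deployed as /etc/ssl/glusterfs.pem, /etc/ssl/glusterfs.key, and /etc/ssl/glusterfs.ca on all nodes (the volume name, mount point, and dd sizes are illustrative assumptions):

# 1. Baseline throughput without TLS/SSL
dd if=/dev/zero of=/mnt/testvol/ddfile bs=1M count=4096 oflag=direct

# 2. Enable TLS/SSL on the I/O path (clients must remount for it to take effect)
gluster volume set testvol client.ssl on
gluster volume set testvol server.ssl on

# 3. Repeat the same dd run and compare the reported throughput
dd if=/dev/zero of=/mnt/testvol/ddfile bs=1M count=4096 oflag=direct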

Actual results:

50% perf hit

Expected results:

Minimal performance hit.

Additional info:

I have tested this on older versions as well; it looks to be an issue on all Gluster versions.

Comment 8 Milan Zink 2018-10-02 09:16:04 UTC
Hi,

The volumes were configured with default options: a simple 2-way replica. Only client.ssl/server.ssl was added when testing I/O (network) encryption. The rest_ prefix means that the bricks were on LVM with a LUKS-encrypted PV.

Nothing special.

We were asked to test the encryption capabilities of RHGS. The goal is to have storage encrypted at rest (LUKS) and also in transit (TLS/SSL network encryption); a sketch of the at-rest layout follows below.
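
For context, a hedged sketch of such an at-rest brick layout with a LUKS-encrypted PV under LVM (the device, VG/LV names, and mount point are assumptions, not from this report):

# LUKS-encrypt the raw device, then build LVM and the brick filesystem on top
cryptsetup luksFormat /dev/sdb
cryptsetup luksOpen /dev/sdb luks_sdb
pvcreate /dev/mapper/luks_sdb
vgcreate rhgs_vg /dev/mapper/luks_sdb
lvcreate -n brick1_lv -l 100%FREE rhgs_vg
mkfs.xfs -i size=512 /dev/rhgs_vg/brick1_lv
mount /dev/rhgs_vg/brick1_lv /rhgs/brick1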

We expect some performance penalty; however, it's not acceptable to lose 40-50%.

We think that RHGS encryption needs improvement; otherwise, it's not really usable (unless you don't care about performance).

Check this blog post (NFSv4 improvements on NetApp storage achieved with AES-NI offloading):
https://whyistheinternetbroken.wordpress.com/2017/07/24/ontap92-krb5p/

Or we may think about completely different encryption model for gluster. 

Cheers
Milan

Comment 10 Milan Zink 2018-10-10 19:01:25 UTC
Hi All,

We've run tests on a six-node cluster with 10GbE NICs, using the Flexible IO Tester (fio) from the Phoronix Test Suite.

We tested sequential/random reads and writes with block sizes from 4k to 128m. The performance drop was usually between 30% and 50% on average; a representative fio invocation is sketched below.

The smallfile utility produced similar results.
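
For illustration, a hedged example of the kind of fio run used for such a comparison (the mount path, transfer size, and job count are assumptions):

# sequential read at 128k blocks; vary --rw (read/write/randread/randwrite)
# and --bs (4k ... 128m), with SSL off and then on, and compare bandwidth
fio --name=seqread --directory=/mnt/testvol --rw=read --bs=128k \
    --size=4g --numjobs=4 --direct=1 --group_reporting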

Comment 22 Milan Zink 2018-11-13 15:00:08 UTC
Hi all,

I hope this illustrates the difference AES-NI can make.

openssl speed -elapsed aes-128-cbc
...
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128 cbc      76800.44k    82479.51k    83961.09k    84551.68k    84680.70k

And with 'CPU help' (the aes flag in /proc/cpuinfo indicates AES-NI support; -evp exercises the hardware-accelerated EVP code path):
grep -m1 -o aes /proc/cpuinfo 
aes

openssl speed -elapsed -evp aes-128-cbc
...
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128-cbc     452043.64k   482091.61k   489904.73k   491719.34k   492388.35k

Comment 23 Tomas Mraz 2018-11-13 15:35:13 UTC
Milan, OpenSSL should already be using the AES-NI instruction set; you definitely won't see such a difference.

Comment 29 Yaniv Kaul 2019-04-29 09:28:31 UTC
Ben, what's the latest (w.r.t. comment 28)?

Comment 44 Yaniv Kaul 2020-03-11 08:22:46 UTC
What's the plan here to address this in 3.5.3, other than moving to AES-128?
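
For reference, the cipher suites used for Gluster TLS can be tuned per volume via the ssl.cipher-list option, which takes an OpenSSL cipher string; a hedged sketch (the volume name and cipher string are illustrative assumptions):

gluster volume set testvol ssl.cipher-list 'AES128:!aNULL:!eNULL'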

Comment 67 errata-xmlrpc 2021-04-29 07:20:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1462

