Description of problem:
When we enable TLS/SSL on our data path we see a ~50% hit in throughput.

Version-Release number of selected component (if applicable):
All glusterfs versions

How reproducible:
Every time.

Steps to Reproduce:
1. Run a read/write perf test (e.g. dd) and note the results.
2. Enable TLS/SSL on the volume.
3. Re-run the perf tests and compare throughput.

Actual results:
~50% performance hit.

Expected results:
Minimal performance hit.

Additional info:
I have tested this on older versions as well; this appears to be an issue on all gluster versions.
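The steps above can be sketched as follows. This is a minimal sketch, not the exact test harness: the volume name "testvol", mount point "/mnt/testvol", and file sizes are illustrative, and SSL certificates are assumed to be already deployed on clients and servers.

```shell
# 1. Baseline: sequential write, then read with the page cache dropped,
#    against a hypothetical mounted gluster volume.
dd if=/dev/zero of=/mnt/testvol/perf.dat bs=1M count=4096 conv=fdatasync
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/testvol/perf.dat of=/dev/null bs=1M

# 2. Enable TLS/SSL on the volume's data path (assumes certificates and
#    keys are already in place per the GlusterFS SSL setup docs).
gluster volume stop testvol
gluster volume set testvol client.ssl on
gluster volume set testvol server.ssl on
gluster volume start testvol

# 3. Remount on the client, then repeat step 1 and compare the MB/s
#    figures reported by dd.
```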
Hi, the volumes were configured with default options - a simple 2-way replica. Only client.ssl/server.ssl was added when using IO/network encryption. "rest_" means that the bricks were on LVM with a LUKS-encrypted PV. Nothing special.

We were asked to test the encryption capabilities of RHGS. The goal is to have storage encrypted at rest (LUKS) and also in transit (TLS/SSL network encryption). We expect some performance penalty; however, losing 40-50% is not acceptable. We think that RHGS encryption needs improvements, otherwise it's not really usable (unless you don't care about performance).

Check this blog (NFSv4 improvements on NetApp storage achieved with AES-NI offloading): https://whyistheinternetbroken.wordpress.com/2017/07/24/ontap92-krb5p/

Or we may need to think about a completely different encryption model for gluster.

Cheers, Milan
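The at-rest layer described above (LUKS-encrypted PV under LVM) can be sketched like this. Device names, volume group names, and mount points are hypothetical; this is only meant to show the layering, not our exact provisioning.

```shell
# LUKS-encrypt the raw device, then build LVM on top of the mapper
# device and use a logical volume as the brick filesystem.
cryptsetup luksFormat /dev/sdb
cryptsetup luksOpen /dev/sdb brick_crypt

pvcreate /dev/mapper/brick_crypt
vgcreate vg_bricks /dev/mapper/brick_crypt
lvcreate -n brick1 -l 100%FREE vg_bricks

mkfs.xfs /dev/vg_bricks/brick1
mount /dev/vg_bricks/brick1 /bricks/brick1
```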
Hi All, we've run tests on a six-node cluster with 10G NICs. We used the flexible IO tester (fio) from the Phoronix Test Suite, testing sequential/random reads/writes with different block sizes from 4k to 128m. The performance drop was usually between 30-50% on average. The smallfile utility showed similar results.
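For reference, standalone fio runs roughly equivalent to what the Phoronix Test Suite drives might look like the following. Job parameters and the mount point are illustrative assumptions, not the exact PTS profile.

```shell
# Sequential write, 128k blocks, on a hypothetical gluster mount.
fio --name=seqwrite --directory=/mnt/testvol \
    --rw=write --bs=128k --size=4g --numjobs=4 \
    --ioengine=libaio --direct=1 --group_reporting

# Random read, 4k blocks.
fio --name=randread --directory=/mnt/testvol \
    --rw=randread --bs=4k --size=4g --numjobs=4 \
    --ioengine=libaio --direct=1 --group_reporting
```

Running each job once with SSL off and once with client.ssl/server.ssl on, then comparing the reported bandwidth, reproduces the 30-50% gap described above.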
Hi all, I hope it should make a difference.

openssl speed -elapsed aes-128-cbc
...
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128 cbc     76800.44k    82479.51k   83961.09k   84551.68k    84680.70k

And with 'cpu help':

grep -m1 -o aes /proc/cpuinfo
aes

openssl speed -elapsed -evp aes-128-cbc
...
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128-cbc    452043.64k   482091.61k  489904.73k  491719.34k   492388.35k
Milan, OpenSSL should already be using the AES-NI instruction set, so you definitely shouldn't see such a difference.
Ben, what's the latest (wrt comment 28)?
What's the plan here to address this in 3.5.3, other than moving to AES-128?
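If the move to AES-128 is done via the volume's cipher list, a minimal sketch would be the following. The option name ssl.cipher-list is from the GlusterFS SSL docs; the exact cipher string and volume name are assumptions for illustration.

```shell
# Restrict TLS on the volume to an AES-128-GCM suite, which is cheaper
# than AES-256 even on AES-NI hardware; "testvol" is hypothetical.
gluster volume set testvol ssl.cipher-list 'ECDHE-RSA-AES128-GCM-SHA256'

# Sanity-check locally what that cipher string expands to:
openssl ciphers -v 'ECDHE-RSA-AES128-GCM-SHA256'
```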
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:1462