Bug 2231786 - ODF v4.13.0-186 shows poor read performance at 128/4096 block sizes with on-wire encryption enabled vs. disabled
Summary: ODF v4.13.0-186 shows poor read performance at 128/4096 block sizes with on-wire encryption enabled vs. disabled
Keywords:
Status: NEW
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 7.2
Assignee: Radoslaw Zarzynski
QA Contact: Pawan
URL:
Whiteboard:
Depends On:
Blocks: 2215628
 
Reported: 2023-08-14 06:12 UTC by Mudit Agarwal
Modified: 2025-05-15 05:11 UTC
CC List: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 2215628
Environment:
Last Closed:
Embargoed:




Links:
Red Hat Issue Tracker RHCEPH-7208 (last updated 2023-10-04 23:49:49 UTC)

Description Mudit Agarwal 2023-08-14 06:12:38 UTC
+++ This bug was initially created as a clone of Bug #2215628 +++

Description of problem (please be as detailed as possible and provide log
snippets):

ODF v4.13.0-186 shows poor read performance at 128/4096 block sizes with on-wire encryption enabled vs. disabled in FIO tests. We see this for both RBD and CephFS storage classes. We are measuring IOPS.

I am using FIO with 50 servers and numjobs=4; this relates to the 128 and 4096K block sizes. The spreadsheet showing this degradation is here:

https://docs.google.com/spreadsheets/d/101e3upvYuOG2lYxIstKjnIR4HvrDYEk6l8LD9Wf1-vQ/edit#gid=0

Dell 740xd systems
12 OSDs spread over 3 workers in a 6-node cluster
NVMe disks are 1.5 TB
Systems have 192 GB of memory
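
For reference, a minimal sketch of how the read tests above could be driven. Only numjobs=4 and the 128/4096K block sizes come from this report; the target file, random-read pattern, size, and runtime below are assumptions for illustration.

#!/usr/bin/env python3
# Hypothetical driver for the FIO read tests described above.
# Only numjobs=4 and the 128K/4096K block sizes come from this report;
# the target file, randread pattern, size, and runtime are assumptions.
import subprocess

TARGET = "/mnt/pvc/testfile"  # assumed mount point of an RBD/CephFS-backed PVC

for bs in ("128k", "4096k"):
    cmd = [
        "fio",
        "--name", f"read-{bs}",
        "--filename", TARGET,
        "--rw", "randread",        # assumption: random reads
        "--bs", bs,
        "--numjobs", "4",          # from the report
        "--ioengine", "libaio",
        "--direct", "1",
        "--size", "10g",           # assumption
        "--runtime", "300",        # assumption
        "--time_based",
        "--group_reporting",
    ]
    subprocess.run(cmd, check=True)

The IOPS reported by fio for each block size can then be compared between the encryption-disabled and encryption-enabled runs.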


Version of all relevant components (if applicable):
OCP v4.13.0-rc6
ODF v4.13.0-186
local storage 4.12.0-202304190215


Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?

no

Is there any workaround available to the best of your knowledge?

no

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

3

Is this issue reproducible?

yes

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.  Configure the ODF StorageCluster with on-wire encryption disabled.
2.  Run FIO tests at 128 and 4096K block sizes as described above.
3.  Capture IOPS and other info in the perf dashboards.
4.  Repeat the same 3 steps with on-wire encryption enabled (a sketch of toggling this setting follows below).
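
For step 4, a sketch of toggling on-wire (in-transit) encryption on the StorageCluster between runs. The CR name and namespace shown are the usual defaults but are assumptions here, and the spec.network.connections.encryption.enabled path should be verified against the installed ODF version.

#!/usr/bin/env python3
# Sketch: toggle on-wire (in-transit) encryption on the ODF StorageCluster.
# The CR name/namespace are assumed defaults; verify the spec path against
# the installed ODF version before relying on this.
import json
import subprocess

def set_in_transit_encryption(enabled: bool) -> None:
    patch = {
        "spec": {
            "network": {
                "connections": {"encryption": {"enabled": enabled}}
            }
        }
    }
    subprocess.run(
        [
            "oc", "-n", "openshift-storage",
            "patch", "storagecluster", "ocs-storagecluster",
            "--type", "merge", "-p", json.dumps(patch),
        ],
        check=True,
    )

# Baseline run (steps 1-3) with encryption disabled, then repeat with it enabled.
set_in_transit_encryption(False)
# ... run the FIO tests and capture IOPS ...
set_in_transit_encryption(True)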

Actual results:


Expected results:


Additional info:

--- Additional comment from RHEL Program Management on 2023-06-16 19:32:00 UTC ---

This bug previously had no release flag set; the release flag 'odf-4.14.0' has now been set to '?', so the bug is being proposed to be fixed in the ODF 4.14.0 release. Note that the 3 Acks (pm_ack, devel_ack, qa_ack), if any were previously set while the release flag was missing, have now been reset, since the Acks are to be set against a release flag.

--- Additional comment from Venky Shankar on 2023-06-17 13:00:42 UTC ---

(In reply to mcurrier from comment #0)
> Description of problem (please be as detailed as possible and provide log
> snippets):
> 
> ODF v4.13.0-186 shows poor read performance at 128/4096 block sizes with
> on-wire encryption enabled vs. disabled in FIO tests.  We see this for both
> RBD and CephFS storage classes.  We are measuring IOPS.

This looks like a candidate for the core team (messenger) to look into first, especially since the behaviour experienced is the same for CephFS and RBD.
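
For reference, a quick way to confirm which msgr2 mode is actually configured on the cluster. The option names below are standard Ceph messenger settings, but the values on this particular cluster are not captured in this report, so this is only a verification aid.

#!/usr/bin/env python3
# Sketch: read the msgr2 messenger-mode options to confirm whether the
# "secure" (on-wire encryption) mode is configured. Option names are standard
# Ceph settings; the values on this cluster are not in this report.
import subprocess

for opt in ("ms_cluster_mode", "ms_service_mode", "ms_client_mode"):
    value = subprocess.run(
        ["ceph", "config", "get", "mon", opt],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    # Typically "secure" when on-wire encryption is enabled; "crc secure" or
    # "crc" otherwise.
    print(f"{opt}: {value}")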

--- Additional comment from Red Hat Bugzilla on 2023-08-03 08:30:42 UTC ---

Account disabled by LDAP Audit

