Bug 2231786

Summary: ODF v4.13.0-186 shows poor read performance at 128K/4096K block sizes with on-wire encryption enabled vs. disabled
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: RADOS
Version: 6.1
Status: NEW
Severity: medium
Priority: unspecified
Reporter: Mudit Agarwal <muagarwa>
Assignee: Radoslaw Zarzynski <rzarzyns>
QA Contact: Pawan <pdhiran>
CC: bhubbard, bniver, ceph-eng-bugs, cephqe-warriors, ebenahar, mcurrier, muagarwa, nojha, odf-bz-bot, rzarzyns, sostapov, vumrao
Target Milestone: ---
Target Release: 7.1
Hardware: Unspecified
OS: Unspecified
Clone Of: 2215628
Bug Blocks: 2215628

Description Mudit Agarwal 2023-08-14 06:12:38 UTC
+++ This bug was initially created as a clone of Bug #2215628 +++

Description of problem (please be as detailed as possible and provide log
snippets):

ODF v4.13.0-186 shows poor read performance at 128K/4096K block sizes with on-wire encryption enabled vs. disabled in FIO tests. We see this for both RBD and CephFS storage classes. We are measuring IOPS.

I am using FIO with 50 servers and numjobs 4; this relates to the 128K and 4096K block sizes. The spreadsheet showing this degradation is here:

https://docs.google.com/spreadsheets/d/101e3upvYuOG2lYxIstKjnIR4HvrDYEk6l8LD9Wf1-vQ/edit#gid=0

Dell 740xd systems
12 OSDs spread over 3 workers in a 6-node cluster
NVMe disks are 1.5 TB
Systems have 192 GB of memory
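
For reference, a fio invocation consistent with the parameters above (the actual job files are not attached, so the ioengine, runtime, data size, and mount path are assumptions):

  # one PVC per client, mounted at /mnt/odf-pvc (hypothetical path)
  fio --name=read-128k --directory=/mnt/odf-pvc \
      --rw=read --bs=128k --numjobs=4 --size=10g \
      --ioengine=libaio --direct=1 --time_based --runtime=300 \
      --group_reporting
  # second pass with --bs=4096k, then re-run both with on-wire encryption enabled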


Version of all relevant components (if applicable):
OCP v4.13.0-rc6
ODF v4.13.0-186
local storage 4.12.0-202304190215


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?

no

Is there any workaround available to the best of your knowledge?

no

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

3

Is this issue reproducible?

yes

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.  Configure the ODF StorageCluster with on-wire encryption disabled (see the sketch after these steps).
2.  Run FIO tests at 128K and 4096K block sizes as described above.
3.  Capture IOPS and other info in the perf dashboards.
4.  Repeat the same 3 steps with on-wire encryption enabled.
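
A minimal sketch of toggling on-wire (in-transit) encryption on the StorageCluster; the CR names and the field path spec.network.connections.encryption.enabled follow the usual ODF/Rook layout and should be confirmed against the release in use:

  # enable on-wire encryption (set "false" for the baseline run)
  oc -n openshift-storage patch storagecluster ocs-storagecluster --type merge \
    -p '{"spec":{"network":{"connections":{"encryption":{"enabled":true}}}}}'

  # confirm it propagated to the CephCluster CR
  oc -n openshift-storage get cephcluster ocs-storagecluster-cephcluster \
    -o jsonpath='{.spec.network.connections.encryption.enabled}'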

Actual results:


Expected results:


Additional info:

--- Additional comment from RHEL Program Management on 2023-06-16 19:32:00 UTC ---

This bug previously had no release flag set; the release flag 'odf-4.14.0' has now been set to '?', so the bug is proposed to be fixed in the ODF 4.14.0 release. Note that the 3 acks (pm_ack, devel_ack, qa_ack), if any were set while the release flag was missing, have been reset, since acks are set against a release flag.

--- Additional comment from Venky Shankar on 2023-06-17 13:00:42 UTC ---

(In reply to mcurrier from comment #0)
> Description of problem (please be detailed as possible and provide log
> snippests):
> 
> ODF v4.13.0-186 shows poor read performance at 128/4096 block sizes with
> on-wire encryption enabled vs. disabled in FIO tests.  We see this for both
> RBD and Cephfs storage classes.  We are measuring IOPS

This looks like a candidate for the core team (messenger) to look into first, especially since the behaviour is the same for CephFS and RBD.
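
For triage at the messenger layer, the effective msgr2 modes can be checked from the toolbox pod; with encryption enabled these are expected to report 'secure', while with it disabled the default CRC-preferred mode applies (the exact values set by ODF when the toggle changes are an assumption to verify):

  ceph config dump | grep -E 'ms_(cluster|service|client)_mode'
  ceph config get osd ms_cluster_mode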

--- Additional comment from Red Hat Bugzilla on 2023-08-03 08:30:42 UTC ---

Account disabled by LDAP Audit