Bug 2215628

Summary: [Tracker for Ceph BZ #2231786] ODF v4.13.0-186 shows poor read performance at 128/4096 block sizes with on-wire encryption enabled vs. disabled
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: mcurrier
Component: ceph
Sub Component: RADOS
Assignee: Brad Hubbard <bhubbard>
QA Contact: Elad <ebenahar>
Status: NEW
Severity: low
Priority: low
CC: bhubbard, bniver, ddomingu, etamir, muagarwa, nojha, nravinas, odf-bz-bot, rzarzyns, shberry, sheggodu, sostapov
Version: 4.13
Flags: sheggodu: needinfo? (rzarzyns), muagarwa: needinfo? (shberry)
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Type: Bug
Bug Depends On: 2231786

Description mcurrier 2023-06-16 19:31:53 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

ODF v4.13.0-186 shows poor read performance at 128K and 4096K block sizes with on-wire encryption enabled vs. disabled in FIO tests.  We see this for both RBD and CephFS storage classes.  We are measuring IOPS.

I am using FIO with 50 servers and numjobs=4; the regression appears at the 128K and 4096K block sizes.  The spreadsheet showing this degradation is here:

https://docs.google.com/spreadsheets/d/101e3upvYuOG2lYxIstKjnIR4HvrDYEk6l8LD9Wf1-vQ/edit#gid=0
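
For illustration, a minimal sketch of the per-client job shape implied above (randread is an assumption; the actual job file, read/write mix, target path, and runtime were not attached):

    # Hypothetical reconstruction: each of the 50 client servers runs fio
    # with numjobs=4, once at bs=128k and once at bs=4096k, reporting IOPS
    fio --name=encr-read-test --rw=randread --bs=128k --numjobs=4 \
        --ioengine=libaio --direct=1 --time_based --runtime=300 \
        --filename=/mnt/pvc/testfile --size=10G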

Dell 740xd systems
12 OSDs spread over 3 workers in a 6-node cluster
NVMe disks are 1.5 TB
Systems have 192 GB memory
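
For reference, a sketch of confirming that layout from the cluster (assuming the standard openshift-storage namespace and the rook-ceph-tools toolbox):

    # List OSD pods with their nodes to confirm the 12 OSDs span 3 workers
    oc -n openshift-storage get pods -l app=rook-ceph-osd -o wide
    # Show the CRUSH tree (host/OSD layout) from the toolbox
    oc -n openshift-storage rsh deploy/rook-ceph-tools ceph osd tree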


Version of all relevant components (if applicable):
OCP v4.13.0-rc6
ODF v4.13.0-186
local storage 4.12.0-202304190215


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?

no

Is there any workaround available to the best of your knowledge?

no

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

3

Is this issue reproducible?

yes

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.  Configure the ODF storagecluster w/ on-wire encryption disabled
2.  Run FIO tests at 128K and 4096K block sizes as described above
3.  Capture IOPS and other info in the perf dashboards
4.  Re-run the same three steps w/ on-wire encryption enabled (see the
    sketch after this list for toggling encryption)
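
For steps 1 and 4, on-wire (in-transit) encryption is toggled through the StorageCluster CR. A minimal sketch, assuming the default resource name ocs-storagecluster in the openshift-storage namespace:

    # Enable in-transit (msgr2 on-wire) encryption; set to false to disable
    oc -n openshift-storage patch storagecluster ocs-storagecluster \
        --type merge \
        -p '{"spec":{"network":{"connections":{"encryption":{"enabled":true}}}}}'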

Actual results:


Expected results:


Additional info:

Comment 2 Venky Shankar 2023-06-17 13:00:42 UTC
(In reply to mcurrier from comment #0)
> Description of problem (please be as detailed as possible and provide log
> snippets):
> 
> ODF v4.13.0-186 shows poor read performance at 128K and 4096K block sizes
> with on-wire encryption enabled vs. disabled in FIO tests.  We see this for
> both RBD and CephFS storage classes.  We are measuring IOPS.

This looks like a candidate for the core (messenger) team to look into first, especially since the behaviour is the same for CephFS and RBD.
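
For triage, it may help to confirm which msgr2 mode is actually negotiated when encryption is on; a sketch using the standard messenger options (run from the toolbox, assuming the settings were applied cluster-wide):

    # "secure" enables on-wire encryption; the compiled default is "crc secure"
    ceph config get osd ms_cluster_mode
    ceph config get osd ms_service_mode
    ceph config get osd ms_client_mode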

Comment 10 mcurrier 2023-11-27 16:35:08 UTC
I am no longer on the Ceph storage team.  Matt