Bug 2215628 - [Tracker for Ceph BZ #2231786] ODF v4.13.0-186 shows poor read performance at 128/4096 block sizes with on-wire encryption enabled vs. disabled [NEEDINFO]
Keywords:
Status: NEW
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ceph
Version: 4.13
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Brad Hubbard
QA Contact: Elad
URL:
Whiteboard:
Depends On: 2231786
Blocks:
 
Reported: 2023-06-16 19:31 UTC by mcurrier
Modified: 2025-04-12 08:28 UTC
CC: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2231786
Environment:
Last Closed:
Embargoed:
sheggodu: needinfo? (rzarzyns)
muagarwa: needinfo? (shberry)



Description mcurrier 2023-06-16 19:31:53 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

ODF v4.13.0-186 shows poor read performance at 128K/4096K block sizes with on-wire encryption enabled vs. disabled in FIO tests.  We see this for both RBD and CephFS storage classes.  We are measuring IOPS.

I am using FIO with 50 servers, numjobs 4, at 128K and 4096K block sizes.  The spreadsheet showing this degradation is here:

https://docs.google.com/spreadsheets/d/101e3upvYuOG2lYxIstKjnIR4HvrDYEk6l8LD9Wf1-vQ/edit#gid=0
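
For context, a minimal sketch of the kind of fio read job described above. The block sizes and numjobs=4 come from this report; everything else (job name, runtime, file size, PVC mount path) is an assumed placeholder, not the exact job file used in these runs:

# Hedged sketch: run the read test at both block sizes via the fio CLI.
import subprocess

for bs in ("128k", "4096k"):
    subprocess.run([
        "fio",
        "--name=odf-read-" + bs,          # hypothetical job name
        "--rw=read",
        "--bs=" + bs,
        "--numjobs=4",                    # matches the numjobs noted above
        "--ioengine=libaio",
        "--direct=1",
        "--time_based", "--runtime=60",   # assumed duration
        "--size=1g",                      # assumed per-job file size
        "--filename=/mnt/pvc/testfile",   # assumed PVC mount path
        "--group_reporting",
    ], check=True)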

Dell 740xd systems
12 OSDs spread over 3 workers in a 6-node cluster
NVMe disks are 1.5 TB
Systems have 192 GB of memory


Version of all relevant components (if applicable):
OCP v4.13.0-rc6
ODF v4.13.0-186
local storage 4.12.0-202304190215


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?

no

Is there any workaround available to the best of your knowledge?

no

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

3

Is this issue reproducible?

yes

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.  Configure the ODF StorageCluster with on-wire encryption disabled (a toggle sketch follows these steps)
2.  Run FIO tests at 128K and 4096K block sizes as described above
3.  Capture IOPS and other info in the perf dashboards
4.  Repeat the same three steps with on-wire encryption enabled
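
For steps 1 and 4, a hedged sketch of toggling on-wire (in-transit) encryption on the StorageCluster. The field path follows Rook's NetworkSpec; the resource name and namespace are the usual ODF defaults and should be treated as assumptions about this cluster:

# Hypothetical sketch: flip spec.network.connections.encryption.enabled and
# let the operator roll the change out (set False for the disabled runs).
import json
import subprocess

patch = {"spec": {"network": {"connections": {"encryption": {"enabled": True}}}}}
subprocess.run([
    "oc", "patch", "storagecluster", "ocs-storagecluster",  # assumed default name
    "-n", "openshift-storage",                              # assumed namespace
    "--type", "merge", "-p", json.dumps(patch),
], check=True)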

Actual results:


Expected results:


Additional info:

Comment 2 Venky Shankar 2023-06-17 13:00:42 UTC
(In reply to mcurrier from comment #0)
> Description of problem (please be as detailed as possible and provide log
> snippets):
> 
> ODF v4.13.0-186 shows poor read performance at 128K/4096K block sizes with
> on-wire encryption enabled vs. disabled in FIO tests.  We see this for both
> RBD and CephFS storage classes.  We are measuring IOPS.

This looks like a candidate for the core team (messenger) to look into first, especially since the behaviour is the same for CephFS and RBD.
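
A quick data point for whoever picks this up would be confirming which messenger mode each run actually negotiated. A minimal sketch: the ms_* options are real Ceph settings, but running this from the rook-ceph toolbox pod (and "secure" being pinned when encryption is on) is an assumption about this setup:

# Hedged sketch: query the configured msgr2 modes; the mode list should be
# pinned to "secure" when on-wire encryption is enabled.
import subprocess

for opt in ("ms_cluster_mode", "ms_service_mode", "ms_client_mode"):
    out = subprocess.run(
        ["ceph", "config", "get", "mon", opt],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(opt, "=", out)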

Comment 10 mcurrier 2023-11-27 16:35:08 UTC
I am no longer on the Ceph storage team.  Matt

