Bug 1957594

Summary: Performance degradation on RBD with Random IO
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Component: ceph
Version: 4.7
Status: CLOSED WONTFIX
Severity: unspecified
Priority: unspecified
Reporter: Avi Liani <alayani>
Assignee: Ilya Dryomov <idryomov>
QA Contact: Raz Tamir <ratamir>
CC: bniver, kramdoss, madam, muagarwa, ocs-bugs, odf-bz-bot
Keywords: Automation, Performance
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Type: Bug
Regression: ---
Last Closed: 2021-08-20 04:09:21 UTC

Description Avi Liani 2021-05-06 06:08:32 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

When running random I/O on an RBD volume, I see a performance degradation of ~80% at a 1M block size compared to OCS 4.6.0, as shown in this document: https://docs.google.com/document/d/1_-XI4qOTM-nwhV_1T0KII2RKP51CJLZ59KiS5U-Xqyo/edit?ts=60911d54#heading=h.kmewff7hkh3z

I ran this test several times, and the results were the same.
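
For reference, a rough Python sketch of the kind of fio job involved; the mount path, job sizing, and runtime here are illustrative assumptions, not the actual OCS-CI job definition:

# Shells out to fio to approximate a mixed random read/write job at the
# 1M block size where the degradation was seen.
import json
import subprocess

MOUNT_PATH = "/mnt/rbd-pvc"  # hypothetical mount point of the RBD-backed PVC

cmd = [
    "fio",
    "--name=rbd-rand-1m",
    f"--directory={MOUNT_PATH}",
    "--rw=randrw",            # mixed random reads and writes
    "--bs=1M",                # block size under test
    "--size=4G",
    "--ioengine=libaio",
    "--direct=1",             # bypass the page cache
    "--time_based",
    "--runtime=60",
    "--output-format=json",
]

out = subprocess.run(cmd, capture_output=True, text=True, check=True)
job = json.loads(out.stdout)["jobs"][0]
# fio reports bandwidth in KiB/s in its JSON output
print("read bw (KiB/s): ", job["read"]["bw"])
print("write bw (KiB/s):", job["write"]["bw"])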


Hardware Environment

AWS with 3 masters and 3 workers (m5.4xlarge) and 3 x 2TiB OSDs (EBS)


Version of all relevant components (if applicable):

OCS 4.7 
 

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?

No



Is there any workaround available to the best of your knowledge?

No


Rate the complexity of the scenario that triggered this bug from 1 to 5
(1 - very simple, 5 - very complex)?

1


Is this issue reproducible?

Yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:

Yes; on OCS 4.6.0 we saw better results.



Steps to Reproduce:
1. Deploy OCS 4.7 on AWS with 2TiB OSDs
2. From OCS-CI, run the test tests/e2e/performance/test_fio_benchmark.py::TestFIOBenchmark::test_fio_workload_simple[CephBlockPool-random] (a minimal invocation sketch follows the steps)
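
A minimal sketch of invoking that test directly with pytest from an ocs-ci checkout; illustrative only, since in practice the suite is normally driven through ocs-ci's own runner with a cluster configuration:

# Run the named performance test via pytest's programmatic entry point.
import pytest

pytest.main([
    "tests/e2e/performance/test_fio_benchmark.py"
    "::TestFIOBenchmark::test_fio_workload_simple[CephBlockPool-random]",
])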


Actual results:

Degradation, as shown in this document: https://docs.google.com/document/d/1_-XI4qOTM-nwhV_1T0KII2RKP51CJLZ59KiS5U-Xqyo/edit?ts=60911d54#heading=h.kmewff7hkh3z


Expected results:

No degradation in performance.

Additional info:

Comment 2 Mudit Agarwal 2021-06-09 17:25:41 UTC
No investigation yet; this can't be part of 4.2z2. Moving it out of 4.8.

Comment 3 Mudit Agarwal 2021-08-20 02:56:02 UTC
I don't see any investigation on this, and the last results were on 4.7.
Is it possible to retest this on the latest builds?

Comment 4 krishnaram Karthick 2021-08-20 04:09:21 UTC
This bug can be closed.

The results we see for 4.6 and 4.8 are the same for the random workload at a 1024KiB block size.

Please see:

https://docs.google.com/document/d/1-lOb4szqLM4LoWnMr_JCp9zurBqpjeva5BUEH-yer4s/edit#heading=h.w4cb5hofz7uj
https://docs.google.com/document/d/1_-XI4qOTM-nwhV_1T0KII2RKP51CJLZ59KiS5U-Xqyo/edit#heading=h.kmewff7hkh3z