Description of problem:
Performance degradation in sequential IO for both RBD and CephFS (random IO degradation is described in https://bugzilla.redhat.com/show_bug.cgi?id=2015520) is observed with 4.9 (OCP 4.9 and OCS 4.9) compared to OCS 4.8 and OCS 4.7, with 4 KiB, 16 KiB and 64 KiB block sizes.
Please note that these results are consistent: similar results were obtained by running the FIO benchmark test twice.
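For reference, a minimal sketch (in Python) of the kind of sequential-write FIO workload described above. This is not the ocs-ci test_fio_benchmark implementation itself; the mount path and job parameters below are assumptions for illustration only.

#!/usr/bin/env python3
"""Minimal sketch (not the ocs-ci test itself): run sequential-write fio jobs
with the block sizes cited in this BZ against a volume mounted from an RBD or
CephFS PVC. The mount path and job parameters are assumptions for illustration."""
import json
import subprocess

MOUNT_PATH = "/mnt/pvc"             # hypothetical mount point of the tested PVC
BLOCK_SIZES = ["4k", "16k", "64k"]  # block sizes where the regression is observed

def run_seq_write(bs: str) -> dict:
    """Run one time-based sequential write job and return fio's JSON result."""
    cmd = [
        "fio",
        "--name", f"seq-write-{bs}",
        "--directory", MOUNT_PATH,
        "--rw=write",               # sequential write
        f"--bs={bs}",
        "--ioengine=libaio",
        "--direct=1",
        "--size=1g",
        "--runtime=120",
        "--time_based",
        "--output-format=json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True)
    return json.loads(out.stdout)

if __name__ == "__main__":
    for bs in BLOCK_SIZES:
        result = run_seq_write(bs)
        # fio's JSON output reports 'bw' in KiB/s
        bw_kib = result["jobs"][0]["write"]["bw"]
        print(f"bs={bs}: sequential write bandwidth {bw_kib} KiB/s")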
Version-Release number of selected component (if applicable):
OCS versions
==============
NAME                     DISPLAY                       VERSION   REPLACES   PHASE
noobaa-operator.v4.9.0   NooBaa Operator               4.9.0                Succeeded
ocs-operator.v4.9.0      OpenShift Container Storage   4.9.0                Succeeded
odf-operator.v4.9.0      OpenShift Data Foundation     4.9.0                Succeeded
ODF (OCS) build: full_version: 4.9.0-210.ci
Rook versions
===============
rook: 4.9-210.f6e2005.release_4.9
go: go1.16.6
Ceph versions
===============
ceph version 16.2.0-143.el8cp (0e2c6f9639c37a03e55885fb922dc0cb1b5173cb) pacific (stable)
Full version list is available here:
http://ocsperf.ceph.redhat.com/logs/Performance_tests/4.9/RC0/Vmware-LSO/versions.txt
How reproducible:
Run test_fio_benchmark test
Steps to Reproduce:
1. Run the test_fio_benchmark test (sequential RBD and sequential CephFS)
2. Compare the results to OCS 4.8 and OCS 4.7 (a comparison sketch is shown below)
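A minimal sketch of step 2, assuming the fio results for each release were captured with --output-format=json; the file names are placeholders, and the JSON keys follow fio's standard JSON output layout.

#!/usr/bin/env python3
"""Minimal sketch: compare sequential write bandwidth from two fio JSON result
files (e.g. one captured on OCS 4.8 and one on ODF/OCS 4.9). File names are
placeholders supplied on the command line."""
import json
import sys

def seq_write_bw_kib(path: str) -> float:
    """Sum the per-job sequential write bandwidth (KiB/s) in a fio JSON file."""
    with open(path) as f:
        data = json.load(f)
    return sum(job["write"]["bw"] for job in data["jobs"])

if __name__ == "__main__":
    # usage: python compare_fio.py fio-4.8.json fio-4.9.json
    old_bw = seq_write_bw_kib(sys.argv[1])
    new_bw = seq_write_bw_kib(sys.argv[2])
    change = (new_bw - old_bw) / old_bw * 100
    print(f"4.8: {old_bw:.0f} KiB/s, 4.9: {new_bw:.0f} KiB/s ({change:+.1f}%)")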
Actual results:
The performance is worse than in OCS 4.8 and also worse than in OCS 4.7.
Expected results:
Performance should be at least similar to OCS 4.8.
Additional info:
Comparison results between OCS 4.7, OCS 4.8 and OCS 4.9 are available here:
http://ocsperf.ceph.redhat.com:8080/index.php?version1=2&build1=6&platform1=2&az_topology1=1&test_name%5B%5D=1&version2=5&build2=11&platform2=2&az_topology2=1&version3=6&build3=18&platform3=2&az_topology3=1&version4=&build4=&platform4=2&az_topology4=1&submit=Choose+options
A comparison performance report for VMware LSO (4.9 vs 4.8) is available here:
https://docs.google.com/document/d/1Ft7gzWCcID2RTXILW3GrN8a6O5v5VidDICuG_tX__v8/edit#