Bug 2230198 - [GSS] [ceph-osd] Slow cluster wide performance
Summary: [GSS] [ceph-osd] Slow cluster wide performance
Keywords:
Status: CLOSED DUPLICATE of bug 2230199
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ceph
Version: 4.10
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Radoslaw Zarzynski
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-08-08 21:59 UTC by Steve Baldwin
Modified: 2023-08-11 15:22 UTC
5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-08-11 15:22:18 UTC
Embargoed:



Description Steve Baldwin 2023-08-08 21:59:02 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
Cluster-wide performance is slow: the customer is unable to start all of their applications (using both CephFS and RBD PVs) without encountering timeouts. Some MDS slow requests have been reported over the past few days, but no slow requests are reported on the OSDs. We ran fio tests against a CephFS mount from both a worker node and a storage node and observed slow writes with both 4k and 4 block sizes. Running the same test on a storage node against a local disk backed by the same back-end storage (different LUNs, Dell EMC PowerFlex) performed much better.
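For reference, a sequential-write test of the kind described above can be run with fio against the CephFS mount. This is a sketch only: the mount point, job size, I/O engine, queue depth, and runtime are assumptions, not the customer's actual job file (see comment #1 for the real results).

```shell
#!/bin/sh
# Hypothetical fio invocation approximating the CephFS write test above.
# MNT, --size, --iodepth, and --runtime are assumed values, not the
# customer's exact parameters.
MNT=${MNT:-/mnt/cephfs}   # assumed CephFS mount point on the node
BS=4k                     # one of the block sizes cited in the report
FIO_CMD="fio --name=cephfs-write --directory=$MNT --rw=write \
  --bs=$BS --size=1G --ioengine=libaio --direct=1 --iodepth=16 \
  --runtime=60 --time_based --group_reporting"
# Print the command so it can be reviewed, then run manually on a node
# that actually has the CephFS mount:
echo "$FIO_CMD"
```

Comparing the same job with `--directory` pointed at a locally mounted LUN on the storage node isolates whether the slowness is in Ceph or in the underlying PowerFlex back end.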

Version of all relevant components (if applicable):
ODF 4.10.12 / rhcs 16.2.7-126

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes, the customer is unable to bring up all of their applications.

Is there any workaround available to the best of your knowledge?
None at the moment

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
5

Is this issue reproducible?
Yes, at the customer site.


Additional info:
See comment #1 for fio results

Comment 3 Mudit Agarwal 2023-08-11 15:22:18 UTC

*** This bug has been marked as a duplicate of bug 2230199 ***

