Bug 2044340
| Summary: | AWS \| gp3 volume provides only default performance | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Avi Liani <alayani> |
| Component: | ocs-operator | Assignee: | Mudit Agarwal <muagarwa> |
| Status: | CLOSED NOTABUG | QA Contact: | Elad <ebenahar> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.10 | CC: | etamir, mmuench, muagarwa, nigoyal, odf-bz-bot, owasserm, sostapov |
| Target Milestone: | --- | Keywords: | Performance |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-08-08 08:35:45 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Moving it to the ocs-operator.

The reason it's the default is that this is what you get for free. I think we can expect ODF to change this once we fully understand the benefits of gp3 (which I'm sure we have), with or without changing the CPU+memory of the OSDs.

Since we're still exploring the utility/viability of gp3 for our backend volumes (as well as the pricing differences), moving this out to ODF 4.11. We will still continue our testing, of course.
Description of problem (please be detailed as possible and provide log snippets):

While deploying ODF 4.10 on AWS with the default gp3 backend volumes, we get only 3K IOPS and 125 MB/s throughput - these are the AWS defaults for gp3.

Version of all relevant components (if applicable):

OCP: 4.10
OCS: 4.10

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

Default performance is 50% of the default gp2 performance (2 TiB volumes).

Is there any workaround available to the best of your knowledge?

Yes - by deploying manually, the user can set up gp3 to work with better performance.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

1

Can this issue be reproduced?

Yes

Can this issue be reproduced from the UI?

Yes

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Deploy OCP 4.10 on AWS
2. Deploy ODF 4.10 on the default gp3 storage class
3. Run FIO and measure performance

Actual results:

Performance is 50% lower than on the gp2 storage class.

Expected results:

Performance at least matching gp2.

Additional info:

To deploy with "high-performance" gp3 devices, we need to create a new storage class, for example:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp3-iops-10k
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
allowVolumeExpansion: true
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  iops: "16000"
  throughput: "500"
```

or set the same parameters on our storage class:

```yaml
parameters:
  type: gp3
  iops: "16000"
  throughput: "500"
```
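For completeness, a minimal sketch of how such a tuned storage class could be consumed by the ODF deployment, assuming the `gp3-iops-10k` class above has already been created; the StorageCluster name, device-set name, `count`/`replica`, and `storage` size below are illustrative placeholders, not values taken from this bug:

```yaml
# Illustrative StorageCluster excerpt (assumption, not from this bug):
# point the OSD data PVCs at the tuned gp3 class instead of the default gp3 class.
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
    - name: ocs-deviceset-gp3-iops-10k    # hypothetical device-set name
      count: 1
      replica: 3
      dataPVCTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 2Ti                # size is illustrative
          storageClassName: gp3-iops-10k  # the custom gp3 class defined above
          volumeMode: Block
```

Note that Kubernetes does not allow editing `parameters` on an existing StorageClass, so changing the default gp3 class in place would mean deleting and recreating it with the new parameters before the OSD PVCs are provisioned.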