Bug 1593079
| Summary: | IO performance is slow | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Thang <thangvubk> |
| Component: | rdma | Assignee: | bugs <bugs> |
| Status: | CLOSED WONTFIX | QA Contact: | |
| Severity: | urgent | Docs Contact: | |
| Priority: | medium | | |
| Version: | mainline | CC: | atumball, bugs, thangvubk |
| Target Milestone: | --- | Flags: | thangvubk: needinfo- |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-06-17 11:10:54 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Thang 2018-06-20 03:03:59 UTC

This is most likely a bug intended for the Gluster Community project, not the Red Hat Gluster Storage product, so I am moving it to the right location now. If you are using the Red Hat Gluster Storage product, open a support case at https://access.redhat.com/support/cases/new and mention this bug report.

Please let us know if you started a discussion on the Gluster users mailing list, as suggested in the GitHub issue. We would also need to know which version of Gluster you are using (you can change it in this bug).

Release 3.12 has been EOL'd and this bug was still in the NEW state, so the version is being moved to mainline in order to triage it and take appropriate action.

Thanks for the report, but we are not able to look into the RDMA section actively, and are seriously considering dropping it from active support. More on this at https://lists.gluster.org/pipermail/gluster-devel/2018-July/054990.html:

> 'RDMA' transport support:
>
> Gluster started supporting RDMA while ib-verbs was still new, and very high-end infrastructure at that time was using InfiniBand. Engineers worked with Mellanox and got the technology into GlusterFS for better data migration and data copying. Current-day kernels support very good speeds with the IPoIB module itself, and there is no more bandwidth for experts in this area to maintain the feature, so we recommend migrating your volume over to a TCP (IP-based) network.
>
> If you are successfully using the RDMA transport, do get in touch with us to prioritize the migration plan for your volume. The plan is to work on this after the release, so by version 6.0 we will have cleaner transport code, which only needs to support one type.
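The migration the maintainers recommend above can be sketched with the Gluster CLI, using the `config.transport` volume option described in the Gluster administration guide. This is a sketch, not an official procedure from this bug report: the volume name `gv0` is hypothetical, and you should check the current Gluster documentation for your version before running it, since changing the transport requires stopping the volume.

```shell
# Sketch: migrate a hypothetical volume "gv0" from RDMA to TCP transport.
# The volume must be stopped before its transport type can be changed.
gluster volume stop gv0

# Switch the transport type to TCP (IP-based networking). On InfiniBand
# hardware, traffic can still use the kernel's IPoIB module.
gluster volume set gv0 config.transport tcp

# Restart the volume so clients reconnect over TCP.
gluster volume start gv0

# Check that the volume now reports a TCP transport type.
gluster volume info gv0 | grep -i transport
```

Clients mounting the volume with `transport=rdma` in their mount options would also need to remount without that option.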