Bug 963223
| Summary: | Re-inserting a server in a v3.3.2qa2 distributed-replicate volume DOSes the volume | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | hans |
| Component: | core | Assignee: | bugs <bugs> |
| Status: | CLOSED EOL | QA Contact: | |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | pre-release | CC: | brian, bugs, gluster-bugs |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-10-22 15:40:20 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
hans
2013-05-15 12:37:38 UTC
In addition: this issue is not about data-transfer saturation but about I/O-per-second saturation on the individual hard disks, so moving to faster Ethernet or InfiniBand won't help. We see Gluster management operations saturating brick IOPS in several cases: replace-brick, rebalance and now self-heal. To prevent this behaviour, Gluster management traffic must be throttled to, say, 100 IOPS per brick; ideally this would be configurable as either a hardcoded limit or a percentage.

In addition: moving the stor2 server to a new DNS name stor5 and a new IP address, then force-replacing a single brick from stor2 to stor5, still DOSes the entire volume.

In addition: disabling the self-heal daemon on the volume does NOT help. Bringing up the new server with the single brick still DOSes the entire volume.

pre-release version is ambiguous and about to be removed as a choice. If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.
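For reference, a minimal sketch of the kind of commands involved in the scenario above, assuming a hypothetical volume name `myvol` and brick path `/export/brick1` (the actual volume and brick layout are not given in the report):

```sh
# Hypothetical names: volume "myvol", brick path "/export/brick1".

# Probe the re-addressed server under its new DNS name.
gluster peer probe stor5

# Force-replace the brick that used to live on stor2 with the one on stor5;
# the subsequent self-heal of this brick is what saturates brick IOPS.
gluster volume replace-brick myvol stor2:/export/brick1 stor5:/export/brick1 commit force

# Disabling the self-heal daemon, which the reporter notes does NOT help.
gluster volume set myvol cluster.self-heal-daemon off
```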