Bug 1296271 - Enhancement: Configure maximum iops for shd
Status: NEW
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: All
OS: All
Priority: unspecified
Severity: low
Assigned To: Ravishankar N
Keywords: FutureFeature, Reopened, Triaged
Reported: 2016-01-06 13:46 EST by Joe Julian
Modified: 2017-03-08 09:58 EST
CC List: 5 users

Doc Type: Enhancement
Last Closed: 2017-03-08 06:03:57 EST
Type: Bug


Description Joe Julian 2016-01-06 13:46:57 EST
Servers can become resource-starved during self-heal events, causing a performance impact on clients. Allow a configurable IOPS cap for shd so that self-heal has less impact on client traffic.

Today, a large game developer was in IRC trying to track down a problem with his write performance. His hardware was more than adequate to keep up with his IOPS and throughput needs (an HP Z420 with 8 SSDs in RAID 0 attached to an LSI RAID controller), but during a self-heal event, writes were noticeably slower.

As part of his failure strategy, when a server fails he replaces it with a new one and populates the new server via self-heal with 23 million files totalling 2.3 TB. It is during this event that he experiences slow writes.

If we had a way to limit the resources used by shd, we would be able to prevent this type of problem.
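
Purely as an illustration of what I have in mind, a cap like this could be enforced with a simple token bucket that every heal I/O has to pass through. The sketch below is hypothetical: none of these names (iops_bucket_t, bucket_take, the 100 ops/sec figure) are existing GlusterFS options or APIs, and a real implementation would hook into shd's heal I/O path and read the cap from a volume option.

/* Minimal token-bucket sketch for capping self-heal IOPS.
 * All names here are illustrative, not GlusterFS APIs. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

typedef struct {
    double max_iops;      /* configured cap, e.g. from a volume option */
    double tokens;        /* operations currently allowed */
    struct timespec last; /* time of last refill */
} iops_bucket_t;

static double
elapsed_sec(const struct timespec *a, const struct timespec *b)
{
    return (b->tv_sec - a->tv_sec) + (b->tv_nsec - a->tv_nsec) / 1e9;
}

static void
bucket_init(iops_bucket_t *b, double max_iops)
{
    b->max_iops = max_iops;
    b->tokens = max_iops; /* allow an initial burst of up to one second */
    clock_gettime(CLOCK_MONOTONIC, &b->last);
}

/* Block until one heal I/O may proceed without exceeding max_iops. */
static void
bucket_take(iops_bucket_t *b)
{
    struct timespec now;

    for (;;) {
        clock_gettime(CLOCK_MONOTONIC, &now);
        b->tokens += elapsed_sec(&b->last, &now) * b->max_iops;
        if (b->tokens > b->max_iops)
            b->tokens = b->max_iops; /* cap the burst size */
        b->last = now;

        if (b->tokens >= 1.0) {
            b->tokens -= 1.0;
            return;
        }
        usleep(1000); /* wait for tokens to accumulate */
    }
}

int
main(void)
{
    iops_bucket_t bucket;
    int i;

    bucket_init(&bucket, 100.0); /* e.g. cap shd at 100 heal ops/sec */

    for (i = 0; i < 300; i++) {
        bucket_take(&bucket);
        /* ... issue one self-heal read/write here ... */
    }
    printf("issued 300 throttled heal operations\n");
    return 0;
}

With a cap of 100 ops/sec this would spread the 300 simulated heal operations over roughly two seconds after the initial burst, leaving headroom for client I/O.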
Comment 1 Pranith Kumar K 2016-01-18 05:54:14 EST
Nice timing on this bug, Joe. Ravi is working on a self-heal throttling feature to do exactly this. Assigning the bug to him.

Pranith
Comment 2 Kaushal 2017-03-08 06:03:57 EST
This bug is being closed because GlusterFS 3.7 has reached its end of life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.
