Bug 1296271

Summary: Enhancement: Configure maximum iops for shd
Product: [Community] GlusterFS
Component: replicate
Version: mainline
Hardware: All
OS: All
Severity: low
Priority: unspecified
Status: CLOSED CURRENTRELEASE
Keywords: FutureFeature, Reopened, Triaged
Doc Type: Enhancement
Fixed In Version: glusterfs-4.0.0
Reporter: Joe Julian <joe>
Assignee: Ravishankar N <ravishankar>
CC: bugs, kdhananj, ksubrahm, pkarampu, ravishankar, smohan
Last Closed: 2018-11-19 11:28:57 UTC
Type: Bug

Description Joe Julian 2016-01-06 18:46:57 UTC
Servers can become resource-starved during self-heal events, causing a performance impact on clients. Add a configurable IOPS cap for shd to reduce that impact.

Today, a large game developer was in IRC trying to track down a problem with his write performance. His hardware was more than adequate to keep up with his IOPS and throughput needs (HP Z420 with 8 SSDs in RAID 0 attached to an LSI RAID controller), but during a self-heal event, writes were noticeably slower.

As part of his failure strategy, if a server fails, he replaces it with a new one and populates the new server via self-heal with 23 million files totalling 2.3TB. It is during this event that he experiences slow writes.

If we had a way to limit the resources used by shd, we should be able to prevent this type of problem.

Comment 1 Pranith Kumar K 2016-01-18 10:54:14 UTC
Nice timing of the bug, Joe. Ravi is working on a self-heal throttling feature to do this. Assigning the bug to him.

Pranith

Comment 2 Kaushal 2017-03-08 11:03:57 UTC
This bug is getting closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.

Comment 3 Karthik U S 2018-11-19 11:28:57 UTC
From glusterfs-4.0.0 we have cgroups-based scripts available to regulate the CPU and memory usage of any gluster daemon process. This was added by the patch https://review.gluster.org/#/c/glusterfs/+/18404/ and tracked in BZ #1496335. Hence closing this bug as CURRENTRELEASE.
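The cgroup approach from the patch above can be sketched as follows. This is a minimal illustration, not Gluster's shipped helper scripts: the cgroup name `glusterfs_shd` and the 30% cap are hypothetical, it assumes cgroup v1 with the `cpu` controller and the cgroup-tools utilities, and it only prints the commands it would run (they require root to execute):

```shell
#!/bin/sh
# Sketch: cap a self-heal daemon's CPU share via cpu.cfs_quota_us / cpu.cfs_period_us.
# Hypothetical values: 30% of one CPU over the standard 100 ms scheduling period.
CAP_PERCENT=30
PERIOD_US=100000
QUOTA_US=$((PERIOD_US * CAP_PERCENT / 100))   # 30000 us of CPU time per 100000 us period

# Find the shd process (empty if no glustershd is running on this host).
SHD_PID=$(pgrep -f glustershd | head -n1)

# Dry run: print the cgroup commands instead of executing them.
echo "cgcreate -g cpu:/glusterfs_shd"
echo "cgset -r cpu.cfs_period_us=${PERIOD_US} glusterfs_shd"
echo "cgset -r cpu.cfs_quota_us=${QUOTA_US} glusterfs_shd"
echo "cgclassify -g cpu:/glusterfs_shd ${SHD_PID}"
```

Once classified, the kernel scheduler throttles the daemon to the quota each period, so heal traffic can no longer starve client I/O of CPU; memory can be bounded the same way through the `memory` controller.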