Bug 1333224

Summary: read/write performance degrades as the cache tier fills up
Product: [Community] GlusterFS
Component: tiering
Version: 3.7.10
Hardware: x86_64
OS: Linux
Status: CLOSED EOL
Severity: low
Priority: medium
Keywords: Triaged
Reporter: Paul Cuzner <pcuzner>
Assignee: bugs <bugs>
QA Contact: bugs <bugs>
CC: bugs, dlambrig, nbalacha, sankarshan
Target Milestone: ---
Target Release: ---
Doc Type: Bug Fix
Type: Bug
Regression: ---
Mount Type: ---
Last Closed: 2017-03-08 10:54:34 UTC

Description Paul Cuzner 2016-05-05 01:48:28 UTC
Description of problem:
During a continuous write workload, writes were accelerated by the cache (hot) tier until the low watermark was reached. At that point, throughput seen by the client degraded to pre-cache levels.
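
For reference, the point at which this behaviour changes is governed by the tier watermark options. Below is a minimal sketch of how they could be inspected and tuned on a 3.7.x tiered volume; the volume name "tiervol" is hypothetical and the values shown are examples only, not a recommendation:

    # inspect the current watermark settings (volume name is hypothetical)
    gluster volume get tiervol cluster.watermark-low
    gluster volume get tiervol cluster.watermark-hi

    # example only: raise the watermarks so the hot tier stays in cache mode longer
    gluster volume set tiervol cluster.watermark-low 75
    gluster volume set tiervol cluster.watermark-hi 90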

Version-Release number of selected component (if applicable):


How reproducible:
Tried several times; the effect was observed each time.

Steps to Reproduce:
1. Create a tiered volume with an SSD brick as the hot tier.
2. Repeatedly create files on the volume with dd.
3. Observe throughput degradation once the hot tier is more than 60% utilised (see the command sketch below).
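
A minimal command sketch of the steps above, assuming a two-node setup and the attach-tier syntax of the 3.7.x CLI; host names, brick paths, and the volume name "tiervol" are hypothetical:

    # cold tier: replica 2 volume on the spinning-disk bricks
    gluster volume create tiervol replica 2 node1:/bricks/cold/brick node2:/bricks/cold/brick
    gluster volume start tiervol

    # hot tier: attach the SSD bricks as a replica 2 tier
    gluster volume attach-tier tiervol replica 2 node1:/bricks/ssd/brick node2:/bricks/ssd/brick

    # from a client, mount the volume and keep writing 1 GiB files with dd
    mount -t glusterfs node1:/tiervol /mnt/tiervol
    for i in $(seq 1 200); do
        dd if=/dev/zero of=/mnt/tiervol/file$i bs=1M count=1024 conv=fsync
    done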

Actual results:
Client write throughput degrades considerably; in my tests it dropped from roughly 150 MB/s to 4 MB/s.

Expected results:
Some degradation in client write throughput is expected once the hot tier fills, but not a drop from 150 MB/s to 4 MB/s.

Additional info:
The test environment consisted of VMs running under KVM. Each Gluster node has an SSD disk for the hot tier and a replica 2 volume for the cold tier.
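
One possible way to watch the hot tier fill up while reproducing this (not taken from the original report; paths and volume name are hypothetical):

    # on each gluster node, watch the SSD (hot tier) brick utilisation
    df -h /bricks/ssd

    # promote/demote counters; 3.7.x exposes these via the tier status command
    gluster volume tier tiervol status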

Comment 1 Kaushal 2017-03-08 10:54:34 UTC
This bug is being closed because GlusterFS-3.7 has reached its end of life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.