Bug 1333224 - read/write performance degrades as the cache tier fills up
Summary: read/write performance degrades as the cache tier fills up
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.10
Hardware: x86_64
OS: Linux
Priority: medium
Severity: low
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-05 01:48 UTC by Paul Cuzner
Modified: 2017-03-08 10:54 UTC
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-08 10:54:34 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Paul Cuzner 2016-05-05 01:48:28 UTC
Description of problem:
During a continuous write workload, writes were accelerated by the hot tier until the low watermark was reached. At that point, throughput as seen by the client degraded to pre-cache levels.
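For context, the watermark behaviour referenced above is tunable in GlusterFS 3.7 tiering. A minimal sketch of the relevant options, assuming a hypothetical volume name `tiervol` (the values shown are the documented defaults, not what was set during this test):

```shell
# Once hot-tier utilisation crosses cluster.watermark-low, the tier
# translator begins demoting files to the cold tier; that demotion
# traffic competes with client I/O, which is where the slowdown
# reported here was observed.
gluster volume set tiervol cluster.watermark-low 75
gluster volume set tiervol cluster.watermark-hi 90
```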

Version-Release number of selected component (if applicable):


How reproducible:
Consistently reproducible; the effect was observed on every attempt.

Steps to Reproduce:
1. Create a tiered volume with SSD bricks as the hot tier
2. Repeatedly create files on the volume with dd
3. Observe throughput degradation once the hot tier exceeds ~60% utilisation
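The steps above can be sketched with the 3.7-era CLI. Hostnames, brick paths, file counts, and the volume name are placeholders, not the reporter's actual configuration:

```shell
# Cold tier: replica 2 across two nodes (placeholder hosts and bricks)
gluster volume create tiervol replica 2 node1:/bricks/cold node2:/bricks/cold
gluster volume start tiervol

# Attach the SSD bricks as the hot tier (GlusterFS 3.7 syntax)
gluster volume attach-tier tiervol replica 2 node1:/ssd/hot node2:/ssd/hot

# Mount and drive continuous writes; watch client throughput fall once
# the hot tier fills past the low watermark
mount -t glusterfs node1:/tiervol /mnt/tiervol
for i in $(seq 1 200); do
    dd if=/dev/zero of=/mnt/tiervol/file$i bs=1M count=1024 oflag=direct
done
```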

Actual results:
client writes degrade considerably

Expected results:
Some degradation of client writes as the tier fills is acceptable, but not a collapse from 150MB/s to 4MB/s.

Additional info:
The test environment was VMs under KVM. Each Gluster node has an SSD for the hot tier, and the cold tier is a replica 2 volume.

Comment 1 Kaushal 2017-03-08 10:54:34 UTC
This bug is being closed because GlusterFS-3.7 has reached its end of life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.

