Bug 1392419 - High i/o latency with random write workload from virtual machine(s)
Summary: High i/o latency with random write workload from virtual machine(s)
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: rhgs-3.2
Hardware: x86_64
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Pranith Kumar K
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On: 1348068
Blocks:
 
Reported: 2016-11-07 13:03 UTC by Krutika Dhananjay
Modified: 2018-08-14 11:17 UTC
7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1348068
Environment:
Last Closed: 2017-11-21 11:59:20 UTC
Embargoed:


Attachments

Description Krutika Dhananjay 2016-11-07 13:03:01 UTC
+++ This bug was initially created as a clone of Bug #1348068 +++

Description of problem:
When using either a replica 3 or a sharded replica 3 volume to host virtual machines, workload profiles that exhibit random writes cause high latencies that can sometimes span several seconds.

This has been observed with both fio and iometer as load generators.

Sequential writes do NOT exhibit this effect.
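For context, a sharded replica 3 volume of the kind described above might be set up roughly as follows. This is a minimal sketch: the volume name, server hostnames, and brick paths are hypothetical, and the exact options used in the original environment are not recorded in this report.

```shell
# Hypothetical hosts and brick paths; a 3-way replicated volume for VM images
gluster volume create vmstore replica 3 \
    server1:/bricks/vmstore server2:/bricks/vmstore server3:/bricks/vmstore

# Enable sharding for the "sharded replica 3" variant
gluster volume set vmstore features.shard on

gluster volume start vmstore
```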

Version-Release number of selected component (if applicable):


How reproducible:
Every time the workload is run, this high-latency profile is observed.


Steps to Reproduce:
1. Run either an fio workload with random writes or the attached iometer profile (icf file).
2. Track the I/O latency inside the VM with PCP.
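The random-write load from step 1 can be approximated with an fio job along these lines. This is a sketch only; the parameter values below are assumptions, not the exact values from the attached profile.

```ini
; Hypothetical fio job approximating the reported random-write workload
[randwrite-vm]
ioengine=libaio
direct=1
rw=randwrite
bs=4k
size=4g
iodepth=16
runtime=300
time_based=1
```

Inside the VM, per-device latency can then be sampled with a PCP tool, for example `pcp iostat` (assuming the pcp-system-tools package is installed).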

Actual results:
High latencies are observed, which could impact application response times.

Expected results:
Occasional spikes in random-write latency are acceptable and may be in the tens of milliseconds, but latencies between 600 ms and 2 s are a problem.


Additional info:

--- Additional comment from Paul Cuzner on 2016-06-20 01:56 EDT ---

Added a screenshot showing the maximum latency observed from the iometer run. This is from a single VM running the workload (as per the icf file attached to the case).

