Bug 1451843

Summary: gluster volume performance issue
Product: [Community] GlusterFS
Component: replicate
Version: 3.8
Status: CLOSED EOL
Severity: high
Priority: unspecified
Hardware: All
OS: Linux
Reporter: 3242650
Assignee: Karthik U S <ksubrahm>
CC: 3242650, bugs, ksubrahm
Target Milestone: ---
Target Release: ---
Type: Bug
Last Closed: 2017-11-07 10:39:47 UTC

Description 3242650 2017-05-17 15:40:59 UTC
Description of problem:
Gluster volume performance measures poorly.
I create a gluster volume and then mount it as a FUSE mount in RHEV.
When the gluster volume is used for VM creation and deployment on RHEV, it is very slow.
So I took measurements with the FIO benchmark tool. The results are as follows.
-----------------------------------------
volume                     filesize  files  blocksize  read iops  write iops
jbod replica               30G       36     4K         7587       3254
jbod disperse              30G       1      4K         2498       1070
local (no gluster volume)  30G       1      4K         42044      18020
-----------------------------------------
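
For reference, a minimal fio invocation approximating the profile above might look like the following; the job names and the target directory are assumptions, so adjust them to the FUSE mount point of the volume under test.

# Hypothetical 4K random-read and random-write jobs with a 30G working set,
# run against the mounted gluster volume.
fio --name=randread --directory=/mnt/glustervol --size=30G --bs=4k \
    --rw=randread --ioengine=libaio --direct=1 --group_reporting
fio --name=randwrite --directory=/mnt/glustervol --size=30G --bs=4k \
    --rw=randwrite --ioengine=libaio --direct=1 --group_reporting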


Version-Release number of selected component (if applicable): glusterfs-3.8.4-18


How reproducible: Always


Steps to Reproduce:
1. Configure the gluster volume (example commands below).
2. Mount the gluster volume as a FUSE mount in RHEV.
3. Create a VM.
4. Use the FIO benchmark tool to measure performance in the VM or on the server.
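
As a sketch of steps 1 and 2, assuming three nodes as described in the additional info; the hostnames, brick paths, volume name, and mount point are hypothetical:

# Create and start a replica 3 volume across the three nodes.
gluster volume create vmstore replica 3 \
    node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1
gluster volume start vmstore
# Mount it with the FUSE client on the RHEV host.
mount -t glusterfs node1:/vmstore /mnt/vmstore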

Actual results:
VM creation time is much slower than with other storage methods (SAN storage, local NFS).


Expected results:
VM creation time comparable to other storage methods.

Additional info:
Configuration environment
os : RHVH 4.1
gluster version : glusterfs 3.8.4
nodes : 3
disks : 6 (800G SSD) per server (excluding the OS disk)
gluster volume : replica, disperse
switch : RDMA
additional environment : dm-cache (using NVMe)

Comment 1 Karthik U S 2017-05-24 06:22:19 UTC
Could you set the following volume options, reproduce the issue, measure the performance again, and send us the results, please?

group -> virt
storage.owner-uid -> 36
storage.owner-gid -> 36
network.ping-timeout -> 30
performance.strict-o-direct -> on
network.remote-dio -> off
cluster.granular-entry-heal -> enable (this is optional)
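
For reference, a sketch of applying these options with the gluster CLI; the volume name "vmstore" is an assumption:

# Apply the virt option group, then the individual options.
gluster volume set vmstore group virt
gluster volume set vmstore storage.owner-uid 36
gluster volume set vmstore storage.owner-gid 36
gluster volume set vmstore network.ping-timeout 30
gluster volume set vmstore performance.strict-o-direct on
gluster volume set vmstore network.remote-dio off
gluster volume set vmstore cluster.granular-entry-heal enable  # optional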

Comment 2 Niels de Vos 2017-11-07 10:39:47 UTC
This bug is getting closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.

Comment 3 Red Hat Bugzilla 2023-09-14 03:57:45 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.