Bug 1451843 - gluster volume performance issue
Summary: gluster volume performance issue
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.8
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Karthik U S
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-05-17 15:40 UTC by 3242650
Modified: 2023-09-14 03:57 UTC
CC: 3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-11-07 10:39:47 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description 3242650 2017-05-17 15:40:59 UTC
Description of problem: Gluster volume performance is poor.
I create a gluster volume and then mount it as a FUSE mount in RHEV.
When the gluster volume is used for VM creation and deployment in RHEV, it is very slow.
So I measured performance with the FIO benchmark tool. The results are as follows.
-----------------------------------------
volume                files  filesize  blocksize  read IOPS  write IOPS
jbod replica             36       30G        4K        7587        3254
jbod disperse             1       30G        4K        2498        1070
local (no gluster)        1       30G        4K       42044       18020
-----------------------------------------
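The exact fio job file was not attached; a minimal sketch of an invocation that approximates the "jbod replica" profile above (mount point, job names, and I/O engine are assumptions) would be:

# run from inside the gluster FUSE mount; parameters approximate the
# replica profile above (4K blocks, 30G per file, 36 files)
fio --name=randread  --directory=/mnt/glustervol --rw=randread  \
    --bs=4k --size=30G --numjobs=36 --ioengine=libaio --direct=1 \
    --group_reporting
fio --name=randwrite --directory=/mnt/glustervol --rw=randwrite \
    --bs=4k --size=30G --numjobs=36 --ioengine=libaio --direct=1 \
    --group_reporting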


Version-Release number of selected component (if applicable): glusterfs-3.8.4-18


How reproducible: Always


Steps to Reproduce:
1. Configure the gluster volume
2. Mount the gluster volume as a FUSE mount in RHEV (see the example commands after this list)
3. Create a VM
4. Use the FIO benchmark tool to measure performance on the VM or on the server
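For reference, a minimal sketch of steps 1-2, assuming a replica 3 volume; the volume name, brick paths, and mount point are placeholders, and in practice RHEV performs the FUSE mount itself when the gluster storage domain is added:

# create and start a replica 3 volume across the three nodes
gluster volume create vmstore replica 3 \
    node1:/bricks/b1/vmstore node2:/bricks/b1/vmstore node3:/bricks/b1/vmstore
gluster volume start vmstore

# manual equivalent of the FUSE mount RHEV creates for the storage domain
mount -t glusterfs node1:/vmstore /rhev/data-center/mnt/glusterSD/node1:_vmstore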

Actual results:
VM creation time is much slower than with other storage methods (SAN storage, local NFS).


Expected results:
VM creation time comparable to other storage methods.

Additional info:
Configuration environment:
OS: RHVH 4.1
gluster version: glusterfs 3.8.4
nodes: 3
disks: 6 (800 GB SSD) per server (excluding the OS disk)
gluster volumes: replica, disperse
switch: RDMA
additional environment: dm-cache (using NVMe)

Comment 1 Karthik U S 2017-05-24 06:22:19 UTC
Could you set the following volume options, reproduce the issue, measure the performance again, and send us the results, please? (Example commands follow the list.)

group -> virt
storage.owner-uid -> 36
storage.owner-gid -> 36
network.ping-timeout -> 30
performance.strict-o-direct -> on
network.remote-dio -> off
cluster.granular-entry-heal -> enable (this is optional)
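For reference, these options can be applied with the gluster CLI; the volume name below is a placeholder, and "group virt" already applies several of the other options:

gluster volume set vmstore group virt
gluster volume set vmstore storage.owner-uid 36
gluster volume set vmstore storage.owner-gid 36
gluster volume set vmstore network.ping-timeout 30
gluster volume set vmstore performance.strict-o-direct on
gluster volume set vmstore network.remote-dio off
gluster volume set vmstore cluster.granular-entry-heal enable   # optional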

Comment 2 Niels de Vos 2017-11-07 10:39:47 UTC
This bug is getting closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.

Comment 3 Red Hat Bugzilla 2023-09-14 03:57:45 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

