Description of problem:
Gluster volume performance measures poorly. I created a gluster volume and mounted it as a FUSE mount in RHEV. When the gluster volume is used for VM creation and deployment on RHEV, it is very slow, so I measured performance with the FIO benchmark tool. The results are as follows.

-----------------------------------------
volume : jbod replica
fio profile :
  filesize - 30G
  number of files - 36
  blocksize - 4K
measured value :
  read iops - 7587
  write iops - 3254
-----------------------------------------
volume : jbod disperse
fio profile :
  filesize - 30G
  number of files - 1
  blocksize - 4K
measured value :
  read iops - 2498
  write iops - 1070
-----------------------------------------
local (no gluster volume) :
fio profile :
  filesize - 30G
  number of files - 1
  blocksize - 4K
measured value :
  read iops - 42044
  write iops - 18020
-----------------------------------------

Version-Release number of selected component (if applicable):
glusterfs-3.8.4-18

How reproducible:
Always

Steps to Reproduce:
1. Configure the gluster volume
2. Mount the gluster volume as FUSE in RHEV
3. Create the VM
4. Use the FIO benchmark tool to measure performance on the VM or server (see the example commands below)

Actual results:
VM creation time is slower than with other storage methods (SAN storage, local NFS)

Expected results:
VM creation time comparable to the other storage methods

Additional info:
Configuration environment
  OS : RHVH 4.1
  gluster version : glusterfs 3.8.4
  nodes : 3
  disks : 6 (800G SSD) per server (excluding the OS disk)
  gluster volume : replica, disperse
  switch : RDMA
Additional environment : dm-cache (using NVMe)
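A minimal sketch of the kind of FIO run described above (4K random I/O, direct I/O, roughly 30G of data). The mount point, job names, and the mapping of "number of files" to fio's nrfiles option are assumptions, not taken from the report:

  # hypothetical FUSE mount point of the gluster volume under test
  MOUNT=/mnt/glustertest

  # 4K random read across 36 files totalling ~30G (nrfiles splits --size across files)
  fio --name=randread --directory=$MOUNT --rw=randread --bs=4k --size=30g \
      --nrfiles=36 --ioengine=libaio --direct=1 --iodepth=16 --group_reporting

  # 4K random write with the same layout
  fio --name=randwrite --directory=$MOUNT --rw=randwrite --bs=4k --size=30g \
      --nrfiles=36 --ioengine=libaio --direct=1 --iodepth=16 --group_reporting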
Could you reproduce the issue by setting the following volume options, measure the performance, and send the results, please?

group -> virt
storage.owner-uid -> 36
storage.owner-gid -> 36
network.ping-timeout -> 30
performance.strict-o-direct -> on
network.remote-dio -> off
cluster.granular-entry-heal -> enable (this is optional)
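For reference, a sketch of how these options could be set from the gluster CLI; VOLNAME is a placeholder for the actual volume name:

  # apply the predefined "virt" option group, then the individual options
  gluster volume set VOLNAME group virt
  gluster volume set VOLNAME storage.owner-uid 36
  gluster volume set VOLNAME storage.owner-gid 36
  gluster volume set VOLNAME network.ping-timeout 30
  gluster volume set VOLNAME performance.strict-o-direct on
  gluster volume set VOLNAME network.remote-dio off
  gluster volume set VOLNAME cluster.granular-entry-heal enable   # optional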
This bug is being closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days