Description of problem:
During various tests, the customer experiences consistently slow I/O between client and server.
Copying 1.3 GB of data (4 KB files):
- Source -> Gluster volume (FUSE): 30 KB/s
- Source -> Gluster client local disk: 3 MB/s
- Gluster client local disk -> Gluster volume (FUSE): 30 KB/s
Several runs were performed; the results are consistent.
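A minimal sketch of the kind of test run, scaled down from the 1.3 GB data set; /mnt/glustervol is a hypothetical FUSE mount point, not taken from the case:

```shell
# Create a sample tree of 4 KB files (scaled down for illustration),
# then time a single-threaded recursive copy onto the FUSE mount.
mkdir -p /tmp/smallfiles
for i in $(seq 1 100); do
    dd if=/dev/zero of=/tmp/smallfiles/file_$i bs=4K count=1 status=none
done
# /mnt/glustervol is an assumed mount point for the Gluster volume.
time cp -r /tmp/smallfiles /mnt/glustervol/
```

With many small files, per-file metadata round trips (LOOKUP/CREATE/WRITE/FLUSH over the network) dominate, which is why throughput collapses compared with large sequential writes.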
Version-Release number of selected component (if applicable):
Red Hat Gluster Storage 3.1 update 3
Red Hat Enterprise Linux 7.3
gluster-nagios-addons-0.2.7-1.el7rhgs.x86_64 Wed Feb 8 14:35:51 2017
gluster-nagios-common-0.2.4-1.el7rhgs.noarch Wed Feb 8 14:34:28 2017
glusterfs-3.7.9-12.el7rhgs.x86_64 Wed Feb 8 14:34:18 2017
glusterfs-api-3.7.9-12.el7rhgs.x86_64 Wed Feb 8 14:34:19 2017
glusterfs-cli-3.7.9-12.el7rhgs.x86_64 Wed Feb 8 14:34:23 2017
glusterfs-client-xlators-3.7.9-12.el7rhgs.x86_64 Wed Feb 8 14:34:19 2017
glusterfs-fuse-3.7.9-12.el7rhgs.x86_64 Wed Feb 8 14:34:20 2017
glusterfs-ganesha-3.7.9-12.el7rhgs.x86_64 Wed Feb 8 14:36:10 2017
glusterfs-geo-replication-3.7.9-12.el7rhgs.x86_64 Wed Feb 8 14:35:47 2017
glusterfs-libs-3.7.9-12.el7rhgs.x86_64 Wed Feb 8 14:34:17 2017
glusterfs-rdma-3.7.9-12.el7rhgs.x86_64 Wed Feb 8 14:36:11 2017
glusterfs-server-3.7.9-12.el7rhgs.x86_64 Wed Feb 8 14:35:45 2017
nfs-ganesha-gluster-2.3.1-8.el7rhgs.x86_64 Wed Feb 8 14:35:48 2017
python-gluster-3.7.9-12.el7rhgs.noarch Wed Feb 8 14:34:29 2017
samba-vfs-glusterfs-4.4.5-3.el7rhgs.x86_64 Wed Feb 8 14:36:00 2017
vdsm-gluster-4.17.33-1.el7rhgs.noarch Wed Feb 8 14:36:05 2017
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
In next comment.
A year since the last update, and the case is closed (without any response from the customer).
We know for sure that a single-threaded copy of small files on glusterfs can lead to performance issues, and the suggested workaround looks fine.
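The actual workaround from the case is not quoted here; a common mitigation for single-threaded small-file copies is to fan the work out across several processes. A hedged sketch, assuming GNU coreutils and findutils, with SRC and DEST as placeholders:

```shell
# Parallelize small-file copies: batch files 50 at a time and run
# up to 8 cp processes concurrently so network round trips overlap.
# SRC and DEST are placeholders, not paths from the case.
SRC=/tmp/smallfiles
DEST=/mnt/glustervol
cd "$SRC" && find . -type f -print0 |
    xargs -0 -n 50 -P 8 cp --parents -t "$DEST"
```

`--parents` recreates the relative directory structure under DEST; `-P 8` is an arbitrary concurrency level to tune against the volume.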
Let us know what the next steps are here. We are in the process of fixing the small-file performance issues upstream, and there are ongoing discussions, but we can't do anything for this particular bug for now.
Planning to close it as WONTFIX?