Bug 1156003 - write speed degradation after replica brick is down
Summary: write speed degradation after replica brick is down
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.5.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Anuradha
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-10-23 11:33 UTC by G K
Modified: 2023-09-14 02:49 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-06-17 16:23:41 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description G K 2014-10-23 11:33:18 UTC
Description of problem:
We created a Gluster replicated volume across two servers:
gluster volume create VolumeName replica 2 transport tcp Server1:/brick Server2:/brick
On the client machine we mounted the volume using glusterfs.
In the normal state, read and write speeds on the glusterfs mount are around 120 MB/s.
If one of the servers (bricks) goes down or drops off the network, read speed stays the same as in the normal state, but write speed degrades to about 3 MB/s.
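
For illustration, the throughput can be measured roughly as follows; the mount point and file names are placeholders, and dd is only an approximation of the actual workload:

# on the client: mount the volume and measure sequential write throughput
mount -t glusterfs Server1:/VolumeName /mnt/gluster
# write one large file; oflag=direct keeps the page cache from skewing the number
dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=1024 oflag=direct
# read the file back to compare read throughput
dd if=/mnt/gluster/testfile of=/dev/null bs=1M iflag=direct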


Version-Release number of selected component (if applicable):
3.5.2

How reproducible:


Steps to Reproduce:
1. create a replicated volume on two nodes (servers)
2. mount the volume on a client machine using glusterfs
3. test write speed on the glusterfs mount
4. take one of the replica volume nodes offline
5. retest write speed on the glusterfs mount (a sketch of steps 4-5 follows below)
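
One way to carry out steps 4 and 5, assuming the setup from the description; killing the brick process is only one option, and the paths are illustrative:

# on Server2: take its brick offline by killing the brick process(es) on that host
# (pulling the network cable or blocking the port with iptables also works)
pkill -f glusterfsd
# on the client: repeat the write test while the brick is down
dd if=/dev/zero of=/mnt/gluster/testfile2 bs=1M count=1024 oflag=direct
# bring the brick back afterwards
gluster volume start VolumeName force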


Actual results:
3 MB/s

Expected results:
120 MB/s

Additional info:

Comment 1 Niels de Vos 2014-10-28 12:35:47 UTC
Could you explain what kind of writes you are doing? For example, is it a sequential write to one big file, or many smaller writes to the same file or to different files?

Thanks!

Comment 2 G K 2014-10-28 14:50:58 UTC
It happens when a Samba share is exported on the mount point. We are writing one big file.
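
For context, the export looks roughly like this; the share name, paths, and credentials below are placeholders, not the exact configuration:

# minimal smb.conf section exporting the glusterfs mount
cat >> /etc/samba/smb.conf <<'EOF'
[glustershare]
   path = /mnt/gluster
   read only = no
EOF
systemctl restart smb
# from an SMB client, copy one big file onto the share, e.g.:
smbclient //client-host/glustershare -U user -c 'put bigfile.bin'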

Comment 3 Anuradha 2016-06-15 09:16:22 UTC
Hello,

Could you please check if this happens even on 3.7.x?

Thanks,
Anuradha.

Comment 4 Niels de Vos 2016-06-17 16:23:41 UTC
This bug is being closed because the 3.5 release is marked End-Of-Life. There will be no further updates to this version. If you are still facing this issue in a more current release, please open a new bug against a version that still receives bugfixes.

Comment 5 Red Hat Bugzilla 2023-09-14 02:49:31 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

