Created attachment 1514967 [details]
Paper describing the performance of GlusterFS on SSDs

Description of problem:

While working on bz 1629589:

* Xavi pointed out that io-cache could be the right place for in-flight writes to update already-cached data (read-ahead is another candidate).
* Xavi and I both found that the default page-size of io-cache (128K) is too aggressive and causes a performance regression for random-read workloads (see the amplification sketch below). The attached paper, pointed out by Shawn Houston from Red Hat, reports the same regression; I assume the cache it refers to is io-cache.
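To make the regression concrete: on a cache miss, io-cache fills a whole page even when the application asked for a small slice of it, so with 128K pages a 4K random read drags in 32x the requested data. A minimal sketch of that arithmetic (the 4K request size is an illustrative assumption, not something measured in the paper):

    #include <stdio.h>

    /* Read amplification on a cache miss: the whole io-cache page is
     * fetched even when the application asked for a small slice of it. */
    int main(void)
    {
        const unsigned request = 4 * 1024;              /* assumed random-read size */
        const unsigned pages[] = { 128 * 1024, 8 * 1024 };

        for (unsigned i = 0; i < sizeof(pages) / sizeof(pages[0]); i++)
            printf("page-size %3u KB -> amplification %2ux\n",
                   pages[i] / 1024, pages[i] / request);
        return 0;
    }

With 8K pages the same 4K read only over-fetches by 2x, which motivates the proposal below.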
I am proposing 8k as the default page-size for io-cache.
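For reference, a minimal sketch of the proposed change, assuming the default page size lives in a single compile-time constant (the identifier name below is illustrative, not the actual one in xlators/performance/io-cache):

    /* Illustrative only: the real identifier name in io-cache differs. */
    #define IOC_DEFAULT_PAGE_SIZE  (8 * 1024)   /* proposed default; was 128 * 1024 */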
REVIEW: https://review.gluster.org/21398 (performance/io-cache: update pages with write data) posted (#4) for review on master by Raghavendra G
REVIEW: https://review.gluster.org/21398 (performance/io-cache: update pages with write data) posted (#5) for review on master by Raghavendra G
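For readers without access to the review: as the subject line says, the patch makes io-cache update cached pages with write data, i.e. a write that overlaps a cached page patches the page in place rather than leaving a stale copy behind. A minimal sketch of that technique with simplified types (these are not the actual io-cache structures or function names):

    #include <string.h>
    #include <sys/types.h>

    /* Simplified stand-ins for io-cache structures (illustrative only). */
    struct page {
        off_t  offset;   /* file offset this page caches    */
        size_t size;     /* bytes of valid data in the page */
        char  *data;     /* cached contents                 */
    };

    /* On a write to [w_off, w_off + w_len), copy the overlapping part of
     * the write buffer into an already-cached page so subsequent reads
     * see the new data instead of a stale copy. */
    static void
    page_update_with_write(struct page *p, const char *w_buf, off_t w_off,
                           size_t w_len)
    {
        off_t start = w_off > p->offset ? w_off : p->offset;
        off_t p_end = p->offset + (off_t)p->size;
        off_t w_end = w_off + (off_t)w_len;
        off_t end   = w_end < p_end ? w_end : p_end;

        if (start >= end)
            return;  /* this write does not overlap this page */

        memcpy(p->data + (start - p->offset),
               w_buf + (start - w_off),
               (size_t)(end - start));
    }

The real patch also has to take the page lock and handle partially valid pages; the sketch only shows the overlap arithmetic.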
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/