Description of problem:

When Manoj Pillai, Raghavendra G and I were working on enhancing gluster read/write performance on an NVMe backend, we observed at one point that the fuse reader thread hit ~97% utilization even with client-io-threads enabled; the single-threaded fuse reader had become the bottleneck. The only option left was to scale the number of fuse reader threads. With 4 reader threads, we saw an improvement of 8K IOPS. Refer to https://goo.gl/AubdwP and/or https://goo.gl/6VbqRA for more information on the actual tests performed and the results.

Clone of https://github.com/gluster/glusterfs/issues/412
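For context, a minimal sketch of how the mount-time option can be exercised (the server, volume name, and mount point below are placeholders; the reader-thread-count option name should be confirmed against the release notes for the glusterfs version in use):

    # Mount a gluster volume with 4 FUSE reader threads
    mount -t glusterfs -o reader-thread-count=4 server1:/testvol /mnt/glusterfs

    # Equivalent /etc/fstab entry
    server1:/testvol  /mnt/glusterfs  glusterfs  reader-thread-count=4  0 0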
sorry, should have asked the questions in one go, sorry for the inconvenience. Below are some more questions:

1) Are there any other options that need to be set? I believe we need to change the io-threads value too to see the optimal value. Any insights here will be really helpful.

2) Ideally, the number of fuse reader threads should be managed internally as a QoS, based on load, instead of leaving it to the end user to set the value, since the end user may not understand the purpose of the option and also can't keep remounting. Is there a bug for this (could be an RFE)?
(In reply to nchilaka from comment #12)
> 1) Are there any other options that need to be set? I believe we need to
> change the io-threads value too to see the optimal value. Any insights here
> will be really helpful.

That wasn't necessary, at least in our experience. You can leave client-io-threads disabled, since parallel requests can now be handled by multiple fuse threads. No change is needed in the brick stack.

> 2) Ideally, the number of fuse reader threads should be managed internally
> as a QoS, based on load, instead of leaving it to the end user to set the
> value, since the end user may not understand the purpose of the option and
> also can't keep remounting. Is there a bug for this (could be an RFE)?

Yes, there is - https://github.com/gluster/glusterfs/issues/406

-Krutika
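To illustrate the configuration described in the comment above, a hedged sketch (testvol, server1, and the mount point are placeholders): client-io-threads can stay disabled on the volume while the fuse mount itself provides the parallelism:

    # Leave client-side io-threads disabled; fuse reader threads handle parallel requests
    gluster volume set testvol performance.client-io-threads off

    # Remount with multiple reader threads (a remount is required, per the discussion above)
    umount /mnt/glusterfs
    mount -t glusterfs -o reader-thread-count=4 server1:/testvol /mnt/glusterfs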
The doc text has been updated. Kindly review it for technical accuracy.
Copy-pasting the provided doc text here:

"Red Hat Gluster Storage introduces a feature for multi-threaded FUSE reader threads, which imports parallel requests on a FUSE mount and handled them by multiple threads, proffering better I/O performance."

Two things:

1. The verb tense used in "which imports parallel requests on a FUSE mount and ***handled*** them by multiple threads" doesn't sound right.

2. Are we not going to mention the name of the option and how to configure it as well, as part of the doc text?

-Krutika
(In reply to Krutika Dhananjay from comment #24)
> Two things:
> 1. The verb tense used in "which imports parallel requests on a FUSE mount
> and ***handled*** them by multiple threads" doesn't sound right.

>> Have made the necessary changes.

> 2. Are we not going to mention the name of the option and how to configure
> it as well, as part of the doc text?

>> The doc bug will take care of the configuration part. I have added the
>> name of the option in the doc text.

>> Let me know if anything else needs to be added here.
(In reply to Srijita Mukherjee from comment #25)
> >> The doc bug will take care of the configuration part. I have added the
> >> name of the option in the doc text.

> >> Let me know if anything else needs to be added here.

Looks good to me.

-Krutika
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0263