Description of problem:
parallel-readdir currently has a host of dependencies:
1. performance.readdir-ahead has to be turned on
2. readdirplus has to be turned on by setting any of the following options:
   2a. --use-readdirp=yes while mounting
   2b. turning on performance.force-readdirp
   2c. turning on cluster.force-readdirp
However, the user would expect parallel-readdir to start functioning right away after turning on performance.parallel-readdir. So, from a usability perspective it makes sense to turn on all of the dependencies mentioned above automatically when performance.parallel-readdir is set (see the example commands below).

Version-Release number of selected component (if applicable):

How reproducible: Always

Steps to Reproduce:
1. turn off performance.readdir-ahead
2. turn on performance.parallel-readdir

Actual results:
On inspecting the generated volume file, we won't see readdir-ahead as the parent of each DHT subvolume.

Expected results:
On inspecting the generated volume file, we should see readdir-ahead as the parent of each DHT subvolume.

Additional info:
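For reference, a minimal reproduction from the CLI would look roughly like this; <volname> is a placeholder, and the volfile location is assumed to be the usual /var/lib/glusterd/vols/<volname>/ directory:

    # turn off the dependency, then turn on parallel-readdir
    gluster volume set <volname> performance.readdir-ahead off
    gluster volume set <volname> performance.parallel-readdir on
    # inspect the generated volfiles; currently readdir-ahead does not show up
    # as the parent of each DHT subvolume
    grep -B2 -A2 readdir-ahead /var/lib/glusterd/vols/<volname>/*.vol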
What should happen when parallel-readdir is disabled? Should readdir-ahead be disabled as well, if it was automatically enabled as part of enabling parallel-readdir?
(In reply to Poornima G from comment #2)
> What should happen when parallel-readdir is disabled? Should readdir-ahead
> be disabled as well, if it was automatically enabled as part of enabling
> parallel-readdir?

Two choices:
1. We can remember the earlier state of the dependencies and restore them when parallel-readdir is disabled.
2. Print a message on the CLI stating the current values of the dependencies, asking the sysadmin to review them and set them to the values they intend.
We can also go with a hybrid approach of 1 and 2:
3. Restore the dependencies to their defaults and print a message saying which values are being restored, asking the sysadmin/user to review whether they still make sense.
The fix would be in both glusterd and glusterd2: in the glusterd-volgen code we need to set readdir-ahead to on when parallel-readdir is enabled.
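A minimal sketch of the intended user-visible behaviour after such a fix (not the actual implementation; <volname> is a placeholder):

    # with the fix, this single command should be enough
    gluster volume set <volname> performance.parallel-readdir on
    # glusterd would implicitly turn the dependency on as well
    gluster volume get <volname> performance.readdir-ahead    # expected: on
    # and the generated volfile should show a readdir-ahead xlator above
    # each DHT subvolume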
Can we assign someone to do this, or close-defer?
The fix for this bug would involve a code change on the glusterd side to enable the readdir-ahead option on the volume when parallel-readdir is enabled.

1. But what is the requirement for enabling performance.parallel-readdir?
2. By default performance.parallel-readdir is disabled for the volume. For which use case do we recommend enabling performance.parallel-readdir?
3. Have we documented this use case, which requires enabling performance.parallel-readdir, in the RHGS guides?
4. Qualification of this option - enabling parallel-readdir - would need a performance regression run. QE is not performing performance regression testing. Does dev have the bandwidth to support performance regression?
5. Dependent bugs were CLOSED as WONTFIX. There are no pending customer cases attached to this bug. Are there any customers or use cases waiting for this fix?
6. Customers running into performance problems that require performance.parallel-readdir and performance.readdir-ahead can enable these options separately, and the same could be documented (see the commands sketched below).

These are the thoughts from the QE perspective. The fix in glusterd is simple, but the need for such an option should be justified. These two options can be enabled discretely whenever the situation demands it. Considering there are no customer cases waiting for such a change, QE proposes to close this bug.

@sunil, can you share your thoughts?
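For completeness, enabling the two options discretely (point 6 above) would look like this today; <volname> is a placeholder:

    gluster volume set <volname> performance.readdir-ahead on
    gluster volume set <volname> performance.parallel-readdir on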
The new patch which addresses this bz is this one:
https://github.com/gluster/glusterfs/pull/2019
And I just noticed that I haven't added the link to the new patch in the comments above (apologies for the mistake).

This new patch adds a dependency chain in the glusterd code which checks the option being set by the user and approves/fails the operation based on whether the dependency is satisfied or not. This can be used for other options as well, but to start with the patch adds the dependency for the `parallel-readdir` option only; the set operation succeeds only if the dependent options, which in this case are `readdir-ahead`, `dht.force-readdirp`, and `performance.force-readdirp`, satisfy the dependency clause for the `parallel-readdir` option.

Now, this is a big change and adds a POC for further changes that can be done for other dependent options. We don't have any customer requirement for this change (as of now); this is only an enhancement to the code (but a little bigger than what was expected). No performance regression has been done for the patch as of now.

I won't say this is a **required** enhancement, so from my end I think we can close this bz as UPSTREAM if others agree as well.

Adding a needinfo on @Csaba as well, since he is also equally involved in the POC being implemented in the above patch.

Thanks
Nikhil
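As a rough sketch of how the dependency check is meant to behave from the CLI side (the flow below is illustrative only and not taken verbatim from the patch; <volname> is a placeholder):

    # with the dependent options still off, the set is expected to be rejected
    gluster volume set <volname> performance.parallel-readdir on   # fails with a dependency error
    # after the dependencies (e.g. performance.readdir-ahead) are enabled,
    # the same set should succeed
    gluster volume set <volname> performance.readdir-ahead on
    gluster volume set <volname> performance.parallel-readdir on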
(In reply to Nikhil Ladha from comment #33)

Thanks Nikhil. Waiting for input from Csaba.

Hi Csaba,
Please share your suggestions considering comment 32 and comment 33.
I agree with closing this bug and continuing upstream, as the situation that we intend to improve here did not manifest as a customer problem.
Closing this bz as UPSTREAM based on comment 32, comment 33, and comment 37. For reference, issue link: https://github.com/gluster/glusterfs/issues/1416