| Summary: | [Doc RFE] Include multi-threaded self heal related documentation updates | ||
|---|---|---|---|
| Product: | Red Hat Gluster Storage | Reporter: | Anjana Suparna Sriram <asriram> |
| Component: | doc-Administration_Guide | Assignee: | Divya <divya> |
| doc-Administration_Guide sub component: | Default | QA Contact: | Nag Pavan Chilakam <nchilaka> |
| Status: | CLOSED CURRENTRELEASE | Docs Contact: | |
| Severity: | high | ||
| Priority: | unspecified | CC: | amukherj, asriram, asrivast, divya, mhideo, nchilaka, nlevinki, pkarampu, rcyriac, rhinduja, rhs-bugs, rwheeler, storage-doc |
| Version: | rhgs-3.1 | Keywords: | Documentation, FutureFeature, ZStream |
| Target Milestone: | --- | Flags: | divya: needinfo- |
| Target Release: | RHGS 3.1.3 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | Enhancement |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-06-29 14:20:07 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Bug Depends On: | | | |
| Bug Blocks: | 1311845 | | |
Description
Anjana Suparna Sriram
2016-03-16 11:02:26 UTC
Divya,
When I click on the link above, it gives a 404 error.
Pranith

Documentation content is fine, except that in the option description it may be better to say "self-heal daemon" rather than "shd".
Pranith

Hi Pranith,
I have incorporated your comments.
http://jenkinscat.gsslab.pnq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.1.3-Administration_Guide%20%28html-single%29/582/artifact/tmp/en-US/html-single/index.html#Configuring_Volume_Options
Please review and sign off.

Based on Comment 12, moving the bug to ON_QA.

I would refrain from using the word "thread", as it can send the wrong message. It is the number of self-heals (files) that can happen in parallel, rather than that many threads actually being spawned:

cluster.shd-max-threads

Current text:
Specifies the number of self-heal threads to be run for healing on each replica by self-heal daemon.

My suggestion:
Specifies the number of entries that can be self-healed in parallel on each replica by the self-heal daemon.

(In reply to nchilaka from comment #14)
> I would refrain from using the word "thread", as it can send the wrong
> message. It is the number of self-heals (files) that can happen in
> parallel, rather than that many threads actually being spawned:
>
> cluster.shd-max-threads
>
> Current text:
> Specifies the number of self-heal threads to be run for healing on each
> replica by self-heal daemon.
>
> My suggestion:
> Specifies the number of entries that can be self-healed in parallel on
> each replica by the self-heal daemon.

I have updated the description of "cluster.shd-max-threads" based on your suggestion.
Link to the latest doc:
http://jenkinscat.gsslab.pnq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.1.3-Administration_Guide%20%28html-single%29/582/artifact/tmp/en-US/html-single/index.html#Configuring_Volume_Options

Hi Divya,
Unable to open the link ... 404 error, page not found.

(In reply to nchilaka from comment #16)
> Hi Divya,
> Unable to open the link ... 404 error, page not found.

Hi Nag,
Please try this link:
http://jenkinscat.gsslab.pnq.redhat.com:8080/view/Gluster/job/doc-Red_Hat_Gluster_Storage-3.1.3-Administration_Guide%20%28html-single%29/lastBuild/artifact/tmp/en-US/html-single/index.html#Configuring_Volume_Options

Looks good to me; moving to verified.

Alok advised me to add a sub-section on multi-threaded self-heal in the Administration Guide, as this is a significant change. I will add it and move this bug back to ON_QA for QA verification.

Ack ... will update the doc and then move it to ON_QA.

Alok, Pranith,
Based on our email discussions, I have added a subsection/formal paragraph documenting multithreaded self-heal in the Administration Guide.
Link to the documentation:
http://jenkinscat.gsslab.pnq.redhat.com:8080/view/Gluster/job/doc-Red_Hat_Gluster_Storage-3.1.3-Administration_Guide%20%28html-single%29/lastBuild/artifact/tmp/en-US/html-single/index.html#Triggering_Self-Healing_on_Replicated_Volumes
Please review and provide feedback/sign-off.

Alok has signed off on the content in Comment 23. Hence, moving the bug to ON_QA.

Looks good to me.

Alok,
I have updated the description of the "cluster.shd-wait-qlength" option to:
"Specifies the number of entries that must be kept in the queue for self-heal daemon threads to take up as soon as any of the threads are free to heal. This value should be changed based on how much memory the self-heal daemon process can use for keeping the next set of entries that need to be healed."
Link to the latest doc:
http://jenkinscat.gsslab.pnq.redhat.com:8080/view/Gluster/job/doc-Red_Hat_Gluster_Storage-3.1.3-Administration_Guide%20%28html-single%29/lastBuild/artifact/tmp/en-US/html-single/index.html#Configuring_Volume_Options
Let me know if it requires any change.

Hi Divya,
I felt the description previously in the options list was right and did not require a change. What needed to change was in section "10.11.3. Triggering Self-Healing on Replicated Volumes", under Multithreaded Self-heal:
"Self-heal daemon has the capability to handle multiple heals in parallel and is supported on Replicate and Distribute-replicate volumes. Increasing the number of heals has impact on I/O performance. You can configure the number of entries that can be self-healed in parallel on each replica by the self-heal daemon using the cluster.shd-max-threads volume option. Using the cluster.shd-wait-qlength volume option, you can configure the ==> waiting period for self-heal daemon <== to crawl and keep the entries that need to be healed, ready for the threads to take up as soon as any of the threads are free."
It is not a waiting period but the number of heal entries that queue up.
(In reply to nchilaka from comment #30)
> Hi Divya,
> I felt the description previously in the options list was right and did
> not require a change. What needed to change was in section "10.11.3.
> Triggering Self-Healing on Replicated Volumes", under Multithreaded
> Self-heal.
>
> It is not a waiting period but the number of heal entries that queue up.

Hi Nag,
The description of the "cluster.shd-max-threads" option appears in three places: twice in the Administration Guide (the volume options list and the Multithreaded Self-heal section) and once in the What's New chapter of the Release Notes. I am waiting for Alok's confirmation before updating it in all the locations.
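For context, both volume options discussed in this bug are set with the standard `gluster volume set` CLI. A minimal sketch follows; the volume name `repvol` is hypothetical, the values are illustrative, and the defaults noted in the comments reflect upstream GlusterFS and may differ by RHGS release:

```shell
# Hypothetical replicated volume named "repvol".

# Number of entries that can be self-healed in parallel on each
# replica by the self-heal daemon (upstream default is 1):
gluster volume set repvol cluster.shd-max-threads 4

# Number of entries kept queued, ready for self-heal daemon threads
# to take up as soon as any thread is free (upstream default is 1024):
gluster volume set repvol cluster.shd-wait-qlength 2048

# Verify the configured values:
gluster volume get repvol cluster.shd-max-threads
gluster volume get repvol cluster.shd-wait-qlength
```

As Comment 14 and Comment 30 stress, `cluster.shd-max-threads` bounds how many entries heal in parallel per replica, and `cluster.shd-wait-qlength` bounds how many entries are queued awaiting a free heal thread; neither is a waiting period.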