Bug 1147427
| Summary: | High memory usage by rebalance process | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Krutika Dhananjay <kdhananj> |
| Component: | distribute | Assignee: | Krutika Dhananjay <kdhananj> |
| Status: | CLOSED ERRATA | QA Contact: | Amit Chaurasia <achauras> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | rhgs-3.0 | CC: | asrivast, mzywusko, nbalacha, nsathyan, shmohan, ssamanta, surs, vagarwal |
| Target Milestone: | --- | Keywords: | Patch, ZStream |
| Target Release: | RHGS 3.0.3 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.6.0.31-1 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1144413 | Environment: | |
| Last Closed: | 2015-01-15 13:40:33 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1162694 | | |
Description
Krutika Dhananjay
2014-09-29 09:06:14 UTC
Patch merged.

Verified the bug using multiple scenarios:

1. Created nearly 50k files at the root of the mount point and performed a rebalance.
2. Moved the files into a sub-folder and performed the rebalance again.
3. Created a deep directory structure, 25 sub-folders deep, with nearly 17 lakh (1.7 million) files scattered across those folders.
4. Performed a rebalance after adding bricks.

Each time, recorded a statedump of the rebalance process and monitored memory usage with top and vmstat (a rough command sketch follows below). In the statedumps, the hot-count for dict_t hovered between 20 and 30 while the cold-count stayed between 4060 and 4080. Memory consumption of the whole glusterd process never exceeded 4%. There appears to be no memory leak. Marking the bug verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0038.html
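For reference, a minimal sketch of how the statedump and memory checks described above could be driven from the shell. This is not taken from the original report: the volume name (testvol), the brick path, the pgrep pattern, and the statedump filename/location are assumptions and may differ on a given setup.

```bash
#!/bin/bash
# Hypothetical verification sketch -- volume name, brick path and the
# default statedump directory are assumptions, not values from this bug.
VOL=testvol

# Add a brick and start a rebalance (scenario 4 above).
gluster volume add-brick "$VOL" server1:/bricks/brick3
gluster volume rebalance "$VOL" start

# Ask the rebalance process to write a statedump: glusterfs processes
# dump their state when they receive SIGUSR1.
REBAL_PID=$(pgrep -f "glusterfs.*rebalance.*$VOL" | head -n1)
kill -USR1 "$REBAL_PID"
sleep 2

# Statedumps land in /var/run/gluster by default; pull the dict_t
# mempool counters that the verification comment refers to.
DUMP=$(ls -t /var/run/gluster/*"$REBAL_PID"*dump* 2>/dev/null | head -n1)
grep -A5 "dict_t" "$DUMP" | grep -E "hot-count|cold-count"

# Spot-check overall memory usage while the rebalance is running.
top -b -n1 -p "$REBAL_PID" | tail -n2
vmstat 5 3
```

Repeating the statedump and the top/vmstat checks at intervals while the rebalance runs is what lets the hot-count/cold-count and resident memory be compared over time, which is the basis for the "no memory leak" conclusion above.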