Bug 985957
| Summary: | Rebalance memory leak | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Pierre-Francois Laquerre <pierre.francois> |
| Component: | distribute | Assignee: | Nagaprasad Sathyanarayana <nsathyan> |
| Status: | CLOSED EOL | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.4.1 | CC: | bjoern.teipel, bugs, gluster-bugs, jbd, jbyers, joe, kdhananj, pierre.francois, smohan, social |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-10-07 13:16:48 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1144792 | | |
| Bug Blocks: | | | |
Description
Pierre-Francois Laquerre
2013-07-18 15:12:52 UTC
This issue already existed for me on Gluster 3.3.x. Any brick action, like rebalancing or removing a brick, causes the glusterfsd process to consume more and more memory. After the file handle bug, this is another serious issue; I have never been able to complete a rebalance because my filesystem holds 36 million inodes. As an example, this process was at 22% memory consumption yesterday and is now at 30%. I just have to wait until the kernel's OOM killer kills the process:

    root 21446 16.5 30.6 10380448 10061080 ? Ssl Oct15 481:04 /usr/sbin/glusterfs -s localhost --volfile-id content1 --xlator-option *dht.use-readdirp=yes --xlator-option *dht.lookup-unhashed=yes --xlator-option *dht.assert-no-child-down=yes --xlator-option *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off --xlator-option *replicate*.entry-self-heal=off --xlator-option *replicate*.readdir-failover=off --xlator-option *dht.rebalance-cmd=5 --xlator-option *dht.node-uuid=88a7a0a0-91e6-4228-8d66-5b11b3b4e7b2 --socket-file /var/lib/glusterd/vols/content1/rebalance/88a7a0a0-91e6-4228-8d66-5b11b3b4e7b2.sock --pid-file /var/lib/glusterd/vols/content1/rebalance/88a7a0a0-91e6-4228-8d66-5b11b3b4e7b2.pid -l /var/log/glusterfs/content1-rebalance.log

This is still a problem in 3.4.1.

This bug was fixed as part of https://bugzilla.redhat.com/show_bug.cgi?id=1144792, and the fix will be available in glusterfs-3.4.6. Hence, moving the state of the bug to MODIFIED.

GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained; at the moment these are 3.6 and 3.5. This bug has been filed against the 3.4 release and will not get fixed in a 3.4 version any more.

Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs". If there is no response by the end of the month, this bug will be closed automatically.

GlusterFS 3.4.x has reached end-of-life. If this bug still exists in a later release, please reopen it and change the version, or open a new bug.
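The memory growth reported above was observed by watching the rebalance process in `ps`. For anyone verifying whether a newer release is still affected, a minimal sketch along these lines can log the resident set size of the rebalance daemon over time; the PID (21446) is taken from the `ps` output above, and the sampling interval and log path are arbitrary choices, not anything prescribed by Gluster:

```sh
#!/bin/sh
# Minimal sketch: sample RSS/VSZ of the rebalance glusterfs process
# (PID 21446 here, from the ps output above) every 10 minutes, so a
# leak shows up as a steadily growing RSS column in the log.
PID=21446
LOG=/var/tmp/rebalance-rss.log
while kill -0 "$PID" 2>/dev/null; do
    printf '%s ' "$(date '+%Y-%m-%d %H:%M:%S')" >> "$LOG"
    ps -o rss=,vsz= -p "$PID" >> "$LOG"
    sleep 600
done
```

If the release in use supports it, `gluster volume statedump <VOLNAME>` can additionally be used to capture memory pool statistics for closer inspection of where the growth comes from.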