Bug 956781
Summary: | On volume rebalance, the max open file limit is reached because files don't seem to be closed after the move (too many open files) | |
---|---|---|---
Product: | [Community] GlusterFS | Reporter: | loopx <laurent.henssen>
Component: | distribute | Assignee: | Kaushal <kaushal>
Status: | CLOSED DUPLICATE | QA Contact: |
Severity: | high | Docs Contact: |
Priority: | unspecified | |
Version: | 3.3.1 | CC: | gluster-bugs, kaushal
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2013-04-26 06:40:03 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
loopx
2013-04-25 15:40:15 UTC
More information on the "source" side:

- files are still open
- but, at the end, CPU usage goes very high
- at the same time, the number of open files starts decreasing
- so, while the CPU is high, the open-file count goes down
- I suppose the open-file count will return to a "normal" number, and then the CPU will go back to a normal load

The client has trouble accessing the volume: it is very slow. The "gluster" command is also very slow (and can fail). If you need more information, just ask. Please also note that I'm running RHEL 5.9 on these 4 nodes.

On the source nodes:

- they hit the problem, but do not finish at the same time
- one is done and its open-file count is back to normal
- but the second source node has not finished yet and still has more than 20,000 open files

For now, the second source node is still at 100% (50 minutes in) and will probably take an hour before calming down. How many open files does Gluster require?

Thanks for reporting this. This issue has been reported earlier and is being tracked in another bug, for which we have a fix. Closing this as a duplicate.

*** This bug has been marked as a duplicate of bug 928631 ***
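For anyone hitting a similar symptom, the open-file counts the reporter describes can be watched per process through `/proc`. This is a minimal sketch, Linux-only; the brick process name `glusterfsd` is an assumption here (adjust it for your deployment):

```shell
#!/bin/sh
# Count the open file descriptors of a process via /proc (Linux-specific).
count_fds() {
    ls "/proc/$1/fd" 2>/dev/null | wc -l
}

# Report the fd count and soft fd limit for each brick daemon.
# "glusterfsd" is the usual brick process name; change it if yours differs.
for pid in $(pgrep glusterfsd); do
    limit=$(awk '/Max open files/ {print $4}' "/proc/$pid/limits")
    echo "pid $pid: $(count_fds "$pid") open fds (soft limit: $limit)"
done
```

Comparing the fd count against the per-process soft limit shows how close a rebalance is to "too many open files" long before it actually fails.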