GlusterFS allocates thousands of FDs for one particular directory. This FD allocation occurs on every node that serves the GlusterFS volume, and the count grows from ~5k to 50k+ FDs in the span of 10-20 minutes. http://pastie.org/2633989 contains some of the information that was gathered for the IRC channel, where it was suggested that I file a bug report. If specific log files are required, let me know and I will attach them. Thank you
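For reference, below is a minimal sketch of one way the per-process FD growth can be watched. It assumes Linux /proc and that you pass the PID of the glusterfs NFS server process on the command line; the script is illustrative only and is not part of GlusterFS.

#!/usr/bin/env python
# Minimal sketch: poll the number of open FDs of a process via /proc/<pid>/fd.
# Assumes Linux; the PID of the glusterfs NFS server process is given as the
# first argument (hypothetical usage, not part of GlusterFS itself).
import os
import sys
import time

def fd_count(pid):
    """Return the number of entries in /proc/<pid>/fd (open file descriptors)."""
    return len(os.listdir("/proc/%d/fd" % pid))

if __name__ == "__main__":
    pid = int(sys.argv[1])
    while True:
        print("%s  pid=%d  open fds=%d"
              % (time.strftime("%H:%M:%S"), pid, fd_count(pid)))
        time.sleep(60)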
Can you attach the entire nfs log file? Did any of the servers go down and come back up? Are you using 3.1.6?
(In reply to comment #1)
> Can you attach the entire nfs log file? Did any of the servers go down and come back up? Are you using 3.1.6?

The servers did not go down or reboot, but certain daemons such as sshd were unable to open file handles for incoming connections. Other processes on the machines were also adversely affected. The nfs.log is too large to attach (8 MB gzipped). Can I email it to you instead? We are using 3.1.6. Thank you
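For reference, system-wide file handle pressure (the kind that keeps sshd and other daemons from opening files) can be checked with a small sketch like the one below. It only assumes Linux's /proc/sys/fs/file-nr interface and is not part of GlusterFS.

#!/usr/bin/env python
# Minimal sketch: report system-wide file handle usage on Linux.
# /proc/sys/fs/file-nr holds three numbers: allocated handles, unused
# allocated handles, and the system-wide maximum (fs.file-max).
def file_handle_usage():
    with open("/proc/sys/fs/file-nr") as f:
        allocated, unused, maximum = (int(x) for x in f.read().split())
    return allocated, unused, maximum

if __name__ == "__main__":
    allocated, unused, maximum = file_handle_usage()
    print("allocated=%d unused=%d max=%d (in use=%d)"
          % (allocated, unused, maximum, allocated - unused))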
Created attachment 684 file 1 of 10
Created attachment 685 file 2 of 10
Created attachment 686
Created attachment 687
Created attachment 688
Created attachment 689
Created attachment 690
Created attachment 691
Created attachment 692
Created attachment 693
Hi Anthony, can you reproduce the issue with the latest release (3.3)?
Anthony, you should not see this issue in 3.3, as we use "anonymous" stateless FDs there, so the open-fd leak problem should not occur at all. Please re-open this bug if you still see it.