
Too many files open

nyronian
Champ in-the-making
My system crashed with a pile of the following exceptions:

WARNING: Reinitializing ServerSocket
Oct 29, 2008 2:44:32 PM org.apache.tomcat.util.net.PoolTcpEndpoint acceptSocket
SEVERE: Endpoint ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=80] ignored exception: java.net.SocketException: Too many open files
java.net.SocketException: Too many open files
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
        at java.net.ServerSocket.implAccept(ServerSocket.java:453)
        at java.net.ServerSocket.accept(ServerSocket.java:421)
        at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
        at org.apache.tomcat.util.net.PoolTcpEndpoint.acceptSocket(PoolTcpEndpoint.java:408)
        at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:71)
        at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:685)
        at java.lang.Thread.run(Thread.java:619)

It's not clear to me where to look for a solution. Do I have a configuration issue on Linux?
5 REPLIES

nadaoneal
Champ in-the-making
I'm just experiencing this problem now. From searching the forums, it looks like it might be caused by, among other things, the wiki function in the 3.0 versions of the Labs software, and that it's fixed in the 3.1 and 3.2 versions.

If you're stuck on the 3.0 version for whatever reason, you can also raise the file descriptor limits so Alfresco can have more than 1024 files open. (Geez, I know!) Here's a good tutorial on how to do that: http://www.xenoclast.org/doc/benchmark/HTTP-benchmarking-HOWTO/node7.html

I should mention, of course, that increasing the file descriptor limit is a bit of a hack: you're going to keep having problems if files aren't being closed for whatever reason. Other things to look at are custom software you've added to your installation and Lucene. I recommend using Google to search this forum (with site:forums.alfresco.com) to get answers closer to your specific situation.
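For reference, raising the limit on Linux usually means editing /etc/security/limits.conf and re-logging in as the service user. A minimal sketch, assuming PAM-based limits and a service user named "alfresco" (the user name and the values below are illustrative, not from the tutorial):

```shell
# Check the current per-process limit on open file descriptors
ulimit -n

# To raise it for the user running Alfresco/Tomcat, add lines like these
# to /etc/security/limits.conf (user name and values are examples):
#   alfresco  soft  nofile  4096
#   alfresco  hard  nofile  8192

# Then log in again as that user and re-check that the new limit applies
ulimit -n
```

The soft limit is what the process actually gets; the hard limit is the ceiling the user may raise it to without root.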

nadaoneal
Champ in-the-making
Hi, I just realized that I was a little off in my advice above. Apparently it's normal to have a file handle limit of 4096 on Lucene systems:
http://wiki.alfresco.com/wiki/Search#File_Handles_and_Lucene
… so if you aren't running any custom software, it's totally legitimate, not a band-aid, to increase this limit per the documentation.

dmorozov
Champ in-the-making
Hello,
I got exactly the same exception, and the stack trace is exactly as described on Alfresco's wiki:
http://wiki.alfresco.com/wiki/Too_many_open_files

But!
May 10, 2011 9:39:09 AM org.apache.tomcat.util.net.JIoEndpoint$Acceptor run
SEVERE: Socket accept failed
java.net.SocketException: Too many open files
   at java.net.PlainSocketImpl.socketAccept(Native Method)
   at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
   at java.net.ServerSocket.implAccept(ServerSocket.java:453)
   at java.net.ServerSocket.accept(ServerSocket.java:421)
   at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
   at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:352)
   at java.lang.Thread.run(Thread.java:619)

alfresco@share:~$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 20000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks

So I have it set to 20000! How is it possible to get "too many open files"?

Moreover:
$ lsof -p 858 | wc -l
821

and I have fewer than 100 TCP connections to the server.

Can anybody help, or at least give me some ideas about where to look for a solution?

P.S. By the way, we have a pretty huge content repository (23 GB), but I believe that shouldn't affect the background scheduled indexing job, which only deals with new documents.
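When the lsof count looks healthy but the error persists, it can help to see exactly what kinds of descriptors the process holds. A small sketch, assuming Linux with /proc mounted ($$ stands in for the Tomcat pid so the example runs anywhere):

```shell
# Count open descriptors straight from /proc (substitute the Tomcat pid
# for $$ in practice; $$ is the current shell, used here for portability)
ls /proc/$$/fd | wc -l

# If lsof is installed, break the listing down by descriptor type to see
# what dominates (sockets, jar files, Lucene index files, ...):
#   lsof -p 858 | awk 'NR > 1 {print $5}' | sort | uniq -c | sort -rn
```

If the /proc count disagrees sharply with lsof, that's also a hint that the two tools are looking at different processes (e.g. a forked child).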

loftux
Star Contributor
Which of the recommended settings from the wiki page have you applied? What is your OS? Ubuntu, for example, requires extra configuration for changes to take effect; it may be the same for others.
Do you run as root? It may not work to increase file handles for root.
Did you run the command (for pid…) to check that your config changes have actually taken effect?
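One reliable way to do that per-process check on Linux is /proc/&lt;pid&gt;/limits, which shows the limits the kernel actually applies, regardless of what limits.conf says ($$ below is the current shell, used so the example runs anywhere; substitute the Tomcat pid in practice):

```shell
# The authoritative per-process limit, as enforced by the kernel
grep 'Max open files' /proc/$$/limits

# Note for Ubuntu/Debian: limits.conf only takes effect on a fresh login,
# and only if pam_limits is enabled, i.e. /etc/pam.d/common-session has:
#   session required pam_limits.so
```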

dmorozov
Champ in-the-making
Actually, it seems we found the issue.
We're on Ubuntu and had adjusted /etc/security/limits.conf, but forgot about the file-max setting in /etc/sysctl.conf.
We adjusted that file as well, following this article:
http://unixfoo.blogspot.com/2008/01/kernel-parameter-file-nr-and-file-max.html
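For anyone hitting the same wall: fs.file-max is the kernel-wide cap on open file handles across all processes, so a generous per-user ulimit doesn't help once it's exhausted. A sketch of the check and the persistent fix (the value shown is illustrative, not a recommendation):

```shell
# System-wide ceiling on open file handles
cat /proc/sys/fs/file-max
# Current usage: allocated handles, free handles, maximum
cat /proc/sys/fs/file-nr

# To raise the ceiling persistently, add a line like the following to
# /etc/sysctl.conf (value is illustrative) and run "sudo sysctl -p":
#   fs.file-max = 200000
```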

The issue is gone.