You are quite likely to run into this problem when using Ubuntu; it happened to me on Ubuntu 9.04 and Ubuntu 8.10.

This can happen when running Magnolia on Unix (including OSX) with a file-based database like Derby, or when using the FileBasedDBPersistenceManager.

The cause of the problem is the limit on the total number of file descriptors that may be open on any given system.

You can also solve this issue by getting rid of Derby and switching to database-backed persistence.
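
As a rough sketch of what such a switch can look like (assuming Jackrabbit's BundleDbPersistenceManager with a PostgreSQL database; the class and parameter names follow the Jackrabbit documentation, while the connection values below are only placeholders), the persistence manager element in the workspace configuration would be changed along these lines:

<PersistenceManager class="org.apache.jackrabbit.core.persistence.pool.BundleDbPersistenceManager">
  <!-- placeholder connection settings, adjust to your database -->
  <param name="driver" value="org.postgresql.Driver"/>
  <param name="url" value="jdbc:postgresql://localhost:5432/magnolia"/>
  <param name="user" value="magnolia"/>
  <param name="password" value="secret"/>
  <param name="databaseType" value="postgresql"/>
  <param name="schemaObjectPrefix" value="${wsp.name}_"/>
</PersistenceManager>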

Also see the Too Many Open Files section under Known issues in the documentation.

Symptoms

The symptoms vary depending on what was happening when the limit was hit.

It can be a refusal when trying to open a socket:

SEVERE: Endpoint ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=80] ignored exception: java.net.SocketException: Too many open files
java.net.SocketException: Too many open files

Failure to create a file:

ERROR  org.apache.jackrabbit.core.SearchManager SearchManager.java(onEvent:431) 17.01.2008 12:52:00  Error indexing node.
java.io.FileNotFoundException: /usr/local/tomcat/webapps/myApp/repositories/magnolia/workspaces/config/index/redo.log (Too many open files)

Or even an SQLException:

<Sorry, I don't have a stack trace at hand, but feel free to contribute yours>

On some systems you get a "Too many open files" message; on others no such clue is provided.

Verification

Check your system limits:

$ cat /proc/sys/fs/file-max

or

$ sysctl -a |grep fs.file-max

Each of the commands above should give you the system-wide limit on file descriptors. I've seen the error on systems where this limit was set to 250,000 or less. Please note that this number covers all open files on the given system (sockets are treated as files on *nix, so they contribute to the total as well).
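
To see how close the whole system currently is to that limit, you can (on Linux) also look at /proc/sys/fs/file-nr; the first number is the count of allocated file handles and the last one is the limit, i.e. the same value as fs.file-max:

$ cat /proc/sys/fs/file-nr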

To check the number of file descriptors used by any given process, use

$ ls -la /proc/<pid>/fd

or use the lsof (LiSt Open Files) command

$ lsof -p <pid of process>

or (my favourite)

$ lsof -p <pid> | wc -l
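
If you are not sure which process is eating the descriptors, a quick (and admittedly crude) way to list the biggest consumers system-wide is to group the lsof output by PID (the second column); run it as root to see all processes:

$ lsof | awk '{print $2}' | sort | uniq -c | sort -rn | head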

None of the above were sufficient under Ubuntu. Please also run

$ ulimit -n

1024 is too few. Note that ulimit is a bash built-in, not a program of its own; if you use a different shell, type bash first so that the ulimit command is available. Please apply both steps of the following solution.
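
Once the solution below is applied, you can also verify that an already running process (e.g. your Tomcat) really picked up the new limit, independently of your current shell, by looking at /proc/<pid>/limits (available on reasonably recent Linux kernels):

$ grep "Max open files" /proc/<pid>/limits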

Solution

System wide limit change 

Change the limit by editing /etc/sysctl.conf and setting fs.file-max to 5000.
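
The corresponding entry in /etc/sysctl.conf looks like this (5000 is the value suggested above; pick whatever suits your system):

fs.file-max = 5000
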
Then run

$ sysctl -p

to apply your changes to the running system. In some cases (seen on a system using kernel 2.6.28) this might still not be enough, and you need to restart the complete system before the change takes effect, even though the commands above make it appear that the change was already applied!
If this is still not enough, you might want to increase the number even further. This can happen on server systems running several applications that use the file system extensively (e.g. anything that uses a Derby DB, or any file-system-based indexing service).

The above changes the system-wide file descriptor limit. Check the documentation for your system to find out whether there are other limits that need to be changed. For example, OSX Leopard seems to set a limit of 1000 file descriptors per user; on other systems the limits are set per process.
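
On OSX you can inspect the current limits with launchctl; at least on Leopard-era versions you can also raise them for the current boot the same way, but how to make the change permanent differs between OSX versions, so treat this only as a starting point:

$ launchctl limit maxfiles
$ sudo launchctl limit maxfiles 5000 unlimited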

Per user/process limit change 

For Linux systems running PAM (e.g. Ubuntu) you will need to adjust /etc/security/limits.conf. The format of this file is <username> <limit type> <item> <value>.
E.g. to set the limit for the user tomcatservice, the following line would be used:

tomcatservice hard nofile 5000
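
Depending on the distribution you may need to raise the soft limit as well (see also comment 5 below), in which case both lines are needed:

tomcatservice soft nofile 5000
tomcatservice hard nofile 5000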

On other systems it might be the /etc/limits file that needs to be modified. To do the equivalent of the PAM setting above, add the following line:

tomcatservice N 5000


or use

ulimit -n 5000

This last command only affects the current user in the current session, and only root is allowed to raise the ulimit, so you would need to run Magnolia as root.

This call does not really work on my computers with Ubuntu: I was only able to reduce the number of open files; increasing it was prohibited. Under Ubuntu, please verify with the last proposed step (ulimit -n).
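
If Magnolia runs inside Tomcat, one practical place to raise the soft limit for just the server process is the startup environment script. This is only a sketch assuming a standard Tomcat layout, where catalina.sh sources bin/setenv.sh if that file exists; the value must not exceed the hard limit configured above:

# bin/setenv.sh, sourced by catalina.sh on startup
ulimit -n 5000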

5 Comments

  1. Running into this during installation can screw up the instance (there won't be corrupted nodes in the repository, but nodes can be missing completely), so it's better to reinstall afterwards.

  2. Also worth mentioning, in case someone else runs into similar problems: if you have GlassFish v3 running on Linux as the user "root", then the changes in /etc/security/limits.conf don't take effect and the deployment will fail.
    GlassFish should be started as an unprivileged user, and the limits set up accordingly.

  3. On Ubuntu (10.04 LTS) you might also need to edit /etc/pam.d/common-session and add the following line: 

    session required pam_limits.so

    You can then check with 

    su tomcat
    ulimit -n

    if the new values set in /etc/security/limits.conf have been applied successfully.

  4. Recently, we got the following error log without the usual "Too many open files" hint; increasing the maximum number of open file handles resolved the issue (just in case someone searches for this problem):

    Caused by: javax.jcr.RepositoryException: file backing binary value not found
    	at org.apache.jackrabbit.core.value.BLOBInTempFile.getStream(BLOBInTempFile.java:140)
    	at org.apache.jackrabbit.core.PropertyImpl.getStream(PropertyImpl.java:527)
    	at info.magnolia.jcr.wrapper.DelegatePropertyWrapper.getStream(DelegatePropertyWrapper.java:166)
    	at info.magnolia.jcr.wrapper.DelegatePropertyWrapper.getStream(DelegatePropertyWrapper.java:166)
    	at info.magnolia.jcr.wrapper.DelegatePropertyWrapper.getStream(DelegatePropertyWrapper.java:166)
    	at info.magnolia.cms.core.BinaryNodeData.getStream(BinaryNodeData.java:118)
    	... 117 more
    Caused by: java.io.FileNotFoundException: /srv/tomcat/application/temp/bin2538969685057012346.tmp
    	at org.apache.jackrabbit.core.data.LazyFileInputStream.<init>(LazyFileInputStream.java:63)
    	at org.apache.jackrabbit.core.value.BLOBInTempFile.getStream(BLOBInTempFile.java:138)
    	... 122 more
  5. On my Lubuntu 15.10 (after an upgrade from Ubuntu 12), the very first start of Magnolia 5.4.5 (under Oracle Java 8) also resulted in the "max open files limit" error and stopped the startup sequence.

    Increasing the SOFT limit (which was 1024 after the installation; the hard limit was 65536; check with "ulimit -Sn" and "ulimit -Hn", respectively) to 5000, i.e. adding the line

    <username> soft nofile 5000

    into "/etc/security/limits.conf", helped overcome this issue.