
Plumbr detects memory leak

sleeper
Champ in-the-making
I have been having problems running out of PermGen space. My MaxPermSize is set to 1G, and I still have to restart the server about once a month because PermGen fills up.

So I attached plumbr to the JVM and it says that it found a memory leak.

<report>

Currently 121618 leaked objects exist of class
net.sf.ehcache.store.chm.ConcurrentHashMap$HashEntry

The objects are created at
net.sf.ehcache.store.chm.ConcurrentHashMap$Segment.put(java.lang.Object, int, java.lang.Object, boolean):457

and are being held
in table of net.sf.ehcache.store.chm.ConcurrentHashMap$Segment
in segments of net.sf.ehcache.store.chm.SelectableConcurrentHashMap
in map of net.sf.ehcache.store.MemoryStore
in memoryStore of net.sf.ehcache.Cache
in cache of org.alfresco.repo.cache.EhCacheAdapter
in sharedCache of org.alfresco.repo.cache.TransactionalCache
in childByNameCache of org.alfresco.repo.domain.node.ibatis.NodeDAOImpl
in nodeDAO of org.alfresco.repo.domain.permissions.AclDAOImpl
in fAclDAO of org.alfresco.repo.avm.AVMDAOs
in fgInstance of org.alfresco.repo.avm.AVMDAOs
in elementData of java.util.Vector
in classes of org.apache.catalina.loader.WebappClassLoader
in com.thoughtworks.xstream.core.util.CompositeClassLoader

</report>

I have been searching online for a fix, and I have seen that some people hit this error with Tomcat and resolve it by moving the jar file under $TOMCAT/common/lib, but I cannot do that.
Does anybody know what might be causing this error and what I might be able to do about it?

Thank you.
8 Replies

afaust
Legendary Innovator
Hello,

120k HashEntry instances don't seem too problematic - keep in mind that EhCache keeps elements in memory that are used often and are expensive to fetch, in order to speed up the application overall. The HashEntry instances are "just" the data structure elements that make up the cache, and depending on how many objects are cached, they can quickly add up.
Are there any other suspect instance types in the report output? Usually, when there is a memory leak, you can drill down to some type of complex Alfresco object that claims a lot of memory through containment with just a limited number of instances.

Also, the PermGen is usually the least affected by leaked objects, which reside on the Heap. PermGen is used for class objects, compiled code and interned Strings. Not all memory analyzers are capable of investigating PermGen issues. An example of what can eat up your PermGen are dynamically compiled JavaScript files used by the Rhino engine. To find out, you need to obtain a list of loaded classes from your environment / memory dump.
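For example - assuming a HotSpot JVM - you can have the JVM log every class it loads by adding the standard -verbose:class flag to your Tomcat startup options and then watching catalina.out for repeated [Loaded ...] lines:

JAVA_OPTS="$JAVA_OPTS -verbose:class"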

1G is a very extreme PermGen size - I usually make do with about 320M (repository-only Tomcat), even in long-running systems.

Regards
Axel

sleeper
Champ in-the-making
Thanks Axel, I agree it is a big PermGen, but it still fills up.
The report I have above is as detailed as it gets. I use MySQL as my authentication source via a plugin, and I believe there is an issue there.
I did find something interesting. Once I start Alfresco, I connect with VisualVM and there are about 25,000 classes loaded. I then run some test scripts that search and then get content. If I use the admin account, which is in the Alfresco database, the class count stays the same. However, when I run the same script with users from my MySQL auth database, the class count starts going up really quickly.
Of course, as that class count goes up, my PermGen use goes up. So it seems there is a problem with the way the plugin is used.
I am looking into that now.  Any ideas about that?
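In case it is useful to reproduce this, what I am watching in VisualVM is roughly equivalent to polling the standard ClassLoadingMXBean (a minimal sketch, not my actual test script):

import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

public class ClassCountWatcher {
    public static void main(String[] args) throws InterruptedException {
        ClassLoadingMXBean classes = ManagementFactory.getClassLoadingMXBean();
        while (true) {
            // on a warmed-up server this count should stay roughly flat;
            // here it climbs with every request made by a MySQL-auth user
            System.out.println("currently loaded: " + classes.getLoadedClassCount()
                    + ", total ever loaded: " + classes.getTotalLoadedClassCount());
            Thread.sleep(5000);
        }
    }
}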

Thank you.

sleeper
Champ in-the-making
I am still having problems with this.
I started the JVM and had it list all the classes loaded.

Every time I make even a simple request, these classes get loaded, and I don't see why they get loaded again every time:

[Loaded sun.reflect.GeneratedConstructorAccessor241 from __JVM_DefineClass__]
[Loaded sun.reflect.GeneratedConstructorAccessor242 from __JVM_DefineClass__]
[Loaded sun.reflect.GeneratedConstructorAccessor243 from __JVM_DefineClass__]
[Loaded sun.reflect.GeneratedConstructorAccessor244 from __JVM_DefineClass__]
[Loaded org.mozilla.javascript.gen.workspace___SpacesStore_Company_Home_Data_Dictionary_Web_Scripts_Extensions_10 from file:/var/local/cms/server/tomcat/webapps/alfresco/WEB-INF/lib/js.jar]
[Loaded org.mozilla.javascript.gen.workspace___SpacesStore_Company_Home_Data_Dictionary_Web_Scripts_Extensions_11 from file:/var/local/cms/server/tomcat/webapps/alfresco/WEB-INF/lib/js.jar]
[Loaded org.mozilla.javascript.gen.workspace___SpacesStore_Company_Home_Data_Dictionary_Web_Scripts_Extensions_12 from file:/var/local/cms/server/tomcat/webapps/alfresco/WEB-INF/lib/js.jar]

Why does it load a new class for the FTL output?

Thanks.

afaust
Legendary Innovator
I don't believe it is loading these classes for the FTL output but for the actual web script logic (the JavaScript controller). This is Rhino script compilation, which is known to create memory problems when compiling dynamic strings (which a script inside the Data Dictionary is basically considered to be, since it can change at any time); see:
- https://bugzilla.mozilla.org/show_bug.cgi?id=49205
- https://bugzilla.mozilla.org/show_bug.cgi?id=243305
- https://bugzilla.mozilla.org/show_bug.cgi?id=49925

The simplest way to avoid this is to configure the Rhino script processor not to optimize JavaScript files too much. By default, Alfresco should not use an aggressive optimization level - it does not specify one, and the optimization level then defaults to -1, meaning "no optimizations" / "no class generation".
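To illustrate the difference, here is a minimal standalone Rhino sketch (not the Alfresco wiring; the printed class names are what I would expect, so verify in your environment):

import org.mozilla.javascript.Context;
import org.mozilla.javascript.Script;

public class RhinoOptLevelDemo {
    private static String scriptClassNameAt(int optimizationLevel) {
        Context cx = Context.enter();
        try {
            cx.setOptimizationLevel(optimizationLevel);
            Script script = cx.compileString("1 + 1", "demo", 1, null);
            return script.getClass().getName();
        } finally {
            Context.exit();
        }
    }

    public static void main(String[] args) {
        // level 0 or higher: Rhino generates a new class per compilation,
        // named along the lines of org.mozilla.javascript.gen.demo_1 -
        // exactly the kind of class that piles up in your PermGen
        System.out.println(scriptClassNameAt(0));
        // level -1: interpreted mode, no class generation at all
        System.out.println(scriptClassNameAt(-1));
    }
}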
Have you or any of the addons you have installed changed anything within the RhinoScriptProcessor?

Regards
Axel

sleeper
Champ in-the-making
Okay, I will buy that. It is loading from the js.jar file, and that is Rhino. I don't think I have any third-party addons installed.
I downloaded the latest Rhino js.jar and I have the same issue. It just keeps loading those generated classes and incrementing the number at the end.
Can you tell me where the Rhino stuff is configured? I have not been able to find any configuration that refers to Rhino.

sleeper
Champ in-the-making
I was not able to find where in the Alfresco code to set the optimization level, so I edited the Rhino compile function to always set the optimization level to -1, but it still seems to be adding classes for every output.
I would think that means it is always set now - since it is set inside the function, no matter what else tries to set it, it should be what I want it to be.

sleeper
Champ in-the-making
Okay, from what I have learned so far, I would agree with you.
I have done everything I can to get Rhino to not load each script as a class. I have changed the Rhino jar so that no matter what you set, the optimization level is -1. I have also gone into the RhinoScriptProcessor.java file and set the optimization level to -1 for anything that was a cx,
so all the compileScript calls I could find have been set to -1.
But still, every time I make a call with an FTL output, it loads a class. Do you have any ideas about where else I could look that could cause the same issue?
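One thing I still want to rule out is that my modified jar is not the one actually being picked up. A quick check (a minimal sketch; run it with the same classpath as the server) is to print which jar the Rhino Context class really comes from:

import org.mozilla.javascript.Context;

public class WhichRhino {
    public static void main(String[] args) {
        // prints the location of the jar that actually provides Rhino
        System.out.println(Context.class.getProtectionDomain()
                .getCodeSource().getLocation());
    }
}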

Thank you

afaust
Legendary Innovator
Hello,

I don't know why you have added a custom js.jar into Alfresco - there should only be a rhino-js-16R7.jar in there. You should not need to modify any Rhino code. The modification in RhinoScriptProcessor to set the optimization level to -1 should be enough, and it only needs to be made in the blocks where cx.compileXXX is called.

If the web scripts in your Web Scripts Extensions folder are meant to be long-term web scripts, you could also move them from the Repository onto the classpath, so the compiled scripts no longer count as dynamic and uncacheable.
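For example - assuming a default Tomcat setup, with <your package path> standing in for your own folder structure - the files would then live under:

tomcat/shared/classes/alfresco/extension/templates/webscripts/<your package path>/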

Regards
Axel