
How to Scale Up in Alfresco 6.2 (Docker-Compose)

ujkim
Confirmed Champ

Hello! We're using the Alfresco 6.2 Community Docker Compose service. (Link)

We have a question about scaling up.

Currently, we are operating the service with an HAProxy configuration across 2 servers.

However, I'm thinking about how to scale up in a situation where there is not much memory available.

Is there a way to scale up the Alfresco Docker Compose Community service without adding memory to the device?

We ask for help from the Alfresco team!

I'll wait for your thoughts. Thanks all the time for the help!

Uijoong-Kim

4 REPLIES

afaust
Legendary Innovator

What do you want to scale up specifically, "without adding memory to the device"? Number of concurrent users? Number of items (files / folders) in the DB? Number of elements held in cache to avoid DB access? Size of the SOLR index?

Depending on where specifically you are concerned about scaling, there may be configurations / design considerations to take into account that can help limit the growth of required memory. But generally speaking, you should be able to adequately scale the available memory to match your specific use case.
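
For illustration, the Community docker-compose file caps each service's memory with mem_limit, so that is usually where per-service sizing is adjusted. A minimal sketch; the values below are illustrative assumptions, not tuned recommendations:

# Per-service memory caps in docker-compose (version "2" format).
# Values are illustrative assumptions, not tuned recommendations.
services:
  alfresco:
    mem_limit: 1700m
  solr6:
    mem_limit: 2g
  postgres:
    mem_limit: 512m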

As one of the developers of aldica, I can at least tell you that the addon can help limit the amount of physical memory an Alfresco system needs: it uses off-heap caching and swapping to disk to scale up the number of cached elements without having to scale the allocated memory as much as one would have to in default Alfresco.
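
For anyone curious, a minimal sketch of enabling it, with property names assumed from the aldica documentation; verify them against the version you deploy:

# Sketch: enabling aldica's Ignite-backed off-heap caches in
# alfresco-global.properties (property names assumed from the aldica
# docs; verify against your aldica version)
aldica.core.enabled=true
aldica.caches.enabled=true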

ujkim
Confirmed Champ

Thanks for your response!

I'm sorry for the late response. My earlier question was framed with the wrong information. Sorry!

I was considering a scale-out plan.

Recently, Alfresco's memory usage (confirmed through the Solr dashboard) has been running at a high rate (90 to 95%), and the service kept dying due to insufficient heap size. (Currently, as a temporary measure, we have applied a script that checks the memory usage and automatically restarts the service. We are aware that this cannot be a fundamental solution.)
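
For reference, the watchdog is roughly like the sketch below; the service name (solr6) and the 90% threshold shown here are illustrative, not our exact script:

#!/bin/sh
# Rough sketch of a memory watchdog like the one described above.
# The service name (solr6) and the threshold are examples only.
THRESHOLD=90
USED=$(free | awk '/Mem:/ {printf "%d", $3/$2*100}')
if [ "$USED" -ge "$THRESHOLD" ]; then
    docker-compose restart solr6
fi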

We are currently thinking about ways to improve operational stability, as it's not possible to add devices.

Putting weight on the scale-out plan, we are considering scaling out the Docker container services currently in use through Kubernetes. (However, it is currently difficult to apply.)

I don't know why the usage ratio of Physical Memory keeps increasing as we use it.

Is there any way to reduce the usage ratio of physical memory? (e.g. by turning off unnecessary services, etc...)

We ask the Alfresco team for help. @afaust 

(In addition, Solr's physical memory usage starts at 50 GB right from initial startup (after a restart action). It's probably due to cache, so I cleared the page cache as shown below and restarted; it is currently running.)

# flush file system buffers, then drop the OS page cache (this does not touch the Java heap)
sync; echo 1 > /proc/sys/vm/drop_caches

afaust
Legendary Innovator

"I don't know why the usage ratio of Physical Memory keeps increasing as we use it." - Well, for SOLR, the more elements are indexed, overhead can grow as some internal structures are sized based on the number of total elements. Without having any idea about your system sizing (number of elements, differentiation of content vs. non-content nodes, number of ACLs, number of primary vs. secondary child associations), use case and specific search service version, it is hard to gauge anything.

It is also not clear to me what you are actually measuring - real physical memory on the OS layer, or heap vs. max heap allowance on the Java layer. The /proc/sys/vm/drop_caches mechanism, for example, has nothing to do with SOLR directly and is exclusively for OS-level file system caching. If your index is 50 GiB or more in size, it would be perfectly normal and expected to see 50 GiB of physical memory used for caches, provided all applications are able to get their allotted / required memory: the OS is simply improving read access to the index as it is being used. Clearing those caches does free up memory, but is not really recommended, as it is likely to tank search performance.

In any case, those 50 GiB have nothing to do with the Java heap. If you get out-of-memory errors in SOLR but have 50 GiB of memory in caches, then it stands to reason you may simply not have configured a large enough heap for Java - any additional memory assigned to the Java heap would directly contribute to reducing/avoiding out-of-memory errors, while the maximum memory used for caching would decrease accordingly, keeping the absolute total the same.
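
To make that last point concrete: in the Community docker-compose file, the SOLR heap is set through the SOLR_JAVA_MEM environment variable of the search service. A minimal sketch, with a placeholder value to be sized against your actual index:

# Sketch: raising the SOLR Java heap in docker-compose. The 4g value is
# a placeholder; size it against your actual index and host memory.
solr6:
  environment:
    SOLR_JAVA_MEM: "-Xms4g -Xmx4g"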

A "scale out" in the context of search service could be done by applying SOLR sharding, e.g. splitting the index in multiple fragments and having those distributed over multiple hosts. Sharding typically becomes relevant around the medium, double-digit million range of number of elements in an Alfresco system.

Camron2152
Champ in-the-making

Thanks for the step-by-step tutorial. Works like a charm!
