Alfresco 3.4d - Running out of disk space

05-13-2012 07:30 PM
Hi,
We are running the community edition of Alfresco 3.4d and we are noticing that after posting around 100 documents of about 22 KB each through the web service interface (individually - one file is sent at a time), the machine runs out of disk space within 2-3 hours. The machine has a 60 GB disk and 6 GB of RAM. From investigation we noticed that alf_data is around 2.9 GB and the index is 1.9 GB. Also, the CPU stays at 100% the whole time after the documents are added.
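For context, each document is pushed one at a time roughly along the lines of the standard Alfresco Web Services client sample below (a simplified sketch; the endpoint, credentials, target folder and file name here are placeholders rather than our real values):

import org.alfresco.webservice.content.ContentServiceSoapBindingStub;
import org.alfresco.webservice.repository.UpdateResult;
import org.alfresco.webservice.types.CML;
import org.alfresco.webservice.types.CMLCreate;
import org.alfresco.webservice.types.ContentFormat;
import org.alfresco.webservice.types.NamedValue;
import org.alfresco.webservice.types.ParentReference;
import org.alfresco.webservice.types.Reference;
import org.alfresco.webservice.types.Store;
import org.alfresco.webservice.util.AuthenticationUtils;
import org.alfresco.webservice.util.Constants;
import org.alfresco.webservice.util.Utils;
import org.alfresco.webservice.util.WebServiceFactory;

public class SingleFileUpload
{
    public static void main(String[] args) throws Exception
    {
        // Placeholder endpoint and credentials
        WebServiceFactory.setEndpointAddress("http://localhost:8080/alfresco/api");
        AuthenticationUtils.startSession("admin", "admin");
        try
        {
            // Create one content node under Company Home
            Store store = new Store(Constants.WORKSPACE_STORE, "SpacesStore");
            ParentReference parent = new ParentReference(
                    store, null, "/app:company_home", Constants.ASSOC_CONTAINS, null);
            String name = "document-" + System.currentTimeMillis() + ".pdf";
            parent.setChildName(Constants.createQNameString(Constants.NAMESPACE_CONTENT_MODEL, name));

            NamedValue[] props = new NamedValue[] { Utils.createNamedValue(Constants.PROP_NAME, name) };
            CMLCreate create = new CMLCreate("1", parent, null, null, null, Constants.TYPE_CONTENT, props);
            CML cml = new CML();
            cml.setCreate(new CMLCreate[] { create });

            UpdateResult[] results = WebServiceFactory.getRepositoryService().update(cml);
            Reference node = results[0].getDestination();

            // Write the ~22 KB PDF content onto the new node (sample.pdf is a placeholder file)
            byte[] bytes = java.nio.file.Files.readAllBytes(java.nio.file.Paths.get("sample.pdf"));
            ContentServiceSoapBindingStub contentService = WebServiceFactory.getContentService();
            contentService.write(node, Constants.PROP_CONTENT, bytes,
                    new ContentFormat("application/pdf", "UTF-8"));
        }
        finally
        {
            AuthenticationUtils.endSession();
        }
    }
}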
The content store mainly contains PDF files of about 20 KB, and there are about 125,000 of them.
The main folder increasing in size is /alf_data/lucene-indexes/workspace/SpacesStore/.
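To quantify the growth we have been sampling the size of that directory with a small utility along the lines of the sketch below (the path is taken from our install; adjust as needed):

import java.io.File;

public class IndexSizeSampler
{
    // The index directory that keeps growing on our box
    private static final String INDEX_DIR = "/alf_data/lucene-indexes/workspace/SpacesStore";

    public static void main(String[] args)
    {
        long bytes = sizeOf(new File(INDEX_DIR));
        System.out.printf("%s = %.1f MB%n", INDEX_DIR, bytes / (1024.0 * 1024.0));
    }

    // Recursively sum the size of all files under the given directory
    private static long sizeOf(File file)
    {
        if (file.isFile())
        {
            return file.length();
        }
        long total = 0;
        File[] children = file.listFiles();
        if (children != null)
        {
            for (File child : children)
            {
                total += sizeOf(child);
            }
        }
        return total;
    }
}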
Our JVM startup configuration:
-server -d64 -Xss1M -Xms2G -Xmx4G -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:NewSize=1G -XX:MaxPermSize=256M -XX:CMSInitiatingOccupancyFraction=80 -XX:+UseParNewGC
Some information from our repository configuration is below:

index.recovery.mode=VALIDATE
#
# Properties for merge (note this does not affect the final index segment which will be optimised)
# Max merge docs only applies to the merge process not the resulting index which will be optimised.
#
lucene.indexer.mergerMaxMergeDocs=1000000
lucene.indexer.mergerMergeFactor=9
lucene.indexer.mergerMergeBlockingFactor=1
lucene.indexer.mergerMaxBufferedDocs=-1
lucene.indexer.mergerRamBufferSizeMb=16
#
# Properties for delta indexes (note this does not affect the final index segment which will be optimised)
# Max merge docs only applies to the index building process not the resulting index which will be optimised.
#
lucene.indexer.writerMaxMergeDocs=1000000
lucene.indexer.writerMergeFactor=9
lucene.indexer.writerMaxBufferedDocs=-1
lucene.indexer.writerRamBufferSizeMb=16
#
# Target number of indexes and deltas in the overall index and what index size to merge in memory
#
lucene.indexer.mergerTargetIndexCount=5
lucene.indexer.mergerTargetOverlayCount=5
lucene.indexer.mergerTargetOverlaysBlockingFactor=1
lucene.indexer.maxDocsForInMemoryMerge=10000
lucene.indexer.maxRamInMbForInMemoryMerge=16
lucene.indexer.maxDocsForInMemoryIndex=10000
lucene.indexer.maxRamInMbForInMemoryIndex=16
Any input to help resolve this issue would be appreciated.
Please let us know if you need further information.
Thanks
Varun
Labels:
- Archive
1 REPLY

06-14-2012 09:44 AM
I think this can be due to indexing all versions of the uploaded documents, or a content transformer error. Do you need versioning, or have you tried turning it off? Also try turning off the transformer for application/pdf and check whether the behaviour still happens.
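A quick way to check whether versioning is actually being applied is to count the content nodes carrying cm:versionable, for example with a Lucene query over the same web services API (an untested sketch; the endpoint and credentials are placeholders):

import org.alfresco.webservice.repository.QueryResult;
import org.alfresco.webservice.repository.RepositoryServiceSoapBindingStub;
import org.alfresco.webservice.types.Query;
import org.alfresco.webservice.types.ResultSetRow;
import org.alfresco.webservice.types.Store;
import org.alfresco.webservice.util.AuthenticationUtils;
import org.alfresco.webservice.util.Constants;
import org.alfresco.webservice.util.WebServiceFactory;

public class VersionableCheck
{
    public static void main(String[] args) throws Exception
    {
        // Placeholder endpoint and credentials
        WebServiceFactory.setEndpointAddress("http://localhost:8080/alfresco/api");
        AuthenticationUtils.startSession("admin", "admin");
        try
        {
            RepositoryServiceSoapBindingStub repositoryService = WebServiceFactory.getRepositoryService();
            Store store = new Store(Constants.WORKSPACE_STORE, "SpacesStore");

            // Count content nodes that carry the versionable aspect
            Query query = new Query(Constants.QUERY_LANG_LUCENE,
                    "TYPE:\"cm:content\" AND ASPECT:\"cm:versionable\"");
            QueryResult result = repositoryService.query(store, query, false);
            ResultSetRow[] rows = result.getResultSet().getRows();

            // Note: the server may page large result sets, so this reflects only the first batch
            System.out.println("Versionable content nodes: " + (rows == null ? 0 : rows.length));
        }
        finally
        {
            AuthenticationUtils.endSession();
        }
    }
}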
