
ERROR [org.alfresco.repo.node.db.DeletedNodeCleanupWorker]

thanthtoozin
Champ in-the-making
In the Alfresco log I see a background processing error:



21:00:05,328 ERROR [org.alfresco.repo.node.db.DeletedNodeCleanupWorker] Failed to purge nodes.  If the purgable set is too large for the available DB resources
   then the nodes can be purged manually as well.
   Set log level to WARN for this class to get exception log:
   Max commit time: 1386763200095
   Error:   

### Error updating database.  Cause: org.postgresql.util.PSQLException: ERROR: update or delete on table "alf_node" violates foreign key constraint "fk_alf_cass_pnode" on table "alf_child_assoc"
   Detail: Key (id)=(6385) is still referenced from table "alf_child_assoc".




Please explain to me why this error occurs and how I can solve it.
Alfresco version: 4.0d
Database: PostgreSQL 8.4

Thank you very much.
5 REPLIES

afaust
Legendary Innovator
Hello,

translation of the database error: a node that should be purged is still referenced by a child association (coming from another node) and thus cannot be purged, as doing so would violate the referential integrity of the database. At that point you would need to inspect the database tables and find the affected entries to see which node still references the purgable node. Basically, this error should not occur unless some piece of functionality has performed non-standard operations on the Alfresco database, or there is a bug in that specific Community version.
How this might be solved depends largely on which node references the purgable node (i.e. whether it may be purged as well), on any other issues that come to light afterwards, and on how comfortable you are with manually correcting the database state using SQL.
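
As a starting point for that inspection, a query along the following lines can show which child associations still reference the node id reported in the error. This is only a sketch and assumes the standard Alfresco 4.0 schema (tables alf_child_assoc and alf_node, with the foreign key fk_alf_cass_pnode on the parent_node_id column); verify the names against your own database before relying on it.

-- Sketch: list child associations that still reference node 6385
-- (the id from the error message); column names assume the stock schema
SELECT ca.id AS assoc_id,
       ca.parent_node_id,
       ca.child_node_id,
       ca.is_primary
FROM   alf_child_assoc ca
WHERE  ca.parent_node_id = 6385;

-- Sketch: inspect the state of the node that could not be purged
SELECT *
FROM   alf_node
WHERE  id = 6385;

Whether the referencing association (and the node it points to) can simply be removed or has to be repaired depends on what those rows turn out to be.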

Regards
Axel

tinlinnnsoe
Champ in-the-making
I am facing this error and I am also trying to solve it. The problem is: how can I reproduce this error, i.e. what (mis)operations would cause it to occur in the first place?

I found a possible solution to this problem (https://forums.alfresco.com/forum/installation-upgrades-configuration-integration/installation-upgra... [comment #9 by "ashwini"]), but I cannot try it on my production environment, so I want to try it in my development environment first. To do that, I need to create or reproduce the error in my development environment.
If I know the reason for the error, or if I can cause the same error in my environment, I can try that plan to solve it.

afaust
Legendary Innovator
Hello,

in such a case the usual approach would be to clone or copy the production database into a test environment and try the procedure there. It is often much more difficult to reconstruct a specific state just for testing purposes when you cannot know what caused it in the first place.
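
For the database part, PostgreSQL can clone a database on the same server in a single statement, which is often the quickest way to get a copy into a test instance. This is a sketch only: the database names alfresco and alfresco_test are placeholders, it must be run by a user with sufficient privileges, and the source database must have no active connections while the copy is made. The content store directory would still have to be copied at the filesystem level.

-- Sketch: clone the production database into a test database on the same
-- PostgreSQL server (placeholder names; no sessions may be connected to
-- the source database while this runs)
CREATE DATABASE alfresco_test
  TEMPLATE alfresco
  OWNER    alfresco;

Alternatively, a dump and restore with the standard PostgreSQL tools achieves the same result when the test environment lives on a different server.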

Regards
Axel

tinlinnnsoe
Champ in-the-making
Hello,
Sometimes it is difficult to copy the database and content store of the production environment to a testing environment because of certain restrictions.
If I cannot solve this error, what will happen in the future?
[At the moment the error is logged every day at 21:00.]

Best Regards,
Tin

afaust
Legendary Innovator
Hello,

fortunately, the failure to purge nodes should not be noticeable in normal operations. It will have the most impact on indexing, specifically on rebuilding indexes from scratch. If you are using Lucene instead of SOLR, the more "purgable" nodes you have in your database, the longer it will take to rebuild the index, and due to the way those nodes are handled during an index rebuild, the additional cost is substantial.
For every "purgable" node in the database, the index rebuild process will scan the entire index for any reference to that node and remove it (even though the index is guaranteed not to contain it during a full rebuild). This is the slowest possible operation during a rebuild and cannot be executed in parallel, so the effect is that your parallel rebuild is blocked while one thread scans the entire index.

In one of our customer projects we once had about 100,000 "purgable" nodes compared to 1.8 million normal ones, and the full rebuild took almost a day. The same rebuild without those 100,000 nodes finished in under 1.5 hours.
Note: this was on a 3.2/3.4 environment; I do not know whether there have been changes in that regard since.
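
To get a feeling for how many such nodes have accumulated, a count like the one below can be run against the database. This is a sketch that assumes the 3.x/4.0-style schema in which alf_node still carries a node_deleted flag; later 4.x versions track deleted nodes via the sys:deleted node type instead, so check which applies to your schema first.

-- Sketch: count nodes flagged as deleted, i.e. candidates for the
-- DeletedNodeCleanupWorker purge (assumes the node_deleted column exists)
SELECT count(*) AS purgable_nodes
FROM   alf_node
WHERE  node_deleted = true;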

Regards
Axel