How to resolve ActivitiOptimisticLockingException
01-12-2012 09:00 AM
You should modify the ParallelGatewayActivityBehavior class and comment out the following line:
execution.inactivate();
//lockConcurrentRoot(execution);
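For context, the join logic those two lines live in looks roughly like this in Activiti 5.x (a paraphrase from memory of the engine sources; only execution.inactivate() and lockConcurrentRoot(execution) are confirmed by the snippet above, the surrounding code may differ in your version):

public void execute(ActivityExecution execution) throws Exception {
    PvmActivity activity = execution.getActivity();
    execution.inactivate();
    lockConcurrentRoot(execution); // the line the workaround comments out
    // count how many concurrent executions have already arrived at this join
    List<ActivityExecution> joinedExecutions = execution.findInactiveConcurrentExecutions(activity);
    int nbrOfExecutionsToJoin = activity.getIncomingTransitions().size();
    if (joinedExecutions.size() == nbrOfExecutionsToJoin) {
        // last execution in: continue over the outgoing flows with a single execution
        execution.takeAll(activity.getOutgoingTransitions(), joinedExecutions);
    }
}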
6 REPLIES
01-12-2012 09:32 AM
Most certainly not!
If we do that, we lose synchronization at the parallel join.
01-12-2012 08:51 PM
No, I don't think so. In fact, when multiple concurrent executions arrive at the same join node, one of them will update the parent execution even though none of the parent execution's fields have changed, so I think that is an issue.
01-12-2012 08:53 PM
To verify this, create an audit table and a trigger in the Activiti database:

-- audit table with the same columns as ACT_RU_EXECUTION, plus a sequence number
create table ACT_RU_HIST_EXECUTION as select * from ACT_RU_EXECUTION where 1=2;
alter table ACT_RU_HIST_EXECUTION add SEQ NUMBER(19) not null;
create sequence ACT_RU_HIST_EXECUTION_SEQ;

-- record every insert/update on ACT_RU_EXECUTION in write order
CREATE OR REPLACE TRIGGER ACT_RU_EXECUTION_AFTER_UPDATE
AFTER INSERT OR UPDATE ON ACT_RU_EXECUTION
FOR EACH ROW
BEGIN
  INSERT INTO ACT_RU_HIST_EXECUTION
    (SEQ, ID_, REV_, PROC_INST_ID_, BUSINESS_KEY_, PARENT_ID_, PROC_DEF_ID_,
     SUPER_EXEC_, ACT_ID_, IS_ACTIVE_, IS_CONCURRENT_, IS_SCOPE_)
  SELECT ACT_RU_HIST_EXECUTION_SEQ.NEXTVAL,
         :NEW.ID_, :NEW.REV_, :NEW.PROC_INST_ID_, :NEW.BUSINESS_KEY_,
         :NEW.PARENT_ID_, :NEW.PROC_DEF_ID_, :NEW.SUPER_EXEC_, :NEW.ACT_ID_,
         :NEW.IS_ACTIVE_, :NEW.IS_CONCURRENT_, :NEW.IS_SCOPE_
  FROM DUAL;
END;

-- then replay the recorded changes:
select * from ACT_RU_HIST_EXECUTION order by seq;

After you create this trigger in the Activiti database and run the process, check the entries in ACT_RU_HIST_EXECUTION; you will find my theory is right.
01-13-2012 06:11 AM
Hi lxjchengcu,
We use optimistic locking on the parent execution to make sure only one execution continues after the parallel join. That is what I meant by "synchronization".
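To make that concrete, here is a minimal sketch of a revision-guarded update (illustration only, not the engine's actual code; the Spring JdbcTemplate and the helper class are my own, only the ACT_RU_EXECUTION columns come from this thread):

import org.activiti.engine.ActivitiOptimisticLockingException;
import org.springframework.jdbc.core.JdbcTemplate;

public class RevisionGuardedUpdateSketch {
    // Hypothetical helper: move an execution to its next activity only if
    // nobody else has bumped REV_ in the meantime.
    static void moveExecution(JdbcTemplate jdbc, String executionId, int rev, String nextActivityId) {
        int updated = jdbc.update(
            "UPDATE ACT_RU_EXECUTION SET REV_ = ?, ACT_ID_ = ? WHERE ID_ = ? AND REV_ = ?",
            rev + 1, nextActivityId, executionId, rev);
        if (updated == 0) {
            // a concurrent transaction won the race; this one rolls back
            throw new ActivitiOptimisticLockingException(
                "execution " + executionId + " was updated by another transaction concurrently");
        }
    }
}

Only one of the joining executions sees updated == 1, and that is the synchronization at the join.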
01-15-2012 09:16 PM
Why not allow multiple sub-executions to pass the join node? We do have such a requirement in our business. If one execution fails to pass the join node because of the optimistic locking mechanism, the user has to clear the lock info and exception info fields to let the job executor run it again. If the last service task takes multiple hours and has to be run again just because of the optimistic locking, that is hard to explain.
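As a side note, the retries can usually be restored through the API instead of editing the lock and exception columns by hand; a minimal sketch, assuming the Activiti 5.x ManagementService (the retry count of 3 is arbitrary):

import java.util.List;
import org.activiti.engine.ManagementService;
import org.activiti.engine.runtime.Job;

public class RearmFailedJobs {
    // Re-arm every job that failed with an exception so the job executor
    // will acquire and run it again.
    static void rearm(ManagementService managementService) {
        List<Job> failedJobs = managementService.createJobQuery().withException().list();
        for (Job job : failedJobs) {
            managementService.setJobRetries(job.getId(), 3);
        }
    }
}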
01-15-2012 09:48 PM
Hi meyerd,
I understand what you mean.
A and B are parallel executions, and C is their parent, the process instance.
After A runs the last service task on its branch, its activity is updated to the join activity, and the version of C is incremented.
Then B runs the last service task on its branch. The Activiti engine fetches the latest state of A and C from the DB for B, so A and B are deleted, and the activity in C is updated to the activity right after the join. When both A and B try to update C, the ActivitiOptimisticLockingException occurs. That is the normal case.
In general there is no problem, but we may forget one thing: suppose the last service task of B runs the following piece of code

public void execute(DelegateExecution execution) throws Exception {
    ExecutionEntity executionEntity = (ExecutionEntity) execution;
    // getVariables() collects variables from every scope up to the process
    // instance, so it also loads the parent execution C into the session cache
    Map<String, Object> variables = execution.getVariables();
}

Since we want all the variables, including those of process instance C, the execution object for C is fetched from the DB and put into the cache at this point. After B finishes the last service task, it does not query C from the DB again; it takes the copy from the cache instead. But C has already been updated by A, so that is the conflict.
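If that diagnosis is right, one mitigation (my own suggestion, not something the engine prescribes) is to read only the branch-local scope in delegates on parallel branches, so the shared parent execution is never pulled into the cache; this only works when the task does not actually need process-instance variables:

import java.util.Map;
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;

public class BranchLocalServiceTask implements JavaDelegate {
    public void execute(DelegateExecution execution) throws Exception {
        // getVariablesLocal() stays in this execution's own scope, so the
        // parent execution C is never fetched and cannot go stale in the cache.
        Map<String, Object> localVariables = execution.getVariablesLocal();
    }
}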
