Multiple exceptions when stressing a workflow

rufini
Champ in-the-making
Hi. I'm having random errors when stressing my app.

I made a workflow with a UserTask that times out after 200 sec.

With JMeter, I set up:

1) A GET request to fetch the execution id ("askForExecutionID")
2) A POST of that execution id, together with some extra data to process ("processData")
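
Roughly, the two endpoints do something like this (a simplified sketch, not my exact code; here I start the process instance in the GET just to keep the example short, and the names are made up):

   @Controller
   public class StressController {

      @Autowired private RuntimeService runtimeService;
      @Autowired private TaskService taskService;

      // GET "askForExecutionID": start an instance and return its execution id
      @RequestMapping(value = "/askForExecutionID", method = RequestMethod.GET)
      @ResponseBody
      public String askForExecutionID() {
         return runtimeService.startProcessInstanceByKey("someProcess").getId();
      }

      // POST "processData": complete the user task waiting on that execution
      @RequestMapping(value = "/processData", method = RequestMethod.POST)
      @ResponseBody
      public void processData(@RequestParam String executionId, @RequestParam String data) {
         Task task = taskService.createTaskQuery()
               .processInstanceId(executionId).singleResult();
         Map<String, Object> vars = new HashMap<String, Object>();
         vars.put("data", data);
         taskService.complete(task.getId(), vars);
      }
   }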

In the JMeter Thread Group, I set 100 threads and a ramp-up period of 10 seconds.

I start Tomcat, run JMeter, and after a while all 100 "askForExecutionID" requests come back OK and [usually] all 100 "processData" requests as well.

But when I run it a few times [waiting for all threads to finish], I start to get random errors [about 13-25%] on the "processData" requests.

The exceptions vary and include:


org.activiti.engine.ActivitiOptimisticLockingException: TimerEntity[39844] was updated by another transaction concurrently


org.activiti.engine.ActivitiOptimisticLockingException: ExecutionEntity[6246] was updated by another transaction concurrently


org.activiti.engine.ActivitiException: Cannot find task with id 6432


org.apache.ibatis.exceptions.PersistenceException:
### Error updating database.  Cause: com.mysql.jdbc.exceptions.MySQLTransactionRollbackException: Deadlock found when trying to get lock; try restarting transaction
### The error may involve org.activiti.engine.impl.persistence.entity.HistoricTaskInstanceEntity.updateHistoricTaskInstance-Inline
### The error occurred while setting parameters
### Cause: com.mysql.jdbc.exceptions.MySQLTransactionRollbackException: Deadlock found when trying to get lock; try restarting transaction

My config:

org.springframework.jdbc.datasource.SimpleDriverDataSource (com.mysql.jdbc.Driver)
org.springframework.jdbc.datasource.DataSourceTransactionManager


   <bean id="processEngineConfiguration" class="org.activiti.spring.SpringProcessEngineConfiguration">
      <property name="dataSource" ref="dataSource" />
      <property name="transactionManager" ref="transactionManager" />
      <property name="databaseSchemaUpdate" value="true" />
      <property name="jobExecutorActivate" value="true" />
      <property name="deploymentResources" value="some.bpmn20.xml" />
      <property name="beans">
         <map>
            <entry key="timeoutTask" value-ref="timeoutTask"/>
         </map>
      </property>
   </bean>


Please let me know which config data I can post to make this question clearer.

Thanks in advance!
rufini
12 REPLIES

rufini
Champ in-the-making
Update:

I've found that the scheduled task (quartz) that trims the history is causing the Deadlock exception.

My task looks like:

 
  this.jdbcTemplate.update(
    "DELETE FROM ACT_HI_ACTINST WHERE END_TIME_ < ?", olderDate);

  this.jdbcTemplate.update(
    "DELETE FROM ACT_HI_PROCINST WHERE END_TIME_ < ?", olderDate);

  this.jdbcTemplate.update(
    "DELETE FROM ACT_HI_TASKINST WHERE END_TIME_ < ?", olderDate);

Any idea how I could avoid the deadlock?

Thanks again,
rufini

ronald_van_kuij
Champ on-the-rise
First of all, isn't the delete of the process instance cascading? It might not be, I'm just not sure.

Secondly, an index on the END_TIME_ columns might help if it is not already there. It might prevent full table scans and thus locking. But… I'm no DB expert at all (more of a noob still, actually).
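
Something along these lines might do it (untested; the index names are just examples, and you could equally run the CREATE INDEX statements by hand in MySQL):

   // one-off DDL, e.g. via the same JdbcTemplate the trim task already uses
   jdbcTemplate.execute(
       "CREATE INDEX ACT_HI_ACTINST_END_IDX ON ACT_HI_ACTINST (END_TIME_)");
   jdbcTemplate.execute(
       "CREATE INDEX ACT_HI_PROCINST_END_IDX ON ACT_HI_PROCINST (END_TIME_)");
   jdbcTemplate.execute(
       "CREATE INDEX ACT_HI_TASKINST_END_IDX ON ACT_HI_TASKINST (END_TIME_)");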

rufini
Champ in-the-making
Thanks, ronald!

I added indexes on the "END_TIME_" fields, stressed the app with ridiculous parameters, ran the test hundreds of times, and it didn't fail once.

It fixed it 🙂

Thanks!
rufini

ronald_van_kuij
Champ on-the-rise
Thanks for reporting back. From now on I will never ever call myself a DB n00b anymore. 🙂

crico
Champ in-the-making
I have a listener attached to the "end" event of a process, with the following logic:

public void notify(DelegateExecution execution) throws Exception {
    String processInstanceId = execution.getProcessInstanceId();

    Context.getCommandContext().getHistoricProcessInstanceManager()
            .deleteHistoricProcessInstanceById(processInstanceId);
}

This generates the following exception when I launch multiple instances of the process:

Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException: Deadlock found when trying to get lock; try restarting transaction
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.Util.getInstance(Util.java:386)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1064)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3597)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3529)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1990)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2151)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2625)
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2119)
at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1362)
at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172)
at org.apache.commons.dbcp.DelegatingPreparedStatement.execute(DelegatingPreparedStatement.java:172)
at org.apache.ibatis.executor.statement.PreparedStatementHandler.update(PreparedStatementHandler.java:22)
at org.apache.ibatis.executor.statement.RoutingStatementHandler.update(RoutingStatementHandler.java:51)
at org.apache.ibatis.executor.SimpleExecutor.doUpdate(SimpleExecutor.java:29)
at org.apache.ibatis.executor.BaseExecutor.update(BaseExecutor.java:88)
at org.apache.ibatis.executor.CachingExecutor.update(CachingExecutor.java:43)
at org.apache.ibatis.session.defaults.DefaultSqlSession.update(DefaultSqlSession.java:122)

What could be the cause?

Thanks.

ronald_van_kuij
Champ on-the-rise
does this not happen if you start just 1?

crico
Champ in-the-making
does this not happen if you start just 1?

No, it only happens when launching multiple processes concurrently. Specifically, the error occurs in the deleteHistoricProcessInstanceById method.

crico
Champ in-the-making
I have also tried the following code:

            JdbcTemplate jdbcTemplate = (JdbcTemplate) ApplicationContextProvider
                    .getInstance().getBean("jdbcTemplate");

            jdbcTemplate.update(
                    "DELETE FROM ACT_HI_ACTINST WHERE PROC_INST_ID_ < ?",
                    processInstanceId);

            jdbcTemplate.update(
                    "DELETE FROM ACT_HI_PROCINST WHERE PROC_INST_ID_ < ?",
                    processInstanceId);

            jdbcTemplate.update(
                    "DELETE FROM ACT_HI_TASKINST WHERE PROC_INST_ID_ < ?",
                    processInstanceId);

and then I get this error:

org.springframework.dao.CannotAcquireLockException: PreparedStatementCallback; SQL [DELETE FROM ACT_HI_ACTINST WHERE PROC_INST_ID_ < ?]; Lock wait timeout exceeded; try restarting transaction; nested exception is java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction…

frederikherema1
Star Contributor
Deleting from the history through the HistoricXXManager (engine IMPL, not API) while the process is ending is generally not a good idea.

1. Can't you delete the history externally (from the method that is calling the API)?
2. If you don't want history, why don't you just turn the history level to none?
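
For option 1, a minimal sketch of what I mean, assuming you have the public HistoryService injected in the code that drives the process, so the delete runs in its own transaction after the instance has really ended instead of inside the ending process itself:

   // e.g. right after the call that completes the last task of the instance
   historyService.deleteHistoricProcessInstance(processInstanceId);

For option 2 it is just the "history" property on the processEngineConfiguration bean, set to the value "none".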