03-20-2018 09:22 PM
When the engine is restarted, does it try to run a timer task even though the subprocess that contains it died in the last execution?
03-21-2018 07:37 AM
Could you explain a bit about your process and why you're asking the question? Any diagram you can include might help. I think the answer is that a timer task will be created when the transaction is committed on entering the subprocess, so it would depend on whether the engine 'dies' before that transaction is committed. I'm not sure what kind of death scenario you have in mind or why you're focused on it, so I'm not sure how precise an answer you're looking for.
03-21-2018 05:10 PM
Hi Ryan,
Thank you for replying. I am sorry I did not give you context about my requirements. I am running Activiti with Spring (with asyncJobExecutor). Right now we have the following model:
Requirements are as follows:
I want to run a parallel task alongside Job 76 (and the same for Job 77) that reports the status/progress of Job 76 and finishes as soon as Job 76 finishes. If the process instance dies, it should not be picked up by any other process engine sharing the same database, or by the same Spring application when it restarts.
I came up with the following logic using an event-based gateway that relies on a signal (from the architecture above): a job-complete signal is thrown whenever Job 76 is complete.
I have the following concerns:
1. If my Spring application dies (killing the process instance and the process engine) in the middle of the above process execution, and I then restart the application, will the above timer be picked up again (e.g. by org.activiti.engine.impl.asyncexecutor.AcquireTimerJobsRunnable)?
If yes, how can I stop timer tasks/events from being picked up again once the current process instance dies (neither by another process engine sharing the same Activiti database, nor by the same process engine when it restarts)?
2. If there is a better solution to my requirement, please let me know.
I appreciate it.
03-22-2018 06:37 AM
If you're running multiple instances of the engine, you can configure them so that only one instance runs the jobExecutor, and you could also configure it not to retry.
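Here is a minimal sketch of what that could look like with a SpringProcessEngineConfiguration bean (the class name, bean wiring and property values are just illustrative, and the exact configuration API can differ slightly between Activiti versions):

```java
import javax.sql.DataSource;

import org.activiti.spring.SpringProcessEngineConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class ActivitiEngineConfig {

    // Only the node that should actually execute timer/async jobs enables the
    // async executor; every other engine instance sharing the same database
    // leaves it switched off, so it never acquires those jobs.
    @Bean
    public SpringProcessEngineConfiguration processEngineConfiguration(
            DataSource dataSource, PlatformTransactionManager transactionManager) {
        SpringProcessEngineConfiguration config = new SpringProcessEngineConfiguration();
        config.setDataSource(dataSource);
        config.setTransactionManager(transactionManager);
        config.setDatabaseSchemaUpdate("true");
        // Set this to false on every instance except the one dedicated to running jobs.
        config.setAsyncExecutorActivate(true);
        return config;
    }
}
```

Retry behaviour is configured separately: by default a failed job is retried a few times, and if you don't want that you would need to change the failed-job handling (for example with a custom failed-job command factory), depending on your Activiti version.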
If your aim is to put a time-limit on job 76 then some kind of timer event seems like the way to go.
I'm guessing your job 76 is doing something long-running and intensive and maybe you're worried about it consuming memory and crashing the process that the engine runs in? Then presumably you don't want a retry because that would result in another crash?
If only one instance is running the jobExecutor and the long-running job crashes that instance, then I would not expect any other instance to retry it. When the instance that is running the jobExecutor is restarted, I am not sure whether it will retry or not; it might depend on whether the service task is async. You could create a POC with a service task that does a System.exit to end the Java process, then restart it and see whether the job runs again.
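A minimal sketch of such a POC delegate (the class name is a placeholder; you'd reference it from the service task with activiti:class, and mark the task async if that is the variant you want to test):

```java
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;

// POC-only delegate: it kills the JVM mid-execution so that, after restarting
// the application, you can observe whether the engine picks the job up again.
public class CrashSimulatingDelegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) {
        System.out.println("Simulating a crash during execution " + execution.getId());
        // Terminate the whole Java process before the engine can commit.
        System.exit(1);
    }
}
```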
03-22-2018 06:42 AM
If crashing the process is the concern then you could also think about calling out to whatever implements the dangerous logic in a separate Java process and communicating with it via messaging. Then if it failed it would not crash the Java process that the engine is running in. (I'm guessing you've already thought about whether Activiti Cloud's idea of service tasks as distinct microservices would be relevant. You could implement the messaging yourself if you want that approach without using Activiti Cloud.)
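A rough sketch of that hand-off pattern, assuming some messaging client of your own (the MessageSender interface and the queue name below are placeholders, not Activiti or Spring APIs):

```java
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;

// Placeholder abstraction over whatever broker you use (JMS, RabbitMQ, Kafka, ...).
interface MessageSender {
    void send(String queue, String payload);
}

// Instead of running the dangerous logic inside the engine's JVM, this delegate
// only publishes a work request; a separate worker process consumes it, so a
// crash in the worker cannot take the engine's process down with it.
public class HandOffToWorkerDelegate implements JavaDelegate {

    private final MessageSender messageSender;

    public HandOffToWorkerDelegate(MessageSender messageSender) {
        this.messageSender = messageSender;
    }

    @Override
    public void execute(DelegateExecution execution) {
        String payload = "{\"processInstanceId\":\"" + execution.getProcessInstanceId() + "\"}";
        messageSender.send("job76-work-requests", payload);
        // The process can then wait on a signal/message catch event until the
        // worker reports completion, much like the model described earlier.
    }
}
```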