5.9 and exclusive jobs

03-09-2012 07:24 AM
Hi,
Just wanted to share this observation; maybe other people have the same opinion.
The doc says about exclusive jobs:

"It is actually not a performance issue. Performance is an issue under heavy load. Heavy load means that all worker threads of the job executor are busy all the time."
I wanted to get a feeling for the impact of this and created a small test case (see the attached picture). Just imagine that the actual process is much larger than this, but all non-service tasks have been stripped from the process flow for the sake of clarity.
[attachment=0]exclusivejobs.jpg[/attachment]
All tasks are defined as

<serviceTask id="servicetaskX" name="Service Task" activiti:delegateExpression="${theServiceTask}" activiti:async="true" />

and the JavaDelegate implementation is just this:
@Component("theServiceTask")public class ServiceTask implements JavaDelegate { private AtomicInteger order = new AtomicInteger(0);@Override public void execute(DelegateExecution execution) throws Exception { order.getAndIncrement(); int sleeptimeMilliseconds = new Random().nextInt(5) * 1000; long id = Thread.currentThread().getId(); System.out.println(order.get() + " - thread " + id + " sleeping " + sleeptimeMilliseconds + " ms"); Thread.sleep(sleeptimeMilliseconds); System.out.println(order.get() + " - thread " + id + " finished"); }}
When I start this process I get this output, which is totally expected:
1 - thread 20 sleeping 3000 ms
1 - thread 20 finished
2 - thread 20 sleeping 3000 ms
2 - thread 20 finished
3 - thread 20 sleeping 3000 ms
3 - thread 20 finished
4 - thread 20 sleeping 1000 ms
4 - thread 20 finished
5 - thread 20 sleeping 3000 ms
5 - thread 20 finished
6 - thread 20 sleeping 2000 ms
6 - thread 20 finished
7 - thread 20 sleeping 0 ms
7 - thread 20 finished
8 - thread 20 sleeping 2000 ms
8 - thread 20 finished
9 - thread 20 sleeping 3000 ms
9 - thread 20 finished
10 - thread 20 sleeping 4000 ms
10 - thread 20 finished
11 - thread 20 sleeping 3000 ms
11 - thread 20 finished
12 - thread 20 sleeping 0 ms
12 - thread 20 finished
13 - thread 20 sleeping 0 ms
13 - thread 20 finished
We observe that 1) all service tasks are executed on the same thread and 2) they are executed sequentially.
For me the impact of this is that we should NEVER put long-running async service tasks (e.g. report generation) in a process definition, because they can potentially block all execution paths of the process instance. Rather, we should have the service task finish quickly and do the real execution asynchronously somewhere else; a receive task after the service task could wait for execution to continue. A sketch of that pattern follows.
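A minimal sketch of that hand-off pattern, assuming a Spring setup like the one above (the delegate name, the worker pool and generateReport() are made up for illustration; RuntimeService#signal is the real Activiti 5.x call that triggers a waiting receive task):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.activiti.engine.RuntimeService;
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component("dispatchReportTask")
public class DispatchReportTask implements JavaDelegate {

    @Autowired
    private RuntimeService runtimeService;

    // hypothetical worker pool; the heavy work runs here, not on a job executor thread
    private final ExecutorService reportExecutor = Executors.newFixedThreadPool(4);

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        final String executionId = execution.getId();
        reportExecutor.submit(new Runnable() {
            @Override
            public void run() {
                generateReport(); // long-running work, hypothetical
                // resume the receive task that follows this service task
                runtimeService.signal(executionId);
            }
        });
        // the delegate returns immediately, so the job (and its transaction) finishes fast
    }

    private void generateReport() { /* ... */ }
}

One caveat with this sketch: if the background work finishes before the service task's transaction has committed, the signal can arrive before the receive task exists, so in practice the work should only be submitted after the transaction commits.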
So it is true that the current implementation does not impact process performance as such, but the risk of creating "bottlenecks" is very real if any of your service tasks is long-running. In "real world" processes it is the machine-oriented process flows (many service tasks, few or no user tasks) that will suffer most from this.
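(Side note, based on my reading of the 5.9 docs: jobs are exclusive by default, so if you really want concurrent execution inside a single instance you can opt out per activity by adding activiti:exclusive="false" to the service task, at the price of possible OptimisticLockingExceptions when concurrent jobs of the same instance complete at the same time.)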
Thanks for any thoughts on this.
2 REPLIES
03-09-2012 07:43 AM
Consider:
- If you only have a single process instance in the db: since Activiti enforces sequential execution of the process instance, the instance itself runs longer than if its tasks were executed concurrently.
- If you have multiple process instances: each thread of the job executor will execute a single instance. You will still do work concurrently, but from different instances (see the sketch after this list).
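One way to see the second point, sketched under the assumption that the test process above is deployed under the (made-up) key "exclusiveJobsTest": start a handful of instances and watch different thread ids interleave in the log.

import org.activiti.engine.ProcessEngine;
import org.activiti.engine.ProcessEngines;
import org.activiti.engine.RuntimeService;

public class StartManyInstances {
    public static void main(String[] args) {
        // assumes an activiti.cfg.xml on the classpath with the job executor activated
        ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();
        RuntimeService runtimeService = engine.getRuntimeService();

        // "exclusiveJobsTest" is an assumed process definition key
        for (int i = 0; i < 5; i++) {
            runtimeService.startProcessInstanceByKey("exclusiveJobsTest");
        }
        // each instance still executes its jobs sequentially, but the job
        // executor threads pick up jobs from different instances concurrently
    }
}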
03-09-2012 07:51 AM
Long-running service tasks are always a problem: since delegates are invoked synchronously, the transaction is kept open for as long as the work runs. If you take a long time to finish the work, you will either run into transaction timeouts or lock the underlying resource (the db) for too long.
Also keep in mind that the job executor has a max lock time after which it will assume that a job has failed.
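For what it's worth, a sketch of where that lock time is configured when wiring the engine programmatically. DefaultJobExecutor and lockTimeInMillis are Activiti 5 internals (the default lock time is 5 minutes), so treat this as an illustration, not a recommendation:

import org.activiti.engine.ProcessEngine;
import org.activiti.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration;
import org.activiti.engine.impl.jobexecutor.DefaultJobExecutor;

public class LockTimeConfig {
    public static void main(String[] args) {
        // raise the lock time so long jobs are not assumed failed and re-acquired
        DefaultJobExecutor jobExecutor = new DefaultJobExecutor();
        jobExecutor.setLockTimeInMillis(30 * 60 * 1000); // 30 minutes instead of the 5-minute default

        StandaloneInMemProcessEngineConfiguration config = new StandaloneInMemProcessEngineConfiguration();
        config.setJobExecutor(jobExecutor);
        config.setJobExecutorActivate(true);

        ProcessEngine engine = config.buildProcessEngine();
        // ... deploy and run processes as usual ...
        engine.close();
    }
}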
