Activiti 6 pluggable persistence feedback

bjoern_s
Champ in-the-making
Hello,

I do not need any help here, this is just feedback 😉 I'd like to share my first experience using the new pluggable persistence facility. My current use case is to completely replace the "core" persistence implementation with a Hibernate JPA based implementation. This works well so far, but I found the following drawbacks/issues:

1. There is a strange setId invocation on the ProcessDefinitionEntity which replaces the (JPA-generated) id with another String of the form workflowid:number:newid.
So it is not possible to work with plain generated ids, because the process definition id seems to be "special".
-> workaround possible by setting a larger column length on this id and making it updatable
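For illustration, the workaround could look like this mapping fragment (a sketch with assumed class and column names, not Activiti's own mapping; whether a provider honors updates to an @Id column varies):

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

// Hedged sketch of the workaround: a custom process-definition entity whose
// id column is wide enough for the engine's "workflowid:number:newid" form.
// Class and column names here are assumptions, not Activiti's mapping.
@Entity
public class ProcessDefinitionEntityJpa {

    @Id
    @Column(name = "ID_", length = 255, updatable = true)  // updatable is the default; shown because the engine rewrites the id
    private String id;

    public String getId() { return id; }

    // The engine later overwrites the generated id with the composite form
    public void setId(String id) { this.id = id; }
}
```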


2. ExecutionEntity and TaskEntity implement/inherit from VariableScope. It would be nice if this dependency were moved to a delegate class, because it has nothing to do with persistence and creates a lot of boilerplate (200+ lines) in custom implementations. For example, the access could be refactored to ExecutionEntity/TaskEntity.getVariableScope()…, which would return a VariableScopeInstance.
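The suggested delegate could look roughly like this minimal, self-contained sketch (all names are illustrative; the real VariableScope contract is far larger):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the proposed refactoring: instead of ExecutionEntity
// implementing VariableScope itself, it hands out a delegate. Every name
// below is illustrative, not the actual Activiti API.
public class VariableScopeDelegateSketch {

    // Stand-in for a tiny slice of the VariableScope contract
    interface VariableScopeInstance {
        void setVariable(String name, Object value);
        Object getVariable(String name);
    }

    // One reusable implementation instead of 200+ boilerplate lines
    // repeated in every custom entity
    static class MapVariableScopeInstance implements VariableScopeInstance {
        private final Map<String, Object> variables = new HashMap<>();
        public void setVariable(String name, Object value) { variables.put(name, value); }
        public Object getVariable(String name) { return variables.get(name); }
    }

    // A persistence-focused entity that merely exposes its scope
    static class ExecutionEntitySketch {
        private final VariableScopeInstance scope = new MapVariableScopeInstance();
        VariableScopeInstance getVariableScope() { return scope; }
    }
}
```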

3. Some interfaces leak ByteArrayRef. This is a concrete final class and cannot be subclassed, which makes implementing custom binary storage quite difficult. My solution is to always return "new ByteArrayRef()" from all getters and implement a custom blob field.
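As a self-contained model of that workaround (the real ByteArrayRef lives in Activiti; everything below is a stand-in):

```java
// Model of the ByteArrayRef workaround: the interface getter returns a
// throwaway ref, while the real payload lives in the entity's own blob
// field. All classes here are simplified stand-ins, not Activiti's.
public class ByteArrayWorkaroundSketch {

    // Stand-in for the engine's concrete final class; it cannot be subclassed
    static final class ByteArrayRef {
        byte[] getBytes() { return null; }
    }

    static class CustomVariableInstanceEntity {
        private byte[] blob;                 // custom binary storage, e.g. a JPA @Lob

        ByteArrayRef getByteArrayRef() {
            return new ByteArrayRef();       // satisfies the interface, carries no data
        }
        void setBytes(byte[] bytes) { this.blob = bytes; }
        byte[] getBytes() { return blob; }
    }
}
```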

4. It is hard to use JPA-typical eager/lazy/batched loading. Because references and collections are handled by setting and getting ids and then querying on those ids, JPA-specific lazy/eager/batched collection fetching is hard to implement.
This can be circumvented, but consider it a warning to all users who try the same.

(!) 5. I think I found a major bug in TakeOutgoingSequenceFlowsOperation.leaveFlowNode which causes problems when executing a workflow within the same transaction in memory:
If more than one connection leaves a node, a new execution is spawned. This execution is not added to the parent's child executions list; only the parent id is set on the child.
So if the child executions are accessed within the same transaction, the spawned execution is missing.
My current workaround is to add the current execution to the parent's executions list every time setCurrentFlowElement is called on the ExecutionEntity:
public void setCurrentFlowElement(FlowElement currentFlowElement) {
    // workaround hack to ensure the parent has this element as a child execution
    if (parentId != null) {
        ExecutionEntityJpa parent = (ExecutionEntityJpa) RequestContext.getThreadContext()
                .getManager().find(ExecutionEntityJpa.class, parentId);
        if (!parent.getExecutions().contains(this)) {
            System.out.println("WORKAROUND WORKING");
            parent.addChildExecution(this);
        }
    }
    super.setCurrentFlowElement(currentFlowElement); // keep the original setter behavior
}

This would be an easy fix: call
addChildExecution(outgoingExecutionEntity)
in TakeOutgoingSequenceFlowsOperation.leaveFlowNode.
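Modeled standalone, the difference between the buggy and the fixed behavior looks like this (class and method names are stand-ins for the engine's, not the real API):

```java
import java.util.ArrayList;
import java.util.List;

// Self-contained model of bug 5: spawning a child execution must also
// register it on the parent's child list, otherwise in-memory traversal
// within the same transaction misses it. Names are illustrative.
public class ChildExecutionFixSketch {

    static class Execution {
        final List<Execution> executions = new ArrayList<>();
        Execution parent;

        // Buggy variant: only the back-reference on the child is set
        Execution spawnChildBuggy() {
            Execution child = new Execution();
            child.parent = this;
            return child;
        }

        // Fixed variant: the parent's child list stays consistent,
        // as the proposed addChildExecution(...) call would ensure
        Execution spawnChildFixed() {
            Execution child = spawnChildBuggy();
            executions.add(child);           // the one-line fix
            return child;
        }
    }
}
```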



Should I open a bug report for this (5.)?

Björn S.


3 REPLIES

jbarrez
Star Contributor
Hi Björn, finally got time to review this:

1. True. The workaround would be to have a custom BPMNDeployer I would guess. The special way of handling that id is because of the process definition being cached.

2. Not sure if I see the problem here. Wouldn't you simply be able to extend those classes in your own implementation? I'm not sure this would be an easy change.

3. Correct. This has been fixed on the activiti6 branch last week.

4. I can see your point. But it's probably hard to refactor that without breaking a lot of people's code … With JPA lazy loading etc. it could still work, right? You would have to check whether a property is already loaded.

5. Correct. Has been fixed on the activiti6 branch.

bjoern_s
Champ in-the-making
Thanks for your reply!

->2. Right, so did I. This was not a real issue, just an inconvenience.
->4. Found a way to handle this: most of the *DataManager* implementations now fetch the owning JPA entity and iterate its collections instead of performing a DB query. I re-implemented all relations as JPA collections with ManyToOne and OneToMany. This of course requires all "back link" collections to be filled (5.) to be usable within the same transaction. This works really fast; most of the JPA first- and second-level caching mechanisms can be used. Admittedly, we can only support the required minimum for the query operations, but this is OK.
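Such a relation mapping might look roughly like this fragment (illustrative names and columns, not the actual schema):

```java
import java.util.ArrayList;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

// Illustrative fragment of the approach described above: relations as real
// JPA collections so the DataManagers can iterate in memory instead of
// issuing id-based queries. Names and columns are assumptions.
@Entity
public class ExecutionEntityJpa {

    @Id
    private String id;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "PARENT_ID_")
    private ExecutionEntityJpa parent;

    // The "back link" collection that must be kept filled (see point 5)
    @OneToMany(mappedBy = "parent")
    private List<ExecutionEntityJpa> executions = new ArrayList<>();

    public List<ExecutionEntityJpa> getExecutions() { return executions; }

    public void addChildExecution(ExecutionEntityJpa child) {
        child.parent = this;
        executions.add(child);
    }
}
```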

I continued my JPA+Activiti journey and it has worked quite well so far. You did an amazing job with the Activiti project. If it helps, I will continue to give some feedback. I found some minor points (which I was able to work around using your powerful interfaces). I needed to:

6. subclass org.activiti.engine.impl.variable.SerializableType to remove two calls to Context.getCommandContext().getDbSqlSession() (which does not exist in JPA)
7. subclass org.activiti.engine.impl.persistence.entity.HistoricDetailEntityManagerImpl to remove/replace the ByteArray handling (maybe already fixed)
8. subclass org.activiti.engine.impl.el.VariableScopeElResolver and override setValue to call context.setPropertyResolved(true), to prevent resolving the property again within the same transaction (where the property may not have been flushed to the JPA db yet)
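To illustrate point 8 with stand-in classes (the real types are javax.el.ELContext and Activiti's VariableScopeElResolver; this is a simplified model, not the actual API):

```java
// Self-contained model of point 8: when an expression writes a variable,
// the resolver marks the property as resolved so no later resolver (or a
// premature DB read of an unflushed JPA value) handles it again. All
// classes here are simplified stand-ins, not the javax.el API.
public class ElResolverSketch {

    static class ELContext {
        private boolean propertyResolved;
        void setPropertyResolved(boolean resolved) { propertyResolved = resolved; }
        boolean isPropertyResolved() { return propertyResolved; }
    }

    static class VariableScopeElResolver {
        void setValue(ELContext context, String property, Object value) {
            // default behavior: write through to the variable scope (omitted)
        }
    }

    // The subclass from point 8: short-circuit resolution inside the
    // current transaction so the unflushed value is not looked up again
    static class JpaAwareElResolver extends VariableScopeElResolver {
        @Override
        void setValue(ELContext context, String property, Object value) {
            super.setValue(context, property, value);
            context.setPropertyResolved(true);  // stop the resolver chain here
        }
    }
}
```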

jbarrez
Star Contributor
Thanks for the kind words. Much appreciated!

4. Great to hear. That's indeed the way I would have done it too.

6. This has been removed on master 🙂

7. Yup, should be fixed.

8. Hmmm, that sounds interesting. Can you explain with an example … I don't think I'm fully grasping what you're trying to say here.