06-06-2010 11:18 AM
Here are our ideas on the topic: the PersistenceSession should be refactored (if necessary) so that, e.g., a MongoDbPersistenceSession can be implemented. The first step is to implement a JSON serialization of the runtime process instance data structures. Some work on that has already started in the package org.activiti.json. The MongoDB persistence session could then serialize a whole process instance as a text string.
To what extent can MongoDB provide concurrency control? Currently we delegate concurrency control to the DB as well: all runtime updates are guarded with optimistic locking. That way, processes executed on Activiti can be seen as specifying transactional control flow. The process executes in discrete steps (= transactions) from one state to the next.
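The optimistic-locking guard described above can be sketched as a compare-and-swap on a revision number. This is a minimal in-memory simulation, not Activiti's actual persistence code; the `revision` field and `Store` class are illustrative names:

```javascript
// Minimal in-memory simulation of optimistic locking as described above.
// Field and class names are illustrative, not Activiti's real schema.
class Store {
  constructor() { this.rows = new Map(); }
  insert(id, data) { this.rows.set(id, { revision: 1, data }); }
  read(id) { return this.rows.get(id); }
  // The update succeeds only if the caller still holds the current revision;
  // a stale writer gets false back and must re-read and retry.
  update(id, expectedRevision, data) {
    const row = this.rows.get(id);
    if (!row || row.revision !== expectedRevision) return false;
    this.rows.set(id, { revision: expectedRevision + 1, data });
    return true;
  }
}

const store = new Store();
store.insert("exec-1", { state: "start" });
const snapshot = store.read("exec-1");
// Two writers race with the same snapshot: first one wins, second is stale.
const first = store.update("exec-1", snapshot.revision, { state: "taskA" });
const second = store.update("exec-1", snapshot.revision, { state: "taskB" });
console.log(first, second); // true false
```

Each process-instance step is then exactly one such guarded update, which is what makes the steps behave like discrete transactions.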
{ "Session": "SessionKey",
  "Flow": "requestNewFeature",
  "Roles": [
    { "user": "Josh",
      "Role": "requestFeature",
      "activitiesCompleted": [
        { "start": "infoOnStart" },
        { "ThinkAboutANewFeature": "NoSQL rocks" },
        { "registerAccountOnForum": "account: JoshuaP, post: Let's do cool new things" },
        { "postOnBoard": "think about mongo" }
      ]
    },
    { "user": "tombaeyens",
      "Role": "CreatorOfActiviti",
      "activitiesCompleted": [
        { "postOnBoard": "whatAboutLocking?" }
      ]
    }
  ]
}
In this scenario, you would need a globally meaningful session key that encapsulates all information regarding a particular session. It might be as simple as "user-process", but it might need to be more complex depending on which scenarios you want to support.
Why use this type of structure?
1) Get everything we need to know about a session in a single I/O call.
2) Can append to nested structures (for example, Session.Roles.Josh.activitiesCompleted[]) dynamically.
3) Append-only is lock friendly.

So we need to figure out how we want to deal with situations like this:
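The append pattern in point 2 can be sketched against the session document above. In the mongo shell this would use `$push` with the positional `$` operator; below is a minimal plain-JavaScript simulation of the same update semantics (the `appendActivity` helper is illustrative):

```javascript
// Minimal simulation of the append-only update on the session document
// sketched above. In the mongo shell the equivalent would be:
//   db.sessions.update({Session: "SessionKey", "Roles.user": "Josh"},
//                      {$push: {"Roles.$.activitiesCompleted": activity}})
const session = {
  Session: "SessionKey",
  Roles: [{ user: "Josh", activitiesCompleted: [{ start: "infoOnStart" }] }]
};

// Append one completed activity for a user without rewriting the document.
function appendActivity(doc, user, activity) {
  const role = doc.Roles.find(r => r.user === user);
  if (role) role.activitiesCompleted.push(activity);
  return doc;
}

appendActivity(session, "Josh", { postOnBoard: "think about mongo" });
console.log(session.Roles[0].activitiesCompleted.length); // 2
```

Because the write only adds to the end of a nested array, concurrent appenders never rewrite each other's entries, which is what makes this lock friendly.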
When a job needs to be executed, we currently use optimistic locking to prevent multiple job executors from starting to execute the same job.
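That job-claim step can be sketched as a conditional claim: whichever executor writes its id into the lock field first wins, and the loser backs off. A minimal in-memory simulation (the `jobs` map and `lockOwner` field are illustrative, not Activiti's job table):

```javascript
// Minimal simulation of claiming a job so only one executor wins, mirroring
// the optimistic-locking guard described above. Names are illustrative.
const jobs = new Map([["job-1", { lockOwner: null, payload: "send email" }]]);

// Claim a job for one executor; returns false if it is already claimed.
function tryClaim(jobId, executorId) {
  const job = jobs.get(jobId);
  if (!job || job.lockOwner !== null) return false;
  job.lockOwner = executorId;
  return true;
}

const a = tryClaim("job-1", "executor-A");
const b = tryClaim("job-1", "executor-B"); // loses the race
console.log(a, b); // true false
```

The open question is what replaces this guard when the claim is no longer a single guarded row update in one relational database.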
Another example: if two operations occur on different nodes in the cloud (potentially in concurrent paths of execution), how would that conflict be resolved when those updates meet each other in the cloud?
With Mongo, you only have a single master. You can do a check to make sure that a similar process doesn't exist prior to running:
db.sessions.update({sessionKey: 15, status: {$exists: false}}, {$set: {status: "start"}})
Again, you can also do an append to an existing object, which I think is a bit cleaner.

We have also been thinking about an in-memory locking server/cluster. Its function would be to obtain a global lock on a certain process instance. Because all locked process instances are kept in memory, it can be fast and still scale. Making it a cluster of, e.g., 3 nodes could provide failover. All commands in the Activiti API that operate on a process instance could obtain a lock for the duration of the command. This could be combined with storing the JSON-serialized process instances in a Bigtable-like cloud persistence solution. That way, users would still get transactional control-flow semantics from Activiti.
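The in-memory lock server idea could look roughly like this. An illustrative sketch only; `LockManager` and its method names are assumptions, not an existing Activiti API, and a real cluster would need lease timeouts and failover:

```javascript
// Illustrative sketch of a global in-memory lock manager keyed by process
// instance id, as described above. Names are assumptions, not Activiti API.
class LockManager {
  constructor() { this.locks = new Set(); }
  // Take the lock for one process instance; false if it is already held.
  acquire(processInstanceId) {
    if (this.locks.has(processInstanceId)) return false;
    this.locks.add(processInstanceId);
    return true;
  }
  release(processInstanceId) { this.locks.delete(processInstanceId); }
}

const lm = new LockManager();
// An Activiti command would hold the lock for the duration of its work.
function runCommand(id, work) {
  if (!lm.acquire(id)) throw new Error("instance " + id + " is locked");
  try { return work(); } finally { lm.release(id); }
}

const result = runCommand("proc-42", () => "completed step");
console.log(result); // completed step
```

Since every command takes the lock before touching an instance and releases it afterwards, the cloud store itself never needs to arbitrate concurrent writers for that instance.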
WDYT?