Workflow Queues

Lawrence_Ballar
Confirmed Champ

What is the scalability of the Workflow queues?  How many documents could we have in a queue if we have the proper hardware resources? 

3 REPLIES

Seth_Yantiss
Star Collaborator

[quote user="UOP-LB"]How many documents could we have in a queue if we have the proper hardware resources?[/quote]

UOP,

I have some pretty standard hardware and software for an enterprise, and have found no limit to the number of items allowed in a workflow queue.  I have had over a million items in one queue at one time.

However, once the queue exceeded 10,000 items, the time to load the next work item increased dramatically. Once the queue reached 1,000,000 items, it took nearly 3 minutes to display the retrieval hit list.

Based on this, I never allow a user queue to exceed 10,000 items. I have a system queue hold the bulk of the items, and I pass more into the user queue once it drops below 10,000.
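
If you want to script that replenishment outside of standard Workflow timers, here is a minimal Python sketch of the pattern. The count_queue_items and move_documents helpers (and the queue names) are hypothetical stand-ins for whatever API calls or timer-driven rules you actually use, and the 10,000 threshold and batch size are just example numbers.

# Minimal sketch of the "feeder queue" pattern: keep the user-facing queue
# under a threshold and top it up from a system (holding) queue.
# count_queue_items() and move_documents() are hypothetical helpers; in a
# real system this logic typically lives in a Workflow timer or custom script.

USER_QUEUE = "Invoices - Users"       # example queue names
SYSTEM_QUEUE = "Invoices - Holding"
THRESHOLD = 10_000                    # practical ceiling per user queue
BATCH_SIZE = 1_000                    # documents to move per pass

def replenish_user_queue(count_queue_items, move_documents):
    """Top up the user queue from the holding queue when it runs low."""
    current = count_queue_items(USER_QUEUE)
    if current >= THRESHOLD:
        return 0  # user queue is full enough; nothing to do

    shortfall = THRESHOLD - current
    available = count_queue_items(SYSTEM_QUEUE)
    to_move = min(shortfall, BATCH_SIZE, available)
    if to_move > 0:
        move_documents(SYSTEM_QUEUE, USER_QUEUE, limit=to_move)
    return to_move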

Cheers,
Seth

Greg_Seaton
Star Contributor

UOP-LB,

The OnBase software does not limit the number of documents that can be in Workflow. However, Workflow is not intended to be a document repository and documents should not sit in there for a long period of time. Since every OnBase system is unique, the only way to tell how many documents you can handle in Workflow is to perform tests in your environment.

The bottleneck will most likely be the database or the Web/Application Server. If you use filters that display or filter on keywords, that puts additional work on the database to query the keyword tables as well. Also, even when a client limits the number of results displayed in a hit list, the Application Server still has to load all the documents in that queue into memory when the queue is accessed. There are several performance considerations listed in the Workflow Module Reference Guide under Workflow Best Practices. I would highly recommend taking a look at those if you are experiencing slower than expected performance when working with a large number of documents in a queue.

The technical reason for this "limitation" is that having too many documents with the same statenum (the queue's unique ID) in the lcstate table (where the document-to-queue relationship is stored) causes the index on that column to become less than optimal. The database then must resort to performing table scans, which are highly inefficient and slow.
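
If you want to see how skewed things are in your own database, one quick read-only check is to count lcstate rows per statenum. Below is a rough Python sketch assuming a SQL Server back end reached through pyodbc; the connection string and the unqualified table name are assumptions to adjust for your environment.

# Read-only check of how many documents sit in each Workflow queue, by
# counting rows per statenum in lcstate (the table described above).
# The connection string and unqualified table name are assumptions;
# adjust them (e.g. add your schema prefix) for your own environment.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-db-server;DATABASE=OnBase;Trusted_Connection=yes;"
)

def queue_counts(top_n=20):
    """Return the most heavily loaded queues as (statenum, doc_count) rows."""
    sql = (
        "SELECT TOP (?) statenum, COUNT(*) AS doc_count "
        "FROM lcstate GROUP BY statenum ORDER BY doc_count DESC"
    )
    conn = pyodbc.connect(CONN_STR)
    try:
        return conn.cursor().execute(sql, top_n).fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    for statenum, doc_count in queue_counts():
        print(f"queue statenum {statenum}: {doc_count} documents")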

Making sure your database server is powerful and highly optimized would help take the edge off, but this has diminishing returns. Best to reorganize your queues to spread the documents around better, or avoid keeping documents in queues altogether if possible. Sometimes just setting a keyword to a status value and exiting Workflow can be just as good, depending on what you're doing.
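
To make that keyword approach concrete, here is a hypothetical sketch of the shape described above: record the outcome on a status keyword and exit Workflow, so nothing accumulates in lcstate. The update_keyword and remove_from_lifecycle helpers stand in for whatever Workflow actions or API calls your system actually exposes.

# Hypothetical sketch of the keyword-as-status alternative: track the
# document's state on a keyword and exit Workflow, instead of parking the
# document in a queue (and therefore in lcstate).
# update_keyword() and remove_from_lifecycle() stand in for whatever
# Workflow actions or API calls your system actually provides.

def finish_processing(doc_id, update_keyword, remove_from_lifecycle,
                      status="Processed"):
    """Record the outcome on a status keyword, then leave Workflow."""
    update_keyword(doc_id, keyword="Status", value=status)
    remove_from_lifecycle(doc_id)  # keeps lcstate from accumulating rows

# Downstream processes can then find these documents by the Status keyword
# (for example with a custom query or report) rather than a queue filter.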

Also, be careful if your life cycle is deleting documents. Remember to include an action to remove the deleted document from all life cycles first; otherwise it will remain in lcstate and contribute to the problem.