64-bit database conversion

Robert_Cook2
Confirmed Champ

We have recently started seeing an increase in the number of documents that we are importing and working with in our environment. As a result, our disk group paths are hitting their limits more frequently. We have increased the disk group volume sizes to mitigate this.

 

Our FLOS has suggested that we look into migrating to a 64-bit database structure, both to prevent the above from continuing to affect the file paths and to address our projected document volume. Has anyone gone through this process? Can you provide some idea of what is involved? Are there any other factors we should be aware of? We are running OnBase EP3 on a SQL Server 2017 database.

 

1 ACCEPTED ANSWER

David_Juhlin
Elite Collaborator

We completed this process on our production database last weekend. Hyland can give you a ballpark estimate for the downtime, but I HIGHLY recommend you get an EXACT copy of your database server (RAM, processors, database version), copy your production database over to it, and have Hyland run a test on that so they can give you a very realistic time estimate. The time will depend on three main things:

- Processing power of your server.

- RAM allocated to your server.

- Number of records / size of the database.

 

Plan on needing roughly double your current database size to manage the conversion. Hyland's scripts will get a record count of each table and then, for each table, copy it (converting INT columns to BIGINT), rebuild the indexes, verify the row counts, drop the old table, and rename the new one.
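Hyland's actual scripts are proprietary (as noted further down the thread), but for anyone curious about the general shape of that per-table pattern, here is a purely illustrative T-SQL sketch. The table and column names are made up, and it omits constraints, foreign keys, and permissions; it is not Hyland's code.

```sql
-- Purely illustrative sketch of the copy/convert/verify/swap pattern.
-- "dbo.sampledoc" and its columns are made-up names, not real OnBase schema.

-- 1. Record the row count of the source table.
DECLARE @src_rows BIGINT, @new_rows BIGINT;
SELECT @src_rows = COUNT_BIG(*) FROM dbo.sampledoc;

-- 2. Build the replacement table with the key widened from INT to BIGINT,
--    then copy the data across.
CREATE TABLE dbo.sampledoc_new
(
    docid   BIGINT        NOT NULL,  -- was INT
    docname NVARCHAR(250) NULL
);

INSERT INTO dbo.sampledoc_new (docid, docname)
SELECT CAST(docid AS BIGINT), docname
FROM dbo.sampledoc;

-- 3. Rebuild the indexes on the new copy.
CREATE UNIQUE CLUSTERED INDEX ix_sampledoc_docid ON dbo.sampledoc_new (docid);

-- 4. Verify the row counts match before touching the original.
SELECT @new_rows = COUNT_BIG(*) FROM dbo.sampledoc_new;
IF @src_rows <> @new_rows
    THROW 50000, 'Row count mismatch - aborting swap.', 1;

-- 5. Drop the old table and rename the copy into its place.
DROP TABLE dbo.sampledoc;
EXEC sp_rename 'dbo.sampledoc_new', 'sampledoc';
```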

 

You will need a full backup before Hyland starts, and you need to stop all import processes and prevent all users from logging in. (OnBase System Lockout works great for that, as long as you make sure services don't automatically start up.)  Hyland can do their work in one fell swoop (a single downtime) or over the course of multiple downtimes. (They will have a list of tables from the test and an estimate of how long each table takes, so they know how much they can get done in a couple of hours, or how long you need to be down for some of the larger tables.)
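For the full backup itself, something along these lines is a minimal sketch; the database name and backup path are placeholders, and your existing backup tooling may already cover this step.

```sql
-- Minimal sketch of the pre-conversion full backup; database name and path
-- are placeholders for whatever your environment actually uses.
BACKUP DATABASE OnBase
TO DISK = N'E:\Backups\OnBase_PreConversion.bak'
WITH CHECKSUM, COMPRESSION, STATS = 10;

-- Optional: confirm the backup file is readable before the downtime clock starts.
RESTORE VERIFYONLY
FROM DISK = N'E:\Backups\OnBase_PreConversion.bak'
WITH CHECKSUM;
```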

 

Estimates will vary based on the items listed above, but here are some rough numbers to give you an idea (a generic sizing query you can run against your own database follows the list):

- Health Care system running for just over ten years;

- Roughly 3TB of data in the database;

- Full backup takes just under two hours;

- Largest table took about 5 hours to copy and rebuild indexes;

- Estimated time for Hyland to run scripts was 20.5 hours;

- We planned for a 24 hour downtime and operational owners agreed to a single long instance;

- We had some issues, so we were down around 27 hours.
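To get your own rough picture of which tables will dominate the window, a query like the one below against standard SQL Server catalog views lists row counts and reserved space per table, largest first. This is just a generic sketch, not Hyland's sizing tooling.

```sql
-- Generic sizing query against SQL Server catalog views (not Hyland tooling):
-- rows and reserved space per table, largest first.
SELECT
    t.name AS table_name,
    SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count,
    SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb  -- includes all indexes
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t
    ON t.object_id = ps.object_id
GROUP BY t.name
ORDER BY reserved_mb DESC;
```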

 

Plan on a LOT of communication with end users, so they know the full extent of what the downtime means.  (No access to documents, no imports, no extracts, etc.)


9 REPLIES

As a note, Hyland has to run the scripts; they will not give them to you because they are proprietary.

 

A few things I would plan on doing:

 

1. Make sure you are using the most recent FULL backup of your database when testing the conversion. Do NOT use a stripped-down version.

2. If possible, create a separate instance, apart from your normal environments, whose only purpose is to test the conversion process (see the restore sketch below).
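For point 2, one way to stand up that throwaway copy is a plain restore of the full backup under a different name. Everything below (database names, logical file names, paths) is a placeholder you would swap for your own values.

```sql
-- Check the logical file names inside the backup first.
RESTORE FILELISTONLY
FROM DISK = N'E:\Backups\OnBase_Full.bak';

-- Restore the full backup as a separate database purely for the conversion test.
-- Logical names and target paths below are placeholders.
RESTORE DATABASE OnBase_ConvTest
FROM DISK = N'E:\Backups\OnBase_Full.bak'
WITH MOVE N'OnBase_Data' TO N'F:\SQLData\OnBase_ConvTest.mdf',
     MOVE N'OnBase_Log'  TO N'G:\SQLLogs\OnBase_ConvTest.ldf',
     STATS = 10;
```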

 

The other thing I want to note is that ours took about 2 hours, and that included the backup, so definitely plan on a longer downtime.

Thanks for the replies. I'm not sure I've said it already, but I want to say it again: thank you.

 

We maintain four environments and do a complete database restore to each several times a year to keep them consistent. I don't see an issue with our test environment, as it is an exact clone of production minus the changes to pointers and naming. We run ongoing backups of our database, but a full backup is not a problem. During an upgrade we generally take full snapshots of everything as an added precaution.

 

I should also clarify our expectations on running the scripts: I assumed they would be run by Hyland. The SOW we have, though, makes it seem like an entirely new database is being created, given the hours and resources stated. I don't want to downplay or belittle the complexity of this, but our expectation is that this is a known process, not a fully custom, one-off request. Everything stated here tracks with what we expect, but what we are getting from our FLOS is very different, and it is frustrating not having any kind of transparency.


Hi David,

 

That's a lot of great info - thanks for providing it. Can I ask what kind of issues you had that extended your downtime?

 

Matt

@Matt Norton,

 

Most of our issues were self-inflicted. You want the system to be effectively down, but the database has to be up. OnBase Config System Lockout lets you 'lock down' the application so that access (from an OnBase application) is allowed only for a single user and/or from a single machine. (This does not affect direct database connections.) I opted for both (so I could log in to allow access again, but someone else could also, from a specific server). The problem was that even though I stopped all the services to avoid issues with them trying to connect and failing, I forgot to stop a scheduled task on the one server allowed to connect. That scheduled task restarts all our services, so even though I had stopped them all, the task started them up again, and the System Lockout let the service account log in from that one server.

 

The services were mostly Thick Client batch processes (COLD, DIP, scan queue sweep), and their running interfered with the table migrations (causing table locks, etc.), which slowed the migration process down. I stopped the services as soon as I realized what had happened, and then we had to do a bit of cleanup after the fact to make sure all those documents were processed correctly (added to workflow, etc.).

 

So heed the lesson: if you want to allow access from a machine (and not just one user) then make sure no OnBase services are set to run on that one machine (just in case you forget to shut everything down).
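One simple safety net along those lines is to poll active sessions during the downtime window so a restarted service shows up immediately. This is just a generic sketch against the standard SQL Server session DMV, with a placeholder database name; it is not an OnBase feature.

```sql
-- Generic watchdog for the downtime window: any row that is not your own admin
-- connection means a service or user reconnected. Database name is a placeholder.
SELECT s.session_id,
       s.login_name,
       s.host_name,
       s.program_name,
       s.login_time
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1
  AND s.database_id = DB_ID(N'OnBase')
ORDER BY s.login_time DESC;
```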