Migration Plans and Guidelines from other WCM

arash_kaffamane
Champ in-the-making
Are there any migration plans or guidelines for migrating content from other WCM systems? If not, I'm sure they will come very soon, as many companies will have to migrate to Alfresco WCM.

Is anybody of you thinking about migration concepts?
3 REPLIES

brentkastner
Champ in-the-making
Hi Arash,

Yes, we are definitely thinking of migrating from other platforms (IWOV, Vignette, etc.) to Alfresco.  Because Alfresco is XML-based, I think one could use and/or write an ETL tool to migrate content into the Alfresco repository, then write the specific bits onto the created file to identify it as a form.  If your content is already in a database, things may be even easier, as it is generally trivial to write XML from a DB.
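To illustrate the "write XML from a DB" step, here is a minimal sketch. The table and column names (`pages` with `id`, `title`, `body`) are purely illustrative assumptions, not an Alfresco schema; a real migration would map your source WCM's model onto your target content model.

```python
"""Sketch of a tiny ETL step: read rows from a database and emit one
XML file per row, ready for import into a repository.  The schema
(pages: id, title, body) is an assumption for illustration only."""
import sqlite3
import xml.etree.ElementTree as ET


def rows_to_xml(db_path, out_dir):
    """Export each row of the hypothetical 'pages' table as an XML file."""
    conn = sqlite3.connect(db_path)
    try:
        for page_id, title, body in conn.execute(
                "SELECT id, title, body FROM pages"):
            root = ET.Element("page", id=str(page_id))
            ET.SubElement(root, "title").text = title
            ET.SubElement(root, "body").text = body
            # One XML document per content item, named after its DB id.
            ET.ElementTree(root).write(
                f"{out_dir}/page-{page_id}.xml",
                encoding="utf-8", xml_declaration=True)
    finally:
        conn.close()
```

From there, a bulk-import or scripted upload would push the generated files into the repository.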

Does this help?

Brent Kastner
Eye Street Software

arash_kaffamane
Champ in-the-making
Hi Brent,

> Does this help?

Many thanks, yes, it sure does. It's nice to know that other professionals like you are thinking about and working on such concepts; we are thinking about them again too!

For years we have been working with CoreMedia, RedDot, FatWire, OpenCms and some other WCMs, and we would like to encourage our customers to migrate to Alfresco soon, once it becomes a really good, production-ready WCM 😉

In one of our largest CMS <-> Portal integration projects (OpenCms <-> WebSphere Portal Server), we have 50 sites with 120,000 resources (XML pages, documents and images), which are built from more than 40 XMLContents (forms). As far as I can judge for now, Alfresco's engineers need more time to complete the XML-schema support for creating forms with link-management capabilities (no dead links possible), and I'm sure it will come in one of the next releases, hopefully this year.

Kind Regards,
Arash Kaffamanesh

Pomegranate Software



jcox
Champ in-the-making
Arash,

I'll be implementing some fairly sophisticated link management features
for Alfresco over the next couple of months, but you *can* do some
things right away if you'd like.

One approach is to just point a link-checker spider at a website
you've exposed via the virtualization server, prior to deploying
it to any server that's seen by an end-user (customer).

A low-tech technique for automating this would be to set up a
cron job that does this periodically and sends out email alerts. 
There are many free/open-source link checking spiders you can
use (as well as some commercial ones) to accomplish this task.
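As a concrete sketch of that low-tech approach, here's a small Python crawler using only the standard library. The start URL and page limit are placeholders; a cron job could run a script like this against the virtualization server's preview URL and mail the report.

```python
"""Minimal link-check sketch: crawl from a start page, follow
same-host links, and report any URL that fails to load.  The URLs
and limits here are illustrative assumptions."""
import urllib.error
import urllib.parse
import urllib.request
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def check_site(start_url, max_pages=100):
    """Breadth-first crawl; returns a list of (url, error) pairs."""
    host = urllib.parse.urlparse(start_url).netloc
    seen, queue, broken = set(), [start_url], []
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                body = resp.read().decode("utf-8", errors="replace")
        except (urllib.error.URLError, OSError) as exc:
            broken.append((url, str(exc)))
            continue
        parser = LinkExtractor()
        parser.feed(body)
        for href in parser.links:
            absolute = urllib.parse.urljoin(url, href)
            # Stay on the preview host; don't wander off-site.
            if urllib.parse.urlparse(absolute).netloc == host:
                queue.append(absolute)
    return broken
```

Scheduling it periodically and mailing the `broken` list is then just a cron entry away; dedicated link-checker tools will do a more thorough job, but this shows the idea.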

What the built-in feature I'll be implementing over the next
couple of months will do is help users identify problems
at *submit* time.  It turns out that there are some cute
tricks within the AVM that make it possible to do this
efficiently.

Definitions:
   "Submit time" – the moment when content is submitted for review
   "Update time" – the moment when content is pushed into the staging area

It's important to realize that submit-time checking alone cannot catch
all possible problems.  Depending on how and when reviews are done,
a "submission" and its associated "update" in staging can be quite
far apart in time.  If the reviewer rejects the submission, the
update won't occur at all.  To compound the issue a bit more, when
you've got many users and/or many reviewers, the relative ordering
of submissions and their associated updates to staging isn't
necessarily order-preserving: if we set a global transaction lock
that preserved order across all submissions, one slow reviewer could
bring the entire collaborative effort to a screeching halt, which is
plainly unacceptable.  Furthermore, any link is potentially dependent
upon a huge number of assets via some webapp-specific logic involving
an arbitrary number of levels of data-driven indirection.

Thus, even when each step is a well-behaved transaction, a final pass
is still required on the snapshot in the staging area you're
considering deploying.  In short, the "harm reduction" that's feasible
at submit time is not a replacement for pre-deployment QA testing.

As odd as it may sound at first, it's also quite important to allow
users to submit files that they already know contain broken links.
There are several reasons why this paradoxical-sounding feature is
desirable, but in a nutshell: if you're too strict up-front, users
end up backed into awkward situations for no good reason.  Consider
this:
  • Alice is given an assignment to work on a page named X.

  • X must link to file Y.

  • Bob is given an assignment to work on a page named Y.

  • Y must link to file X.

  • Neither file X nor file Y exists when Alice and Bob are given their assignments.

If Alice and Bob had to work within a system that was *always*
strict about disallowing dead links in check-ins, then each would
need to first submit a "dummy" version of their file that violated
the constraint "must link to X|Y" (and each would have to wait for
the other to do so prior to submitting their *real* file).

The thing to note here is that even with all the pain and end-user
confusion that an "ultra strict" policy would inevitably generate,
the final website would be no better off for it!  Remember: you'll be
doing *final* QA/link checking prior to deployment anyhow.  Most of
the time, Alice and Bob will rectify problems like this on their own
(because they've seen the alerts), but if they fail to do so, you can
catch their mistake in an automated way.  The less Alice and Bob have
to struggle with pointless rigidity, the more time they'll have to
sort things out.

The details of how the GUI will look for "submit time" and final
pre-deployment "QA" link checking are still in flux, but hopefully
I've provided enough detail to be useful.

In short, if you want some link checking now, it's trivial to set up;
otherwise, just wait a bit.

   Cheers,
   -Jon