06-15-2011 11:45 AM
06-15-2011 05:57 PM
04-14-2022 01:41 PM
Hello, please, is it possible to integrate REDIS with ALFRESCO?
06-16-2011 07:55 AM
06-17-2011 10:32 AM
12:13:50,299 WARN [hibernate.cfg.SettingsFactory] Could not obtain connection metadata
What's more, I think that for this topology to work correctly, the alf_data directories on the two servers would have to be perfectly synchronized.
org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (null, message from server: "Host 'x.x.x.x' is not allowed to connect to this MySQL server")
06-20-2011 04:41 AM
06-20-2011 12:10 PM
OS : CentOS 5.6
On each server I installed Alfresco Community 3.4.d, keeping the default parameters and without any particular tinkering, so a standard installation.
Alfresco : Community 3.4.d
DB : MySQL
IP network : 192.168.0.162 and 192.168.0.161, netmask : 255.255.255.224 (this netmask corresponds to the network 192.168.0.160 ==> 192.168.0.192)
Server A : 192.168.0.161 = "alfresco3"
Server B : 192.168.0.162 = "alfresco2"
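As a quick sanity check of the addressing above, Python's standard ipaddress module shows which hosts the 255.255.255.224 mask (/27) actually covers; note the broadcast address of this subnet is 192.168.0.191, with .192 being the start of the next network:

```python
import ipaddress

# 255.255.255.224 is a /27; this is the network containing both servers
net = ipaddress.ip_network("192.168.0.160/27")

print(net.netmask)             # 255.255.255.224
print(net.broadcast_address)   # 192.168.0.191
print(ipaddress.ip_address("192.168.0.161") in net)  # True (Server A)
print(ipaddress.ip_address("192.168.0.162") in net)  # True (Server B)
```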
# fsid=0 means this is the root export directory
/share/ alfresco2/255.255.255.224(rw,fsid=0,root_squash,async)
/share/alfresco-3.4.d/alf_data/ alfresco2/255.255.255.224(rw,root_squash,async)
[thibaut@alfresco3 ~]$ sudo mount -t nfs4 alfresco3:/alfresco-3.4.d/alf_data/ /share/alfresco-3.4.d/alf_data/
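To confirm that the alf_data directory really is served over NFS after the mount, /proc/mounts can be inspected; here is a small helper (hypothetical, parsing logic only, demonstrated on a sample line rather than the live file):

```python
def is_nfs_mount(mounts_text, mountpoint):
    """Return True if `mountpoint` appears in mounts_text with an nfs filesystem type."""
    for line in mounts_text.splitlines():
        fields = line.split()
        # /proc/mounts fields: device mountpoint fstype options dump pass
        if len(fields) >= 3 and fields[1] == mountpoint and fields[2].startswith("nfs"):
            return True
    return False

# Sample line mimicking what the mount above would produce in /proc/mounts
sample = "alfresco3:/alfresco-3.4.d/alf_data/ /share/alfresco-3.4.d/alf_data nfs4 rw,relatime 0 0"
print(is_nfs_mount(sample, "/share/alfresco-3.4.d/alf_data"))  # True
```

On a live node, pass `open("/proc/mounts").read()` as the first argument.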
# Repository configuration
repository.name=alfresco3
# Directory configuration
dir.root=./alf_data
dir.contentstore=${dir.root}/contentstore
dir.contentstore.deleted=${dir.root}/contentstore.deleted
dir.auditcontentstore=${dir.root}/audit.contentstore
# The location for lucene index files
dir.indexes=${dir.root}/lucene-indexes
# The location for index backups
dir.indexes.backup=${dir.root}/backup-lucene-indexes
# The location for lucene index locks
dir.indexes.lock=${dir.indexes}/locks
# Is the JBPM Deploy Process Servlet enabled?
# Default is false. Should not be enabled in production environments as the
# servlet allows unauthenticated deployment of new workflows.
system.workflow.deployservlet.enabled=false
# ######################################### #
# Index Recovery and Tracking Configuration #
# ######################################### #
#
# Recovery types are:
# NONE: Ignore
# VALIDATE: Checks that the first and last transaction for each store is represented in the indexes
# AUTO: Validates and auto-recovers if validation fails
# FULL: Full index rebuild, processing all transactions in order. The server is temporarily suspended.
index.recovery.mode=VALIDATE
# FULL recovery continues when encountering errors
index.recovery.stopOnError=false
index.recovery.maximumPoolSize=5
# Set the frequency with which the index tracking is triggered.
# For more information on index tracking in a cluster:
# http://wiki.alfresco.com/wiki/High_Availability_Configuration_V1.4_to_V2.1#Version_1.4.5.2C_2.1.1_an...
# By default, this is effectively never, but can be modified as required.
# Examples:
# Never: * * * * * ? 2099
# Once every five seconds: 0/5 * * * * ?
# Once every two seconds : 0/2 * * * * ?
# See http://www.quartz-scheduler.org/docs/tutorials/crontrigger.html
index.tracking.cronExpression=0/5 * * * * ?
index.tracking.adm.cronExpression=${index.tracking.cronExpression}
index.tracking.avm.cronExpression=${index.tracking.cronExpression}
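For reference, Quartz cron expressions (used by the tracking properties above) have one more field than Unix cron: seconds come first, and an optional year can be appended. A tiny helper (my own illustration, not an Alfresco API) that labels the fields of the two expressions used here:

```python
QUARTZ_FIELDS = ["second", "minute", "hour", "day_of_month",
                 "month", "day_of_week", "year"]

def quartz_fields(expr):
    """Map each field of a Quartz cron expression to its name."""
    return dict(zip(QUARTZ_FIELDS, expr.split()))

# "0/5 * * * * ?" fires at seconds 0, 5, 10, ... of every minute
print(quartz_fields("0/5 * * * * ?"))
# "* * * * * ? 2099" is restricted to the year 2099, i.e. effectively never
print(quartz_fields("* * * * * ? 2099"))
```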
# Other properties.
index.tracking.maxTxnDurationMinutes=10
index.tracking.reindexLagMs=1000
index.tracking.maxRecordSetSize=1000
index.tracking.maxTransactionsPerLuceneCommit=100
index.tracking.disableInTransactionIndexing=false
# Index tracking information of a certain age is cleaned out by a scheduled job.
# Any clustered system that has been offline for longer than this period will need to be seeded
# with a more recent backup of the Lucene indexes or the indexes will have to be fully rebuilt.
# Use -1 to disable purging. This can be switched on at any stage.
index.tracking.minRecordPurgeAgeDays=30
# Reindexing of missing content is by default 'never' carried out.
# The cron expression below can be changed to control the timing of this reindexing.
# Users of Enterprise Alfresco can configure this cron expression via JMX without a server restart.
# Note that if alfresco.cluster.name is not set, then reindexing will not occur.
index.reindexMissingContent.cronExpression=* * * * * ? 2099
# Change the failure behaviour of the configuration checker
system.bootstrap.config_check.strict=true
# The name of the cluster
# Leave this empty to disable cluster entry
alfresco.cluster.name=alftest
[…]
What seems important to me in this file:
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE beans PUBLIC '-//SPRING//DTD BEAN//EN' 'http://www.springframework.org/dtd/spring-beans.dtd'>
<beans>
<!--
This file is not included in the application context by default.
If you include this file, please ensure that you review the sample
beans contained here.
-->
<bean id="localDriveContentStore" class="org.alfresco.repo.content.filestore.FileContentStore">
<constructor-arg>
<value>./alf_data/</value>
</constructor-arg>
</bean>
<bean id="networkContentStore" class="org.alfresco.repo.content.filestore.FileContentStore">
<constructor-arg>
<value>/share/alfresco-3.4.d/alf_data/</value>
</constructor-arg>
</bean>
<bean id="fileContentStore" class="org.alfresco.repo.content.replication.ReplicatingContentStore" >
<property name="primaryStore">
<ref bean="localDriveContentStore" />
</property>
<property name="secondaryStores">
<list>
<ref bean="networkContentStore" />
</list>
</property>
<property name="inbound">
<value>true</value>
</property>
<property name="outbound">
<value>true</value>
</property>
<property name="retryingTransactionHelper">
<ref bean="retryingTransactionHelper"/>
</property>
</bean>
</beans>
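To make the inbound/outbound flags on the ReplicatingContentStore bean concrete, here is a minimal Python model (my own sketch of the semantics, not Alfresco code): outbound pushes every write to the secondary stores, while inbound pulls content back into the primary on a read miss:

```python
class ReplicatingStoreSketch:
    """Simplified model of primary/secondary content-store replication."""

    def __init__(self, primary, secondaries, inbound=True, outbound=True):
        self.primary = primary          # dict: content URL -> bytes
        self.secondaries = secondaries  # list of dicts
        self.inbound = inbound
        self.outbound = outbound

    def write(self, url, content):
        self.primary[url] = content
        if self.outbound:               # push the new content to every secondary
            for store in self.secondaries:
                store[url] = content

    def read(self, url):
        if url in self.primary:
            return self.primary[url]
        if self.inbound:                # on a miss, pull from a secondary that has it
            for store in self.secondaries:
                if url in store:
                    self.primary[url] = store[url]
                    return store[url]
        return None

local, network = {}, {}
store = ReplicatingStoreSketch(local, [network])
store.write("store://doc1", b"hello")
print(network["store://doc1"])  # b'hello' (outbound replication to the NFS store)
```

In the configuration above, `localDriveContentStore` plays the role of the primary and `networkContentStore` (the NFS share) the secondary.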
<ehcache>
<diskStore
path="java.io.tmpdir"/>
<!--
The 'heartbeatInterval' property is the only one used for the JGroups-enabled implementation
-->
<cacheManagerPeerProviderFactory
class="org.alfresco.repo.cache.AlfrescoCacheManagerPeerProviderFactory"
properties="heartbeatInterval=5000,
peerDiscovery=automatic,
multicastGroupAddress=230.0.0.1,
multicastGroupPort=4446"
/>
<cacheManagerPeerListenerFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
/>
<!--
To control the cache peer URLs, replace the 'cacheManagerPeerListenerFactory' with the following
and set the properties statically, in alfresco-global.properties or via java -D options.
Only the hostName needs to be set as the others have sensible defaults.
-->
[…]
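Multicast reachability between the two nodes is a common failure point for this heartbeat setup. A rough, hypothetical check (group and port taken from the ehcache config above) that the OS will at least let a socket join the multicast group; it does not prove the packets cross the network:

```python
import socket
import struct

GROUP, PORT = "230.0.0.1", 4446  # values from the cacheManagerPeerProviderFactory above

def can_join_group(group, port):
    """Try to join the multicast group; True if the OS accepts the membership."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    try:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", port))
        # ip_mreq: 4-byte group address + 4-byte local interface (INADDR_ANY)
        mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton("0.0.0.0"))
        s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        return True
    except OSError:
        return False
    finally:
        s.close()

print(can_join_group(GROUP, PORT))
```

Running this on both alfresco2 and alfresco3 rules out a local multicast misconfiguration before blaming the switch or firewall.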
[thibaut@alfresco3 ~]$ sudo lsof -nPi | grep java
java 2545 root 17u IPv6 12220 TCP *:52036 (LISTEN)
java 2545 root 38u IPv6 12222 TCP *:8080 (LISTEN)
java 2545 root 43u IPv6 13236 TCP 127.0.0.1:39121->127.0.0.1:8080 (ESTABLISHED)
java 2545 root 44u IPv6 13228 TCP 127.0.0.1:8080->127.0.0.1:39121 (ESTABLISHED)
java 2545 root 45u IPv6 12477 TCP *:8009 (LISTEN)
java 2545 root 117u IPv6 12478 TCP 127.0.0.1:8005 (LISTEN)
java 2545 root 237u IPv6 12274 TCP *:34828 (LISTEN)
java 2545 root 247u IPv6 12295 TCP 127.0.0.1:50500 (LISTEN)
java 2545 root 248u IPv6 12296 TCP *:50508 (LISTEN)
java 2545 root 251u IPv6 12342 TCP *:38940 (LISTEN)
java 2545 root 255u IPv6 12310 TCP 127.0.0.1:33378->127.0.0.1:3306 (ESTABLISHED)
java 2545 root 256u IPv6 12312 TCP 127.0.0.1:33379->127.0.0.1:3306 (ESTABLISHED)
java 2545 root 257u IPv6 12314 TCP 127.0.0.1:33380->127.0.0.1:3306 (ESTABLISHED)
java 2545 root 258u IPv6 12316 TCP 127.0.0.1:33381->127.0.0.1:3306 (ESTABLISHED)
java 2545 root 259u IPv6 12318 TCP 127.0.0.1:33382->127.0.0.1:3306 (ESTABLISHED)
java 2545 root 260u IPv6 12320 TCP 127.0.0.1:33383->127.0.0.1:3306 (ESTABLISHED)
java 2545 root 261u IPv6 12322 TCP 127.0.0.1:33384->127.0.0.1:3306 (ESTABLISHED)
[thibaut@alfresco3 ~]$ watch tail /srv/alfresco-3.4.d/tomcat/logs/catalina.out
WARN : org.alfresco.wcm.client.util.impl.GuestSessionFactoryImpl - WQS unable to connect to repository: Introuvable
(the same WARN line repeats continuously in catalina.out; "Introuvable" is the French locale message for "Not found")
#
# WCM QS configuration properties
#
wcmqs.api.alfresco=http://localhost:8080/alfresco
wcmqs.api.user=admin
wcmqs.api.password=*******
wcmqs.api.alfresco.cmis=%{wcmqs.api.alfresco}/service/cmis
wcmqs.api.alfresco.webscript=%{wcmqs.api.alfresco}/service/api/
#Type of asset factory to use. Either "cmis" or "webscript" (case-sensitive)
wcmqs.api.assetFactoryType=webscript
#wcmqs.api.assetFactoryType=cmis
wcmqs.api.repositoryPollMilliseconds=2000
wcmqs.api.websiteCacheSeconds=300
wcmqs.api.sectionCacheSeconds=60
[thibaut@alfresco3 ~]$ sudo mv /srv/alfresco-3.4.d/tomcat/shared/classes/alfresco/extension/replicating-content-services-context.xml /srv/alfresco-3.4.d/tomcat/shared/classes/alfresco/extension/replicating-content-services-context.xml.sample
=> the connection to http://alfresco3:8080/alfresco/ now works correctly.
[thibaut@alfresco3 ~]$ /etc/init.d/alfresco restart