Embedded Elasticsearch limitations - audit stopped indexing events

Anton_Petrov
Confirmed Champ

Hi, I can't find any information about the limitations of using the embedded ES, only warnings that it is not for production. We are still in development and are using it. Everything worked fine, but I recently noticed that about a month ago the audit stopped indexing and showing the latest events. The document history is also empty. Document searches and full-text search seem to work fine. I tried to re-index ES without success. My questions are:

  1. Could it be that we hit some limit, and would installing an external ES solve the problem? (See the nuxeo.conf sketch after the logs below.)
  2. What are the drawbacks of NOT using ES for the audit? The audit logs are still recorded, but in the /data/stream/audit/audit/P-00 folder, as already mentioned by Rahul Mittal.

Below are some logs from the re-indexing attempts and the resulting errors; they may be useful.
2021-01-21T04:02:47,341 WARN  [http-nio-0.0.0.0-8443-exec-1] [org.nuxeo.elasticsearch.core.ElasticSearchAdminImpl] Initializing index: nuxeo, type: doc with dropIfExists flag, deleting an existing index
2021-01-21T04:02:48,352 WARN  [Nuxeo-Work-elasticSearchIndexing-3:1447442766983304.36712633] [org.nuxeo.elasticsearch.work.ScrollingIndexingWorker] Re-indexing job: /elasticSearchIndexing:1447442766983304.36712633 has submited 2391 documents in 5 bucket workers
2021-01-21T04:03:03,227 WARN  [Nuxeo-Work-elasticSearchIndexing-1:1447442802337160.2139582254] [org.nuxeo.elasticsearch.work.BucketIndexingWorker] Re-indexing job: /elasticSearchIndexing:1447442766983304.36712633 completed.
2021-01-21T04:03:06,500 WARN  [Nuxeo-Work-elasticSearchIndexing-3:1447442789095188.41826710] [org.nuxeo.elasticsearch.core.ElasticSearchIndexingImpl] Max bulk size reached 5304164, sending bulk command
2021-01-21T04:03:06,895 WARN  [Nuxeo-Work-elasticSearchIndexing-4:1447442789502075.1793258317] [org.nuxeo.elasticsearch.core.ElasticSearchIndexingImpl] Max bulk size reached 5248953, sending bulk command
2021-01-21T04:03:14,183 WARN  [Nuxeo-Work-elasticSearchIndexing-4:1447442789502075.1793258317] [org.nuxeo.elasticsearch.core.ElasticSearchIndexingImpl] Max bulk size reached 5302361, sending bulk command
2021-01-21T04:32:19,799 WARN  [http-nio-0.0.0.0-8443-exec-7] [org.nuxeo.elasticsearch.core.ElasticSearchAdminImpl] Initializing index: nuxeo, type: doc with dropIfExists flag, deleting an existing index
2021-01-21T04:32:20,099 WARN  [Nuxeo-Work-elasticSearchIndexing-1:1449214520272996.1315368393] [org.nuxeo.elasticsearch.work.ScrollingIndexingWorker] Re-indexing job: /elasticSearchIndexing:1449214520272996.1315368393 has submited 2391 documents in 5 bucket workers
2021-01-21T04:32:29,435 WARN  [Nuxeo-Work-elasticSearchIndexing-1:1449214549342258.613637217] [org.nuxeo.elasticsearch.core.ElasticSearchIndexingImpl] Max bulk size reached 5304164, sending bulk command
2021-01-21T04:32:29,752 WARN  [Nuxeo-Work-elasticSearchIndexing-3:1449214550108133.1269898187] [org.nuxeo.elasticsearch.work.BucketIndexingWorker] Re-indexing job: /elasticSearchIndexing:1449214520272996.1315368393 completed.
2021-01-21T04:32:31,694 WARN  [Nuxeo-Work-elasticSearchIndexing-2:1449214549891040.308843962] [org.nuxeo.elasticsearch.core.ElasticSearchIndexingImpl] Max bulk size reached 5248953, sending bulk command
2021-01-21T04:32:32,689 WARN  [http-nio-0.0.0.0-8443-exec-10] [org.nuxeo.elasticsearch.core.ElasticSearchAdminImpl] Initializing index: nuxeo, type: doc with dropIfExists flag, deleting an existing index
2021-01-21T04:32:32,866 WARN  [elasticsearch[nuxeoNode][clusterApplierService#updateTask][T#1]] [org.elasticsearch.action.bulk.TransportShardBulkAction] unexpected error during the primary phase for action [indices:data/write/bulk[s]], request [BulkShardRequest [[nuxeo][0]] containing [100] requests]
org.elasticsearch.index.IndexNotFoundException: no such index
        at org.elasticsearch.cluster.routing.RoutingTable.shardRoutingTable(RoutingTable.java:137) ~[elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.primary(TransportReplicationAction.java:795) ~[elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:730) ~[elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onNewClusterState(TransportReplicationAction.java:881) ~[elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onNewClusterState(ClusterStateObserver.java:303) ~[elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.clusterChanged(ClusterStateObserver.java:191) ~[elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateListeners$7(ClusterApplierService.java:492) ~[elasticsearch-6.5.3.jar:6.5.3]
        at java.util.concurrent.ConcurrentHashMap$KeySpliterator.forEachRemaining(ConcurrentHashMap.java:3527) [?:1.8.0_275]
        at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:743) [?:1.8.0_275]
        at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647) [?:1.8.0_275]
        at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateListeners(ClusterApplierService.java:489) [elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:472) [elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:416) [elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:160) [elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:624) [elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:244) [elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:207) [elasticsearch-6.5.3.jar:6.5.3]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_275]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_275]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_275]
2021-01-21T04:32:32,902 WARN  [elasticsearch[nuxeoNode][clusterApplierService#updateTask][T#1]] [org.elasticsearch.action.bulk.TransportShardBulkAction] unexpected error during the primary phase for action [indices:data/write/bulk[s]], request [BulkShardRequest [[nuxeo][0]] containing [47] requests]
org.elasticsearch.index.IndexNotFoundException: no such index
        at org.elasticsearch.cluster.routing.RoutingTable.shardRoutingTable(RoutingTable.java:137) ~[elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.primary(TransportReplicationAction.java:795) ~[elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:730) ~[elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onNewClusterState(TransportReplicationAction.java:881) ~[elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onNewClusterState(ClusterStateObserver.java:303) ~[elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.clusterChanged(ClusterStateObserver.java:191) ~[elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateListeners$7(ClusterApplierService.java:492) ~[elasticsearch-6.5.3.jar:6.5.3]
        at java.util.concurrent.ConcurrentHashMap$KeySpliterator.forEachRemaining(ConcurrentHashMap.java:3527) [?:1.8.0_275]
        at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:743) [?:1.8.0_275]
        at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647) [?:1.8.0_275]
        at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateListeners(ClusterApplierService.java:489) [elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:472) [elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:416) [elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:160) [elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:624) [elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:244) [elasticsearch-6.5.3.jar:6.5.3]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:207) [elasticsearch-6.5.3.jar:6.5.3]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_275]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_275]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_275]
2021-01-21T04:32:32,907 ERROR [Nuxeo-Work-elasticSearchIndexing-2:1449214549891040.308843962] [org.nuxeo.elasticsearch.core.ElasticSearchIndexingImpl] failure in bulk execution:
[0]: index [nuxeo], type [doc], id [dde988e8-08fc-4e17-a80e-cf7396f78b96], message [[nuxeo/VtdIlpyzQIW_mCuXlVKNPg] ElasticsearchException[Elasticsearch exception [type=index_not_found_exception, reason=no such index]]]
[1]: index [nuxeo], type [doc], id [a16a02de-fd4b-4bf0-be59-249f42d8fe01], message [[nuxeo/VtdIlpyzQIW_mCuXlVKNPg] ElasticsearchException[Elasticsearch exception [type=index_not_found_exception, reason=no such index]]]
--- More errors like this ---
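Regarding question 1, for reference, this is roughly what switching from the embedded ES to an external instance looks like in nuxeo.conf, as far as I understand it. The property names and values below are an assumption based on my own setup, so please verify them against the Nuxeo documentation for your version:

    # nuxeo.conf - sketch for pointing Nuxeo at an external Elasticsearch (assumed property names)
    elasticsearch.client=RestClient
    elasticsearch.addressList=http://my-es-host:9200   # assumed host, adjust to your cluster
    elasticsearch.clusterName=elasticsearch
    elasticsearch.indexName=nuxeo
    elasticsearch.indexNumberOfReplicas=0
    # keep the audit/history in ES as well (relevant to question 2)
    audit.elasticsearch.enabled=true
    audit.elasticsearch.indexName=audit

As far as I know, after switching you have to launch a full repository re-index again (for example from the Admin Center) so that the new cluster gets populated; the audit index should then start receiving new events as they occur.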
1 REPLY

Not applicable

Hello, I have the same issue. The latest events don't display in the document history...

I have to stop and restart Nuxeo/ES/DB to see them again.

What is the root cause?

Thanks for your help