Member since: 09-28-2015
Posts: 73
Kudos Received: 26
Solutions: 6

My Accepted Solutions
Title | Views | Posted
---|---|---
| | 7380 | 01-20-2017 01:27 PM
| | 2789 | 06-01-2016 08:24 AM
| | 2944 | 05-28-2016 01:33 AM
| | 2001 | 05-17-2016 03:44 PM
| | 1117 | 12-22-2015 01:50 AM
08-16-2017
02:57 AM
@Slim Thanks for your advice. You are right: the NPE is a side effect of running out of memory. After reducing the Druid Historical buffer size, it works perfectly (responses under 1 s with cached data).
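For anyone landing here later, the knob involved is the Historical's per-thread intermediate processing buffer. The values below are hypothetical, not the ones used in this thread; the point is that the buffer size times the number of processing threads (plus merge buffers) must fit in the node's direct memory. In the Historical's runtime.properties:

```properties
# Hypothetical sizing, not the poster's actual values.
# Total direct memory needed is roughly
#   druid.processing.buffer.sizeBytes * (numThreads + numMergeBuffers)
# so shrink the buffer if allocations like the 1 GiB one in the log above OOM the node.
druid.processing.buffer.sizeBytes=268435456
druid.processing.numThreads=7
```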
08-02-2017
03:12 AM
I also got an exception in my Druid Historical log:

2017-08-02T03:08:47,572 INFO [groupBy_ssb_druid_[1997-09-01T00:00:00.000Z/1997-10-01T00:00:00.000Z]] io.druid.offheap.OffheapBufferGenerator - Allocating new intermediate processing buffer[77] of size[1,073,741,824]
2017-08-02T03:08:47,573 ERROR [processing-6] io.druid.query.GroupByMergedQueryRunner - Exception with one of the sequences!
java.lang.NullPointerException
at io.druid.query.aggregation.DoubleSumAggregatorFactory.factorize(DoubleSumAggregatorFactory.java:59) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.segment.incremental.OnheapIncrementalIndex.factorizeAggs(OnheapIncrementalIndex.java:222) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.segment.incremental.OnheapIncrementalIndex.addToFacts(OnheapIncrementalIndex.java:186) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.segment.incremental.IncrementalIndex.add(IncrementalIndex.java:492) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.groupby.GroupByQueryHelper$3.accumulate(GroupByQueryHelper.java:127) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.groupby.GroupByQueryHelper$3.accumulate(GroupByQueryHelper.java:119) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at com.metamx.common.guava.BaseSequence.accumulate(BaseSequence.java:67) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.ConcatSequence$1.accumulate(ConcatSequence.java:46) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.ConcatSequence$1.accumulate(ConcatSequence.java:42) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.MappingAccumulator.accumulate(MappingAccumulator.java:39) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.FilteringAccumulator.accumulate(FilteringAccumulator.java:40) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.MappingAccumulator.accumulate(MappingAccumulator.java:39) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.BaseSequence.accumulate(BaseSequence.java:67) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.MappedSequence.accumulate(MappedSequence.java:40) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.ResourceClosingSequence.accumulate(ResourceClosingSequence.java:38) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.FilteredSequence.accumulate(FilteredSequence.java:42) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.MappedSequence.accumulate(MappedSequence.java:40) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.ResourceClosingSequence.accumulate(ResourceClosingSequence.java:38) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.ConcatSequence.accumulate(ConcatSequence.java:40) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.ResourceClosingSequence.accumulate(ResourceClosingSequence.java:38) ~[java-util-0.27.10.jar:?]
at io.druid.query.MetricsEmittingQueryRunner$1.accumulate(MetricsEmittingQueryRunner.java:104) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.MetricsEmittingQueryRunner$1.accumulate(MetricsEmittingQueryRunner.java:104) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.spec.SpecificSegmentQueryRunner$2$1.call(SpecificSegmentQueryRunner.java:87) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.spec.SpecificSegmentQueryRunner.doNamed(SpecificSegmentQueryRunner.java:171) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.spec.SpecificSegmentQueryRunner.access$400(SpecificSegmentQueryRunner.java:41) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.spec.SpecificSegmentQueryRunner$2.doItNamed(SpecificSegmentQueryRunner.java:162) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.spec.SpecificSegmentQueryRunner$2.accumulate(SpecificSegmentQueryRunner.java:80) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.CPUTimeMetricQueryRunner$1.accumulate(CPUTimeMetricQueryRunner.java:81) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at com.metamx.common.guava.Sequences$1.accumulate(Sequences.java:90) ~[java-util-0.27.10.jar:?]
at io.druid.query.GroupByMergedQueryRunner$1$1.call(GroupByMergedQueryRunner.java:120) [druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.GroupByMergedQueryRunner$1$1.call(GroupByMergedQueryRunner.java:111) [druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_121]
at io.druid.query.PrioritizedListenableFutureTask.run(PrioritizedExecutorService.java:271) [druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
08-02-2017
02:45 AM
@Slim My query JSON from the Hive explain:

{"queryType":"groupBy","dataSource":"ssb_druid","granularity":"all","dimensions":[{"type":"default","dimension":"d_year"},{"type":"default","dimension":"p_category"},{"type":"default","dimension":"s_nation"}],"limitSpec":{"type":"default"},"filter":{"type":"and","fields":[{"type":"or","fields":[{"type":"selector","dimension":"d_year","value":"1997"},{"type":"selector","dimension":"d_year","value":"1998"}]},{"type":"or","fields":[{"type":"selector","dimension":"p_mfgr","value":"MFGR#1"},{"type":"selector","dimension":"p_mfgr","value":"MFGR#2"}]},{"type":"selector","dimension":"c_region","value":"AMERICA"},{"type":"selector","dimension":"s_region","value":"AMERICA"}]},"aggregations":[{"type":"doubleSum","name":"$f3","fieldName":"net_revenue"}],"intervals":["1900-01-01T00:00:00.000/3000-01-01T00:00:00.000"]}

And my curl output:

curl -X POST 'http://host:8082/druid/v2/?pretty' -H 'Content-Type:application/json' -d @./q4.2-druid.json
{
"error" : "Unknown exception",
"errorMessage" : "Failure getting results from[http://druid_host:8083/druid/v2/] because of [Invalid type marker byte 0x3c for expected value token\n at [Source: java.io.SequenceInputStream@1f0ff6f5; line: -1, column: 1]]",
"errorClass" : "com.metamx.common.RE",
"host" : null
}
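Incidentally, the "Invalid type marker byte 0x3c" in that error is a useful clue: 0x3c is the ASCII code for '<', so the broker was handed a response beginning with '<' (i.e. an HTML/XML error page) where it expected a serialized result stream. That would be consistent with the Historical failing mid-query. A trivial check:

```python
# The broker expected a result stream but the first byte was 0x3c,
# which is ASCII '<' - the start of an HTML/XML error page.
marker = 0x3C
print(chr(marker))  # prints '<'
```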
07-31-2017
05:25 AM
Hi, Following the Hortonworks blog and this instruction, I successfully created my Hive table cube index in Druid and ran some simple queries. But I get a deserialization error when running query Q4.2. Has anyone seen this before? I am using HDP 2.6.1.

INFO : We are setting the hadoop caller context from HIVE_SSN_ID:152d59b7-0b88-46b2-b419-a6e457e9a06d to hive_20170731043035_22fc1b1b-6779-430d-a9c4-d31c2dee6eec
INFO : Semantic Analysis Completed
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:d_year, type:string, comment:null), FieldSchema(name:s_nation, type:string, comment:null), FieldSchema(name:p_category, type:string, comment:null), FieldSchema(name:profit, type:float, comment:null)], properties:null)
INFO : Completed compiling command(queryId=hive_20170731043035_22fc1b1b-6779-430d-a9c4-d31c2dee6eec); Time taken: 0.34 seconds
INFO : We are resetting the hadoop caller context to HIVE_SSN_ID:152d59b7-0b88-46b2-b419-a6e457e9a06d
INFO : Setting caller context to query id hive_20170731043035_22fc1b1b-6779-430d-a9c4-d31c2dee6eec
INFO : Executing command(queryId=hive_20170731043035_22fc1b1b-6779-430d-a9c4-d31c2dee6eec): select
d_year, s_nation, p_category,
sum(net_revenue) as profit
from
ssb_druid
where
c_region = 'AMERICA' and
s_region = 'AMERICA' and
(d_year = '1997' or d_year = '1998') and
(p_mfgr = 'MFGR#1' or p_mfgr = 'MFGR#2')
group by
d_year, s_nation, p_category
INFO : Resetting the caller context to HIVE_SSN_ID:152d59b7-0b88-46b2-b419-a6e457e9a06d
INFO : Completed executing command(queryId=hive_20170731043035_22fc1b1b-6779-430d-a9c4-d31c2dee6eec); Time taken: 0.009 seconds
INFO : OK
Error: java.io.IOException: org.apache.hive.druid.com.fasterxml.jackson.databind.JsonMappingException: Can not deserialize instance of java.util.ArrayList out of START_OBJECT token
at [Source: org.apache.hive.druid.com.metamx.http.client.io.AppendableByteArrayInputStream@2bf8ee93; line: -1, column: 4] (state=,code=0)
Labels:
- Apache Hive
06-29-2017
01:37 AM
Thanks @Jungtaek Lim. Upgrading to HDP 2.6.1 and manually deleting all blobs solved the issue.
06-27-2017
10:01 AM
In streamline.log I found one error message; I am not sure whether it relates to the issue.

INFO [09:23:44.494] [ForkJoinPool-4-worker-11] c.h.s.s.a.s.t.StormTopologyActionsImpl - Deploying Application WordCount
INFO [09:23:44.495] [ForkJoinPool-4-worker-11] c.h.s.s.a.s.t.StormTopologyActionsImpl - /usr/hdf/current/storm-client/bin/storm jar /tmp/storm-artifacts/streamline-8-WordCount/artifacts/streamline-runtime-storm-0.5.0.3.0.0.0-453.jar --jars /tmp/storm-artifacts/streamline-8-WordCount/jars/streamline-functions-282004f0-1cc7-4599-a49b-e6bde71ff3bf.jar --artifacts org.apache.kafka:kafka-clients:0.10.2.1,org.apache.storm:storm-kafka-client:1.1.0.3.0.0.0-453^org.slf4j:slf4j-log4j12^log4j:log4j^org.apache.zookeeper:zookeeper^org.apache.kafka:kafka-clients --artifactRepositories hwx-public^http://repo.hortonworks.com/content/groups/public/,hwx-private^http://nexus-private.hortonworks.com/nexus/content/groups/public/ -c nimbus.host=apac-shared0.field.hortonworks.com -c nimbus.port=6627 -c nimbus.thrift.max_buffer_size1048576 -c storm.thrift.transport=org.apache.storm.security.auth.SimpleTransportPlugin -c storm.principal.tolocal=org.apache.storm.security.auth.DefaultPrincipalToLocal org.apache.storm.flux.Flux --remote /tmp/storm-artifacts/streamline-8-WordCount.yaml
ERROR [09:24:02.988] [dw-31 - POST /api/v1/catalog/topologies/8/versions/save] c.h.s.s.c.s.StreamCatalogService - Streams should be specified.
06-27-2017
09:30 AM
Hi, I was able to deploy a simple word count application on SAM and integrate it with Schema Registry, which I really liked. However, soon after the deploy finished, my Storm supervisors halted with the error below. It looks like an authorization issue, but even after I disabled the Ranger plugin for Storm it still does not work.

2017-06-27 04:25:18.253 o.a.s.d.s.Slot [INFO] STATE WAITING_FOR_BASIC_LOCALIZATION msInState: 2243 -> WAITING_FOR_BLOB_LOCALIZATION msInState: 0
2017-06-27 04:25:18.289 o.a.s.u.NimbusClient [INFO] Found leader nimbus : test.example.com:6627
2017-06-27 04:25:18.439 o.a.s.l.AsyncLocalizer [WARN] Caught Exception While Downloading (rethrowing)...
java.io.IOException: Error getting blobs
at org.apache.storm.localizer.Localizer.getBlobs(Localizer.java:471) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.AsyncLocalizer$DownloadBlobs.call(AsyncLocalizer.java:254) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.AsyncLocalizer$DownloadBlobs.call(AsyncLocalizer.java:211) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_77]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
Caused by: java.util.concurrent.ExecutionException: java.lang.NullPointerException
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:1.8.0_77]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[?:1.8.0_77]
at org.apache.storm.localizer.Localizer.getBlobs(Localizer.java:460) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
... 6 more
Caused by: java.lang.NullPointerException
at org.apache.storm.utils.Utils.canUserReadBlob(Utils.java:1076) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.Localizer.downloadBlob(Localizer.java:534) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.Localizer.access$000(Localizer.java:65) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.Localizer$DownloadBlob.call(Localizer.java:505) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.Localizer$DownloadBlob.call(Localizer.java:481) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
... 4 more
2017-06-27 04:25:18.441 o.a.s.d.s.Slot [ERROR] Error when processing event
java.util.concurrent.ExecutionException: java.io.IOException: Error getting blobs
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:1.8.0_77]
at java.util.concurrent.FutureTask.get(FutureTask.java:206) ~[?:1.8.0_77]
at org.apache.storm.localizer.LocalDownloadedResource$NoCancelFuture.get(LocalDownloadedResource.java:63) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.daemon.supervisor.Slot.handleWaitingForBlobLocalization(Slot.java:380) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.daemon.supervisor.Slot.stateMachineStep(Slot.java:275) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.daemon.supervisor.Slot.run(Slot.java:740) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
Caused by: java.io.IOException: Error getting blobs
at org.apache.storm.localizer.Localizer.getBlobs(Localizer.java:471) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.AsyncLocalizer$DownloadBlobs.call(AsyncLocalizer.java:254) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.AsyncLocalizer$DownloadBlobs.call(AsyncLocalizer.java:211) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_77]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
Caused by: java.util.concurrent.ExecutionException: java.lang.NullPointerException
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:1.8.0_77]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[?:1.8.0_77]
at org.apache.storm.localizer.Localizer.getBlobs(Localizer.java:460) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.AsyncLocalizer$DownloadBlobs.call(AsyncLocalizer.java:254) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.AsyncLocalizer$DownloadBlobs.call(AsyncLocalizer.java:211) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_77]
at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_77]
Caused by: java.lang.NullPointerException
at org.apache.storm.utils.Utils.canUserReadBlob(Utils.java:1076) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.Localizer.downloadBlob(Localizer.java:534) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.Localizer.access$000(Localizer.java:65) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.Localizer$DownloadBlob.call(Localizer.java:505) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.Localizer$DownloadBlob.call(Localizer.java:481) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_77]
at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_77]
2017-06-27 04:25:18.441 o.a.s.u.Utils [ERROR] Halting process: Error when processing an event
java.lang.RuntimeException: Halting process: Error when processing an event
at org.apache.storm.utils.Utils.exitProcess(Utils.java:1774) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.daemon.supervisor.Slot.run(Slot.java:774) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
2017-06-27 04:25:18.444 o.a.s.d.s.Supervisor [INFO] Shutting down supervisor 6c074218-7ac0-45d8-b1b9-52e3404bbf91
2017-06-27 08:32:26.030 o.a.s.z.Zookeeper [INFO] Staring ZK Curator
Labels:
- Cloudera DataFlow (CDF)
06-19-2017
11:46 AM
Thanks @Dan Chaffelson. Changing the NiFi protocol port to 9089 worked.
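For anyone else hitting this, the setting involved should be the cluster node protocol port in nifi.properties (managed through Ambari in this setup). A sketch of the relevant lines, assuming 9090 stays as the HTTP UI port:

```properties
# nifi.properties - keep the cluster protocol listener off the web UI port
nifi.web.http.port=9090
nifi.cluster.node.protocol.port=9089
```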
06-19-2017
11:07 AM
Thanks @Dan Chaffelson. I executed the scripts without error and was able to start SAM and import an application. Cool!
06-19-2017
06:27 AM
Hi, When I click the NiFi start button in Ambari, it turns green for a second but immediately goes red again. I see the error below in nifi-app.log. It complains "Address already in use", but I confirmed that no process on the node was using port 9090. Has anyone experienced this and found a resolution?

2017-06-19 06:15:14,304 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@27bfa57f{/,file:///var/lib/nifi/work/jetty/nifi-web-error-1.2.0.3.0.0.0-453.war/webapp/,AVAILABLE}{/var/lib/nifi/work/nar/framework/nifi-framework-nar-1.2.0.3.0.0.0-453.nar-unpacked/META-INF/bundled-dependencies/nifi-web-error-1.2.0.3.0.0.0-453.war}
2017-06-19 06:15:14,326 INFO [main] o.eclipse.jetty.server.AbstractConnector Started ServerConnector@11636d43{HTTP/1.1,[http/1.1]}{hdf.example.com:9090}
2017-06-19 06:15:14,327 INFO [main] org.eclipse.jetty.server.Server Started @39995ms
2017-06-19 06:15:15,528 INFO [main] org.apache.nifi.web.server.JettyServer Loading Flow...
2017-06-19 06:15:15,541 INFO [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
2017-06-19 06:15:15,541 INFO [Leader Election Notification Thread-1] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@e11427 has been interrupted; no longer leader for role 'Cluster Coordinator'
2017-06-19 06:15:15,541 INFO [Leader Election Notification Thread-1] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@e11427 This node is no longer leader for role 'Cluster Coordinator'
2017-06-19 06:15:15,541 INFO [Leader Election Notification Thread-1] o.apache.nifi.controller.FlowController This node is no longer the elected Active Cluster Coordinator
2017-06-19 06:15:15,556 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=true] stopped and closed
2017-06-19 06:15:15,557 INFO [main] o.a.n.c.c.h.AbstractHeartbeatMonitor Heartbeat Monitor stopped
2017-06-19 06:15:15,558 INFO [main] o.apache.nifi.controller.FlowController Initiated immediate shutdown of flow controller...
2017-06-19 06:15:16,395 INFO [main] o.apache.nifi.controller.FlowController Controller has been terminated successfully.
2017-06-19 06:15:16,409 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
java.lang.Exception: Unable to load flow due to: org.apache.nifi.lifecycle.LifeCycleStartException: Failed to start Flow Service due to: java.net.BindException: Address already in use
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:809)
at org.apache.nifi.NiFi.<init>(NiFi.java:160)
at org.apache.nifi.NiFi.main(NiFi.java:267)
Caused by: org.apache.nifi.lifecycle.LifeCycleStartException: Failed to start Flow Service due to: java.net.BindException: Address already in use
at org.apache.nifi.controller.StandardFlowService.start(StandardFlowService.java:312)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:799)
... 2 common frames omitted
Caused by: java.net.BindException: Address already in use
at java.net.PlainSocketImpl.socketBind(Native Method)
at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
at java.net.ServerSocket.bind(ServerSocket.java:375)
at java.net.ServerSocket.<init>(ServerSocket.java:237)
at java.net.ServerSocket.<init>(ServerSocket.java:128)
at org.apache.nifi.io.socket.SocketUtils.createServerSocket(SocketUtils.java:108)
at org.apache.nifi.io.socket.SocketListener.start(SocketListener.java:85)
at org.apache.nifi.cluster.protocol.impl.SocketProtocolListener.start(SocketProtocolListener.java:90)
at org.apache.nifi.cluster.protocol.impl.NodeProtocolSenderListener.start(NodeProtocolSenderListener.java:64)
at org.apache.nifi.controller.StandardFlowService.start(StandardFlowService.java:303)
... 3 common frames omitted
2017-06-19 06:15:16,411 INFO [Thread-1] org.apache.nifi.NiFi Initiating shutdown of Jetty web server...
2017-06-19 06:15:16,423 INFO [Thread-1] o.eclipse.jetty.server.AbstractConnector Stopped ServerConnector@11636d43{HTTP/1.1,[http/1.1]}{hdf.example.com:9090}
2017-06-19 06:15:16,424 INFO [Thread-1] org.eclipse.jetty.server.session Stopped scavenging
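A note on diagnosing this kind of "Address already in use": tools like netstat only show the conflict while the other listener is actually bound, and here the collision was between two of NiFi's own listeners racing for the same port. A generic way to probe whether a TCP port is currently bindable (a sketch, not NiFi-specific):

```python
import socket

def port_free(port: int, host: str = "0.0.0.0") -> bool:
    """Return True if we can bind the given TCP port right now."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # SO_REUSEADDR lets us ignore sockets lingering in TIME_WAIT,
        # but an active listener on the port will still make bind() fail.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

print(port_free(9090))
```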
Labels:
- Apache NiFi
- Cloudera DataFlow (CDF)