Member since: 09-28-2015
Posts: 73
Kudos Received: 26
Solutions: 6
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 5547 | 01-20-2017 01:27 PM |
|  | 1250 | 06-01-2016 08:24 AM |
|  | 1066 | 05-28-2016 01:33 AM |
|  | 1194 | 05-17-2016 03:44 PM |
|  | 575 | 12-22-2015 01:50 AM |
08-16-2017
02:57 AM
@Slim Thanks for your advice. You are right; the NPE is a side effect of running out of memory. It works perfectly (responses under 1s with cached data) after reducing the Druid Historical buffer size.
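To make the fix concrete, here is a rough sketch of what "reducing the Druid Historical buffer size" means. The property names are standard Druid configuration, but the config path is a placeholder and the example value is illustrative, not the exact value from my cluster; the off-heap memory the Historical needs grows roughly with druid.processing.buffer.sizeBytes times the number of processing threads, so 1 GB buffers add up quickly.
# Check the current values on the Historical node (config path is a placeholder):
$ grep -E 'druid\.processing\.(buffer\.sizeBytes|numThreads)' <druid_historical_conf>/runtime.properties
# Lowering the per-buffer size was enough in my case, for example:
# druid.processing.buffer.sizeBytes=268435456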
08-02-2017
03:12 AM
I also got an exception in my Druid Historical log.
2017-08-02T03:08:47,572 INFO [groupBy_ssb_druid_[1997-09-01T00:00:00.000Z/1997-10-01T00:00:00.000Z]] io.druid.offheap.OffheapBufferGenerator - Allocating new intermediate processing buffer[77] of size[1,073,741,824]
2017-08-02T03:08:47,573 ERROR [processing-6] io.druid.query.GroupByMergedQueryRunner - Exception with one of the sequences!
java.lang.NullPointerException
at io.druid.query.aggregation.DoubleSumAggregatorFactory.factorize(DoubleSumAggregatorFactory.java:59) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.segment.incremental.OnheapIncrementalIndex.factorizeAggs(OnheapIncrementalIndex.java:222) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.segment.incremental.OnheapIncrementalIndex.addToFacts(OnheapIncrementalIndex.java:186) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.segment.incremental.IncrementalIndex.add(IncrementalIndex.java:492) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.groupby.GroupByQueryHelper$3.accumulate(GroupByQueryHelper.java:127) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.groupby.GroupByQueryHelper$3.accumulate(GroupByQueryHelper.java:119) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at com.metamx.common.guava.BaseSequence.accumulate(BaseSequence.java:67) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.ConcatSequence$1.accumulate(ConcatSequence.java:46) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.ConcatSequence$1.accumulate(ConcatSequence.java:42) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.MappingAccumulator.accumulate(MappingAccumulator.java:39) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.FilteringAccumulator.accumulate(FilteringAccumulator.java:40) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.MappingAccumulator.accumulate(MappingAccumulator.java:39) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.BaseSequence.accumulate(BaseSequence.java:67) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.MappedSequence.accumulate(MappedSequence.java:40) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.ResourceClosingSequence.accumulate(ResourceClosingSequence.java:38) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.FilteredSequence.accumulate(FilteredSequence.java:42) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.MappedSequence.accumulate(MappedSequence.java:40) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.ResourceClosingSequence.accumulate(ResourceClosingSequence.java:38) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.ConcatSequence.accumulate(ConcatSequence.java:40) ~[java-util-0.27.10.jar:?]
at com.metamx.common.guava.ResourceClosingSequence.accumulate(ResourceClosingSequence.java:38) ~[java-util-0.27.10.jar:?]
at io.druid.query.MetricsEmittingQueryRunner$1.accumulate(MetricsEmittingQueryRunner.java:104) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.MetricsEmittingQueryRunner$1.accumulate(MetricsEmittingQueryRunner.java:104) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.spec.SpecificSegmentQueryRunner$2$1.call(SpecificSegmentQueryRunner.java:87) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.spec.SpecificSegmentQueryRunner.doNamed(SpecificSegmentQueryRunner.java:171) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.spec.SpecificSegmentQueryRunner.access$400(SpecificSegmentQueryRunner.java:41) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.spec.SpecificSegmentQueryRunner$2.doItNamed(SpecificSegmentQueryRunner.java:162) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.spec.SpecificSegmentQueryRunner$2.accumulate(SpecificSegmentQueryRunner.java:80) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.CPUTimeMetricQueryRunner$1.accumulate(CPUTimeMetricQueryRunner.java:81) ~[druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at com.metamx.common.guava.Sequences$1.accumulate(Sequences.java:90) ~[java-util-0.27.10.jar:?]
at io.druid.query.GroupByMergedQueryRunner$1$1.call(GroupByMergedQueryRunner.java:120) [druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at io.druid.query.GroupByMergedQueryRunner$1$1.call(GroupByMergedQueryRunner.java:111) [druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_121]
at io.druid.query.PrioritizedListenableFutureTask.run(PrioritizedExecutorService.java:271) [druid-processing-0.9.2.2.6.1.0-129.jar:0.9.2.2.6.1.0-129]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
08-02-2017
02:45 AM
@Slim Here is my query JSON from the Hive explain:
{"queryType":"groupBy","dataSource":"ssb_druid","granularity":"all","dimensions":[{"type":"default","dimension":"d_year"},{"type":"default","dimension":"p_category"},{"type":"default","dimension":"s_nation"}],"limitSpec":{"type":"default"},"filter":{"type":"and","fields":[{"type":"or","fields":[{"type":"selector","dimension":"d_year","value":"1997"},{"type":"selector","dimension":"d_year","value":"1998"}]},{"type":"or","fields":[{"type":"selector","dimension":"p_mfgr","value":"MFGR#1"},{"type":"selector","dimension":"p_mfgr","value":"MFGR#2"}]},{"type":"selector","dimension":"c_region","value":"AMERICA"},{"type":"selector","dimension":"s_region","value":"AMERICA"}]},"aggregations":[{"type":"doubleSum","name":"$f3","fieldName":"net_revenue"}],"intervals":["1900-01-01T00:00:00.000/3000-01-01T00:00:00.000"]}
And my curl output:
curl -X POST 'http://host:8082/druid/v2/?pretty' -H 'Content-Type:application/json' -d @./q4.2-druid.json
{
"error" : "Unknown exception",
"errorMessage" : "Failure getting results from[http://druid_host:8083/druid/v2/] because of [Invalid type marker byte 0x3c for expected value token\n at [Source: java.io.SequenceInputStream@1f0ff6f5; line: -1, column: 1]]",
"errorClass" : "com.metamx.common.RE",
"host" : null
}
07-31-2017
05:25 AM
Hi, Following the Hortonworks blog and this instruction, I successfully created my Hive table cube index in Druid and ran some simple queries. But I got a deserialization error when running query Q4.2. Has anyone seen this before? I am using HDP 2.6.1.
INFO : We are setting the hadoop caller context from HIVE_SSN_ID:152d59b7-0b88-46b2-b419-a6e457e9a06d to hive_20170731043035_22fc1b1b-6779-430d-a9c4-d31c2dee6eec
INFO : Semantic Analysis Completed
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:d_year, type:string, comment:null), FieldSchema(name:s_nation, type:string, comment:null), FieldSchema(name:p_category, type:string, comment:null), FieldSchema(name:profit, type:float, comment:null)], properties:null)
INFO : Completed compiling command(queryId=hive_20170731043035_22fc1b1b-6779-430d-a9c4-d31c2dee6eec); Time taken: 0.34 seconds
INFO : We are resetting the hadoop caller context to HIVE_SSN_ID:152d59b7-0b88-46b2-b419-a6e457e9a06d
INFO : Setting caller context to query id hive_20170731043035_22fc1b1b-6779-430d-a9c4-d31c2dee6eec
INFO : Executing command(queryId=hive_20170731043035_22fc1b1b-6779-430d-a9c4-d31c2dee6eec): select
d_year, s_nation, p_category,
sum(net_revenue) as profit
from
ssb_druid
where
c_region = 'AMERICA' and
s_region = 'AMERICA' and
(d_year = '1997' or d_year = '1998') and
(p_mfgr = 'MFGR#1' or p_mfgr = 'MFGR#2')
group by
d_year, s_nation, p_category
INFO : Resetting the caller context to HIVE_SSN_ID:152d59b7-0b88-46b2-b419-a6e457e9a06d
INFO : Completed executing command(queryId=hive_20170731043035_22fc1b1b-6779-430d-a9c4-d31c2dee6eec); Time taken: 0.009 seconds
INFO : OK
Error: java.io.IOException: org.apache.hive.druid.com.fasterxml.jackson.databind.JsonMappingException: Can not deserialize instance of java.util.ArrayList out of START_OBJECT token
at [Source: org.apache.hive.druid.com.metamx.http.client.io.AppendableByteArrayInputStream@2bf8ee93; line: -1, column: 4] (state=,code=0)
Labels:
- Apache Hive
06-29-2017
01:37 AM
Thanks @Jungtaek Lim. Upgrading to HDP 2.6.1 and manually deleting all blobs solved the issue.
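For anyone hitting the same thing, this is a hedged sketch of how the blobs can be listed and removed with the Storm blobstore CLI; the storm binary path is the HDF client path used elsewhere in these posts, and the blob key is a placeholder you need to take from the list output.
# List the blobs currently stored, then delete the stale ones:
$ /usr/hdf/current/storm-client/bin/storm blobstore list
$ /usr/hdf/current/storm-client/bin/storm blobstore delete <blob_key>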
06-27-2017
10:01 AM
In streamline.log, I found one error message. I am not sure whether it relates to the issue.
INFO [09:23:44.494] [ForkJoinPool-4-worker-11] c.h.s.s.a.s.t.StormTopologyActionsImpl - Deploying Application WordCount
INFO [09:23:44.495] [ForkJoinPool-4-worker-11] c.h.s.s.a.s.t.StormTopologyActionsImpl - /usr/hdf/current/storm-client/bin/storm jar /tmp/storm-artifacts/streamline-8-WordCount/artifacts/streamline-runtime-storm-0.5.0.3.0.0.0-453.jar --jars /tmp/storm-artifacts/streamline-8-WordCount/jars/streamline-functions-282004f0-1cc7-4599-a49b-e6bde71ff3bf.jar --artifacts org.apache.kafka:kafka-clients:0.10.2.1,org.apache.storm:storm-kafka-client:1.1.0.3.0.0.0-453^org.slf4j:slf4j-log4j12^log4j:log4j^org.apache.zookeeper:zookeeper^org.apache.kafka:kafka-clients --artifactRepositories hwx-public^http://repo.hortonworks.com/content/groups/public/,hwx-private^http://nexus-private.hortonworks.com/nexus/content/groups/public/ -c nimbus.host=apac-shared0.field.hortonworks.com -c nimbus.port=6627 -c nimbus.thrift.max_buffer_size1048576 -c storm.thrift.transport=org.apache.storm.security.auth.SimpleTransportPlugin -c storm.principal.tolocal=org.apache.storm.security.auth.DefaultPrincipalToLocal org.apache.storm.flux.Flux --remote /tmp/storm-artifacts/streamline-8-WordCount.yaml
ERROR [09:24:02.988] [dw-31 - POST /api/v1/catalog/topologies/8/versions/save] c.h.s.s.c.s.StreamCatalogService - Streams should be specified.
06-27-2017
09:30 AM
Hi, I was able to deploy a simple word count application on SAM and integrate it with Schema Registry, which I really liked. However, soon after the deployment finished, my Storm supervisors halted with the below error. It looks like an authorization issue, but even after I disabled the Ranger plugin for Storm, it still does not work.
2017-06-27 04:25:18.253 o.a.s.d.s.Slot [INFO] STATE WAITING_FOR_BASIC_LOCALIZATION msInState: 2243 -> WAITING_FOR_BLOB_LOCALIZATION msInState: 0
2017-06-27 04:25:18.289 o.a.s.u.NimbusClient [INFO] Found leader nimbus : test.example.com:6627
2017-06-27 04:25:18.439 o.a.s.l.AsyncLocalizer [WARN] Caught Exception While Downloading (rethrowing)...
java.io.IOException: Error getting blobs
at org.apache.storm.localizer.Localizer.getBlobs(Localizer.java:471) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.AsyncLocalizer$DownloadBlobs.call(AsyncLocalizer.java:254) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.AsyncLocalizer$DownloadBlobs.call(AsyncLocalizer.java:211) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_77]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
Caused by: java.util.concurrent.ExecutionException: java.lang.NullPointerException
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:1.8.0_77]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[?:1.8.0_77]
at org.apache.storm.localizer.Localizer.getBlobs(Localizer.java:460) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
... 6 more
Caused by: java.lang.NullPointerException
at org.apache.storm.utils.Utils.canUserReadBlob(Utils.java:1076) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.Localizer.downloadBlob(Localizer.java:534) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.Localizer.access$000(Localizer.java:65) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.Localizer$DownloadBlob.call(Localizer.java:505) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.Localizer$DownloadBlob.call(Localizer.java:481) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
... 4 more
2017-06-27 04:25:18.441 o.a.s.d.s.Slot [ERROR] Error when processing event
java.util.concurrent.ExecutionException: java.io.IOException: Error getting blobs
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:1.8.0_77]
at java.util.concurrent.FutureTask.get(FutureTask.java:206) ~[?:1.8.0_77]
at org.apache.storm.localizer.LocalDownloadedResource$NoCancelFuture.get(LocalDownloadedResource.java:63) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.daemon.supervisor.Slot.handleWaitingForBlobLocalization(Slot.java:380) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.daemon.supervisor.Slot.stateMachineStep(Slot.java:275) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.daemon.supervisor.Slot.run(Slot.java:740) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
Caused by: java.io.IOException: Error getting blobs
at org.apache.storm.localizer.Localizer.getBlobs(Localizer.java:471) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.AsyncLocalizer$DownloadBlobs.call(AsyncLocalizer.java:254) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.AsyncLocalizer$DownloadBlobs.call(AsyncLocalizer.java:211) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_77]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
Caused by: java.util.concurrent.ExecutionException: java.lang.NullPointerException
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:1.8.0_77]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[?:1.8.0_77]
at org.apache.storm.localizer.Localizer.getBlobs(Localizer.java:460) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.AsyncLocalizer$DownloadBlobs.call(AsyncLocalizer.java:254) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.AsyncLocalizer$DownloadBlobs.call(AsyncLocalizer.java:211) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_77]
at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_77]
Caused by: java.lang.NullPointerException
at org.apache.storm.utils.Utils.canUserReadBlob(Utils.java:1076) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.Localizer.downloadBlob(Localizer.java:534) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.Localizer.access$000(Localizer.java:65) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.Localizer$DownloadBlob.call(Localizer.java:505) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.localizer.Localizer$DownloadBlob.call(Localizer.java:481) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_77]
at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_77]
2017-06-27 04:25:18.441 o.a.s.u.Utils [ERROR] Halting process: Error when processing an event
java.lang.RuntimeException: Halting process: Error when processing an event
at org.apache.storm.utils.Utils.exitProcess(Utils.java:1774) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
at org.apache.storm.daemon.supervisor.Slot.run(Slot.java:774) ~[storm-core-1.1.0.2.6.0.3-8.jar:1.1.0.2.6.0.3-8]
2017-06-27 04:25:18.444 o.a.s.d.s.Supervisor [INFO] Shutting down supervisor 6c074218-7ac0-45d8-b1b9-52e3404bbf91
2017-06-27 08:32:26.030 o.a.s.z.Zookeeper [INFO] Staring ZK Curator
Labels:
- Cloudera DataFlow (CDF)
06-19-2017
11:46 AM
Thanks @Dan Chaffelson. Changing the NiFi protocol port to 9089 works.
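For context, a hedged sketch of what the protocol port refers to; the property name is NiFi's standard cluster setting, but the nifi.properties path is assumed for an Ambari-managed install, and the ss check is just one way to confirm nothing else holds the port.
# The cluster node protocol port that must be free on each node:
$ grep 'nifi.cluster.node.protocol.port' /etc/nifi/conf/nifi.properties
# Confirm the newly chosen port (9089 in my case) is not already bound:
$ ss -ltn | grep ':9089'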
06-19-2017
11:07 AM
Thanks @Dan Chaffelson. I executed the scripts without error and was able to start SAM and import an application. Cool!
06-19-2017
06:27 AM
Hi, When I click the NiFi start button in Ambari, it turns green for a second but immediately turns red again. I got the below error in nifi-app.log. It complains 'Address already in use', but I confirmed that no process on the node was using port 9090. Has anyone experienced this and found a resolution?
2017-06-19 06:15:14,304 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext@27bfa57f{/,file:///var/lib/nifi/work/jetty/nifi-web-error-1.2.0.3.0.0.0-453.war/webapp/,AVAILABLE}{/var/lib/nifi/work/nar/framework/nifi-framework-nar-1.2.0.3.0.0.0-453.nar-unpacked/META-INF/bundled-dependencies/nifi-web-error-1.2.0.3.0.0.0-453.war}
2017-06-19 06:15:14,326 INFO [main] o.eclipse.jetty.server.AbstractConnector Started ServerConnector@11636d43{HTTP/1.1,[http/1.1]}{hdf.example.com:9090}
2017-06-19 06:15:14,327 INFO [main] org.eclipse.jetty.server.Server Started @39995ms
2017-06-19 06:15:15,528 INFO [main] org.apache.nifi.web.server.JettyServer Loading Flow...
2017-06-19 06:15:15,541 INFO [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
2017-06-19 06:15:15,541 INFO [Leader Election Notification Thread-1] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@e11427 has been interrupted; no longer leader for role 'Cluster Coordinator'
2017-06-19 06:15:15,541 INFO [Leader Election Notification Thread-1] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@e11427 This node is no longer leader for role 'Cluster Coordinator'
2017-06-19 06:15:15,541 INFO [Leader Election Notification Thread-1] o.apache.nifi.controller.FlowController This node is no longer the elected Active Cluster Coordinator
2017-06-19 06:15:15,556 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=true] stopped and closed
2017-06-19 06:15:15,557 INFO [main] o.a.n.c.c.h.AbstractHeartbeatMonitor Heartbeat Monitor stopped
2017-06-19 06:15:15,558 INFO [main] o.apache.nifi.controller.FlowController Initiated immediate shutdown of flow controller...
2017-06-19 06:15:16,395 INFO [main] o.apache.nifi.controller.FlowController Controller has been terminated successfully.
2017-06-19 06:15:16,409 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
java.lang.Exception: Unable to load flow due to: org.apache.nifi.lifecycle.LifeCycleStartException: Failed to start Flow Service due to: java.net.BindException: Address already in use
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:809)
at org.apache.nifi.NiFi.<init>(NiFi.java:160)
at org.apache.nifi.NiFi.main(NiFi.java:267)
Caused by: org.apache.nifi.lifecycle.LifeCycleStartException: Failed to start Flow Service due to: java.net.BindException: Address already in use
at org.apache.nifi.controller.StandardFlowService.start(StandardFlowService.java:312)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:799)
... 2 common frames omitted
Caused by: java.net.BindException: Address already in use
at java.net.PlainSocketImpl.socketBind(Native Method)
at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
at java.net.ServerSocket.bind(ServerSocket.java:375)
at java.net.ServerSocket.<init>(ServerSocket.java:237)
at java.net.ServerSocket.<init>(ServerSocket.java:128)
at org.apache.nifi.io.socket.SocketUtils.createServerSocket(SocketUtils.java:108)
at org.apache.nifi.io.socket.SocketListener.start(SocketListener.java:85)
at org.apache.nifi.cluster.protocol.impl.SocketProtocolListener.start(SocketProtocolListener.java:90)
at org.apache.nifi.cluster.protocol.impl.NodeProtocolSenderListener.start(NodeProtocolSenderListener.java:64)
at org.apache.nifi.controller.StandardFlowService.start(StandardFlowService.java:303)
... 3 common frames omitted
2017-06-19 06:15:16,411 INFO [Thread-1] org.apache.nifi.NiFi Initiating shutdown of Jetty web server...
2017-06-19 06:15:16,423 INFO [Thread-1] o.eclipse.jetty.server.AbstractConnector Stopped ServerConnector@11636d43{HTTP/1.1,[http/1.1]}{hdf.example.com:9090}
2017-06-19 06:15:16,424 INFO [Thread-1] org.eclipse.jetty.server.session Stopped scavenging
Labels:
- Apache NiFi
- Cloudera DataFlow (CDF)
06-19-2017
05:16 AM
After manually executing /usr/hdf/current/streamline/bootstrap/sql/postgresql/create_tables.sql, I was able to start SAM successfully.
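In case it helps others, this is roughly what "manually executing" the bootstrap DDL looks like with psql; it is a minimal sketch, and the host, database name, and user are placeholders for whatever your SAM/Streamline PostgreSQL configuration uses, not values from this post.
$ psql -h <postgres_host> -U <streamline_db_user> -d <streamline_db> \
    -f /usr/hdf/current/streamline/bootstrap/sql/postgresql/create_tables.sql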
06-19-2017
04:20 AM
After reconfiguring SAM against PostgreSQL 9.6, this error is gone. However, I got another error while starting SAM.
Exception in thread "main" com.hortonworks.streamline.storage.exception.StorageException: com.google.common.util.concurrent.UncheckedExecutionException: com.hortonworks.streamline.storage.exception.StorageException: org.postgresql.util.PSQLException: ERROR: relation "topology_version" does not exist
Position: 15
at com.hortonworks.streamline.storage.cache.impl.GuavaCache.get(GuavaCache.java:72)
at com.hortonworks.streamline.storage.cache.impl.GuavaCache.get(GuavaCache.java:41)
at com.hortonworks.streamline.storage.CacheBackedStorageManager.get(CacheBackedStorageManager.java:74)
at com.hortonworks.streamline.streams.catalog.service.StreamCatalogService.getTopologyVersionInfo(StreamCatalogService.java:245)
at com.hortonworks.streamline.streams.service.StreamsModule.setupPlaceholderTopologyVersionInfo(StreamsModule.java:202)
at com.hortonworks.streamline.streams.service.StreamsModule.setupPlaceholderEntities(StreamsModule.java:198)
at com.hortonworks.streamline.streams.service.StreamsModule.getResources(StreamsModule.java:104)
at com.hortonworks.streamline.webservice.StreamlineApplication.registerResources(StreamlineApplication.java:293)
at com.hortonworks.streamline.webservice.StreamlineApplication.run(StreamlineApplication.java:100)
at com.hortonworks.streamline.webservice.StreamlineApplication.run(StreamlineApplication.java:74)
at io.dropwizard.cli.EnvironmentCommand.run(EnvironmentCommand.java:43)
at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:85)
at io.dropwizard.cli.Cli.run(Cli.java:75)
at io.dropwizard.Application.run(Application.java:79)
at com.hortonworks.streamline.webservice.StreamlineApplication.main(StreamlineApplication.java:78)
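A note on why moving to PostgreSQL 9.6 made the earlier "syntax error at or near ON" go away: this is my inference rather than anything stated in the thread, but insertOrUpdateWithUniqueId appears to issue an INSERT ... ON CONFLICT upsert, and ON CONFLICT is only supported from PostgreSQL 9.5 onward. A quick way to check which server version SAM is actually pointed at (connection details are placeholders):
$ psql -h <postgres_host> -U <streamline_db_user> -d <streamline_db> -c 'SELECT version();'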
06-16-2017
08:54 AM
1 Kudo
Hi, I installed HDF 3.0 on an existing HDP 2.6.0 cluster using Ambari 2.5.1. The installation succeeded, but SAM failed to start. My configuration uses PostgreSQL for SAM. I got the below error. Has anyone seen this before?
Exception in thread "main" com.hortonworks.streamline.storage.exception.StorageException: org.postgresql.util.PSQLException: ERROR: syntax error at or near "ON"
Position: 112
at com.hortonworks.streamline.storage.impl.jdbc.provider.sql.factory.AbstractQueryExecutor$QueryExecution.executeUpdate(AbstractQueryExecutor.java:223)
at com.hortonworks.streamline.storage.impl.jdbc.provider.sql.factory.AbstractQueryExecutor.executeUpdate(AbstractQueryExecutor.java:180)
at com.hortonworks.streamline.storage.impl.jdbc.provider.postgresql.factory.PostgresqlExecutor.insertOrUpdateWithUniqueId(PostgresqlExecutor.java:150)
at com.hortonworks.streamline.storage.impl.jdbc.provider.postgresql.factory.PostgresqlExecutor.insertOrUpdate(PostgresqlExecutor.java:69)
at com.hortonworks.streamline.storage.impl.jdbc.JdbcStorageManager.addOrUpdate(JdbcStorageManager.java:81)
at com.hortonworks.streamline.storage.cache.writer.StorageWriteThrough.addOrUpdate(StorageWriteThrough.java:37)
at com.hortonworks.streamline.storage.CacheBackedStorageManager.addOrUpdate(CacheBackedStorageManager.java:68)
at com.hortonworks.streamline.streams.catalog.service.StreamCatalogService.addOrUpdateTopologyVersionInfo(StreamCatalogService.java:262)
at com.hortonworks.streamline.streams.service.StreamsModule.setupPlaceholderTopologyVersionInfo(StreamsModule.java:210)
at com.hortonworks.streamline.streams.service.StreamsModule.setupPlaceholderEntities(StreamsModule.java:198)
at com.hortonworks.streamline.streams.service.StreamsModule.getResources(StreamsModule.java:104)
at com.hortonworks.streamline.webservice.StreamlineApplication.registerResources(StreamlineApplication.java:293)
at com.hortonworks.streamline.webservice.StreamlineApplication.run(StreamlineApplication.java:100)
at com.hortonworks.streamline.webservice.StreamlineApplication.run(StreamlineApplication.java:74)
at io.dropwizard.cli.EnvironmentCommand.run(EnvironmentCommand.java:43)
at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:85)
at io.dropwizard.cli.Cli.run(Cli.java:75)
at io.dropwizard.Application.run(Application.java:79)
at com.hortonworks.streamline.webservice.StreamlineApplication.main(StreamlineApplication.java:78)
Caused by: org.postgresql.util.PSQLException: ERROR: syntax error at or near "ON"
Position: 112
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2455)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2155)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:288)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:430)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:356)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:168)
at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:135)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
at com.hortonworks.streamline.storage.impl.jdbc.provider.sql.factory.AbstractQueryExecutor$QueryExecution.executeUpdate(AbstractQueryExecutor.java:221)
... 18 more
Labels:
- Cloudera DataFlow (CDF)
03-14-2017
01:07 PM
@Predrag Minovic On my real HDP cluster, Tableau was able to connect to the Spark 1.6 Thrift Server, but failed to fetch table metadata. Have you ever seen this before? The table default.crimes exists, and I can query it using Beeline via either HS2 or the Spark Thrift Server.
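For reference, this is the kind of Beeline check I mean; the host is a placeholder, and 10015 is a common default port for the Spark 1.6 Thrift Server on HDP, so adjust both to your cluster's configuration.
$ beeline -u 'jdbc:hive2://<spark_ts_host>:10015/default' -n <user> \
    -e 'SELECT * FROM default.crimes LIMIT 5;'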
03-10-2017
05:10 AM
I tried both the Hive and Spark drivers; neither works. I also configured port forwarding. Which version did you test on your real cluster?
03-09-2017
07:47 AM
Hi, How do I connect to the Spark Thrift Server from Tableau on the HDP 2.5 sandbox? I installed both the Hortonworks Hive ODBC Driver and the SparkSQL ODBC Driver. I tried both drivers against both the Spark 1.6 and Spark 2 Thrift Servers, but all attempts failed. I am trying this on Mac OS X. Failure screenshot attached.
Labels:
- Apache Spark
01-26-2017
10:23 AM
1 Kudo
@dbains, @mthiele, @Daniel Kozlowski, thank you. It works when the topic is created as the kafka user. To summarize all the steps, from Ambari configs to creating a topic, granting permissions, and testing with the Kafka console producer/consumer scripts, I created this article: Step by Step Recipe for Securing Kafka with Kerberos. Hope it saves others' time 🙂
01-26-2017
10:14 AM
7 Kudos
I found it a little tricky to get started with a Kerberos-enabled Kafka cluster, so I created this step-by-step recipe for securing Kafka with Kerberos and sending and receiving data on the console. It is tested on HDP 2.5.0 and Ambari 2.4.1.
1. Enable Kerberos using the Ambari Kerberos setup wizard under the Admin > Kerberos menu.
2. In the Ambari Kafka Config UI, change the "listeners" property to "PLAINTEXTSASL://localhost:6667", then restart Kafka as requested by Ambari.
3. Create a test topic in Kafka. You must use the kafka service user to do this.
$ cd /usr/hdp/current/kafka-broker/bin
$ sudo su kafka
$ kinit -k -t /etc/security/keytabs/kafka.service.keytab kafka/ip-10-0-1-130.ap-northeast-1.compute.internal
$ ./kafka-topics.sh --zookeeper ip-10-0-1-130.ap-northeast-1.compute.internal:2181 --create --topic foo --partitions 1 --replication-factor 1
Created topic "foo".
4. Grant permissions to the user. This can be done with Kafka's native ACL mechanism or with Apache Ranger; in this example we use Kafka ACLs. The user bob must already exist in the KDC.
# Grant user bob producer access on topic foo
./kafka-acls.sh --authorizer-properties zookeeper.connect=ip-10-0-1-130.ap-northeast-1.compute.internal:2181 \
--add --allow-principal User:bob \
--producer --topic foo
Adding ACLs for resource `Topic:foo`:
User:bob has Allow permission for operations: Describe from hosts: *
User:bob has Allow permission for operations: Write from hosts: *
Adding ACLs for resource `Cluster:kafka-cluster`:
User:bob has Allow permission for operations: Create from hosts: *
Current ACLs for resource `Topic:foo`:
User:bob has Allow permission for operations: Describe from hosts: *
User:bob has Allow permission for operations: Write from hosts: *
# Grant user bob as consumer
./kafka-acls.sh --authorizer-properties zookeeper.connect=ip-10-0-1-130.ap-northeast-1.compute.internal:2181 \
--add --allow-principal User:bob \
--consumer --topic foo --group *
Adding ACLs for resource `Topic:foo`:
User:bob has Allow permission for operations: Read from hosts: *
User:bob has Allow permission for operations: Describe from hosts: *
Adding ACLs for resource `Group:connect-distributed.sh`:
User:bob has Allow permission for operations: Read from hosts: *
Current ACLs for resource `Topic:foo`:
User:bob has Allow permission for operations: Read from hosts: *
User:bob has Allow permission for operations: Describe from hosts: *
User:bob has Allow permission for operations: Write from hosts: *
Current ACLs for resource `Group:connect-distributed.sh`:
User:bob has Allow permission for operations: Read from hosts: *
5. Confirm the above works using the Kafka console producer and consumer scripts.
# Switch to the bob user and log in to the KDC.
$ kinit bob
# Start console producer
$ ./kafka-console-producer.sh --broker-list ip-10-0-1-130.ap-northeast-1.compute.internal:6667 --topic foo --security-protocol PLAINTEXTSASL
# On another terminal, start console consumer
./kafka-console-consumer.sh --zookeeper ip-10-0-1-130.ap-northeast-1.compute.internal:2181 --topic foo --security-protocol PLAINTEXTSASL
{metadata.broker.list=ip-10-0-1-130.ap-northeast-1.compute.internal:6667, request.timeout.ms=30000, client.id=console-consumer-57797, security.protocol=PLAINTEXTSASL}
# Type something in the producer terminal; it should appear in the consumer terminal immediately.
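As an optional extra step (my addition, not part of the original recipe), you can list the ACLs on the topic to confirm the producer and consumer grants above took effect:
# List current ACLs for topic foo
./kafka-acls.sh --authorizer-properties zookeeper.connect=ip-10-0-1-130.ap-northeast-1.compute.internal:2181 \
    --list --topic foo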
Labels:
- Data Ingestion & Streaming
- How-To/Tutorial
- Kafka
- Kerberos
01-23-2017
02:44 AM
I created the topic using a normal user (not the 'kafka' service user). Do I need to use the 'kafka' user to create the topic?
01-23-2017
02:43 AM
I created the topic using a normal user (not the 'kafka' service user). Do I need to use the 'kafka' user to create the topic? Below is my server.properties.
advertised.listeners=PLAINTEXTSASL://ip-10-0-0-149.ap-northeast-1.compute.internal:6667
authorizer.class.name=org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer
auto.create.topics.enable=true
auto.leader.rebalance.enable=true
compression.type=producer
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
controller.message.queue.size=10
controller.socket.timeout.ms=30000
default.replication.factor=1
delete.topic.enable=false
external.kafka.metrics.exclude.prefix=kafka.network.RequestMetrics,kafka.server.DelayedOperationPurgatory,kafka.server.BrokerTopicMetrics.BytesRejectedPerSec
external.kafka.metrics.include.prefix=kafka.network.RequestMetrics.ResponseQueueTimeMs.request.OffsetCommit.98percentile,kafka.network.RequestMetrics.ResponseQueueTimeMs.request.Offsets.95percentile,kafka.network.RequestMetrics.ResponseSendTimeMs.request.Fetch.95percentile,kafka.network.RequestMetrics.RequestsPerSec.request
fetch.purgatory.purge.interval.requests=10000
kafka.ganglia.metrics.group=kafka
kafka.ganglia.metrics.host=localhost
kafka.ganglia.metrics.port=8671
kafka.ganglia.metrics.reporter.enabled=true
kafka.metrics.reporters=org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter
kafka.timeline.metrics.host=ip-10-0-0-229.ap-northeast-1.compute.internal
kafka.timeline.metrics.maxRowCacheSize=10000
kafka.timeline.metrics.port=6188
kafka.timeline.metrics.protocol=http
kafka.timeline.metrics.reporter.enabled=true
kafka.timeline.metrics.reporter.sendInterval=5900
kafka.timeline.metrics.truststore.password=bigdata
kafka.timeline.metrics.truststore.path=/etc/security/clientKeys/all.jks
kafka.timeline.metrics.truststore.type=jks
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
listeners=PLAINTEXTSASL://ip-10-0-0-149.ap-northeast-1.compute.internal:6667
log.cleanup.interval.mins=10
log.dirs=/kafka-logs
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.bytes=-1
log.retention.hours=168
log.roll.hours=168
log.segment.bytes=1073741824
message.max.bytes=1000000
min.insync.replicas=1
num.io.threads=8
num.network.threads=3
num.partitions=1
num.recovery.threads.per.data.dir=1
num.replica.fetchers=1
offset.metadata.max.bytes=4096
offsets.commit.required.acks=-1
offsets.commit.timeout.ms=5000
offsets.load.buffer.size=5242880
offsets.retention.check.interval.ms=600000
offsets.retention.minutes=86400000
offsets.topic.compression.codec=0
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
offsets.topic.segment.bytes=104857600
port=6667
principal.to.local.class=kafka.security.auth.KerberosPrincipalToLocal
producer.purgatory.purge.interval.requests=10000
queued.max.requests=500
replica.fetch.max.bytes=1048576
replica.fetch.min.bytes=1
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.lag.max.messages=4000
replica.lag.time.max.ms=10000
replica.socket.receive.buffer.bytes=65536
replica.socket.timeout.ms=30000
security.inter.broker.protocol=PLAINTEXTSASL
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
super.users=User:kafka
zookeeper.connect=ip-10-0-0-149.ap-northeast-1.compute.internal:2181
zookeeper.connection.timeout.ms=25000
zookeeper.session.timeout.ms=30000
zookeeper.set.acl=true
zookeeper.sync.time.ms=2000
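As later confirmed in this thread, creating the topic as the kafka service user works. A sketch of what that looks like on this cluster; the keytab path follows the standard HDP layout used in my other posts, and the principal is assumed to match this broker's FQDN.
$ sudo su kafka
$ kinit -k -t /etc/security/keytabs/kafka.service.keytab kafka/ip-10-0-0-149.ap-northeast-1.compute.internal
$ ./kafka-topics.sh --zookeeper ip-10-0-0-149.ap-northeast-1.compute.internal:2181 --create \
    --topic mytopic --partitions 1 --replication-factor 1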
01-22-2017
11:49 AM
Hi, After enabling Kerberos using Ambari, I have a problem creating topics in Kafka using the kafka-topics.sh script. The topic is created, but its status is wrong: it has no leader. It seems the topic is created with PLAINTEXT, while there are only PLAINTEXTSASL brokers in the cluster after enabling Kerberos. The only configuration change I made was to change the broker listener from 'PLAINTEXT://localhost:6667' to 'PLAINTEXTSASL://localhost:6667'. As posted in this question, I also changed kafka-topics.sh to make it work with Kerberos. I am using HDP 2.5.3.
$ ./kafka-topics.sh --zookeeper ip-10-0-0-149.ap-northeast-1.compute.internal --create --partitions 1 --replication-factor 1 --topic mytopic
Created topic "mytopic".
$ ./kafka-topics.sh --zookeeper ip-10-0-0-149.ap-northeast-1.compute.internal --describe --topic mytopic
Topic:mytopic PartitionCount:1 ReplicationFactor:1 Configs:
Topic: mytopic Partition: 0 Leader: none Replicas: 1001 Isr:
Labels:
- Apache Kafka
01-20-2017
01:27 PM
4 Kudos
Solved by the below workaround. This looks like a bug in kafka-topics.sh.
1. Add KAFKA_CLIENT_KERBEROS_PARAMS before executing the actual TopicCommand if running in a Kerberos-enabled cluster.
$ cat kafka-topics.sh
# Check whether kafka_jaas.conf exists in config; only enable the client Kerberos params in secure mode.
KAFKA_HOME="$(dirname $(cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd ))"
KAFKA_JAAS_CONF=$KAFKA_HOME/config/kafka_jaas.conf
if [ -f $KAFKA_JAAS_CONF ]; then
    export KAFKA_CLIENT_KERBEROS_PARAMS="-Djava.security.auth.login.config=$KAFKA_HOME/config/kafka_client_jaas.conf"
fi
exec $(dirname $0)/kafka-run-class.sh kafka.admin.TopicCommand "$@"
2. Use the ZooKeeper server FQDN instead of localhost on the command line.
$ kinit
$ ./kafka-topics.sh --zookeeper ip-10-0-0-149.ap-northeast-1.compute.internal:2181 --create --topic foo --partitions 1 --replication-factor 1
Created topic "foo".
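As a follow-up check (my addition), describing the topic against the same ZooKeeper FQDN should now show a leader assigned instead of "Leader: none":
$ ./kafka-topics.sh --zookeeper ip-10-0-0-149.ap-northeast-1.compute.internal:2181 --describe --topic foo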
01-20-2017
12:32 PM
I also tried the below; it didn't work...
$ export JVMFLAGS="-Djava.security.auth.login.config=/etc/kafka/conf/kafka_client_jaas.conf"
$ ./kafka-topics.sh --zookeeper localhost:2181 --create --topic foo --partitions 1 --replication-factor 1
01-20-2017
12:00 PM
After enabling Kerberos using the Ambari wizard, the Kafka scripts do not work. Are there any additional configurations needed to make them work? I am using HDP 2.5.3.
$ kinit
$ ./kafka-topics.sh --zookeeper localhost:2181 --create --topic foo --partitions 1 --replication-factor 1
[2017-01-20 11:54:59,482] WARN Could not login: the client is being asked for a password, but the Zookeeper client code does not currently support obtaining a password from the user. Make sure that the client is configured to use a ticket cache (using the JAAS configuration setting 'useTicketCache=true)' and restart the client. If you still get this message after that, the TGT in the ticket cache has expired and must be manually refreshed. To do so, first determine if you are using a password or a keytab. If the former, run kinit in a Unix shell in the environment of the user who is running this Zookeeper client using the command 'kinit <princ>' (where <princ> is the name of the client's Kerberos principal). If the latter, do 'kinit -k -t <keytab> <princ>' (where <princ> is the name of the Kerberos principal, and <keytab> is the location of the keytab file). After manually refreshing your cache, restart this client. If you continue to see this message after manually refreshing your cache, ensure that your KDC host's clock is in sync with this host's clock. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2017-01-20 11:54:59,484] WARN SASL configuration failed: javax.security.auth.login.LoginException: No password provided Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)
Exception in thread "main" org.I0Itec.zkclient.exception.ZkAuthFailedException: Authentication failure
at org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:946)
at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:923)
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1230)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:156)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:130)
at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:75)
at kafka.utils.ZkUtils$.apply(ZkUtils.scala:57)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:54)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
Labels:
- Apache Kafka
12-16-2016
09:22 AM
1 Kudo
Solved this by setting hadoop.proxyuser.root.hosts=*. For some reason, the HDFS request to create the directory was sent from a host where neither the Ambari Server nor HS2 is running. I am not sure why, but changing this setting solved the issue.
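For completeness, the property lands in core-site.xml; the file path below is the usual HDP client config location and is an assumption on my part, since Ambari typically manages this value as part of the HDFS core-site configuration.
$ grep -A1 'hadoop.proxyuser.root.hosts' /etc/hadoop/conf/core-site.xml
#   <name>hadoop.proxyuser.root.hosts</name>
#   <value>*</value>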
12-16-2016
06:30 AM
I do not know how to fix it. I also checked the NN log; no errors occurred.
12-16-2016
06:27 AM
I have the same issue on HDP 2.5 & Ambari 2.4.0.1. I have created all the necessary HDFS directories and granted proper permissions, but a simple 'show tables' query just doesn't work. Digging into the HDFS logs, I found that Ambari Hive View didn't create the staging directory under /user/admin/hive/jobs. It should create the hive-job-6-2016-12-16_06-15 directory before trying to write the HQL file.
$ tail -f hdfs-audit.log | grep '/user/admin'
2016-12-16 06:15:55,156 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.0.178 cmd=getfileinfo src=/user/admin/hive/jobs/hive-job-6-2016-12-16_06-15 dst=null perm=null proto=webhdfs
This error happens after I enabled the Ranger plugin for Hive. I also have another, working Ambari Hive View on HDC; it creates the staging directories and the HQL file properly.
$ tail -f hdfs-audit.log | grep '/user/admin'
2016-12-16 06:17:29,003 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.207 cmd=getfileinfo src=/user/admin dst=null perm=null proto=webhdfs
2016-12-16 06:17:31,148 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.207 cmd=getfileinfo src=/user/admin/.AUTO_HIVE_INSTANCE.defaultSettings dst=null perm=null proto=webhdfs
2016-12-16 06:17:35,474 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.207 cmd=getfileinfo src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17 dst=null perm=null proto=webhdfs
2016-12-16 06:17:35,486 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.119 cmd=create src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/query.hql dst=null perm=admin:hdfs:rw-r--r-- proto=rpc
2016-12-16 06:17:35,509 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.120 cmd=create src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/logs dst=null perm=admin:hdfs:rw-r--r-- proto=rpc
2016-12-16 06:17:35,522 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.207 cmd=getfileinfo src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/query.hql dst=null perm=null proto=webhdfs
2016-12-16 06:17:35,523 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.207 cmd=open src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/query.hql dst=null perm=null proto=webhdfs
2016-12-16 06:17:35,527 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.119 cmd=open src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/query.hql dst=null perm=null proto=rpc
2016-12-16 06:17:35,582 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.207 cmd=getfileinfo src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/query.hql dst=null perm=null proto=webhdfs
2016-12-16 06:17:35,583 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.207 cmd=open src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/query.hql dst=null perm=null proto=webhdfs
2016-12-16 06:17:35,587 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.119 cmd=open src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/query.hql dst=null perm=null proto=rpc
2016-12-16 06:17:35,590 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.207 cmd=getfileinfo src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/query.hql dst=null perm=null proto=webhdfs
2016-12-16 06:17:35,593 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.207 cmd=getfileinfo src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/query.hql dst=null perm=null proto=webhdfs
2016-12-16 06:17:35,765 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.207 cmd=getfileinfo src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/logs dst=null perm=null proto=webhdfs
2016-12-16 06:17:35,769 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.119 cmd=open src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/logs dst=null perm=null proto=rpc
2016-12-16 06:17:35,771 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.207 cmd=getfileinfo src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/logs dst=null perm=null proto=webhdfs
2016-12-16 06:17:35,774 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.207 cmd=getfileinfo src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/logs dst=null perm=null proto=webhdfs
2016-12-16 06:17:35,803 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.207 cmd=getfileinfo src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/logs dst=null perm=null proto=webhdfs
2016-12-16 06:17:35,807 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.120 cmd=open src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/logs dst=null perm=null proto=rpc
2016-12-16 06:17:35,810 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.207 cmd=getfileinfo src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/logs dst=null perm=null proto=webhdfs
2016-12-16 06:17:35,812 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.207 cmd=getfileinfo src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/logs dst=null perm=null proto=webhdfs
2016-12-16 06:17:45,915 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.207 cmd=getfileinfo src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/logs dst=null perm=null proto=webhdfs
2016-12-16 06:17:45,919 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.120 cmd=open src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/logs dst=null perm=null proto=rpc
2016-12-16 06:17:45,921 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.207 cmd=getfileinfo src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/logs dst=null perm=null proto=webhdfs
2016-12-16 06:17:45,923 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/10.0.1.207 cmd=getfileinfo src=/user/admin/hive/jobs/hive-job-19-2016-12-16_06-17/logs dst=null perm=null proto=webhdfs
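One additional check worth doing on the failing cluster (a debugging step I would suggest, not something taken from the audit output above): confirm that the staging parent directory exists and inspect its ownership and permissions, since the failing Hive View never even issues the create call.
$ hdfs dfs -ls -d /user/admin/hive/jobs
$ hdfs dfs -ls /user/admin/hive/jobs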
12-09-2016
12:15 AM
Hi, Does HDC support adding admin users? The use case is to allow individual admins to use their own credentials to create/delete clusters.
11-29-2016
02:59 PM
Thanks, @Ankit Singhal. This solved the issue.
11-28-2016
02:36 PM
Both shipped with HDP 2.5, so they should be the same.