Member since: 04-22-2016
Posts: 931
Kudos Received: 46
Solutions: 26

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1855 | 10-11-2018 01:38 AM |
| | 2217 | 09-26-2018 02:24 AM |
| | 2246 | 06-29-2018 02:35 PM |
| | 2920 | 06-29-2018 02:34 PM |
| | 6097 | 06-20-2018 04:30 PM |
02-06-2018
08:34 PM
Moderators: please either correct this post or delete it; it's giving wrong information. The post says Sqoop incremental import into a Hive ORC table is possible, which is incorrect and currently not supported.
09-20-2017
03:48 PM
I want to write the Kafka topics to HDFS. I am getting the messages in the NiFi queue (I can view them in NiFi), but the PutHDFS processor is not writing them and is throwing an error (see the attached screenshot).
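A hedged checklist for this kind of flow, assuming a standard ConsumeKafka -> PutHDFS setup (the values below are placeholders, not taken from the screenshot):
PutHDFS properties to verify:
Hadoop Configuration Resources = /etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
Directory = /data/kafka_landing (an HDFS path writable by the NiFi user)
Conflict Resolution Strategy = replace (or append)
Kerberos Principal / Keytab = only needed on a kerberized cluster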
08-16-2017
04:47 PM
My publisher and subscriber are working fine, but I am getting this warning on subscription:
[root@hadoop1 ~]# kafka-console-consumer.sh --zookeeper hadoop1:2181 --topic mytopic --from-beginning --security-protocol SASL_PLAINTEXT
[2017-08-16 12:33:12,956] WARN SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)
[2017-08-16 12:33:13,101] WARN SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)
[2017-08-16 12:33:13,120] WARN SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)
[2017-08-16 12:33:13,375] WARN The TGT cannot be renewed beyond the next expiry date: Thu Aug 17 11:51:54 EDT 2017.This process will not be able to authenticate new SASL connections after that time (for example, it will not be able to authenticate a new connection with a Kafka Broker). Ask your system administrator to either increase the 'renew until' time by doing : 'modprinc -maxrenewlife null ' within kadmin, or instead, to generate a keytab for null. Because the TGT's expiry cannot be further extended by refreshing, exiting refresh thread now. (org.apache.kafka.common.security.kerberos.KerberosLogin)
[2017-08-16 12:33:13,388] WARN SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)
{metadata.broker.list=hadoop1:6667, request.timeout.ms=30000, client.id=console-consumer-17860, security.protocol=SASL_PLAINTEXT}
testing kafka messages under kerberos ..Sami aug'17
this is the second line from the publisher
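For reference, the 'Client' section that the warning says is missing from /usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf would normally look like the sketch below, assuming the console consumer authenticates with the current user's kinit ticket cache (a keytab-based section would also work):
Client {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=true
renewTGT=true
doNotPrompt=true;
};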
07-27-2017
02:41 PM
This issue was resolved by downloading the elasticsearch-hadoop connector for version 5.5.0.
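A minimal sketch of how the matching jar could then be registered from Beeline; the HDFS path is an assumed upload location, mirroring the 2.2.0 example further down this page:
-- assumes elasticsearch-hadoop-5.5.0.jar was first copied to /tmp on HDFS
ADD JAR hdfs:///tmp/elasticsearch-hadoop-5.5.0.jar;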
07-27-2017
01:46 PM
I am using HDP 2.5 and I have /usr/hdp/current/hive-client/lib/elasticsearch-hadoop-2.2.0.jar. When I try to insert into a Hive Elasticsearch table I get the error below. The Hadoop version seems to be 2.7:
[root@hadoop1 lib]# hadoop version
Hadoop 2.7.3.2.5.3.0-37
HiveException: org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Unsupported/Unknown Elasticsearch version 5.5.0
07-27-2017
01:42 PM
This issue was fixed by installing Elasticsearch on all the cluster nodes.
07-26-2017
09:21 PM
It's a 5-node cluster; I am running the curl command on the node where Elasticsearch is installed.
07-26-2017
04:00 PM
I have created an external table, but the insert is failing:
CREATE EXTERNAL TABLE pa_lane_txn_es (
txn_id BIGINT,
ext_plaza_id STRING,
toll_amt_collected BIGINT,
toll_amt_full BIGINT,
ext_lane_id STRING)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'lane_txn/txn_id','es.mapping.id'='txn_id','es.nodes' = '127.0.0.1:9200');
curl can reach Elasticsearch on port 9200 fine:
[root@hadoop1 ~]# curl 127.0.0.1:9200
{
"name" : "node1",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "OXNZn6ImQpK8zlF61FXWiw",
"version" : {
"number" : "5.5.0",
"build_hash" : "260387d",
"build_date" : "2017-06-30T23:16:05.735Z",
"build_snapshot" : false,
"lucene_version" : "6.6.0"
},
"tagline" : "You Know, for Search"
}
ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1499980645886_0006_1_00, diagnostics=[Task failed, taskId=task_1499980645886_0006_1_00_000001, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"txn_id":2.0891216818E10,"ext_plaza_id":"500092","toll_amt_collected":109.0,"toll_amt_full":109.0,"ext_lane_id":"09"}
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:173)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:139)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:347)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:194)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:185)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:185)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:181)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"txn_id":2.0891216818E10,"ext_plaza_id":"500092","toll_amt_collected":109.0,"toll_amt_full":109.0,"ext_lane_id":"09"}
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:91)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:325)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
... 14 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"txn_id":2.0891216818E10,"ext_plaza_id":"500092","toll_amt_collected":109.0,"toll_amt_full":109.0,"ext_lane_id":"09"}
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:565)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:83)
... 17 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[127.0.0.1:9200]]
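A likely cause is that es.nodes is set to 127.0.0.1, which every Tez task resolves to its own host rather than the node actually running Elasticsearch. A hedged sketch of the fix, assuming the Elasticsearch node is reachable from the worker nodes as hadoop1:
-- hypothetical: point the table at a routable Elasticsearch host instead of loopback
ALTER TABLE pa_lane_txn_es SET TBLPROPERTIES ('es.nodes' = 'hadoop1:9200');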
07-25-2017
09:36 PM
1 Kudo
This fixed it:
Beeline version 1.2.1000.2.5.3.0-37 by Apache Hive
0: jdbc:hive2://hadoop2:10000/default> add JAR hdfs:///tmp/elasticsearch-hadoop-2.2.0.jar
0: jdbc:hive2://hadoop2:10000/default> ;
INFO : converting to local hdfs:///tmp/elasticsearch-hadoop-2.2.0.jar
INFO : Added [/tmp/0ec20cad-4ed8-4174-9504-f3b24d285542_resources/elasticsearch-hadoop-2.2.0.jar] to class path
INFO : Added resources: [hdfs:///tmp/elasticsearch-hadoop-2.2.0.jar]
No rows affected (0.096 seconds)
0: jdbc:hive2://hadoop2:10000/default> list JAR;
+-------------------------------------------------------------------------------------+--+
|