Member since: 06-03-2016
Posts: 66
Kudos Received: 21
Solutions: 7

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3297 | 12-03-2016 08:51 AM
 | 1767 | 09-15-2016 06:39 AM
 | 1972 | 09-12-2016 01:20 PM
 | 2278 | 09-11-2016 07:04 AM
 | 1889 | 09-09-2016 12:19 PM
10-11-2016
04:28 AM
Thanks for the reply, Ayub Pathan.

1. Command used to create the topic:
./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
and it was created.
2. When Kerberos was enabled I did not have permission to create topics in Kafka. I then disabled Kerberos on the cluster without any issues, and I am now able to create the topic.
3. No, Ranger is not enabled in my cluster. I have already gone through the link you mentioned, but I did not get a solution from it, as there was no proper explanation of how that issue was solved.

Please suggest. Mohan.V
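For reference, a quick way to verify the topic and its partition assignment (a sketch assuming the same Kafka bin directory and ZooKeeper address as the create command above):

# list all topics known to ZooKeeper
./kafka-topics.sh --list --zookeeper localhost:2181
# show partition count, replication factor and leader for the new topic
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test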
10-10-2016
03:11 PM
Hi all,

I am trying to produce messages but I am getting the errors below:

[2016-10-10 20:22:10,947] ERROR Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: test11 (kafka.producer.async.DefaultEventHandler)
[2016-10-10 20:22:11,049] WARN Error while fetching metadata [{TopicMetadata for topic test11 ->
No partition metadata for topic test11 due to kafka.common.TopicAuthorizationException}] for topic [test11]: class kafka.common.TopicAuthorizationException (kafka.producer.BrokerPartitionInfo)
[2016-10-10 20:22:11,051] WARN Error while fetching metadata [{TopicMetadata for topic test11 ->
No partition metadata for topic test11 due to kafka.common.TopicAuthorizationException}] for topic [test11]: class kafka.common.TopicAuthorizationException (kafka.producer.BrokerPartitionInfo)
[2016-10-10 20:22:11,051] ERROR Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: test11 (kafka.producer.async.DefaultEventHandler)
[2016-10-10 20:22:11,153] WARN Error while fetching metadata [{TopicMetadata for topic test11 -> No partition metadata for topic test11 due to kafka.common.TopicAuthorizationException}] for topic [test11]: class kafka.common.TopicAuthorizationException (kafka.producer.BrokerPartitionInfo)
[2016-10-10 20:22:11,154] ERROR Failed to send requests for topics test11 with correlation ids in [0,8] (kafka.producer.async.DefaultEventHandler)
[2016-10-10 20:22:11,155] ERROR Error in handling batch of 1 events (kafka.producer.async.ProducerSendThread)
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:91)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
at scala.collection.immutable.Stream.foreach(Stream.scala:547)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)

I have disabled Kerberos, but I guess Kafka is still trying to authorize against it. Please suggest. Thank you. Mohan.V
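A hedged sketch of what is usually worth checking here: if an authorizer is still configured on the brokers after Kerberos was disabled, producers without ACLs will keep hitting TopicAuthorizationException. The broker config path below is an assumption (typical HDP layout); adjust it to your installation.

# look for a leftover authorizer or secure listener settings in the broker config
grep -E 'authorizer.class.name|security.inter.broker.protocol|listeners' /usr/hdp/current/kafka-broker/config/server.properties
# if authorizer.class.name is still set (e.g. kafka.security.auth.SimpleAclAuthorizer or a Ranger authorizer),
# either remove it and restart the brokers, or grant the producer access explicitly:
./kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:ANONYMOUS --producer --topic test11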
Labels:
- Apache Kafka
09-22-2016
10:54 AM
Thanks for your valuable suggestion, Pierre Villard. It's done.
09-22-2016
07:26 AM
2 Kudos
Hi all,

I am trying to get tweets using NiFi and store them in a local file. I have used the following configuration for the GetTwitter processor:

Twitter Endpoint :- Filter Endpoint
(all Twitter keys provided)
Languages :- en
Terms to Filter On :- facebook,wipro,google

I am trying to put the tweets into a file using PutFile, with this configuration:

Directory :- /root/tweets
Conflict Resolution Strategy :- fail
Create Missing Directories :- true
Maximum File Count :- 1
Last Modified Time :-
Permissions :- rw-r--r--
Owner :- root
Group :- root

I am able to get the tweets, but each tweet goes into its own JSON file. If I increase Maximum File Count to 20, it creates 20 JSON files and each file contains only one tweet. What I want is to store all the tweets in a single JSON file. In https://community.hortonworks.com/questions/42149/using-nifi-to-collect-tweets-into-one-large-file.html they mention using the MergeContent processor, but I did not understand exactly how to use MergeContent, as I am completely new to NiFi. Please suggest how to use it. Do I have to place it before or after PutFile? Please help. Mohan.V
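For reference, the approach in the linked thread boils down to placing MergeContent between GetTwitter and PutFile (not after PutFile). A minimal sketch of the settings involved, assuming NiFi's standard MergeContent processor; the numbers are only examples:

GetTwitter -> MergeContent -> PutFile (route MergeContent's "merged" relationship to PutFile)

MergeContent example configuration:
Merge Strategy :- Bin-Packing Algorithm
Merge Format :- Binary Concatenation
Minimum Number of Entries :- 20
Max Bin Age :- 5 min
Delimiter Strategy :- Text
Demarcator :- a newline, so each tweet stays on its own line in the merged file

With this in place, PutFile receives one flowfile containing many tweets instead of one flowfile per tweet.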
Labels:
- Apache NiFi
09-20-2016
05:17 AM
Santhoshi G, refer to this link; you may find the solution there: https://community.hortonworks.com/questions/15495/amabari-server-212-setup-error-while-creating-data.html
09-15-2016
06:39 AM
2 Kudos
I got it on my own. I think it was because of the different versions of the elephant-bird jars I had used in my script. When I used matching versions of elephant-bird, as suggested by @gkeys, it worked fine for me.

Script:

REGISTER elephant-bird-core-4.1.jar
REGISTER elephant-bird-hadoop-compat-4.1.jar
REGISTER elephant-bird-pig-4.1.jar
REGISTER json-simple-1.1.1.jar
twitter = LOAD 'sample.json' USING com.twitter.elephantbird.pig.load.JsonLoader();
extracted = foreach twitter generate
    (chararray)$0#'created_at' as created_at,
    (chararray)$0#'id' as id,
    (chararray)$0#'id_str' as id_str,
    (chararray)$0#'text' as text,
    (chararray)$0#'source' as source,
    com.twitter.elephantbird.pig.piggybank.JsonStringToMap($0#'entities') as entities,
    (boolean)$0#'favorited' as favorited,
    (long)$0#'favorite_count' as favorite_count,
    (long)$0#'retweet_count' as retweet_count,
    (boolean)$0#'retweeted' as retweeted,
    com.twitter.elephantbird.pig.piggybank.JsonStringToMap($0#'place') as place;
dump extracted;

And it worked fine.
09-12-2016
01:38 PM
@gkeys, please take a look into this and suggest where I am going wrong: https://community.hortonworks.com/questions/56017/pig-to-elasesticsearch-stringindexoutofboundsexcep.html
09-12-2016
01:35 PM
Thanks @gkeys. I have followed the doc and it worked. As I said, you are the best. :):)
09-12-2016
01:20 PM
1 Kudo
Thanks for your reply, Artem Ervits. I think it was because of the different versions of the elephant-bird jars I had used in my script. When I used matching versions of elephant-bird, as suggested by @gkeys, it worked fine for me.

Script:

REGISTER elephant-bird-core-4.1.jar
REGISTER elephant-bird-hadoop-compat-4.1.jar
REGISTER elephant-bird-pig-4.1.jar
REGISTER json-simple-1.1.1.jar
twitter = LOAD 'sample.json' USING com.twitter.elephantbird.pig.load.JsonLoader();
extracted = foreach twitter generate
    (chararray)$0#'created_at' as created_at,
    (chararray)$0#'id' as id,
    (chararray)$0#'id_str' as id_str,
    (chararray)$0#'text' as text,
    (chararray)$0#'source' as source,
    com.twitter.elephantbird.pig.piggybank.JsonStringToMap($0#'entities') as entities,
    (boolean)$0#'favorited' as favorited,
    (long)$0#'favorite_count' as favorite_count,
    (long)$0#'retweet_count' as retweet_count,
    (boolean)$0#'retweeted' as retweeted,
    com.twitter.elephantbird.pig.piggybank.JsonStringToMap($0#'place') as place;
dump extracted;
And it worked fine.
09-12-2016
06:41 AM
2 Kudos
I am trying to store the Pig output using elephant-bird's LzoJsonStorage(), but it did not work.

Sample input (sample.json):

{"in_reply_to_user_id_str":null,"coordinates":null,"text":"\u0627\u0627\u0627\u0627\u0627\u062d \u0627\u0644\u0627\u062c\u0648\u0627\u0621 \u0628\u062a\u0627\u0639\u062a \u0633\u0643\u0633 \u062d\u0627\u0627\u0627\u0627\u0627\u0631\u0631 \u0645\u0646\u0648 \u0627\u0644\u0641\u062d\u0644 \u0627\u0644\u0644\u064a \u064a\u0628\u064a \u0627\u0633\u0648\u064a \u0644\u0647 \u0641\u0648\u0644\u0648 \u064a\u0633\u0648\u064a \u0631\u062a\u0648\u064a\u062a","created_at":"Thu Apr 12 17:38:47 +0000 2012","favorited":false,"contributors":null,"in_reply_to_screen_name":null,"source":"\u003Ca href=\"http:\/\/blackberry.com\/twitter\" rel=\"nofollow\"\u003ETwitter for BlackBerry\u00ae\u003C\/a\u003E","retweet_count":0,"in_reply_to_user_id":null,"in_reply_to_status_id":null,"id_str":"190494185374220289","entities":{"hashtags":[],"user_mentions":[],"urls":[]},"geo":null,"retweeted":false,"place":null,"truncated":false,"in_reply_to_status_id_str":null,"user":{"created_at":"Tue Apr 10 11:43:10 +0000 2012","notifications":null,"profile_use_background_image":true,"profile_background_image_url_https":"https:\/\/si0.twimg.com\/images\/themes\/theme1\/bg.png","url":null,"contributors_enabled":false,"geo_enabled":false,"profile_text_color":"333333","followers_count":5,"profile_image_url_https":"https:\/\/si0.twimg.com\/profile_images\/2084863527\/Screen-120409-224940_normal.jpg","profile_image_url":"http:\/\/a0.twimg.com\/profile_images\/2084863527\/Screen-120409-224940_normal.jpg","listed_count":0,"profile_background_image_url":"http:\/\/a0.twimg.com\/images\/themes\/theme1\/bg.png","description":"\u0627\u0628\u064a \u0633\u0643\u0633 \u0631\u0631\u0648\u0639\u0647 \u0645\u0639 \u0641\u062d\u0644 ","screen_name":"Ga7bah_sex","profile_link_color":"0084B4","location":"\u0627\u0631\u0636 \u0627\u0644\u0633\u0643\u0633","default_profile":true,"show_all_inline_media":false,"is_translator":false,"statuses_count":5,"profile_background_color":"C0DEED","id_str":"550121247","follow_request_sent":null,"lang":"ar","profile_background_tile":false,"protected":false,"profile_sidebar_fill_color":"DDEEF6","name":"\u0642\u062d\u0628\u0647 \u0648\u0627\u0628\u064a \u0632\u0628 ","default_profile_image":false,"time_zone":null,"friends_count":8,"id":550121247,"following":null,"verified":false,"utc_offset":null,"favourites_count":0,"profile_sidebar_border_color":"C0DEED"},"id":190494185374220289}

Script:

REGISTER elephant-bird-core-4.1.jar
REGISTER elephant-bird-hadoop-compat-4.1.jar
REGISTER elephant-bird-pig-4.1.jar
REGISTER json-simple-1.1.1.jar
REGISTER google-collections-1.0.jar
REGISTER hadoop-lzo-0.4.14.jar
REGISTER piggybank-0.12.0.jar
twitter = LOAD 'sample.json' USING com.twitter.elephantbird.pig.load.JsonLoader();
extracted = foreach twitter generate (chararray)$0#'created_at' as created_at,(chararray)$0#'id' as id,(chararray)$0#'id_str' as id_str,(chararray)$0#'text' as text,(chararray)$0#'source' as source;
STORE extracted into 'tweets' using com.twitter.elephantbird.pig.store.LzoJsonStorage();
Error while trying to store into 'tweets':

java.lang.Exception: java.lang.RuntimeException: native-lzo library not available
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.lang.RuntimeException: native-lzo library not available
at com.hadoop.compression.lzo.LzoCodec.createCompressor(LzoCodec.java:165)
at com.hadoop.compression.lzo.LzopCodec.createOutputStream(LzopCodec.java:50)
at com.twitter.elephantbird.util.LzoUtils.getIndexedLzoOutputStream(LzoUtils.java:75)
at com.twitter.elephantbird.mapreduce.output.LzoTextOutputFormat.getRecordWriter(LzoTextOutputFormat.java:24)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.getRecordWriter(PigOutputFormat.java:81)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.<init>(ReduceTask.java:540)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:614)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
at ...

From the error I guess the LZO native library is not available. Please suggest how I can resolve this. Mohan.V
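For what it's worth, "native-lzo library not available" usually means the LZO native bindings (liblzo2 plus hadoop-lzo's libgplcompression) are not on the JVM's native library path. A hedged sketch of the usual checks; the path and package names assume an RPM-based HDP node and are only an example:

# check whether the hadoop-lzo native library is present
ls /usr/hdp/current/hadoop-client/lib/native/ | grep -i gplcompression
# if it is missing, install the LZO packages and rerun the job
yum install -y lzo lzo-devel hadooplzo hadooplzo-native

The LZO codecs also need to be registered in core-site.xml: add com.hadoop.compression.lzo.LzoCodec and com.hadoop.compression.lzo.LzopCodec to io.compression.codecs, and set io.compression.codec.lzo.class to com.hadoop.compression.lzo.LzoCodec.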
Labels:
- Apache Hadoop
- Apache Pig