Created 07-10-2017 11:13 AM
Hello guys,
I'm following this Twitter streaming Spark tutorial.
However, instead of Spark, I'm using Flume and Hive to store the data.
The problem is that in the tutorial we create 2 functions: one in Scala and the other in Python.
When I use the Hive interpreter (%hive) to access the data, it doesn't recognize the functions created before.
Is there any way to make a bridge between Scala and/or Python and Hive, so that Hive can recognize these 2 functions?
/* declaring a function in Scala */
def sentiment(s: String): String = {
  val positive = Array("like", "love", "good", "great", "happy", "cool", "the", "one", "that")
  val negative = Array("hate", "bad", "stupid", "is")
  var st = 0
  val words = s.split(" ")
  positive.foreach(p => words.foreach(w => if (p == w) st = st + 1))
  negative.foreach(p => words.foreach(w => if (p == w) st = st - 1))
  if (st > 0) "positive"
  else if (st < 0) "negative"
  else "neutral"
}
sqlc.udf.register("sentiment", sentiment _)
%pyspark
# declaring a function in Python
import re

def wordcount(a):
    return len(re.split("\W+", a))

sqlContext.registerFunction("wordcount", wordcount)
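As far as I can tell, both registrations only live inside the Zeppelin Spark session, which would explain why %hive can't see them. A minimal sketch of what I mean (the tweets table name is just an example):

%pyspark
# the UDF registered above is visible to Spark SQL within this same session...
sqlContext.sql("SELECT wordcount(text) FROM tweets LIMIT 5").show()
# ...but the %hive interpreter talks to HiveServer2 directly, which never
# saw sqlContext.registerFunction, so the same call fails there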
Many thanks in advance.
Best regards
Created 07-10-2017 03:36 PM
One option is to implement these functions as a Hive UDF (written in Python).
For example, your new Python script (my_py_udf.py) would look something like this:
import sys

for line in sys.stdin:
    createdAt, screenName, text = line.replace('\n', ' ').split('\t')
    positive = set(["love", "good", "great", "happy", "cool", "best", "awesome", "nice"])
    negative = set(["hate", "bad", "stupid"])
    words = text.split()
    word_count = len(words)
    positive_matches = [1 for word in words if word in positive]
    negative_matches = [-1 for word in words if word in negative]
    st = sum(positive_matches) + sum(negative_matches)
    if st > 0:
        print('\t'.join([text, 'positive', str(word_count)]))
    elif st < 0:
        print('\t'.join([text, 'negative', str(word_count)]))
    else:
        print('\t'.join([text, 'neutral', str(word_count)]))
NOTE: This script combines both of your previous functions into one, since the word count and the sentiment can be calculated in a single pass.
To call this UDF within Hive, run Hive code similar to this:
ADD FILE /home/hive/my_py_udf.py;

SELECT TRANSFORM (createdAt, screenName, text)
USING 'python my_py_udf.py'
AS text, sentiment, word_count
FROM tweets;
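Since TRANSFORM simply streams tab-separated rows to the script's stdin and reads tab-separated rows back from its stdout, you can also sanity-check the script outside of Hive. A quick sketch (the sample row is made up, and this assumes Python 3.7+ for the subprocess conveniences):

# feed one fake tab-separated row to the script, the same way TRANSFORM would
import subprocess

sample_row = "2017-07-10 11:13:00\tsome_user\tthis event was great and the staff were nice\n"
result = subprocess.run(
    ["python", "my_py_udf.py"],  # path to the script shown above
    input=sample_row,
    capture_output=True,
    text=True,
)
print(result.stdout)  # expect something like: <text>  positive  9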
Hope this helps!
Created 07-20-2017 04:13 PM
Here's a test that should help determine whether your syntax is off or your environment is misconfigured.
First, create a test Hive table and populate it with data:
CREATE TABLE IF NOT EXISTS testtable (id string, text string) STORED AS ORC;

INSERT INTO TABLE testtable VALUES
  ('1111', 'The service was great, and the agent was very helpful'),
  ('2222', 'I enjoyed the event but the food was terrible'),
  ('3333', 'Unhappy with the organization of the event');
Then create a file called "my_py_udf.py" (as shown below). It can be placed anywhere, but in my example I placed it at /tmp/my_py_udf.py.
import sys

for line in sys.stdin:
    id, text = line.replace('\n', ' ').split('\t')
    positive = set(["love", "good", "great", "happy", "cool", "best", "awesome", "nice", "helpful", "enjoyed"])
    negative = set(["hate", "bad", "stupid", "terrible", "unhappy"])
    words = text.split()
    word_count = len(words)
    positive_matches = [1 for word in words if word in positive]
    negative_matches = [-1 for word in words if word in negative]
    st = sum(positive_matches) + sum(negative_matches)
    if st > 0:
        print('\t'.join([text, 'positive', str(word_count)]))
    elif st < 0:
        print('\t'.join([text, 'negative', str(word_count)]))
    else:
        print('\t'.join([text, 'neutral', str(word_count)]))
Then from within Hive, execute the following commands:
ADD FILE /tmp/my_py_udf.py;

SELECT TRANSFORM (id, text)
USING 'python my_py_udf.py'
AS (text, sentiment, word_count)
FROM testtable;
Your resulting output should look like this:
The service was great, and the agent was very helpful    positive    10
I enjoyed the event but the food was terrible    neutral    9
Unhappy with the organization of the event    neutral    7
Created 07-24-2017 10:50 AM
I was able to run the code you gave me. It runs OK.
The problem with the twitter table persists.
If I add the JSON SerDe in the query I get an "error processing row" error; if I don't, it hangs for a long time and then returns "map operator run initialized". I think I have to add the SerDe, so the problem is probably not there.
Here's the code:
import sys

for line in sys.stdin:
    text = line.strip('\n')  # only one column (text) is streamed in, so no split needed
    positive = set(["love", "good", "great", "happy", "cool", "best", "awesome", "nice", "helpful", "enjoyed"])
    negative = set(["hate", "bad", "stupid", "terrible", "unhappy"])
    words = text.split()
    word_count = len(words)
    positive_matches = [1 for word in words if word in positive]
    negative_matches = [-1 for word in words if word in negative]
    st = sum(positive_matches) + sum(negative_matches)
    if st > 0:
        print('\t'.join([text, 'positive', str(word_count)]))
    elif st < 0:
        print('\t'.join([text, 'negative', str(word_count)]))
    else:
        print('\t'.join([text, 'neutral', str(word_count)]))
I was able to run this test against the tweets table:
import sys

for line in sys.stdin:
    print('\t'.join([line]))
ADD JAR /tmp/json-serde-1.3.8-jar-with-dependencies.jar;
ADD FILE /tmp/teste.py;

SELECT TRANSFORM (text)
USING 'python teste.py'
FROM tweets;
Created 07-24-2017 12:21 PM
Nice! Glad to see that you got it working. If you upgrade to Spark 2.x, then you should not need to add the SerDe (just something to keep in mind). If you're all set, can you please mark the thread as accepted? Thanks!
Created 07-24-2017 01:39 PM
I only got your 2nd example working... that was to check whether the problem was with the syntax or a system misconfiguration.
With the function you gave me I cannot query the tweets table; it gives the row error.
Can you please check this code one last time?
import sys

for line in sys.stdin:
    createdAt, screenName, text = line.replace('\n', ' ').split('\t')
    positive = set(["love", "good", "great", "happy", "cool", "best", "awesome", "nice", "helpful", "enjoyed"])
    negative = set(["hate", "bad", "stupid", "terrible", "unhappy"])
    words = text.split()
    word_count = len(words)
    positive_matches = [1 for word in words if word in positive]
    negative_matches = [-1 for word in words if word in negative]
    st = sum(positive_matches) + sum(negative_matches)
    if st > 0:
        print('\t'.join([text, 'positive', str(word_count)]))
    elif st < 0:
        print('\t'.join([text, 'negative', str(word_count)]))
    else:
        print('\t'.join([text, 'neutral', str(word_count)]))
Best regards
Created 07-25-2017 01:31 PM
Found the error:
createdAt, screenName, text = line.replace('\n',' ').split('\t')
It only works when I have 1 variable; with more than 1 it crashes.
Is there any alternative to split('\t')?
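One alternative I'm considering (just a sketch, not yet tested against my data) is to cap the number of splits so that stray tabs inside the tweet text stay in the last field, and to skip rows that still don't match:

import sys

for line in sys.stdin:
    # split at most twice: any tabs inside the tweet text stay in the last field
    fields = line.rstrip('\n').split('\t', 2)
    if len(fields) != 3:
        continue  # skip malformed rows instead of crashing
    createdAt, screenName, text = fields
    # ... the sentiment/word_count logic stays the same from here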
Created 07-31-2017 04:14 PM
After some days I've reached the conclusion that the problem must be in the JSON SerDe, because when I load your test table into Hive it works OK.
I'm currently using
json-serde-1.3.8-jar-with-dependencies.jar ...
Many thanks in advance.
Best regards
Created 08-01-2017 02:42 AM
Thanks for the update! Hortonworks is currently on HDP 2.6, so if you have the option, it sounds like it would be beneficial to upgrade.
Also as a quick reference, the most recent Hortonworks Sandbox can be downloaded from here: https://hortonworks.com/downloads/#sandbox
Here's a link to the documentation as well: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/index.html
Created 08-01-2017 09:00 AM
I don't see how updating HDP can solve my problem.
I think the problem may be with the SerDe used when creating the table.
I'm currently using this:
ADD JAR hdfs://192.168.0.73:8020/user/admin/oozie-workflows/lib/json-serde-1.3.8-jar-with-dependencies.jar;

CREATE EXTERNAL TABLE tweets (
  id bigint,
  created_at string,
  source STRING,
  favorited BOOLEAN,
  retweeted_status STRUCT<
    text:STRING,
    user:STRUCT<screen_name:STRING, name:STRING>,
    retweet_count:INT>,
  entities STRUCT<
    urls:ARRAY<STRUCT<expanded_url:STRING>>,
    user_mentions:ARRAY<STRUCT<screen_name:STRING, name:STRING>>,
    hashtags:ARRAY<STRUCT<text:STRING>>>,
  lang string,
  retweet_count int,
  text string,
  user STRUCT<
    screen_name:STRING,
    name:STRING,
    friends_count:INT,
    followers_count:INT,
    statuses_count:INT,
    verified:BOOLEAN,
    utc_offset:INT,
    time_zone:STRING>
)
PARTITIONED BY (datehour int)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
WITH SERDEPROPERTIES ("ignore.malformed.json" = "true")
LOCATION 'hdfs://192.168.0.73:8020/user/flume/tweets';
Created 08-03-2017 12:28 AM
It could solve the problem because the SerDes are built-in and updated along with Hive. It works fine for me in recent versions of HDP, which is why I wanted to mention it.
I saw that you opened another question specifically for the Hive serde issue.
Your original question, "How to make hive queries include scala and python functions", is answered as part of my posts, so when you get a chance, could you please accept the best answer? There are a lot of responses in this thread, so that may help someone else out.
I do have one other thought to debug your JSON SerDe error. It could be that the way the JSON is stored within Hive is incorrect. If that is the case, then when you try to execute a Python UDF against those Hive records, it isn't able to find the right structure. If you execute a "select *" against your Hive table, how does the output look?
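If the raw Flume files are easier to reach than the table, a quick way to spot bad records is to parse a sample file directly. A rough sketch (the local file name and the HDFS copy step are assumptions based on your table's LOCATION):

import json

# assumes one Flume output file was copied down first, e.g.:
#   hdfs dfs -get /user/flume/tweets/<some-file> sample.json
with open('sample.json') as f:
    for lineno, line in enumerate(f, start=1):
        line = line.strip()
        if not line:
            continue
        try:
            tweet = json.loads(line)
        except ValueError as e:
            # malformed JSON -- likely the rows Hive chokes on
            print(lineno, e)
            continue
        # tabs or newlines inside the text field would also break TRANSFORM's framing
        if '\t' in tweet.get('text', '') or '\n' in tweet.get('text', ''):
            print(lineno, 'text contains a tab/newline')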
Created 08-04-2017 08:43 AM
Hello @Dan Zaratsian and many thanks once more.
Sorry, you're right, I'd forgotten to select the best answer.
I can run queries on the Hive table with this SerDe, except when I query the text column.
For instance, if I use your script with the id and lang columns, it runs smoothly. If I query the text column, it gives an error.
I'll try to update HDP to version 2.6.1.0 and then let you know.
Best regards