Member since
04-25-2016
579
Posts
609
Kudos Received
111
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2938 | 02-12-2020 03:17 PM |
| | 2140 | 08-10-2017 09:42 AM |
| | 12496 | 07-28-2017 03:57 AM |
| | 3443 | 07-19-2017 02:43 AM |
| | 2534 | 07-13-2017 11:42 AM |
06-13-2016
01:18 PM
@dalin qin It seems your original issue has been resolved. Could you please select the best answer in the thread so that other users benefit when referring to it?
06-13-2016
06:00 AM
@dalin qin Yes, you are right. As I mentioned earlier in the thread, there is a difference between the versions of the Hadoop jars (HDP) and the Spark running on the cluster. The phoenix jar issue is a separate one that can be addressed in the Phoenix community.
06-12-2016
05:27 PM
@dalin qin It looks like the phoenix-client jar is missing here. Could you please try adding it to your submit options like this: spark-shell --master yarn-client --jars /usr/hdp/current/phoenix-client/phoenix-client.jar,/usr/hdp/current/phoenix-client/lib/phoenix-spark-4.4.0.2.4.0.0-169.jar
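For readability, the same submit command as a single invocation (the jar paths and the 4.4.0.2.4.0.0-169 version are specific to that HDP install; adjust them to your cluster):

```shell
# Launch spark-shell with the Phoenix client and phoenix-spark jars on the classpath.
# Paths/versions below are from an HDP 2.4 installation; adjust to your environment.
spark-shell --master yarn-client \
  --jars /usr/hdp/current/phoenix-client/phoenix-client.jar,/usr/hdp/current/phoenix-client/lib/phoenix-spark-4.4.0.2.4.0.0-169.jar
```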
06-12-2016
08:22 AM
2 Kudos
@Vinay Reddy Naguru Spark is far ahead in terms of performance, but it still needs to address some concerns like memory management. For some use cases MapReduce is preferred over Spark, e.g. ETL calculations where result sets are vast and may exceed the total RAM of the Hadoop cluster; MapReduce can outperform Spark in that situation. Iterative machine learning where Spark is not able to manage memory proficiently is another case where MapReduce is the better fit.
But Spark is evolving so fast and trying to address these concerns that I think it is not far off when Spark will replace MapReduce completely; for now they can coexist in the cluster.
06-12-2016
04:55 AM
1 Kudo
Seems there is a difference between the versions of the Hadoop jars (HDP) and the Spark running on the cluster. Are you running vanilla Spark on the cluster?
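Quick ways to confirm a version mismatch from the command line (assuming both CLIs are on the PATH of a cluster node):

```shell
# Print the Hadoop (HDP) build installed on the node
hadoop version

# Print the Spark build, including the Hadoop version it was compiled against;
# compare this against the output of `hadoop version` above
spark-submit --version
```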
06-11-2016
08:55 AM
3 Kudos
Seems you don't have DELETE privilege on the alert_notice table. Can you check it with MySQL? At the mysql prompt: use information_schema; select * from TABLE_PRIVILEGES;
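A minimal sketch of that check, plus a grant for the case where DELETE is indeed missing (the database name `ambari` and the grantee are assumptions; substitute your own):

```shell
# List table-level privileges on alert_notice; run as a user that can read information_schema
mysql -u root -p -e "SELECT grantee, privilege_type \
  FROM information_schema.TABLE_PRIVILEGES \
  WHERE table_name = 'alert_notice';"

# If DELETE is absent, grant it (database/user names here are placeholders)
mysql -u root -p -e "GRANT DELETE ON ambari.alert_notice TO 'ambari'@'%'; FLUSH PRIVILEGES;"
```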
06-10-2016
06:04 PM
2 Kudos
Try increasing nimbus.thrift.max_buffer_size from the default 1 MB to 2-5 MB and see if you get the expected result. nimbus.thrift.max_buffer_size is the maximum buffer size Thrift should use when reading messages.
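The setting lives in storm.yaml on the Nimbus host; the value is in bytes, and 5 MB below is just one point in the suggested 2-5 MB range. Restart Nimbus after the change (the config path assumes a typical package install):

```shell
# Raise the Thrift read buffer for Nimbus to 5 MB (5242880 bytes; default is 1048576)
echo "nimbus.thrift.max_buffer_size: 5242880" >> /etc/storm/conf/storm.yaml
```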
06-10-2016
11:22 AM
1 Kudo
Nice article.
06-10-2016
10:27 AM
2 Kudos
It seems that you are hitting https://issues.apache.org/jira/browse/HIVE-12349. Can you try to run your query after setting hive.optimize.index.filter=false?
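To apply the setting for a single session from the CLI rather than cluster-wide (the SELECT here is only a placeholder for the failing query):

```shell
# SET applies to this session only, so it does not affect other users' queries
hive -e "SET hive.optimize.index.filter=false; SELECT ...;"
```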