Member since: 02-02-2016
Posts: 583
Kudos Received: 518
Solutions: 98
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4188 | 09-16-2016 11:56 AM |
| | 1748 | 09-13-2016 08:47 PM |
| | 6940 | 09-06-2016 11:00 AM |
| | 4170 | 08-05-2016 11:51 AM |
| | 6244 | 08-03-2016 02:58 PM |
05-19-2016 01:17 PM
@Alex Raj Sorry, I don't see any workaround, so you will probably need to upgrade Hive to version 1.2.
05-19-2016 01:03 PM
1 Kudo
@Alex Raj You may be hitting bug https://issues.apache.org/jira/browse/HIVE-9770.
05-19-2016 12:59 PM
@Smart Solutions Which HDP version are you using? There was a related bug in HDP 2.3.
05-19-2016 12:34 PM
2 Kudos
@Smart Solutions I don't think we can run multiple instances of the Spark thrift server from Ambari. It's better to run one instance from Ambari and start another from the command line:

```
cd $SPARK_HOME
./sbin/start-thriftserver.sh --master yarn-client --executor-memory 512m --hiveconf hive.server2.thrift.port=10015 &
```
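As a quick sanity check one could connect to the second instance with Beeline; this is only a sketch, and the host and port below are assumptions (10015 is the usual HDP default for the Spark thrift server, so match it to whatever you pass via `hive.server2.thrift.port`):

```shell
# Hypothetical check: connect to the extra thrift server instance and
# run a trivial query. Adjust host/port to your --hiveconf setting.
beeline -u "jdbc:hive2://localhost:10015" -e "show databases;"
```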
05-19-2016 11:21 AM
2 Kudos
@R Wys This is not an issue. Since you are using "select *", no computation is required; the MapReduce framework is smart enough to figure out when reduce tasks are needed based on the operators in the query.
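The distinction can be illustrated with two Hive CLI invocations; the table name `employees` and column `department` are hypothetical assumptions:

```shell
# Map-only: a bare SELECT * needs no aggregation, so no reduce phase runs.
hive -e "SELECT * FROM employees;"

# An aggregation forces a shuffle, so Hive launches reduce tasks for it.
hive -e "SELECT department, COUNT(*) FROM employees GROUP BY department;"
```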
05-19-2016 10:13 AM
1 Kudo
@alain TSAFACK I don't think Sqoop imports views any differently; it should work the same as for tables. Can you please let us know what issue you are facing, i.e. any error messages? You can also use free-form queries to import the view data. Thanks
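A free-form query import could look like the following sketch; the JDBC URL, credentials, view name, split column, and target directory are all hypothetical placeholders:

```shell
# Hypothetical example: import a database view via a free-form query.
# $CONDITIONS is required by Sqoop for parallel splits and must be
# single-quoted so the shell does not expand it.
sqoop import \
  --connect jdbc:mysql://dbhost:3306/salesdb \
  --username etl_user -P \
  --query 'SELECT * FROM sales_view WHERE $CONDITIONS' \
  --split-by id \
  --target-dir /user/etl/sales_view
```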
05-19-2016 09:23 AM
@nejm hadjmbarek
You can achieve this in two ways.

1. Create a wrapper shell script that calls `pig <pig script path>`, then add a Unix cron entry to schedule it as per your requirements.
2. Use the Oozie scheduler: either create a Pig action together with a recurring coordinator (see the links below), or create an Oozie shell action that calls the same wrapper shell script from point 1.

http://rogerhosto.com/apache-oozie-shell-script-example/
https://oozie.apache.org/docs/3.2.0-incubating/WorkflowFunctionalSpec.html#a3.2.3_Pig_Action
http://blog.cloudera.com/blog/2013/01/how-to-schedule-recurring-hadoop-jobs-with-apache-oozie/

Thanks
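The wrapper-plus-cron approach can be sketched concretely; the script name, Pig script path, log location, and schedule below are all hypothetical placeholders:

```shell
#!/bin/bash
# Hypothetical wrapper script, e.g. /home/hadoop/run_pig_job.sh
# (the Pig script path and log location are placeholders).
pig /home/hadoop/scripts/my_job.pig >> /var/log/pig/my_job.log 2>&1

# Corresponding crontab entry (add it via `crontab -e`), shown here
# as a comment: run the wrapper daily at 02:00.
# 0 2 * * * /home/hadoop/run_pig_job.sh
```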
05-18-2016 08:06 PM
2 Kudos
@Smart Solutions Below is the official doc for Spark tuning on YARN: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_spark-guide/content/ch_tuning-spark.html Generally, people create queues to segregate resources between different department groups within a company, or on the basis of application type (ETL, real-time, and so on). So it depends on your use case and how you plan to share cluster resources between groups/applications. For the Spark thrift server, it's better to have a single instance per cluster unless you have hundreds of thrift clients running and submitting jobs at the same time.
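The main tuning knobs from that guide can be sketched in a single spark-submit invocation; the queue name, resource numbers, and application file below are illustrative assumptions, not recommendations:

```shell
# Hypothetical example: submit to a dedicated YARN queue with explicit
# executor sizing (all values depend on your cluster's capacity).
spark-submit \
  --master yarn-client \
  --queue etl \
  --num-executors 4 \
  --executor-cores 2 \
  --executor-memory 2g \
  --driver-memory 1g \
  my_app.py
```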
05-18-2016 05:46 PM
@Avraha Zilberman Can you please remove one "/" from "//hbase-unsecure" and try again?
05-18-2016 05:31 PM
@Avraha Zilberman Why do your configs have a double slash ("//") in the path everywhere?