Member since: 02-02-2016
Posts: 583
Kudos Received: 518
Solutions: 98

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3173 | 09-16-2016 11:56 AM |
| | 1354 | 09-13-2016 08:47 PM |
| | 5341 | 09-06-2016 11:00 AM |
| | 3083 | 08-05-2016 11:51 AM |
| | 5166 | 08-03-2016 02:58 PM |
03-28-2016
04:09 PM
2 Kudos
Please try the command below; it works fine on my cluster. Note that a free-form --query import must include the $CONDITIONS placeholder in its WHERE clause.

sqoop import --connect jdbc:mysql://hostname/classicmodels --username root --password xxx --query 'select count(*) as cnt from customers where $CONDITIONS' -m 1 --target-dir /tmp/count1 --driver com.mysql.jdbc.Driver
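To confirm the import, you can read back the file Sqoop wrote; a minimal sketch, assuming the command above ran with a single mapper:

```bash
# List the output Sqoop produced in the target directory
hdfs dfs -ls /tmp/count1

# With -m 1 the result lands in one part file; print the count it contains
hdfs dfs -cat /tmp/count1/part-m-00000
```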
03-22-2016
03:31 PM
OK, can you also try using the Hive aux jars path?
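For instance, a minimal sketch of pointing Hive at an auxiliary jars location, assuming your jars live under /usr/local/hive/auxlib (an illustrative path; depending on your Hive version this can be a directory or a colon-separated list of jars):

```bash
# Launch the Hive shell with an auxiliary jars path (path is illustrative)
hive --auxpath /usr/local/hive/auxlib

# Equivalent alternative: export the variable before launching Hive
export HIVE_AUX_JARS_PATH=/usr/local/hive/auxlib
hive
```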
03-22-2016
08:25 AM
1 Kudo
First check whether Spark is running in your HDP environment, then check whether you can reach the VM's port 7077 from your dev environment with "telnet ip 7077"; a firewall or network setting is probably the culprit. It's also worth checking the Spark UI for the master URL listed there, and making sure you use exactly the same URL when connecting, i.e. spark://hostname:7077.
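A quick sketch of both checks (the hostname is illustrative):

```bash
# Confirm the master port is reachable from the dev machine
telnet spark-master.example.com 7077

# Then connect with exactly the master URL shown in the Spark UI
spark-shell --master spark://spark-master.example.com:7077
```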
03-22-2016
08:15 AM
1 Kudo
You could try adding both mongo-hadoop-hive.jar and mongo-hadoop-core.jar to the hive.aux.jars.path setting in your hive-site.xml. Or you can simply add those jars from your Hive shell:

hive> ADD JAR somepath/mongo-hadoop-hive.jar;
hive> ADD JAR somepath/mongo-hadoop-core.jar;
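To do the same non-interactively and confirm the jars were registered, a minimal sketch ("somepath" is the placeholder from above):

```bash
# Register the connector jars, then list what the session can see
hive -e "ADD JAR somepath/mongo-hadoop-hive.jar;
         ADD JAR somepath/mongo-hadoop-core.jar;
         LIST JARS;"
```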
03-21-2016
10:46 AM
1 Kudo
Hoping you have completed all the prerequisites to run Spark on Mesos; if you haven't yet, please follow http://spark.apache.org/docs/latest/running-on-mesos.html#connecting-spark-to-mesos. Regarding the Spark + Mesos and Tableau connection, I believe you need a Spark SQL Thrift server so that Tableau can connect directly to the Thrift port. You can start the Thrift server like below:

$SPARK_HOME/sbin/start-thriftserver.sh --master mesos://host:port --deploy-mode cluster --executor-memory 5G

Note: You also need the Spark ODBC driver on the Tableau client side to connect to the Thrift server; you can download it from Here.
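Before configuring Tableau, you can verify the Thrift server accepts SQL clients; a sketch, assuming the default HiveServer2-compatible port 10000 and an illustrative hostname:

```bash
# Connect with beeline over JDBC and run a trivial query
beeline -u jdbc:hive2://thrift-host.example.com:10000 -e 'show databases;'
```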
03-17-2016
05:02 AM
1 Kudo
Yes, please follow these steps and let me know if you still face the same issue. Also, kindly mention your HDP version.

hive> set orc.compress.size=4096;
hive> set hive.exec.orc.default.stripe.size=268435456;
hive> <your CREATE TABLE DDL>
hive> <load data into the ORC table>
hive> <your SELECT query>
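As a concrete sketch of that sequence, with a made-up table (demo_orc) populated from a hypothetical staging table (demo_staging), since rows must be written through Hive to come out in ORC format:

```bash
# Illustrative end-to-end session; table and column names are hypothetical
hive -e "
set orc.compress.size=4096;
set hive.exec.orc.default.stripe.size=268435456;
CREATE TABLE demo_orc (id INT, name STRING) STORED AS ORC;
INSERT INTO TABLE demo_orc SELECT id, name FROM demo_staging;
SELECT count(*) FROM demo_orc;
"
```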
03-16-2016
02:36 PM
4 Kudos
It looks like your table is in ORC format, so can you please set the properties below and try again?

set orc.compress.size=4096;
set hive.exec.orc.default.stripe.size=268435456;
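If you want to confirm what stripe size and compression an existing ORC file was actually written with, Hive ships an ORC file dump utility (the file path below is illustrative; point it at one of your table's files):

```bash
# Inspect an ORC file's metadata, including compression and stripe statistics
hive --orcfiledump /apps/hive/warehouse/your_table/000000_0
```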
03-16-2016
01:30 PM
@David Tam The same confs should work for local mode as well; initially they were made for YARN only, but later they were made applicable to local mode too. As I said earlier, it's better to try this on Spark 1.6. Please refer to this JIRA and its pull requests: https://issues.apache.org/jira/browse/SPARK-11821
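For what it's worth, a conf is passed the same way regardless of master; a sketch, where the conf key and app name below are only placeholders for whatever you are setting:

```bash
# The --conf mechanism is identical for local and YARN masters
spark-submit --master local[*] \
  --conf spark.driver.extraJavaOptions=-XX:+PrintGCDetails \
  your_app.py
```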
03-16-2016
07:06 AM
3 Kudos
In general, ZooKeeper doesn't actually require huge drives because it only stores metadata for the services it coordinates. I have seen customers using 100G to 250G partitions for the ZooKeeper data directory and logs, which is fine for many cluster deployments. Moreover, administrators need to configure an automatic purging policy for the snapshot and log directories so that the local storage doesn't fill up. Please refer to the doc below for more info. http://zookeeper.apache.org/doc/trunk/zookeeperAdmin.html
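ZooKeeper's built-in autopurge settings handle this; a minimal sketch, assuming zoo.cfg lives at the usual HDP location (values are illustrative):

```bash
# Keep 3 snapshots and purge older snapshots/logs every 24 hours
cat >> /etc/zookeeper/conf/zoo.cfg <<'EOF'
autopurge.snapRetainCount=3
autopurge.purgeInterval=24
EOF
```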