Member since: 09-25-2015
Posts: 230
Kudos Received: 276
Solutions: 39
My Accepted Solutions
Title | Views | Posted
---|---|---
| 24896 | 07-05-2016 01:19 PM
| 8291 | 04-01-2016 02:16 PM
| 2075 | 02-17-2016 11:54 AM
| 5575 | 02-17-2016 11:50 AM
| 12533 | 02-16-2016 02:08 AM
11-26-2015 11:33 PM
As far as I understood, this R interpreter PR is not sharing the same SparkContext yet. I've already created a notebook that declares a function in Scala, another in Python, and uses the SQL interpreter to call both functions in a single statement along with a custom Java Hive UDF. If we could add a function in R too, it would be really nice.
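A rough sketch of that pattern in Zeppelin, assuming Spark 1.x and a SQLContext shared across interpreters (the function names and my_table are hypothetical):
%spark
// register a Scala function with the shared sqlContext
sqlContext.udf.register("scalaUpper", (s: String) => s.toUpperCase)
%pyspark
from pyspark.sql.types import IntegerType
# register a Python function with the same shared sqlContext
sqlContext.registerFunction("pyLen", lambda s: len(s), IntegerType())
%sql
-- call both functions in a single statement
select scalaUpper(name), pyLen(name) from my_table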
11-26-2015 02:05 AM
@Michael Miklavcic check the last answer. Sorry, I tried to post it as a comment, but that wasn't possible due to the max character limit on comments.
11-26-2015 02:04 AM
1 Kudo
@Michael Miklavcic it looks like the hive/hadoop scripts always define the max heap size from the Ambari setting. I debugged /usr/hdp/2.3.2.0-2950/hadoop/bin/hadoop.distro and captured the commands below; you can change the -Xmx there to set the amount of memory you want.
export CLASSPATH=/usr/hdp/2.3.2.0-2950/hadoop/conf:/usr/hdp/2.3.2.0-2950/hadoop/lib/*:/usr/hdp/2.3.2.0-2950/hadoop/.//*:/usr/hdp/2.3.2.0-2950/hadoop-hdfs/./:/usr/hdp/2.3.2.0-2950/hadoop-hdfs/lib/*:/usr/hdp/2.3.2.0-2950/hadoop-hdfs/.//*:/usr/hdp/2.3.2.0-2950/hadoop-yarn/lib/*:/usr/hdp/2.3.2.0-2950/hadoop-yarn/.//*:/usr/hdp/2.3.2.0-2950/hadoop-mapreduce/lib/*:/usr/hdp/2.3.2.0-2950/hadoop-mapreduce/.//*:/usr/hdp/2.3.2.0-2950/atlas/hook/hive/*:/usr/hdp/2.3.2.0-2950/hive-hcatalog/share/hcatalog/hive-hcatalog-core-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive-hcatalog/share/hcatalog/hive-hcatalog-server-extensions-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive-hcatalog/share/webhcat/java-client/hive-webhcat-java-client-1.2.1.2.3.2.0-2950.jar:/usr/hdp/current/hive-client/conf:/usr/hdp/2.3.2.0-2950/hive/lib/accumulo-core-1.7.0.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/accumulo-fate-1.7.0.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/accumulo-start-1.7.0.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/accumulo-trace-1.7.0.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/activation-1.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/ant-1.9.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/ant-launcher-1.9.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/antlr-2.7.7.jar:/usr/hdp/2.3.2.0-2950/hive/lib/antlr-runtime-3.4.jar:/usr/hdp/2.3.2.0-2950/hive/lib/apache-log4j-extras-1.2.17.jar:/usr/hdp/2.3.2.0-2950/hive/lib/asm-commons-3.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/asm-tree-3.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/avro-1.7.5.jar:/usr/hdp/2.3.2.0-2950/hive/lib/bonecp-0.8.0.RELEASE.jar:/usr/hdp/2.3.2.0-2950/hive/lib/calcite-avatica-1.2.0.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/calcite-core-1.2.0.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/calcite-linq4j-1.2.0.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/commons-cli-1.2.jar:/usr/hdp/2.3.2.0-2950/hive/lib/commons-codec-1.4.jar:/usr/hdp/2.3.2.0-2950/hive/lib/commons-collections-3.2.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/commons-compiler-2.7.6.jar:/usr/hdp/2.3.2.0-2950/hive/lib/commons-compress-1.4.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/commons-dbcp-1.4.jar:/usr/hdp/2.3.2.0-2950/hive/lib/commons-httpclient-3.0.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/commons-io-2.4.jar:/usr/hdp/2.3.2.0-2950/hive/lib/commons-lang-2.6.jar:/usr/hdp/2.3.2.0-2950/hive/lib/commons-logging-1.1.3.jar:/usr/hdp/2.3.2.0-2950/hive/lib/commons-math-2.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/commons-pool-1.5.4.jar:/usr/hdp/2.3.2.0-2950/hive/lib/commons-vfs2-2.0.jar:/usr/hdp/2.3.2.0-2950/hive/lib/curator-client-2.6.0.jar:/usr/hdp/2.3.2.0-2950/hive/lib/curator-framework-2.6.0.jar:/usr/hdp/2.3.2.0-2950/hive/lib/curator-recipes-2.6.0.jar:/usr/hdp/2.3.2.0-2950/hive/lib/datanucleus-api-jdo-3.2.6.jar:/usr/hdp/2.3.2.0-2950/hive/lib/datanucleus-core-3.2.10.jar:/usr/hdp/2.3.2.0-2950/hive/lib/datanucleus-rdbms-3.2.9.jar:/usr/hdp/2.3.2.0-2950/hive/lib/derby-10.10.2.0.jar:/usr/hdp/2.3.2.0-2950/hive/lib/eclipselink-2.5.2-M1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/eigenbase-properties-1.1.5.jar:/usr/hdp/2.3.2.0-2950/hive/lib/geronimo-annotation_1.0_spec-1.1.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/geronimo-jaspic_1.0_spec-1.0.jar:/usr/hdp/2.3.2.0-2950/hive/lib/geronimo-jta_1.1_spec-1.1.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/groovy-all-2.1.6.jar:/usr/hdp/2.3.2.0-2950/hive/lib/gson-2.2.4.jar:/usr/hdp/2.3.2.0-2950/hive/lib/guava-14.0.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hamcrest-core-1.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-accumulo-handler-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-accumulo-handler.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-ant-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-ant.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-beeline-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-beeline.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-cli-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-cli.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-common-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-common.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-contrib-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-contrib.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-exec-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-exec.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-hbase-handler-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-hbase-handler.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-hwi-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-hwi.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-jdbc-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-jdbc-1.2.1.2.3.2.0-2950-standalone.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-jdbc.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-metastore-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-metastore.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-serde-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-serde.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-service-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-service.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-shims-0.20S-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-shims-0.23-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-shims-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-shims-common-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-shims-common.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-shims.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-shims-scheduler-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-shims-scheduler.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-testutils-1.2.1.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/hive-testutils.jar:/usr/hdp/2.3.2.0-2950/hive/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.3.2.0-2950/hive/lib/httpclient-4.4.jar:/usr/hdp/2.3.2.0-2950/hive/lib/httpcore-4.4.jar:/usr/hdp/2.3.2.0-2950/hive/lib/httpmime-4.2.5.jar:/usr/hdp/2.3.2.0-2950/hive/lib/ivy-2.4.0.jar:/usr/hdp/2.3.2.0-2950/hive/lib/janino-2.7.6.jar:/usr/hdp/2.3.2.0-2950/hive/lib/javax.persistence-2.1.0.jar:/usr/hdp/2.3.2.0-2950/hive/lib/jcommander-1.32.jar:/usr/hdp/2.3.2.0-2950/hive/lib/jdo-api-3.0.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/jetty-all-7.6.0.v20120127.jar:/usr/hdp/2.3.2.0-2950/hive/lib/jetty-all-server-7.6.0.v20120127.jar:/usr/hdp/2.3.2.0-2950/hive/lib/jline-2.12.jar:/usr/hdp/2.3.2.0-2950/hive/lib/joda-time-2.5.jar:/usr/hdp/2.3.2.0-2950/hive/lib/jpam-1.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/json-20090211.jar:/usr/hdp/2.3.2.0-2950/hive/lib/jsr305-3.0.0.jar:/usr/hdp/2.3.2.0-2950/hive/lib/jta-1.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/junit-4.11.jar:/usr/hdp/2.3.2.0-2950/hive/lib/libfb303-0.9.2.jar:/usr/hdp/2.3.2.0-2950/hive/lib/libthrift-0.9.2.jar:/usr/hdp/2.3.2.0-2950/hive/lib/log4j-1.2.16.jar:/usr/hdp/2.3.2.0-2950/hive/lib/mail-1.4.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/maven-scm-api-1.4.jar:/usr/hdp/2.3.2.0-2950/hive/lib/maven-scm-provider-svn-commons-1.4.jar:/usr/hdp/2.3.2.0-2950/hive/lib/maven-scm-provider-svnexe-1.4.jar:/usr/hdp/2.3.2.0-2950/hive/lib/mysql-connector-java.jar:/usr/hdp/2.3.2.0-2950/hive/lib/netty-3.7.0.Final.jar:/usr/hdp/2.3.2.0-2950/hive/lib/noggit-0.6.jar:/usr/hdp/2.3.2.0-2950/hive/lib/ojdbc6.jar:/usr/hdp/2.3.2.0-2950/hive/lib/opencsv-2.3.jar:/usr/hdp/2.3.2.0-2950/hive/lib/oro-2.0.8.jar:/usr/hdp/2.3.2.0-2950/hive/lib/paranamer-2.3.jar:/usr/hdp/2.3.2.0-2950/hive/lib/parquet-hadoop-bundle-1.6.0.jar:/usr/hdp/2.3.2.0-2950/hive/lib/pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar:/usr/hdp/2.3.2.0-2950/hive/lib/plexus-utils-1.5.6.jar:/usr/hdp/2.3.2.0-2950/hive/lib/ranger-hive-plugin-0.5.0.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/ranger-plugins-audit-0.5.0.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/ranger-plugins-common-0.5.0.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/ranger-plugins-cred-0.5.0.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/ranger_solrj-0.5.0.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hive/lib/regexp-1.3.jar:/usr/hdp/2.3.2.0-2950/hive/lib/servlet-api-2.5.jar:/usr/hdp/2.3.2.0-2950/hive/lib/snappy-java-1.0.5.jar:/usr/hdp/2.3.2.0-2950/hive/lib/ST4-4.0.4.jar:/usr/hdp/2.3.2.0-2950/hive/lib/stax-api-1.0.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/stringtemplate-3.2.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/super-csv-2.2.0.jar:/usr/hdp/2.3.2.0-2950/hive/lib/tempus-fugit-1.1.jar:/usr/hdp/2.3.2.0-2950/hive/lib/velocity-1.5.jar:/usr/hdp/2.3.2.0-2950/hive/lib/xz-1.0.jar:/usr/hdp/2.3.2.0-2950/hive/lib/zookeeper-3.4.6.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/spark/lib/spark-assembly-1.4.1.2.3.2.0-2950-hadoop2.7.1.2.3.2.0-2950.jar::/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar:/etc/hbase/conf:/usr/hdp/2.3.2.0-2950/hbase/lib/hbase-common-1.1.2.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hbase/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.3.2.0-2950/hbase/lib/hbase-server-1.1.2.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hbase/lib/netty-all-4.0.23.Final.jar:/usr/hdp/2.3.2.0-2950/hbase/lib/metrics-core-2.2.0.jar:/usr/hdp/2.3.2.0-2950/hbase/lib/hbase-protocol-1.1.2.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hbase/lib/hbase-client-1.1.2.2.3.2.0-2950.jar:/usr/hdp/2.3.2.0-2950/hbase/lib/hbase-hadoop-compat-1.1.2.2.3.2.0-2950.jar::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.2.0-2950/tez/*:/usr/hdp/2.3.2.0-2950/tez/lib/*:/usr/hdp/2.3.2.0-2950/tez/conf
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91.x86_64/bin/java -Xmx3000m -Dhdp.version=2.3.2.0-2950 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.3.2.0-2950 -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.2.0-2950/hadoop -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.3.2.0-2950/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.2.0-2950/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -XX:MaxPermSize=512m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.3.2.0-2950/hive/lib/hive-cli-1.2.1.2.3.2.0-2950.jar org.apache.hadoop.hive.cli.CliDriver --hiveconf hive.aux.jars.path=file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar
You can also check the allocated heap space using: jmap -heap YOUR-HIVE-CLIENT-PID
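If you'd rather not edit the launch command, a simpler route that should also work (an assumption on my part, not what I tested above): the hadoop client scripts append HADOOP_CLIENT_OPTS after the default JVM options, and the last -Xmx on the command line wins.
export HADOOP_CLIENT_OPTS="-Xmx4096m"   # illustrative value
hive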
11-25-2015 06:30 PM
@Michael Miklavcic check hive.mapjoin.localtask.max.memory.usage; it's the percentage of memory the local map join task is allowed to use.
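For example, in a Hive session (0.90 is the usual default; the value here is illustrative):
set hive.mapjoin.localtask.max.memory.usage=0.90;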
11-25-2015 04:28 PM
@Michael Miklavcic you have to increase the Tez container size, hive.tez.container.size, and hive.tez.java.opts (should be ~80% of the container size) to make more memory available. Then you can increase hive.auto.convert.join.noconditionaltask.size to convert map joins automatically, or set hive.ignore.mapjoin.hint=false and use the mapjoin hint (select /*+ MAPJOIN(dimension_table_name) */ ...); see the sketch below.
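A minimal sketch of those settings in a Hive session (the numbers are illustrative, not recommendations; size them for your cluster):
-- container size in MB; java opts should be ~80% of it
set hive.tez.container.size=4096;
set hive.tez.java.opts=-Xmx3276m;
-- threshold in bytes for automatic map join conversion
set hive.auto.convert.join.noconditionaltask.size=1073741824;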
11-25-2015 11:52 AM
2 Kudos
@Ryan Templeton is your table partitioned? If I understood right, you must execute the analyze command for each partition, as in the example below.
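For example, per partition (the table name and partition spec are hypothetical):
ANALYZE TABLE sales PARTITION (dt='2015-11-25') COMPUTE STATISTICS;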
11-24-2015 05:09 AM
1 Kudo
@Kuldeep Kulkarni an Ambari restart is not needed; I tested it and the updated configs appear immediately. You are right about the HDFS restart, which is also mentioned as step #2.
11-24-2015 12:49 AM
8 Kudos
Spark SQL comes with a nice feature called "JDBC to other Databases", but in practice it's a JDBC federation feature. It can be used to create data frames from JDBC databases using Scala/Python, but it also works directly with the Spark SQL Thrift Server and allows us to query external JDBC tables seamlessly, like other Hive/Spark tables. The examples below use sandbox 2.3.2 and Spark 1.5.1 TP (https://hortonworks.com/hadoop-tutorial/apache-spark-1-5-1-technical-preview-with-hdp-2-3/). This feature works with spark-submit, spark-shell, Zeppelin, the spark-sql client, and the Spark SQL Thrift Server. This post shows two examples: #1 using the Spark SQL Thrift Server, #2 using spark-shell.
Example #1 using the Spark SQL Thrift Server
1- Run the Spark SQL Thrift Server with the mysql jdbc driver:
sudo -u spark /usr/hdp/2.3.2.1-12/spark/sbin/start-thriftserver.sh --hiveconf hive.server2.thrift.port=10010 --jars "/usr/share/java/mysql-connector-java.jar"
2- Open beeline and connect to the Spark SQL Thrift Server:
beeline -u "jdbc:hive2://localhost:10010/default" -n admin
3- Create a JDBC federated table pointing to the existing mysql database, using beeline:
CREATE TABLE mysql_federated_sample
USING org.apache.spark.sql.jdbc
OPTIONS (
driver "com.mysql.jdbc.Driver",
url "jdbc:mysql://localhost/hive?user=hive&password=hive",
dbtable "TBLS"
);
describe mysql_federated_sample;
select * from mysql_federated_sample;
select count(1) from mysql_federated_sample;
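Once created, the federated table can also be joined with regular Hive/Spark tables in the same query; a sketch, where my_hive_table and its columns are hypothetical:
select f.TBL_NAME, h.owner
from mysql_federated_sample f
join my_hive_table h on h.tbl_name = f.TBL_NAME;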
Example #2 using spark-shell, Scala code, and data frames:
1- Open spark-shell with the mysql jdbc driver:
spark-shell --jars "/usr/share/java/mysql-connector-java.jar"
2- Create a data frame pointing to the mysql table:
val jdbcDF = sqlContext.read.format("jdbc").options(
Map(
"driver" -> "com.mysql.jdbc.Driver",
"url" -> "jdbc:mysql://localhost/hive?user=hive&password=hive",
"dbtable" -> "TBLS"
)
).load()
jdbcDF.show
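From there you can also expose the data frame to SQL (standard Spark 1.x API; the temp table name is arbitrary):
jdbcDF.registerTempTable("mysql_tbls")
sqlContext.sql("select count(*) from mysql_tbls").show()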
See other spark jdbc examples / troubleshooting here: https://community.hortonworks.com/questions/1942/spark-to-phoenix.html
11-23-2015 07:30 PM
2 Kudos
@Kuldeep Kulkarni another workaround: 1- Execute the commands below:
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p AMBARI-PASSWORD delete localhost CLUSTER-NAME hdfs-site "dfs.namenode.rpc-address"
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p AMBARI-PASSWORD delete localhost CLUSTER-NAME hdfs-site "dfs.namenode.http-address"
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p AMBARI-PASSWORD delete localhost CLUSTER-NAME hdfs-site "dfs.namenode.https-address"
2- Restart HDFS
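To verify the properties are gone, the same script supports a get action (credentials and cluster name as above):
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p AMBARI-PASSWORD get localhost CLUSTER-NAME hdfs-site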
11-23-2015 05:24 PM
1 Kudo
Thank you @vshukla. Removing the following properties from /etc/spark/conf/spark-defaults.conf solved the issue (shown commented out below):
#spark.history.provider org.apache.spark.deploy.yarn.history.YarnHistoryProvider
#spark.history.ui.port 18080
#spark.yarn.historyServer.address sandbox.hortonworks.com:18080
#spark.yarn.services org.apache.spark.deploy.yarn.history.YarnHistoryService
PS: I installed Spark in sandbox 2.3.2 and followed your instructions (http://hortonworks.com/hadoop-tutorial/apache-spark-1-5-1-technical-preview-with-hdp-2-3/). It would be nice if you could update the Spark 1.5.1 tutorial, as other users are having the same issue @Ali Bajwa.
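If you want to script the change, a sketch using sed, assuming the default HDP config path (back up the file first):
cp /etc/spark/conf/spark-defaults.conf /etc/spark/conf/spark-defaults.conf.bak
sed -i -E 's/^(spark\.(history\.(provider|ui\.port)|yarn\.(historyServer\.address|services)))/#\1/' /etc/spark/conf/spark-defaults.conf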