Member since: 04-11-2016
Posts: 535
Kudos Received: 148
Solutions: 77
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 7581 | 09-17-2018 06:33 AM |
| | 1857 | 08-29-2018 07:48 AM |
| | 2791 | 08-28-2018 12:38 PM |
| | 2173 | 08-03-2018 05:42 AM |
| | 2026 | 07-27-2018 04:00 PM |
06-23-2016
08:32 AM
@rahul jain You can use the Hive View from Ambari to run queries on the Hive table. As a first step, a Hive table needs to be created on top of the HDFS file. Thanks and Regards, Sindhu
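A minimal sketch of such a table definition, assuming a comma-delimited file already sitting in HDFS (the table name, columns, and path are illustrative, not from the original question):

```sql
-- External table layered over an existing HDFS directory;
-- dropping the table leaves the underlying files in place.
CREATE EXTERNAL TABLE my_table (
  id INT,
  name STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/user/hive/data';
```

Once the table exists, it can be queried from the Hive View like any other table.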
06-22-2016
01:42 PM
2 Kudos
@alain TSAFACK You can load the data from the CSV file into a temporary Hive table with the same structure as the ORC table, then insert the data into the ORC table: INSERT INTO TABLE table_orc SELECT * FROM table_textfile; Thanks and Regards, Sindhu
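Spelled out end to end, the approach looks like this (the column list and CSV path are illustrative assumptions; `table_textfile` and `table_orc` follow the names in the answer):

```sql
-- 1. Temporary text-format table matching the CSV layout
CREATE TABLE table_textfile (id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE;

-- 2. Load the CSV file from HDFS into it
LOAD DATA INPATH '/tmp/data.csv' INTO TABLE table_textfile;

-- 3. ORC table with the same structure
CREATE TABLE table_orc (id INT, name STRING)
STORED AS ORC;

-- 4. Copy the rows across; Hive rewrites them in ORC format
INSERT INTO TABLE table_orc SELECT * FROM table_textfile;
```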
06-21-2016
10:05 AM
2 Kudos
@Michel Sumbul CBO is mainly for optimization decisions that reduce the cost of query execution, and it is independent of storage formats like ORC. Below are some of the decisions CBO makes:
- How to order joins
- What algorithm to use for a given join
- Whether an intermediate result should be persisted or recomputed on operator failure
- The degree of parallelism at any operator (specifically, the number of reducers to use)
- Semi-join selection
For details, please refer to the link below: https://cwiki.apache.org/confluence/display/Hive/Cost-based+optimization+in+Hive Thanks and Regards, Sindhu
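If you want to experiment with CBO, it is driven by a handful of session settings; the values below are the usual ones for enabling it (verify the property names against your Hive version, and note that CBO relies on table and column statistics being gathered first):

```sql
SET hive.cbo.enable=true;
SET hive.compute.query.using.stats=true;
SET hive.stats.fetch.column.stats=true;
SET hive.stats.fetch.partition.stats=true;

-- CBO is only as good as its statistics, so collect them first
-- (my_table is an illustrative name):
ANALYZE TABLE my_table COMPUTE STATISTICS;
ANALYZE TABLE my_table COMPUTE STATISTICS FOR COLUMNS;
```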
06-21-2016
07:16 AM
2 Kudos
@ARUNKUMAR RAMASAMY MySQL might be rejecting connections from remote hosts when extracting the data from the tables. We need to grant privileges for the IPs of the data nodes at the database end, as below: GRANT ALL PRIVILEGES ON *.* TO 'user'@'ipaddress'; Thanks and Regards, Sindhu
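A fuller sketch of the grant, run as a MySQL administrative user (the user name, password, and IP address are placeholders; in practice you would scope the grant to the specific database rather than `*.*`):

```sql
-- Allow the import user to connect from a given data node
GRANT ALL PRIVILEGES ON *.* TO 'user'@'ipaddress' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
```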
06-21-2016
06:43 AM
@ARUNKUMAR RAMASAMY The communication between the datanodes and MySQL needs to be open. Make sure telnet <mysql_server> <port> works on all the nodes in the cluster. Also, verify the bind address at the MySQL end to confirm connectivity. You can refer to the link below for more debugging at the MySQL end: http://stackoverflow.com/questions/2121829/com-mysql-jdbc-exceptions-jdbc4-communicationsexceptioncommunications-link-fail Hope this helps. Thanks and Regards, Sindhu
06-20-2016
09:17 AM
1 Kudo
@Pradeep Bhadani Remove the conflicting mysql-libs package manually on the machine itself: rpm -e --nodeps mysql-libs Hope this helps. Thanks and Regards, Sindhu
06-20-2016
06:14 AM
Could you please share the steps that resolved the issue and mark it as the best answer? Thanks, Sindhu
06-17-2016
06:51 AM
1 Kudo
@Simran Kaur It seems that TotalRecords is a reserved keyword. Try using TotalRecords_1 instead and see if it helps. Thanks and Regards, Sindhu
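Alternatively, Hive lets you keep a reserved word as an identifier by quoting it with backticks; a sketch (the table name is illustrative):

```sql
SELECT `TotalRecords` FROM my_table;
```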
06-16-2016
04:59 PM
1 Kudo
@khushi kalra You can also use RJDBC as below to connect to Hive:
library("DBI")
library("rJava")
library("RJDBC")
hive.class.path = list.files(path=c("/usr/hdp/current/hive-client/lib"), pattern="jar", full.names=T);
hadoop.lib.path = list.files(path=c("/usr/hdp/current/hive-client/lib"), pattern="jar", full.names=T);
hadoop.class.path = list.files(path=c("/usr/hdp/2.4.0.0-169/hadoop"), pattern="jar", full.names=T);
cp = c(hive.class.path, hadoop.lib.path, hadoop.class.path, "/usr/hdp/2.4.0.0-169/hadoop-mapreduce/hadoop-mapreduce-client-core.jar")
.jinit(classpath=cp)
drv <- JDBC("org.apache.hive.jdbc.HiveDriver", "hive-jdbc.jar", identifier.quote="`")
url.dbc <- paste0("jdbc:hive2://ironhide.hdp.local:10000/default");
conn <- dbConnect(drv, url.dbc, "hive", "redhat");
dbListTables(conn);
On connecting you may see warnings like the following; they are harmless console output from log4j, not errors:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Thanks and Regards, Sindhu
06-16-2016
07:43 AM
2 Kudos
@Roberto Sancho Please refer to the Hortonworks blog below for tips on improving Hive query performance: http://hortonworks.com/blog/5-ways-make-hive-queries-run-faster/ Hope this helps. Thanks and Regards, Sindhu