Member since: 04-11-2016
Posts: 535
Kudos Received: 148
Solutions: 77

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 7577 | 09-17-2018 06:33 AM |
| | 1853 | 08-29-2018 07:48 AM |
| | 2790 | 08-28-2018 12:38 PM |
| | 2171 | 08-03-2018 05:42 AM |
| | 2025 | 07-27-2018 04:00 PM |
06-15-2016
06:18 AM
2 Kudos
@Ethan Hsieh
It looks like the Hive metastore is using MySQL in your case; add the MySQL client JAR to <atlas package>/bridge/hive/ and that should work. Ideally, import-hive.sh should use the Hive classpath so that all Hive dependencies are included. Currently we bundle the Hive dependencies as well, hence this issue when Hive uses a non-default driver. Details: https://issues.apache.org/jira/browse/ATLAS-96 Hope this helps. Thanks and Regards, Sindhu
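For illustration, a minimal sketch of the copy step (both paths are assumptions; adjust them to your installation):

```bash
# Minimal sketch: make the MySQL JDBC driver visible to the Atlas Hive bridge.
# ATLAS_HOME stands for the Atlas package directory; both paths are assumptions.
cp /usr/share/java/mysql-connector-java.jar "$ATLAS_HOME/bridge/hive/"
```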
06-14-2016
02:14 PM
2 Kudos
@Venkat Chinnari The issue seems to be with the cast from text to Parquet. Try creating a sample table, say table3, without SerDe properties but just 'stored as parquet', and check whether the insert overwrite works. Thanks and Regards, Sindhu
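For example, a minimal sketch of such a check (the table and column names are made up):

```bash
# Minimal sketch: a plain Parquet table with no custom SerDe properties,
# followed by the insert that was failing. Table and column names are hypothetical.
hive -e "
CREATE TABLE table3 (col1 STRING, col2 INT) STORED AS PARQUET;
INSERT OVERWRITE TABLE table3 SELECT col1, col2 FROM table1;
"
```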
06-14-2016
12:07 PM
5 Kudos
@Shihab The temp tables are created during the application run as intermediate data. These intermediate tables are not removed if the application fails and cleanup does not happen. Please check whether any applications are currently running and generating this data. Meanwhile, you can also try compressing the intermediate data by setting the property "hive.exec.compress.intermediate" to true in hive-site.xml. The related compression codec and other options are determined from the Hadoop configuration variables mapred.output.compress*. Hope this helps. Thanks and Regards, Sindhu
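For a quick session-level test before touching hive-site.xml, something like this sketch (the map-output codec property and the Snappy choice are assumptions):

```bash
# Minimal sketch: enable intermediate compression for one session.
# Set hive.exec.compress.intermediate=true in hive-site.xml to make it permanent.
hive -e "
SET hive.exec.compress.intermediate=true;
SET mapred.map.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
-- run the query that generates the large intermediate data here
"
```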
06-10-2016
10:39 AM
@Tajinderpal Singh You can refer to the Spark documentation below: http://spark.apache.org/docs/latest/streaming-kafka-integration.html Thanks and Regards, Sindhu
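For reference, a minimal sketch of submitting a streaming job with the Kafka integration artifact on the classpath (the artifact version, class name, and JAR path are placeholders; use the coordinates the linked page recommends for your Spark and Scala versions):

```bash
# Minimal sketch: pull in the Spark Streaming Kafka connector at submit time.
# Artifact version, application class, and JAR path are placeholders.
spark-submit \
  --packages org.apache.spark:spark-streaming-kafka_2.10:1.6.1 \
  --class com.example.KafkaStreamApp \
  /path/to/app.jar
```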
06-10-2016
10:29 AM
@a kumar This could be related to Hive JIRA HIVE-12349:
https://issues.apache.org/jira/browse/HIVE-12349 Could you please share the query being run? Thanks and Regards, Sindhu
06-10-2016
10:26 AM
@Varun Kumar Chepuri Initializing the Metastore means initializing the metastore database. Refer to the link below for manual configuration: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_installing_manually_book/content/set_up_hive_hcat_configuration_files.html Hope this helps. Thanks and Regards, Sindhu
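If you prefer the command line, the schematool utility can also initialize the schema; a sketch assuming a MySQL-backed metastore:

```bash
# Minimal sketch: initialize the metastore schema.
# -dbType must match the backing database; mysql is an assumption here.
$HIVE_HOME/bin/schematool -dbType mysql -initSchema
```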
06-10-2016
09:15 AM
1 Kudo
@alain TSAFACK You can also make use of the --query option during Sqoop import to cast the smalldatetime column to a timestamp: sqoop import ...other options.... --query "select cast(col1 as datetime) from table_name" Hope this helps. Thanks and Regards, Sindhu
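A fuller sketch of such an import (connection details and names are placeholders; note that Sqoop requires the $CONDITIONS token and a split column with a free-form --query):

```bash
# Minimal sketch: free-form query import with a cast. All names are placeholders.
sqoop import \
  --connect "jdbc:sqlserver://dbhost:1433;databaseName=mydb" \
  --username myuser -P \
  --query "SELECT id, CAST(col1 AS datetime) AS col1 FROM table_name WHERE \$CONDITIONS" \
  --split-by id \
  --target-dir /user/me/table_name
```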
06-09-2016
11:40 AM
3 Kudos
@Pierre Villard Sqoop internally uses YARN jobs for extracting data from HDFS. ORC is regarded as giving better read performance, even with Hive. You can refer to the link below for details: http://www.slideshare.net/StampedeCon/choosing-an-hdfs-data-storage-format-avro-vs-parquet-and-more-stampedecon-2015 Hope this helps. Thanks and Regards, Sindhu
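If the goal is to land data as ORC directly, one option is Sqoop's HCatalog integration; a sketch with placeholder names:

```bash
# Minimal sketch: import straight into a new Hive table stored as ORC via HCatalog.
# Connection string, credentials, and table names are placeholders.
sqoop import \
  --connect "jdbc:mysql://dbhost/mydb" \
  --username myuser -P \
  --table src_table \
  --hcatalog-database default \
  --hcatalog-table src_table_orc \
  --create-hcatalog-table \
  --hcatalog-storage-stanza "STORED AS ORC"
```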
06-09-2016
10:17 AM
@suresh krish Could you please let me know the structure of the Hive table and the ALTER statement being used? Thanks and Regards, Sindhu
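For reference, the table definition can be captured with something like the following (database and table names are placeholders):

```bash
# Minimal sketch: dump the DDL so it can be shared. Names are placeholders.
hive -e "SHOW CREATE TABLE mydb.mytable;"
```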
06-08-2016
01:49 PM
@bandhu gupta Could you please share the complete error along with the Sqoop command being used? The issue might be that HIVE_HOME/HCAT_HOME is not set; Sqoop uses HIVE_HOME/HCAT_HOME to find the Hive libraries, which are needed for a Hive import as a Parquet file. Thanks and Regards, Sindhu
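As a quick check, a sketch of setting both variables before rerunning the job (the HDP-style paths are assumptions and may differ on your cluster):

```bash
# Minimal sketch: point Sqoop at the Hive/HCatalog libraries.
# HDP-style client paths are assumptions -- adjust to your installation.
export HIVE_HOME=/usr/hdp/current/hive-client
export HCAT_HOME=/usr/hdp/current/hive-webhcat
# then rerun the original sqoop import command
```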