Member since: 09-23-2015
Posts: 800
Kudos Received: 898
Solutions: 185

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 7563 | 08-12-2016 01:02 PM |
| | 2763 | 08-08-2016 10:00 AM |
| | 3776 | 08-03-2016 04:44 PM |
| | 7352 | 08-03-2016 02:53 PM |
| | 1903 | 08-01-2016 02:38 PM |
02-24-2016
01:19 PM
1 Kudo
Create a file /scripts/myLogCleaner.sh (or whatever you like) and add the following command, which deletes all files that have "log" in the name and are older than a day: find /tmp/hive -name "*log*" -mtime +1 -exec rm {} \; Then schedule it with crontab -e and the entry 0 0 * * * /scripts/myLogCleaner.sh, which starts the cleaner every day at midnight. (Obviously just one out of approximately 3 million different ways to do it 🙂) Edit: ah, not the logs of the Hive CLI but the scratch dir of Hive. That makes it a bit harder, since there is no hadoop find. Weird that it grows so big; it should clean up after itself unless the command line interface or task gets killed.
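A minimal sketch of the two pieces, assuming the /tmp/hive path and the daily midnight schedule from above:

```bash
#!/bin/bash
# /scripts/myLogCleaner.sh -- delete files under /tmp/hive whose names
# contain "log" and that were last modified more than a day ago.
find /tmp/hive -name "*log*" -mtime +1 -exec rm {} \;
```

```bash
# crontab entry (added via `crontab -e`): run the cleaner every day at midnight.
0 0 * * * /scripts/myLogCleaner.sh
```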
02-24-2016
11:15 AM
1 Kudo
Install it with skipped dependencies as a workaround? If the symlink is created it might just work, but I agree it is safer to wait.
02-24-2016
10:48 AM
1 Kudo
You could try with --skip-broken if yum requires the package but it's not available in CentOS 7 anymore. Your symlink might mean it actually works in reality, and --skip-broken would tell yum that all is well. Just a theory though.
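For illustration, a hedged sketch of that workaround (the package name is a placeholder for whatever yum is complaining about):

```bash
# Ask yum to proceed while skipping packages whose dependencies cannot be
# resolved; <package> stands in for the RPM you actually need to install.
yum install --skip-broken <package>
```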
02-24-2016
10:45 AM
1 Kudo
Yes you can: https://community.hortonworks.com/articles/594/connecting-eclipse-to-hive.html
02-24-2016
10:04 AM
Definitely something that needs to be checked out, but as a potential workaround:

http://unix.stackexchange.com/questions/177414/upgrading-centos-6-6-to-7

Problem #1: yum not working - could not find libsasl2.so.2. The upgrade process resulted in me having /usr/lib64/libsasl2.so.3, so just create a symlink in /usr/lib64 named libsasl2.so.2 pointing to /usr/lib64/libsasl2.so.3 via: ln -s /usr/lib64/libsasl2.so.3 /usr/lib64/libsasl2.so.2
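A sketch of the symlink workaround from that answer, with a quick check afterwards (verify first that libsasl2.so.3 actually exists on your system):

```bash
# Point the missing SONAME at the library the CentOS 7 upgrade left behind.
ln -s /usr/lib64/libsasl2.so.3 /usr/lib64/libsasl2.so.2

# Confirm the symlink is in place and resolves to the new library.
ls -l /usr/lib64/libsasl2.so.2
```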
02-24-2016
09:23 AM
1 Kudo
AAAAH, I see below that you want to write to the local filesystem. Why would you do that? No, this will not work. If you want to unload to the local filesystem, Teradata provides client and unload utilities for that. If you use Sqoop, you want to use MapReduce to store the data in HDFS. So no, this will not work.
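For illustration, a hedged sketch of the HDFS route (host, database, table, and paths are placeholders, and the Teradata JDBC driver/connector needs to be available to Sqoop):

```bash
# Sqoop launches MapReduce tasks that write into HDFS, not onto the
# client's local filesystem. Placeholder connection string and paths below.
sqoop import \
  --connect jdbc:teradata://td-host/DATABASE=mydb \
  --username myuser -P \
  --table MY_TABLE \
  --target-dir /user/myuser/my_table

# If a local copy is really needed, pull it out of HDFS afterwards.
hdfs dfs -get /user/myuser/my_table /tmp/my_table
```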
02-24-2016
09:00 AM
Not really. You mean as a persisted storage layer under Hibernate and EJBs, correct? Hive wouldn't work well for this since it's not an OLTP database; it is a warehouse. So that would most likely leave HBase with Apache Phoenix. I googled it a bit, focusing on Hibernate because that seems to be the most popular recently, and did not find a connector for Phoenix. That doesn't mean it's not possible to write one. Googling a bit more, there is also Hibernate OGM for NoSQL stores; unfortunately it currently does not support HBase. http://hibernate.org/ogm/ So the two possibilities would be to write an extension to OGM for HBase or to write a connector for Apache Phoenix. I wrote one for Netezza a while back and it should not be terribly difficult, although the Phoenix syntax has some differences to standard SQL (UPSERT instead of INSERT ...)
02-23-2016
09:29 PM
2 Kudos
Does this file exist in HDFS: hdfs://hdp/apps/2.3.2.0-2950/mapreduce/mapreduce.tar.gz? Is it possible that no MapReduce application is running? If it doesn't exist, your cluster might have a problem. You can normally find the same file in the local installation as well: /usr/hdp/2.3.2.0-2950/hadoop/mapreduce.tar.gz. Putting it into HDFS at the required location might help; however, it is likely that other files are also not located correctly (Tez libs etc.)
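A quick way to check and, if needed, repair it from a node that has the local HDP installation (paths taken from the post; adjust the version to your cluster):

```bash
# Does the MapReduce framework tarball exist where the job expects it?
hdfs dfs -ls hdfs://hdp/apps/2.3.2.0-2950/mapreduce/mapreduce.tar.gz

# If not, upload the copy from the local installation to that location.
hdfs dfs -mkdir -p hdfs://hdp/apps/2.3.2.0-2950/mapreduce
hdfs dfs -put /usr/hdp/2.3.2.0-2950/hadoop/mapreduce.tar.gz \
  hdfs://hdp/apps/2.3.2.0-2950/mapreduce/
```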
02-22-2016
02:27 PM
You can look in the Tez UI in Ambari or in the HiveServer2 log; the query itself is not in YARN.
02-22-2016
09:21 AM
2 Kudos
Can you check the setting for dfs.datanode.failed.volumes.tolerated in your environment? The default is 0, which is a bit restrictive. Normally 1 or even 2 (on DataNodes with high disk density) makes more operational sense. Then your DataNode will start and you can take care of the disks.
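To see what a node is currently picking up, you can read the value from its local configuration (on an Ambari-managed cluster the property is changed under the HDFS configs and pushed out from there):

```bash
# Prints the value of the property as found in the node's hdfs-site.xml.
hdfs getconf -confKey dfs.datanode.failed.volumes.tolerated
```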