Member since: 09-02-2016
Posts: 523
Kudos Received: 89
Solutions: 42
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2339 | 08-28-2018 02:00 AM |
| | 2189 | 07-31-2018 06:55 AM |
| | 5105 | 07-26-2018 03:02 AM |
| | 2463 | 07-19-2018 02:30 AM |
| | 5914 | 05-21-2018 03:42 AM |
07-16-2017
10:34 PM
@saranvisa Thank you very much for your reply. Yes, the links are pointing to the old CDH version of Hive. I also found that all the links managed by alternatives are pointing to the old CDH version (5.4.8). Is there any way to change all the links to point to the new CDH version in a single attempt? I would also like to know the cause of this issue. Thank you, J.Ganesh Kumar.
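For anyone hitting the same thing, this is roughly how to inspect and repoint a single link with the alternatives tool (a sketch only; the parcel path and priority below are examples, substitute your actual CDH version):

```bash
# Show which parcel the hive link currently resolves to
alternatives --display hive

# Re-register the hive alternative against the new parcel with a higher priority
alternatives --install /usr/bin/hive hive \
  /opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/bin/hive 20

# Verify the link now resolves to the new version
readlink -f /usr/bin/hive
```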
07-15-2017
09:38 PM
@Msdhan You're welcome! :))
07-12-2017
01:53 PM
@SpoorthyB Can you execute your query directly in Impala/Hive? It may need detailed analysis, because:
1. If your answer to the above question is yes, the similar issue "Resource temporarily unavailable error message from Impala shell" has already been discussed in this link: https://community.cloudera.com/t5/Interactive-Short-cycle-SQL/Resource-temporarily-unavailable-error-message-from-Impala-shell/m-p/27473#M874
2. If your answer is no, then it might be an issue with ODBC, or a network capacity issue for a table of huge size.
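To rule out the ODBC layer, you can run the query straight from the command line; a sketch, assuming an unsecured cluster (the host names and table name are illustrative):

```bash
# Run the query directly through impala-shell, bypassing ODBC
impala-shell -i impalad-host.example.com -q "SELECT COUNT(*) FROM your_table"

# Or through HiveServer2 via beeline
beeline -u jdbc:hive2://hs2-host.example.com:10000 -e "SELECT COUNT(*) FROM your_table"
```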
07-12-2017
01:20 PM
The query executes with the MapReduce engine and I get the desired result. The error happens when I switch to the Spark engine.
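For reference, this is how the two runs differ; if only the second one fails, the problem is in the Hive-on-Spark setup rather than the query itself (the query here is a stand-in for the real one):

```bash
# Same query under both execution engines
hive --hiveconf hive.execution.engine=mr    -e "SELECT COUNT(*) FROM your_table"
hive --hiveconf hive.execution.engine=spark -e "SELECT COUNT(*) FROM your_table"
```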
07-11-2017
11:58 PM
Hi yueyang, Can you please share your impyla source code and the error you are getting? That will help me understand why it failed. Thanks
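In the meantime, here is a minimal impyla smoke test to compare against; the host name is an assumption, and 21050 is the usual HiveServer2 port that Impala exposes for impyla:

```python
# Minimal impyla check: connect, run a trivial query, print the result
from impala.dbapi import connect

conn = connect(host='impalad-host.example.com', port=21050)
cur = conn.cursor()
cur.execute('SELECT 1')
print(cur.fetchall())  # expect [(1,)]
cur.close()
conn.close()
```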
07-02-2017
07:33 PM
@Freakabhi You can consider a few more points before choosing one of the approaches:
1. Number of records: approach 1 is fine for a very large number of records, and approach 2 is OK for fewer records.
2. How to handle it if something goes wrong: the 4th step in approach 2 deletes the base table and recreates it with new data. Suppose you notice an issue with the data after a couple of days; how would you get the deleted base_table back? If you have an answer to that, go for approach 2.
3. Approach 3: you are choosing approach 1 because HBase supports updates but Hive does not (I guess this is your understanding). That understanding was correct for old Hive versions, but UPDATE is available starting with Hive 0.14 (see the sketch after this list): https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Update
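Note that UPDATE only works on transactional tables; a minimal sketch, assuming the cluster already has the Hive ACID settings enabled (the table and column names are illustrative):

```bash
# UPDATE requires a bucketed ORC table with transactions enabled (Hive 0.14+)
hive -e "
CREATE TABLE base_table (id INT, val STRING)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC TBLPROPERTIES ('transactional'='true');

UPDATE base_table SET val = 'new_value' WHERE id = 1;
"
```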
06-23-2017
11:44 AM
1 Kudo
@VincentSF Oh OK, got it... Go to CM -> YARN -> Configuration -> search for "yarn.nodemanager.resource.memory-mb". It will show you the memory limit you have set for each node (it reads the configuration from yarn-site.xml). You can tweak this a little.
Notes:
1. The memory is shared by all the services, so you cannot use all of it for YARN alone. Also, don't increase this setting too much, because it may create memory contention across the services. You could set it to approximately 50% of total memory, but that depends on the memory utilization of the other services. Since you have 183 nodes, 50% will not be right for every node; it will change case by case.
2. Also, when you increase the memory on each node, keep yarn.scheduler.maximum-allocation-mb in mind: it caps the largest single container, and it is not recommended to set it higher than the node limit above.
Hope this gives you some idea.
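For reference, these are the two properties as they land in yarn-site.xml; the values below are only examples (roughly 50% of a 64 GB node):

```xml
<!-- Total memory the NodeManager may hand out to containers on this node -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>32768</value>
</property>

<!-- Largest single container the scheduler will grant; keep it <= the value above -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>16384</value>
</property>
```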
06-09-2017
08:26 AM
@sungsik2 Please refer to this link; it may help you: https://community.cloudera.com/t5/Cloudera-Manager-Installation/Error-JAVA-HOME-is-not-set-and-Java-could-not-be-found/td-p/18974
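The usual fix is to point JAVA_HOME at the installed JDK on the affected host; a sketch (the JDK path below is an example, adjust it to your installation):

```bash
# Point JAVA_HOME at the installed JDK and verify it resolves
export JAVA_HOME=/usr/java/jdk1.8.0_144
$JAVA_HOME/bin/java -version
```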
06-05-2017
08:49 AM
@SatishS HiveServer2 was introduced in Hive version 0.11: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients
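For completeness, connecting to HiveServer2 with Beeline looks like this; the host, port, and user are illustrative (10000 is the common default port):

```bash
# Connect to HiveServer2 over JDBC and run a quick sanity check
beeline -u jdbc:hive2://hs2-host.example.com:10000 -n yourusername -e "SHOW DATABASES;"
```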
06-05-2017
01:39 AM
I was able to solve this problem; I was getting the concept wrong. So, what I did was (a sketch of steps 2, 4, and 5 follows below):
1. I created a keytab file for my current user, i.e. "cloudera".
2. The jaas.conf should reference that keytab, not the hdfs keytab.
3. I had to add this user and the hdfs user to the YARN allowed users through CM. I also added the solr user there.
4. kinit as the cloudera user.
5. Put the required files in the /user/cloudera directory in HDFS.
6. Run the job.
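What those steps look like in practice; the entry name, principal, realm, and paths are all illustrative and vary by application:

```bash
# Step 2: a jaas.conf pointing at the user's own keytab, not the hdfs one
cat > jaas.conf <<'EOF'
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/home/cloudera/cloudera.keytab"
  principal="cloudera@EXAMPLE.COM";
};
EOF

# Step 4: authenticate as the cloudera user with that keytab, then verify
kinit -kt /home/cloudera/cloudera.keytab cloudera@EXAMPLE.COM
klist

# Step 5: stage the required files in HDFS under /user/cloudera
hdfs dfs -put localfiles/ /user/cloudera/
```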