Member since
01-25-2017
Posts: 396
Kudos Received: 28
Solutions: 11
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 365 | 10-19-2023 04:36 PM
 | 3618 | 12-08-2018 06:56 PM
 | 4482 | 10-05-2018 06:28 AM
 | 16999 | 04-19-2018 02:27 AM
 | 17021 | 04-18-2018 09:40 AM
06-15-2017
03:03 PM
Seems this option is also not working at all:
impala-shell -i xxxx -q request_pool=new_pool; select ...
I also tried to run:
impala-shell -i xxxx -q request_pool=new_pool; impala-shell -i xxxx -q select ...
impala-shell -i xxxx -q "request_pool=new_pool" "select ..."
impala-shell -i xxxx -q "request_pool=new_pool"; -q "select ..."
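A minimal sketch of one more variant to try, assuming a placeholder pool name and table: putting the SET and the query in a single -q string keeps them in the same impala-shell session, so the pool setting is still in effect when the query is submitted.

```
# Sketch only: "new_pool" and "my_table" are placeholders.
# Both statements run in the same impala-shell session, so the
# SET applies to the SELECT that follows it.
impala-shell -i xxxx -q "set request_pool=new_pool; select count(*) from my_table;"
```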
06-15-2017
02:26 PM
No. This is the highest-priority placement rule:
1. Use the pool specified at run time, and create the pool if it does not exist.
2. Use the pool root.[username], only if the pool exists.
3. Use the pool root.default.
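As a rough illustration only, assuming the pools are managed through a YARN fair-scheduler.xml-style placement policy (your setup may differ), the three rules above would map to something like this, in priority order:

```
<!-- Sketch only; rule order is the priority order listed above. -->
<queuePlacementPolicy>
  <rule name="specified" create="true"/>   <!-- pool given at run time, create if missing -->
  <rule name="user" create="false"/>       <!-- root.[username], only if it already exists -->
  <rule name="default"/>                   <!-- fall back to root.default -->
</queuePlacementPolicy>
```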
06-15-2017
01:59 PM
Sure, I have a placement rule: "Use the pool specified at run time and create the pool if it does not exist." I also see that the pool gets set, but the queries still run in the default pool. Did you get it resolved?
06-14-2017
04:09 PM
@MSharma Did you find a solution for this? I'm still stuck with it.
06-14-2017
04:05 PM
Hi, I'm using set request_pool= to run an Impala query in a specific resource pool, but with no success. When I connect to impala-shell and run set request_pool=xxx, I see REQUEST_POOL set to xxx, but after running my query I see it is still running under the default pool. Also, how can I pass this setting when I run impala-shell non-interactively, like impala-shell -i xxxx -q yyyy? Can I add the set request_pool to that command? Thanks in advance.
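For context, a minimal sketch of the interactive flow described above; the host, pool, and table names are placeholders:

```
# Sketch only: xxxx, my_pool and my_table are placeholders.
impala-shell -i xxxx
[xxxx:21000] > set request_pool=my_pool;
[xxxx:21000] > select count(*) from my_table;
```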
Labels:
- Apache Impala
06-14-2017
03:20 PM
Does it fail at the map phase or at the reduce phase? MR jobs that run with 6 GB for the mappers and 12 GB for the reducers and still fail need a review. How much data is the MR job running on, and how many MR jobs do you have? 18K mappers also indicates that you have a lot of small files in HDFS. Your cluster consumption of ~300 vcores and 3 TB of memory means each vcore is running with an average of 10 GB of memory; how many nodes do you have? From your stats it seems your jobs are memory intensive rather than vcore intensive, which mainly shows up with Spark jobs, and you can tune this to get the optimal vcores and memory for each Spark job. I suspect you have one specific runaway job; if you are using Cloudera Manager, I suggest searching for it in the applications page, or try running the jobs under pools and then identifying the problematic ones.
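As a rough sketch of the kind of per-job tuning meant above (the class, jar, pool, and sizes below are placeholders, not recommendations for this cluster):

```
# Sketch only: explicit executor sizing keeps the memory-to-vcore ratio
# of each Spark job under control instead of letting one job take a
# disproportionate share of the cluster's memory.
spark-submit \
  --master yarn \
  --queue my_pool \
  --class com.example.MyJob \
  --num-executors 20 \
  --executor-cores 4 \
  --executor-memory 8G \
  my-job.jar
```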
06-12-2017
08:47 AM
Hi guys, is there an alternative to the --jars option of spark-submit in the Spark notebook in Hue?
06-12-2017
08:46 AM
1 Kudo
Hi guys, is there an alternative to the --jars option of spark-submit in the Spark notebook in Hue?
06-10-2017
05:58 AM
I have the same issue and haven't found a solution for it yet. For now I added a cron job that restarts the ntpd service on all the servers every hour. This issue prevents me from going to production with Kudu, as it doesn't make sense to do that restart on 50 nodes every time.
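For reference, a minimal sketch of that workaround as a root crontab entry on each node; the service command may differ depending on the OS:

```
# Sketch only: restart ntpd at the top of every hour to keep clock drift
# within tolerance; on systemd hosts use "systemctl restart ntpd" instead.
0 * * * * /sbin/service ntpd restart
```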
05-11-2017
01:20 PM
1 Kudo
@mageru9 https://www.cloudera.com/documentation/enterprise/release-notes/topics/cm_rn_known_issues.html