Member since: 01-18-2019
Posts: 10
Kudos Received: 1
Solutions: 1

My Accepted Solutions

Title | Views | Posted |
---|---|---|
 | 1207 | 01-22-2020 02:56 AM |
01-22-2020 02:56 AM
1 Kudo
Hello, In older versions of Ambari (HDP 2.6.5) we've seen Ambari use the default Java cert/truststore pair instead of the values specified in the Ambari UI. Please try adding the cert to the default Java cert store. Also, have you tried running ambari-server setup-security and specifying the cert path? Additionally, on HDP 3.1 we've noticed that the node cert should be the only cert in the store for Ambari to extract the correct cert, i.e. a store holding all the node certs will not let Ambari extract the correct cert for the corresponding node. HTH Best, Lyubomir
01-22-2020 12:21 AM
Hi @mike_bronson7 Please try typing ls / after starting the shell. Best, Lyubomir
01-21-2020 11:40 PM
Hi @Prakashcit Please check hiveserver2.log and send us the error message, if any, when grepping for the user running the select. If there is a Hive or HDFS permission issue, it should be visible in the log. Best, Lyubomir
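For example (the log path and username below are assumptions; substitute your actual HiveServer2 log, often /var/log/hive/hiveserver2.log, and the querying user). A self-contained sketch using a sample log line:

```shell
# A sample log stands in for the real hiveserver2.log (assumption).
log=/tmp/hiveserver2.sample.log
printf '%s\n' \
  'INFO  SessionState: session opened for user alice' \
  'ERROR Authorization failed: Permission denied: user=alice, access=SELECT' \
  > "$log"

# Grep for the user running the select, then narrow to permission errors.
grep 'alice' "$log" | grep -iE 'error|denied|permission'
```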
01-20-2020 04:28 AM
Hello, Please try the full path to the table, i.e.:

select * from dbname.tablename;

Or:

use dbname; (switch to said db)
select * from tablename; (select from the table in the db selected above)

Best, Lyubomir
01-17-2020 04:36 AM
Hi Sudhnidra, Please take a look at:

https://blog.cloudera.com/yarn-fairscheduler-preemption-deep-dive/
https://blog.cloudera.com/untangling-apache-hadoop-yarn-part-3-scheduler-concepts/
https://clouderatemp.wpengine.com/blog/2016/06/untangling-apache-hadoop-yarn-part-4-fair-scheduler-queue-basics/

Which FairShare figure are you looking at: Steady FairShare or Instantaneous FairShare? And what is the weight of the default queue you are submitting your apps to?

Best, Lyubomir
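For reference, queue weights live in the Fair Scheduler allocation file (fair-scheduler.xml); a sketch with illustrative queue names and values, not taken from your cluster:

```xml
<allocations>
  <queue name="default">
    <!-- Illustrative: with both queues busy, "default" gets twice the
         fair share of "batch" (weight 2.0 vs 1.0). -->
    <weight>2.0</weight>
  </queue>
  <queue name="batch">
    <weight>1.0</weight>
  </queue>
</allocations>
```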
01-17-2020 01:59 AM
Hi Sudhinra, Thank you for the update. Can you share the SparkConf you use for your applications? The following settings should work for small-resource apps (note that dynamic allocation is disabled):

conf = (SparkConf().setAppName("simple")
        .set("spark.shuffle.service.enabled", "false")
        .set("spark.dynamicAllocation.enabled", "false")
        .set("spark.cores.max", "1")
        .set("spark.executor.instances", "2")
        .set("spark.executor.memory", "200m")
        .set("spark.executor.cores", "1"))

From: https://stackoverflow.com/questions/44581585/warn-cluster-yarnscheduler-initial-job-has-not-accepted-any-resources

PS: Please share the number of cores available on your nodes; spark.executor.cores should not be higher than the number of cores available on each node. Also, are you running Spark in cluster or client mode?

HTH Best, Lyubomir
01-10-2020 03:54 AM
Hi @ssk26 From my perspective, you are limiting your default queue to a minimum of 1024 MB / 0 vCores and a maximum of 8196 MB / 0 vCores. In both cases no cores are allowed. When you try to run a job, it requests 1024 MB of memory and 1 vCore; the 1 vCore cannot be allocated because of the 0-vCore min/max restriction, so YARN reports 'exceeds maximum AM resources allowed'. That's why I think the issue is with the core allocation and not with memory. HTH Best, Lyubomir
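If that reading is right, the fix would be along these lines in the Fair Scheduler allocation file (values illustrative; the point is a non-zero vCore allowance):

```xml
<queue name="default">
  <!-- Keep your existing memory limits, but allow at least 1 vCore. -->
  <minResources>1024 mb, 1 vcores</minResources>
  <maxResources>8196 mb, 8 vcores</maxResources>
</queue>
```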
01-08-2020 06:43 AM
Hello, In your screenshot, <queueMaxResourcesDefault> is set to 8192 mb, 0 vcores, while your job requires at least 1 vcore, as shown in the Diagnostics section. Please try increasing the vcore count in <queueMaxResourcesDefault> and run the job again. Best, Lyubomir
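A sketch of the change (the vCore count is illustrative):

```xml
<queueMaxResourcesDefault>8192 mb, 4 vcores</queueMaxResourcesDefault>
```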
01-08-2020 06:34 AM
Hello, Are you using AD or OpenLDAP? Although it relates to an older version, please check this thread: http://mail-archives.apache.org/mod_mbox/hive-user/201308.mbox/%3CCAHxLZBX1OrUgY4RJCd6TkZ1xrV2ekKLb9XU1Cj=vgXuPay7DCA@mail.gmail.com%3E HTH
01-08-2020 06:22 AM
The provided workaround is tested and works on HDP 3.1. Keep in mind that in our case Zeppelin reverts to its default interpreter settings after a restart of the service.