Member since: 08-16-2016
Posts: 642
Kudos Received: 131
Solutions: 68
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4029 | 10-13-2017 09:42 PM |
| | 7610 | 09-14-2017 11:15 AM |
| | 3943 | 09-13-2017 10:35 PM |
| | 6167 | 09-13-2017 10:25 PM |
| | 6724 | 09-13-2017 10:05 PM |
06-05-2018
05:37 AM
Of course I wondered what @KamaJinny was saying, so I figured others might as well. Here is the translation via Google Translate:
06-03-2018
10:35 PM
Can you share your config.ini showing how you added multiple entries there, please?
04-16-2018
12:11 PM
Use mapred:hadoop as the user:group for the logs in /tmp.
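A minimal sketch of that ownership fix, assuming the logs live under /tmp/logs (the exact path is not given in the post, so /tmp/logs here is a placeholder):

```bash
# Recursively set the owner to mapred and the group to hadoop.
# /tmp/logs is a hypothetical path; substitute the actual log directory.
sudo chown -R mapred:hadoop /tmp/logs
```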
04-11-2018
05:18 AM
Use this conf in hive-site.xml:

    <property>
      <name>hive.async.log.enabled</name>
      <value>false</value>
    </property>
03-21-2018
05:20 AM
Hi all, I am also facing the same issue. After enabling Kerberos, the Impala services do not start. I tried to start them manually, but that failed. In the UI I can see all 3 instances in the started state, but when I run the impala-shell command I get the error. The Java processes for Impala are also not present. My hostname is also in lowercase. Please help me troubleshoot this issue. Thanks in advance.
03-09-2018
04:57 AM
1 Kudo
For other people reading this in 2018 and beyond, NB https://issues.apache.org/jira/browse/HIVE-9452 and https://issues.apache.org/jira/browse/HIVE-17234. Essentially, AFAIK, development of an HBase-backed metastore has stalled.
02-15-2018
09:49 PM
We are trying the same thing. However, when we execute our code, it asks for the Kerberos credentials. Is there any way to authenticate the Solr user using keytab files?
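For anyone hitting the same prompt, one common approach is to obtain a Kerberos ticket non-interactively from a keytab before running the client. A minimal sketch, assuming a keytab at /etc/security/keytabs/solr.service.keytab and a principal solr/host.example.com@EXAMPLE.COM (both are placeholders; adjust for your cluster):

```bash
# Obtain a TGT from a keytab instead of typing a password.
# Keytab path and principal are hypothetical examples.
kinit -kt /etc/security/keytabs/solr.service.keytab solr/host.example.com@EXAMPLE.COM

# Verify the ticket was granted
klist
```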
02-05-2018
10:07 PM
Hi, referring to the last step, do you encounter a Permission denied error when doing scp?

    sudo scp user@cluster:/etc/hadoop/conf/* /etc/hadoop/conf

I managed to copy all the files inside /conf except for container-executor.cfg, which shows the message below in the terminal:

    scp: /etc/hadoop/conf/container-executor.cfg: Permission denied
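Note that sudo in that command only elevates the local scp process; on the remote side the copy still runs as user, and container-executor.cfg is typically root-owned with restrictive permissions. A hedged workaround sketch, streaming the file through a remote sudo instead (hostname and paths taken from the command above):

```bash
# Read the root-only file via sudo on the remote host and write it
# locally via sudo tee. Assumes passwordless sudo on the remote host;
# otherwise add -t to ssh so sudo can prompt for a password.
ssh user@cluster 'sudo cat /etc/hadoop/conf/container-executor.cfg' \
  | sudo tee /etc/hadoop/conf/container-executor.cfg > /dev/null
```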
01-30-2018
04:45 AM
Sorry, this is nearly a year later, but the behavior you're seeing is likely that spark.executor.instances and --num-executors no longer disable dynamic allocation in 2.x. We now have to explicitly set spark.dynamicAllocation.enabled to false; otherwise Spark just uses the value in the aforementioned properties as the initial executor count but still continues to use dynamic allocation to scale the count up and down. That may or may not explain your situation, as you mentioned playing with some of those properties.

Additionally, the remaining ~13,000 tasks you describe don't necessarily mean that there are 13,000 pending tasks; a large portion of those could be for future stages that depend on the current stage. When you see the number of executors reduced, it's likely that the current stage was not using all the available executors, so they reached the idle limit and were released.

You will want to explicitly disable dynamic allocation if you want a static number of executors. You'll also likely want to check whether there's a low task count at the time of the "decay" and look at the data to figure out why; that could potentially be resolved by simply repartitioning the RDD/DS/DF in the stage that has the low number of partitions. Alternatively, there could be a resource management configuration issue, or perhaps an actual bug related to the behavior, but I would start with the assumption that it's related to the config, data, or partitioning.
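A rough sketch of the static-executor fix described above (the property names are real Spark settings; the executor count, resources, and application file are made-up examples):

```bash
# Pin a static number of executors by disabling dynamic allocation
# explicitly; with it left enabled, --num-executors only sets the
# initial count and Spark still scales executors up and down.
spark-submit \
  --conf spark.dynamicAllocation.enabled=false \
  --num-executors 50 \
  --executor-cores 4 \
  --executor-memory 8g \
  my_app.py
```

If you instead keep dynamic allocation, a repartition() on the low-partition stage is the usual way to give it enough tasks to keep the executors busy past the idle timeout.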