Member since: 08-16-2016
Posts: 642
Kudos Received: 131
Solutions: 68
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 3433 | 10-13-2017 09:42 PM |
 | 6196 | 09-14-2017 11:15 AM |
 | 3178 | 09-13-2017 10:35 PM |
 | 5101 | 09-13-2017 10:25 PM |
 | 5736 | 09-13-2017 10:05 PM |
02-02-2023
03:32 AM
@45, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
03-17-2021
05:21 PM
1 Kudo
This error shows up if you have selected Sentry/Ranger as a dependency but have not set the following config to true (i.e., Kerberos was not enabled): kerberos.auth.enable
10-01-2020
04:29 AM
HBase stores data as a map sorted by key. It is best thought of as a persistent, multidimensional, sorted map, where each cell is indexed by a row key and a column key (family and qualifier). A row key is immutable and uniquely identifies a row, whose data may span multiple HFiles. Row keys are treated as byte arrays (byte[]) and are kept in sorted order in this multidimensional sorted map. Given a row key, HBase can identify the node where the data is present. Computation can then run on the same node where the key is present, which is why performance with technologies like Spark is so good. This is called data locality.
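The sorted-map model described above can be sketched in plain Python. This is a toy illustration only, not the HBase API; the table, put, and scan names are invented for the example:

```python
# Toy model of HBase's data layout (illustration only, not the HBase API):
# a map of row_key -> {(family, qualifier): value}, where row keys are
# byte arrays and scans visit them in sorted (byte-wise) order.

table = {}

def put(row_key: bytes, family: str, qualifier: str, value: bytes) -> None:
    # Each cell is addressed by (row key, column family, column qualifier).
    table.setdefault(row_key, {})[(family, qualifier)] = value

def scan():
    # Rows come back in sorted row-key order; this ordering is what
    # makes key lookups and range scans efficient in HBase.
    for row_key in sorted(table):
        yield row_key, table[row_key]

put(b"row-2", "cf", "name", b"bob")
put(b"row-1", "cf", "name", b"alice")

rows = [rk for rk, _ in scan()]
```

Even though `row-2` was inserted first, a scan yields `row-1` before `row-2`, mirroring HBase's sorted storage.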
02-12-2020
05:18 AM
Hi, did you find any solution for this? I am having the same issue accessing Hive through Python. Thanks, HadoopHelp
01-05-2020
06:33 AM
Hi, the parameters spark.executor.memory and spark.yarn.executor.memoryOverhead can be set in the spark-submit command, or in the Advanced configurations. Thanks, AKR
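For example, a spark-submit invocation might look like the sketch below. The application file name and the memory values are placeholders; note that spark.yarn.executor.memoryOverhead is specified in MB on the Spark versions that use that property name:

```shell
spark-submit \
  --master yarn \
  --conf spark.executor.memory=4g \
  --conf spark.yarn.executor.memoryOverhead=512 \
  my_app.py
```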
12-04-2019
08:35 AM
Do you have documentation for this?
07-25-2019
10:00 AM
This is a bit late, but I am posting the solution that worked for me. The problem was the hostnames: Impala with Kerberos requires the hostnames to be in lowercase.
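A quick way to check for this is sketched below (hostname flag support varies by OS, so treat this as a rough check rather than a definitive test):

```shell
# Detect uppercase letters in the fully qualified hostname, which can
# break Kerberos principal matching for Impala.
host="$(hostname -f)"
lower="$(printf '%s' "$host" | tr '[:upper:]' '[:lower:]')"
if [ "$host" != "$lower" ]; then
  echo "Hostname '$host' contains uppercase letters; expected '$lower'"
fi
```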
06-14-2019
03:39 AM
Spark 2 is now the only Spark supported by CDH 6.x, so I am not sure you will get any reply here. Is there any reason you are still on Spark 1.6.x?
06-06-2019
11:28 PM
yarn logs -applicationId <application ID> should help. This typically occurs due to improper container memory allocation relative to the physical memory available on the cluster.