Member since: 04-06-2016
Posts: 47
Kudos Received: 7
Solutions: 4

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 6000 | 12-02-2016 10:20 PM |
 | 4646 | 11-23-2016 08:59 PM |
 | 1160 | 07-26-2016 03:11 AM |
10-04-2019
04:19 PM
If you get the error below, make sure you use the NiFi host FQDN in the API call and NOT the IP address. Also make sure DNS is configured correctly.

    HTTP ERROR 401
    Problem accessing /nifi-api/access/kerberos. Reason: Unauthorized
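For reference, a minimal sketch of requesting a Kerberos access token from NiFi with curl; the host nifi01.example.com and port 9091 are placeholders, and -k (which skips TLS verification) is for lab use only:

```bash
# Obtain a Kerberos ticket first so curl can do SPNEGO negotiation.
kinit myuser@EXAMPLE.COM

# POST to the token endpoint using the node's FQDN, never its IP address.
curl -k --negotiate -u : -X POST \
  "https://nifi01.example.com:9091/nifi-api/access/kerberos"
```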
02-17-2017
12:11 PM
Try increasing the heap size for the Hive Metastore. Also make sure the database connection to the Metastore is working fine.
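A minimal sketch of what that looks like in hive-env.sh, assuming an HDP-style layout; the 2048 MB value is illustrative, and on an Ambari-managed cluster you would change this under Hive > Configs rather than editing the file by hand:

```bash
# hive-env.sh: raise the Metastore JVM heap (value in MB, illustrative only).
if [ "$SERVICE" = "metastore" ]; then
  export HADOOP_HEAPSIZE=2048
fi
```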
02-16-2017
05:59 AM
@rahul gulati Which version of Ambari? Is your cluster Kerberized? Is Ambari SSL enabled? Is this on a local Ambari cluster? Can you share all the settings from your File View?
12-02-2016
10:20 PM
1 Kudo
@Manish Gupta Try adding hive-metastore.jar as well to the SQuirreL jar list.
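As a rough sketch, these are the jars typically registered in the SQuirreL driver definition for Hive JDBC; the HDP paths and version wildcards are assumptions, so match them to your cluster:

```bash
# Locate the jars to add to the SQuirreL driver's Extra Class Path (paths assume HDP).
ls /usr/hdp/current/hive-client/lib/hive-jdbc-*-standalone.jar \
   /usr/hdp/current/hive-client/lib/hive-metastore-*.jar \
   /usr/hdp/current/hadoop-client/hadoop-common-*.jar
# Driver class: org.apache.hive.jdbc.HiveDriver
# URL example:  jdbc:hive2://<hiveserver2-host>:10000/default
```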
11-23-2016
08:59 PM
Depends on what arguments you are providing to the hash function. If your argument values are unique, you will most likely get a unique value from hash(). Keep in mind Hive's hash function returns an int (32-bit), so you may see negative numbers as well, and distinct inputs can still collide. You can use something like reflect('java.util.UUID','randomUUID') to generate a unique ID, or come up with your own unique code. I would not suggest using the hash function if you want to generate unique IDs.
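A quick illustration of the difference; the table and column names here are made up:

```bash
# hash() returns a 32-bit int (possibly negative, possibly colliding);
# reflect() calls java.util.UUID.randomUUID() for a unique string ID.
hive -e "
SELECT hash(col_a, col_b)                      AS hash_id,
       reflect('java.util.UUID','randomUUID')  AS row_uuid
FROM   src_table;"
```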
08-03-2016
03:49 AM
Try running the set; command. It should display the values of all the variables in the current session.
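For example, to dump everything and filter for a single property (the property name is just an example):

```bash
# Print all session variables, then grep for one of interest.
hive -e "set;" | grep hive.execution.engine
```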
07-29-2016
04:27 PM
Hi Upendra, the recommendation is to use VARCHAR and integer types (TINYINT, SMALLINT, INT, BIGINT) wherever possible instead of STRING. In Hive, a STRING is treated as VARCHAR(32762), so if your data is never more than, say, 50 characters long, using STRING carries some overhead. The same reasoning applies to the integer types: pick the smallest one that fits. Hope this helps.
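A small, hypothetical DDL sketch of the idea; the table and column names are made up:

```bash
hive -e "
CREATE TABLE customers (
  customer_code VARCHAR(50),  -- max length is known, so avoid STRING
  age           TINYINT,      -- small numeric range, so avoid INT/BIGINT
  order_count   INT
);"
```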
07-28-2016
01:45 AM
Just starting to understand Spark memory management on YARN, and I have a few questions that I thought would be better asked of the experts here.

1. Is there a way to restrict the maximum size users can request for Spark executors and the driver when submitting jobs on a YARN cluster?
2. What is the best practice for determining the number of executors required for a job? Is there a maximum limit that users can be restricted to?
3. How does the RM handle resource allocation if most of the resources in a queue are consumed by Spark jobs? How is preemption handled?
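For context, a sketch of the settings these questions are circling; all values and the job name are illustrative, not recommendations:

```bash
# YARN side (yarn-site.xml): caps the largest container any job can request,
# which bounds Spark executor/driver sizes too.
#   yarn.scheduler.maximum-allocation-mb = 16384
#   yarn.scheduler.maximum-allocation-vcores = 8

# Spark side: what a user asks for at submit time, subject to the caps above.
spark-submit \
  --master yarn \
  --num-executors 4 \
  --executor-memory 4g \
  --driver-memory 2g \
  my_job.py
```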
Labels:
- Apache Spark
- Apache YARN
07-26-2016
03:11 AM
2 Kudos
You can deploy a master/slave KDC; that will provide HA. I have done this before. You can set up replication between the master and the slave: http://www.tldp.org/HOWTO/Kerberos-Infrastructure-HOWTO/server-replication.html HTH
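A minimal sketch of the propagation step, assuming MIT Kerberos with RHEL-default paths and a hypothetical slave kdc2.example.com already running kpropd with a matching kpropd.acl:

```bash
# On the master KDC: dump the database and push it to the slave.
kdb5_util dump /var/kerberos/krb5kdc/slave_datatrans
kprop -f /var/kerberos/krb5kdc/slave_datatrans kdc2.example.com
# In practice, run this pair from cron so the slave stays current.
```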
07-14-2016
02:40 PM
If this fixed your issue, can you accept this as the answer? It would help others in the community.