Member since: 06-07-2016
Posts: 923
Kudos Received: 322
Solutions: 115
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4082 | 10-18-2017 10:19 PM |
| | 4336 | 10-18-2017 09:51 PM |
| | 14836 | 09-21-2017 01:35 PM |
| | 1838 | 08-04-2017 02:00 PM |
| | 2418 | 07-31-2017 03:02 PM |
08-24-2016 02:06 AM
Hi @Prashanthi B How do you know it's not able to read the credential cache? It appears that perhaps no kinit was done. Can you please share the steps you take to get to the point where you see this error? What does the following command return on the same machine where you get the error? klist -A
08-23-2016 03:53 PM
@Kumar Veerappan Without Kerberos, pretty much anyone can access your cluster. Your list of users who can access the cluster is anyone who has access to the Linux machines where the cluster is running.
08-20-2016 01:57 AM
@venkat v There are two simple steps if you are using the command line. 1. Do a kinit using a principal that has access to Hadoop. Assuming a principal named "hdp@REALM.COM", run `kinit -k -t <hdp.keytab file> hdp@REALM.COM` if you are using a keytab to log in, or `kinit hdp@REALM.COM` if you are using the principal and its password. 2. Run an ls command to list files, or cat to view contents: `hdfs dfs -ls /user/hdp` (or a different location within HDFS). If you don't like the command line, then 3. Set up HUE to work with Kerberos and browse your files from HUE.
08-19-2016 03:54 PM
1 Kudo
@Kumar Veerappan First, the easy part. Let's assume Kerberos is enabled. Run "listprincs" in "kadmin" to find the service principals. If Kerberos is enabled but LDAP is not, these principals are the users who have access to your cluster. If Kerberos is not enabled, then pretty much all users on your cluster machines should be able to access your cluster.
08-18-2016 04:21 PM
2 Kudos
@vpemawat I think this is a perfect use case to consider offloading your older data into Hadoop (HDP). Have you considered that?
08-18-2016 06:35 AM
3 Kudos
@ripunjay godhani This is most easily done at install time. After installation, your best bet is to create symbolic links. Please see the following thread. https://community.hortonworks.com/questions/4329/log-file-location-is-there-a-way-to-change-varlog.html
08-18-2016 02:11 AM
1 Kudo
@linou zhang Are you a supported Hortonworks customer? If yes, then you should not do this yourself; reach out to Hortonworks support instead. You should not apply the patch on your own, because the Hortonworks support team keeps track of which customers are on which patch, so future help can be based on the exact version and additional patches you have. If you are not a Hortonworks customer, I still don't recommend it, but you can try applying the patch in dev/test and see if it resolves your problem.
08-18-2016 01:46 AM
@narender pasunooti If you are on a Mac, have you updated your /etc/hosts file? On Windows the file is c:\Windows\System32\Drivers\etc\hosts. Anyway, if the Ambari server is running, you can try the following: <ipaddress>:8080 or ddd.xx.xxxxx.com:8080. You need to add the port number.
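A hosts-file entry simply maps the Ambari server's IP address to its hostname. The IP and hostname below are hypothetical placeholders, not the poster's actual values:

```
# /etc/hosts (Mac/Linux) or c:\Windows\System32\Drivers\etc\hosts (Windows)
# hypothetical entry -- substitute your Ambari server's real IP and hostname
192.168.56.101   ambari.example.com
```

With that entry in place, browse to http://ambari.example.com:8080 (the :8080 port suffix is required, as noted above).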
08-17-2016 11:21 PM
1 Kudo
@sunny malik Which HBase version are you using? The default split policy for HBase 0.94 and above is not based purely on size; it is "IncreasingToUpperBoundRegionSplitPolicy". Assuming this is your split policy, and given that your regions are less than 3.5 GB, what you are seeing is expected behavior. This policy splits more aggressively based on the number of regions hosted on the same region server: it computes the max store file size as min(R^2 * "hbase.hregion.memstore.flush.size", "hbase.hregion.max.filesize"), where R is the number of regions of the same table hosted on the same region server. For example, with the default memstore flush size of 128 MB and the default max store size of 10 GB, the first region on the region server is split just after the first flush, at 128 MB. As the number of regions hosted on the region server increases, the split size increases: 512 MB, 1152 MB, 2 GB, 3.2 GB, 4.6 GB, 6.2 GB, and so on. After reaching 9 regions, the computed split size goes beyond the configured "hbase.hregion.max.filesize", at which point the 10 GB split size is used from then on. Please see the following link. http://hortonworks.com/blog/apache-hbase-region-splitting-and-merging/
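The split sizes quoted above can be reproduced with a quick calculation. This is only a sketch of the sizing formula as described in the post (min of R^2 times the flush size and the max file size), not HBase's actual implementation:

```python
# Sketch of the IncreasingToUpperBoundRegionSplitPolicy sizing rule
# described above: split size = min(R^2 * flush_size, max_file_size),
# where R is the number of regions of the table on that region server.

FLUSH_SIZE_MB = 128          # default hbase.hregion.memstore.flush.size
MAX_FILESIZE_MB = 10 * 1024  # default hbase.hregion.max.filesize (10 GB)

def split_size_mb(regions: int) -> int:
    """Store-file size (MB) at which a split is triggered with R regions."""
    return min(regions ** 2 * FLUSH_SIZE_MB, MAX_FILESIZE_MB)

for r in range(1, 10):
    print(r, split_size_mb(r))
```

Running it reproduces the sequence in the answer: 128 MB, 512 MB, 1152 MB, 2048 MB (2 GB), 3200 MB (3.2 GB), 4608 MB (4.6 GB), 6272 MB (6.2 GB), and from the 9th region on, the 10 GB cap applies.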
08-17-2016 06:00 AM
@Harini Yadav Can you try the following? It basically disables UDP traffic to Kerberos. In krb5.conf set udp_preference_limit = 1, and in kdc.conf set kdc_tcp_ports = 750,88.
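Laid out as config fragments, assuming the standard file locations (/etc/krb5.conf on the client, and kdc.conf on the KDC host):

```
# /etc/krb5.conf -- capping UDP at 1 byte forces all Kerberos traffic to TCP
[libdefaults]
    udp_preference_limit = 1

# kdc.conf (on the KDC host) -- TCP ports the KDC listens on
[kdcdefaults]
    kdc_tcp_ports = 750,88
```

Restart the KDC after changing kdc.conf so the new listener ports take effect.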