Member since: 03-15-2018
Posts: 27
Kudos Received: 2
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
| 208 | 05-16-2018 07:51 AM
07-25-2018
09:46 AM
We must enable NameNode HA before setting up HDFS Federation. What are the advantages and the conceptual reasons for this ordering?
07-23-2018
01:54 PM
Files View would not open after enabling HDFS Federation. Is setting up a separate view for each nameservice the only solution? Also, after creating a directory, do we always have to mount it?
07-17-2018
12:28 PM
@Michael Bronson The documentation has recommended partition sizes for / and /var. /var mostly holds your logs, which can take up a lot of space. AFAIK swap should be disabled and swappiness should be set to 0; a sketch follows.
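A minimal sketch of setting swappiness to 0 on a CentOS/RHEL-style node (the drop-in file name is illustrative):
# apply immediately (run as root)
sysctl -w vm.swappiness=0
# persist across reboots via a hypothetical sysctl drop-in
echo 'vm.swappiness = 0' > /etc/sysctl.d/99-swappiness.conf
# and, per the recommendation above, disable swap entirely
swapoff -a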
07-10-2018
11:19 PM
Hi @Michael Bronson You can calculate the requirements based upon the amount and type of data using the following guide: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.3/bk_cluster-planning/bk_cluster-planning.pdf
07-10-2018
11:04 PM
@Lian Jiang I did authenticate Knox using PAM. I created an ACL giving read access on the /etc/shadow file only to the knox user (a sketch follows). Alternatively, you can try creating a link to /etc/shadow and giving read access on that link. Links that I referred to:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_security/content/setting_up_pam_authentication.html
https://www.ibm.com/support/knowledgecenter/en/SSPT3X_4.2.5/com.ibm.swg.im.infosphere.biginsights.admin.doc/doc/admin_knox_ldap_pam.html
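A minimal sketch of that ACL grant, assuming the Knox service runs as the local user knox:
# give only the knox user read access to /etc/shadow via a POSIX ACL
setfacl -m u:knox:r /etc/shadow
# verify the resulting ACL
getfacl /etc/shadow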
06-04-2018
06:57 PM
@Vinay K In a one-way trust between a trusted domain (the AD domain) and a trusting domain (the MIT KDC), users and computers in the trusted domain can access resources in the trusting domain, but users in the trusting domain cannot access resources in the trusted domain. So basically you tell your MIT KDC to trust AD users to access resources in your cluster. Service access happens the same way as for MIT KDC users: the service asks Kerberos to authenticate the user; Kerberos checks the user's domain, and if the user is from a trusted domain, Kerberos asks AD/LDAP to authenticate. If AD authenticates the user, Kerberos trusts that user, and so does your service. A sketch of the relevant krb5.conf sections follows.
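A minimal sketch of the cross-realm pieces of krb5.conf, assuming a hypothetical MIT realm CLUSTER.EXAMPLE.COM and AD realm AD.EXAMPLE.COM (the one-way trust also requires the matching krbtgt/CLUSTER.EXAMPLE.COM@AD.EXAMPLE.COM principal on the AD side):
[domain_realm]
.ad.example.com = AD.EXAMPLE.COM
[capaths]
# AD clients reach the cluster realm directly ("." means no intermediate realm)
AD.EXAMPLE.COM = {
CLUSTER.EXAMPLE.COM = .
}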
06-02-2018
12:39 PM
Well, the configuration files were correct, but the environment was not set properly. I checked hbase-env on both nodes and found a difference. Updating hbase-env in Ambari with the following properties fixed it:
export LD_LIBRARY_PATH=::/usr/hdp/2.6.3.0-235/hadoop/lib/native/Linux-amd64-64:/usr/lib/hadoop/lib/native/Linux-amd64-64:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.3.0-235/hadoop/lib/native
export HADOOP_HOME=/usr/hdp/2.6.3.0-235/hadoop
export HADOOP_CONF_DIR=/usr/hdp/2.6.3.0-235/hadoop/etc/hadoop
06-01-2018
09:45 AM
@schhabra I checked hbase-site.xml, hdfs-site.xml and core-site.xml. They are exactly the same on both nodes.
05-31-2018
02:06 PM
1 Kudo
@Krishna Pandey Thanks, it worked. I needed to give the knox user read permission on /etc/shadow; it's better to create an ACL for it.
05-31-2018
01:07 PM
@Krishna Pandey Yes, the permissions on the topology file were not correct. But now I'm getting this error:
HTTP/1.1 401 Unauthorized
Date: Thu, 31 May 2018 13:07:02 GMT
Set-Cookie: rememberMe=deleteMe; Path=/gateway/pamtest; Max-Age=0; Expires=Wed, 30-May-2018 13:07:04 GMT
WWW-Authenticate: BASIC realm="application"
Content-Length: 0
Server: Jetty(9.2.15.v20160210)
The cluster is Kerberized as well.
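For reference, a minimal sketch of probing such a PAM-backed topology with basic auth, assuming a hypothetical gateway host knoxhost and the pamtest topology from the response above:
# -k skips TLS verification (test setups only); -u sends the Unix user's credentials
curl -iku 'unixuser:password' 'https://knoxhost:8443/gateway/pamtest/webhdfs/v1/tmp?op=LISTSTATUS'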
05-31-2018
12:10 PM
@Krishna Pandey The Linux distro is CentOS 7. I tried PAM authentication; I am getting an HTTP 404 error.
05-30-2018
02:11 PM
We have to start the demo LDAP to access services through the Knox gateway. But I want to access those services using my Unix/POSIX users, which are already created.
05-28-2018
07:00 AM
After enabling NameNode HA, 2 out of my 3 HBase Region Servers are not coming up. I looked at the logs and found that they throw an unknown host exception for the nameservice:
2018-05-24 08:48:29,551 INFO [regionserver/atlhashdn02.hashmap.net/192.166.4.37:16020] regionserver.HRegionServer: STOPPED: Failed initialization
2018-05-24 08:48:29,552 ERROR [regionserver/atlhashdn02.hashmap.net/192.166.4.37:16020] regionserver.HRegionServer: Failed init
java.lang.IllegalArgumentException: java.net.UnknownHostException: clusterha
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:411)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:311)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:688)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:629)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:159)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2761)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2795)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2777)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:386)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:179)
at org.apache.hadoop.hbase.wal.DefaultWALProvider.init(DefaultWALProvider.java:97)
at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:148)
at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:180)
at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1648)
at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1381)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:917)
at java.lang.Thread.run(Thread.java:745)
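For reference, java.net.UnknownHostException on a nameservice name usually means the HDFS client configuration visible to the region server is missing the HA mappings. A minimal sketch of those hdfs-site.xml properties, assuming the nameservice id clusterha and hypothetical NameNode hosts nn1/nn2:
<!-- logical nameservice and its NameNodes -->
<property><name>dfs.nameservices</name><value>clusterha</value></property>
<property><name>dfs.ha.namenodes.clusterha</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.clusterha.nn1</name><value>nn1.example.com:8020</value></property>
<property><name>dfs.namenode.rpc-address.clusterha.nn2</name><value>nn2.example.com:8020</value></property>
<!-- client-side failover proxy so the logical name resolves -->
<property><name>dfs.client.failover.proxy.provider.clusterha</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>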
05-25-2018
03:46 PM
1 Kudo
If you're planning to implement the security features provided, like Kerberos, Ranger and Knox, then I'd recommend going with HDP 2.5, because in HDP 2.6 there are a number of changes that still need to be documented, and you'd spend most of your time on unnecessary troubleshooting.
05-21-2018
12:55 PM
@heta desai Also, since Ambari users are not synced very regularly, you can set up a periodic cron job that syncs all users again; a sketch follows.
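A minimal sketch of such a cron entry, assuming root's crontab on the Ambari server (the schedule is illustrative, and depending on the Ambari version sync-ldap may prompt for admin credentials, which a real job would need to handle):
# re-sync all LDAP users and groups into Ambari every night at 02:00
0 2 * * * /usr/sbin/ambari-server sync-ldap --all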
05-21-2018
07:41 AM
@heta desai Well, you may have to do that manually. For a scripting alternative, please have a look at this link.
05-20-2018
05:53 AM
Hi @heta desai. On the node where Ranger Usersync is installed, please check connectivity to LDAP using the following commands:
# for LDAP
ldapsearch -W -H ldap://<FQDN of LDAP/AD> -D binduser@example.net -b "dc=example,dc=net"
# for LDAPS
ldapsearch -W -H ldaps://<FQDN of LDAP/AD> -D binduser@example.net -b "dc=example,dc=net"
If you can connect successfully, just restart Ranger Usersync and the users will be synced.
05-20-2018
05:39 AM
Hi @heta desai. When you sync LDAP users in Ambari, Ambari saves the data into its own database. So when you delete LDAP users, the deletion doesn't reflect in the UI, because the users are not deleted from Ambari's database; see the sketch below for one way to clean them up.
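A minimal sketch of re-syncing so that removals propagate, assuming the ambari-server CLI's option for existing users (run on the Ambari server; it prompts for admin credentials):
# re-sync only users/groups already in Ambari, removing those gone from LDAP
ambari-server sync-ldap --existing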
05-18-2018
08:42 AM
@Mike Wong I'd recommend disabling SELinux and rebooting the machines (a sketch follows). After that, look into the HDFS logs and make sure HDFS is up with no alerts. Try restarting HDFS; all other services should come up after that.
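A minimal sketch of disabling SELinux on a CentOS/RHEL node (run as root):
# switch SELinux to permissive for the running system
setenforce 0
# make the change persistent across the reboot
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# confirm after reboot
getenforce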
05-18-2018
07:12 AM
@Mike Wong Please check your /etc/hosts on all nodes, and also verify that SELinux is disabled using the getenforce command.
05-16-2018
07:51 AM
Well, the issue has been solved. It seems like a bug in HDP 2.6: after setting up the one-way trust, you need to remove [domain_realm] and [capaths] from your krb5.conf. Also check the SPNEGO keytabs: they must be properly created with entries for all encryption types and be present on every node (a verification sketch follows).
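A minimal sketch of verifying a SPNEGO keytab, assuming the standard HDP keytab path:
# list keytab entries with timestamps and encryption types
klist -kte /etc/security/keytabs/spnego.service.keytab
# confirm a ticket can actually be obtained from it (principal name is illustrative)
kinit -kt /etc/security/keytabs/spnego.service.keytab HTTP/$(hostname -f)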
05-14-2018
09:03 AM
@mqureshi What if I set up 4 ZooKeeper nodes? Would the quorum be ceil(4/2) = 2, or shouldn't it be ceil((n+1)/2) = 3?
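(For reference, a majority quorum is floor(n/2) + 1, which equals ceil((n+1)/2): for n = 4 that gives 3, so a 4-node ensemble tolerates only one failure, no more than a 3-node one.)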
04-23-2018
12:36 PM
@Rajkumar Singh I did the same thing, but I'm getting either an HTTP 401 or 404 error, or a certificate error. The cluster I'm testing this on is also Kerberized.
04-18-2018
05:54 AM
I have a cluster with Ranger, Ranger KMS, Knox, and Kerberos (MIT KDC). I've also got HA for NameNode, RM, HiveServer2, Oozie, HBase, and Ranger, and I've set up a one-way trust to AD using https://community.hortonworks.com/articles/59635/one-way-trust-mit-kdc-to-active-directory.html After setting up the trust, I am able to get tickets for AD users, but my services on the cluster start showing errors (mostly UIs not accessible). When I run a service check, I get the following error:
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<title>Error 401 Authentication required</title>
</head>
<body><h2>HTTP ERROR 401</h2>
<p>Problem accessing /webhdfs/v1/user/ambari-qa. Reason:
<pre> Authentication required</pre></p><hr /><i><small>Powered by Jetty://</small></i><br/>
<br/>
<br/>
While the rest of the services are fine, YARN, Hive, Oozie, Ambari Infra, and Spark 2 throw the above error on service check.
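For reference, a minimal sketch of reproducing the failing WebHDFS call directly with SPNEGO, assuming a hypothetical NameNode host and the default HTTP port:
# --negotiate -u : makes curl use the Kerberos ticket from the current credential cache
curl --negotiate -u : 'http://nn1.example.com:50070/webhdfs/v1/user/ambari-qa?op=GETFILESTATUS'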
04-06-2018
07:31 AM
I am installing Apache Griffin on my HDP cluster, following the steps in this link. Apache Griffin requires a Spark URI to submit the Spark jobs:
# spark-admin
# spark.uri=http://10.149.247.156:28088
# spark.uri=http://10.9.246.187:8088
What URI should I set?
03-15-2018
01:49 PM
Unexpected driver error occurred while connecting to database
Can't get Kerberos realm
Cannot locate default realm
Cannot locate default realm
This is the error I got, and I added only the Hive JDBC standalone JAR.
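For reference, "Cannot locate default realm" usually means the JVM cannot find a krb5.conf. A minimal sketch of a Kerberized HiveServer2 JDBC setup, with a hypothetical host and realm:
# point the JVM at the Kerberos config if it is not in the default location
-Djava.security.krb5.conf=/etc/krb5.conf
# Kerberized HiveServer2 JDBC URL; _HOST is substituted with the server's FQDN
jdbc:hive2://hs2.example.com:10000/default;principal=hive/_HOST@EXAMPLE.COM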