Member since
03-09-2016
91 Posts
3 Kudos Received
1 Solution
My Accepted Solutions
Title | Views | Posted |
---|---|---|
1123 | 10-26-2018 09:52 AM |
10-30-2019
03:29 AM
Hi, Please check the heap memory of the Spark History Server. Based on the load size, the memory can be increased. Thanks, AK
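As a rough sketch of the suggestion above: the history server heap is commonly raised via SPARK_DAEMON_MEMORY in spark-env.sh (the property name is the standard Spark one; the 4g value below is only an example, not a recommendation from the original post):

```shell
# spark-env.sh fragment -- SPARK_DAEMON_MEMORY sets the heap for Spark
# daemons, including the Spark History Server. Tune the value to your load.
export SPARK_DAEMON_MEMORY=4g
```

On Ambari-managed clusters the same setting lives under Spark > Configs > Advanced spark-env; restart the history server after changing it.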
05-15-2018
12:23 PM
1 Kudo
Note: First make your topology file. Please find an attached example: knox-topology-file.xml. The attached PDF (knox-ad-ldap-upgraded-docus.pdf) covers all the practical concepts and some of the theory.

Step 1: Install Knox on an edge node or any node in the cluster.

Step 2: Start the Knox service from Ambari, and make sure your Ambari Server is already synced with LDAP.

Step 3: Search your LDAP server with one of the commands below:
ldapsearch -W -H ldap://ad2012.ansari.net -D binduser@ansari.net -b "dc=ansari,dc=net"
ldapsearch -W -H ldaps://ad2012.ansari.net -D binduser@ansari.net -b "dc=ansari,dc=net"

Step 4: Create a master password for Knox (stored in /usr/hdp/current/knox-server/data/security/keystores/gateway.jks):
/usr/hdp/2.6.4.0-91/knox/bin/knoxcli.sh create-master --force
Enter the password, then verify it.
Note: 2.6.4.0-91 is my HDP version; substitute your own under /usr/hdp/XXXXXXX/.

Step 5: Validate your topology file (your cluster name and topology file name should be the same):
/usr/hdp/2.6.0.3-8/knox/bin/knoxcli.sh validate-topology --cluster walhdp

Step 6: Validate your auth users:
sudo /usr/hdp/2.6.4.0-91/knox/bin/knoxcli.sh --d system-user-auth-test --cluster walhdp

Step 7: Change all the properties below and restart the required services:
HDFS (core-site.xml):
hadoop.proxyuser.knox.groups=*
hadoop.proxyuser.knox.hosts=*
Hive:
webhcat.proxyuser.knox.groups=*
webhcat.proxyuser.knox.hosts=*
hive.server2.allow.user.substitution=true
hive.server2.transport.mode=http
hive.server2.thrift.http.port=10001
hive.server2.thrift.http.path=cliservice
Oozie:
oozie.service.ProxyUserService.proxyuser.knox.groups=*
oozie.service.ProxyUserService.proxyuser.knox.hosts=*

Step 8: Try to access the HDFS list status:
curl -vvv -i -k -u binduser -X GET https://hdp-node1.ansari.net:8443/gateway/walhdp/webhdfs/v1?op=LISTSTATUS
curl -vvv -i -k -u binduser -X GET https://namenodehost:8443/gateway/walhdp(clustername)/webhdfs/v1?op=LISTSTATUS

Step 9: Try to access Hive via beeline:
!connect jdbc:hive2://hdp-node1.ansari.net:8443/;ssl=true;sslTrustStore=/home/faheem/gateway.jks;trustStorePassword=bigdata;transportMode=http;httpPath=gateway/walhdp/hive
Enter username: binduser
Enter password for binduser: XXXXXXXXXX

Step 10: To access the Web UIs via Knox, use the URLs below:
Ambari UI: https://ambari-server-fqdn-or-ip:8443/gateway/walhdp/ambari/
HDFS UI: https://namenode-fqdn:8443/gateway/walhdp/hdfs/
HBase UI: https://hbase-master-fqdn:8443/gateway/walhdp/hbase/webui/
YARN UI: https://yarn-master-fqdn:8443/gateway/walhdp/yarn/cluster/apps/RUNNING
Resource Manager: https://resource-manager-fqdn:8443/gateway/walhdp/resourcemanager/v1/cluster
curl -ivk -u binduser:Ansari123 "https://hdp-node3.ansari.net:8443/gateway/walhdp/resourcemanager/v1/cluster"
curl -ivk -u binduser:Ansari123 "https://localhost:8443/gateway/walhdp/resourcemanager/v1/cluster"
Ranger UI: https://ranger-admin-fqdn:8443/gateway/walhdp/ranger/index.html
Oozie UI: https://oozie-server-fqdn:8443/gateway/walhdp/oozie/
Zeppelin UI: https://zeppelin-fqdn:8443/gateway/walhdp/zeppelin/

Thanks
Ansari Faheem Ahmed
HDPCA Certified
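The attached knox-topology-file.xml is not reproduced here. As a minimal sketch only, a Knox topology exposing WebHDFS behind LDAP basic authentication might look like the fragment below; the LDAP URL and base DN are taken from the ldapsearch commands above, while the userDnTemplate and the NameNode URL are assumptions, not the contents of the actual attachment:

```xml
<topology>
  <gateway>
    <provider>
      <!-- Shiro provider doing HTTP Basic auth against AD/LDAP -->
      <role>authentication</role>
      <name>ShiroProvider</name>
      <enabled>true</enabled>
      <param>
        <name>main.ldapRealm</name>
        <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
      </param>
      <param>
        <!-- hypothetical template; match your AD user layout -->
        <name>main.ldapRealm.userDnTemplate</name>
        <value>cn={0},dc=ansari,dc=net</value>
      </param>
      <param>
        <name>main.ldapRealm.contextFactory.url</name>
        <value>ldap://ad2012.ansari.net:389</value>
      </param>
      <param>
        <name>urls./**</name>
        <value>authcBasic</value>
      </param>
    </provider>
  </gateway>
  <service>
    <role>WEBHDFS</role>
    <!-- hypothetical NameNode host; use your own FQDN -->
    <url>http://namenode-fqdn:50070/webhdfs</url>
  </service>
</topology>
```

Save it as walhdp.xml under the Knox topologies directory so the file name matches the cluster name used in Steps 5 and 6.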
10-25-2017
07:33 AM
@ANSARI FAHEEM AHMED Can you please let us know how you created that user and HDFS directory (the exact command you used)? Or did you use some other tool or Java code to do that? Or did you use AD/LDAP to sync the users?
08-28-2017
03:39 PM
Thanks for the reply, but I want to change the ssh session. I configured ssh with the root account, but now I have to change to the centos account. Is it possible to change it or not?
07-29-2017
11:19 PM
Thanks a lot Jay SenSharma
06-16-2017
06:53 PM
https://hbase.apache.org/book.html#trouble.log.gc https://hbase.apache.org/book.html#gcpause https://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
12-09-2016
08:01 PM
3 Kudos
@ANSARI FAHEEM AHMED I have written a few blog posts on performance tuning. Please have a look at the articles below. http://crazyadmins.com/tune-hadoop-cluster-to-get-maximum-performance-part-1/ http://crazyadmins.com/tune-hadoop-cluster-to-get-maximum-performance-part-2/
10-26-2016
08:07 PM
Thanks a lot, I found it and solved it.
09-30-2016
10:03 AM
Dear Gerd, I have set the above line in the my.cnf file and restarted mysqld, but I still have the same issue:

[root@hdp etc]# vi my.cnf
[root@hdp etc]# sudo systemctl restart mysqld
[root@hdp /]# ambari-server start
Using python /usr/bin/python2
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start....................
Ambari Server 'start' completed successfully.
[root@hdp /]# ambari-server status
Using python /usr/bin/python2
Ambari-server status
Ambari Server not running. Stale PID File at: /var/run/ambari-server/ambari-server.pid
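A "Stale PID File" message usually means the server process died shortly after writing its PID. As a quick check (paths taken from the log output above), you can verify whether the recorded PID is actually alive before investigating the logs:

```shell
# Check whether the PID recorded in Ambari's PID file is still running.
# A stale file means the process exited after the file was written.
PID_FILE=/var/run/ambari-server/ambari-server.pid
if [ -f "$PID_FILE" ] && kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then
    echo "ambari-server is running (PID $(cat "$PID_FILE"))"
else
    echo "stale or missing PID file; check /var/log/ambari-server/ambari-server.out for the crash cause"
fi
```

If the PID is dead, ambari-server.out and ambari-server.log (also named in the output above) normally show why the JVM exited, e.g. a database connection failure.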
07-22-2016
07:05 AM
You can check this with hadoop dfsadmin -report, as below. You can check without the "root" user as well:

:~> hadoop dfsadmin -report
Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Configured Capacity: 95930 (87.50 TB)
Present Capacity: 95869819 (87.93 TB)
DFS Remaining: 37094235 (33.37 TB)
DFS Used: 587755833 (53.56 TB)
DFS Used%: 61.31%
Under replicated blocks: 0
Blocks with corrupt replicas: 5
Missing blocks: 0
-------------------------------------------------
report: Access denied for user "username". Superuser privilege is required
:~>
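The "Superuser privilege is required" error at the end means the full report must run as the HDFS superuser, not as a regular login user. A minimal sketch, assuming the superuser account is named hdfs (the common default; the wrapper name is hypothetical):

```shell
# Hypothetical wrapper: run the report as the HDFS superuser so the
# "Superuser privilege is required" error does not occur, using the
# non-deprecated 'hdfs dfsadmin' form.
dfsadmin_report() {
    # 'hdfs' is the usual superuser account; adjust if your cluster differs
    sudo -u hdfs hdfs dfsadmin -report
}
```

Regular users can still read capacity and usage figures from the NameNode UI even when dfsadmin is restricted.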