Member since: 03-09-2016

- Posts: 91
- Kudos Received: 3
- Solutions: 1
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1523 | 10-26-2018 09:52 AM |
			
    
	
		
		
10-26-2018 09:52 AM (1 Kudo)
@Sampath Kumar, I don't think you will hit any errors configuring HA in a Kerberized cluster. Just take care of the usual steps we execute while configuring NameNode HA; Ambari will take care of the Kerberos-related options for you.
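As a quick follow-up, a hedged sanity check after Ambari finishes enabling NameNode HA on a Kerberized cluster (nn1/nn2 are the service IDs Ambari assigns by default; the keytab path and principal are assumptions, so adjust them to your realm):

```
# Authenticate as the HDFS superuser before running admin commands
# (keytab path and principal name are assumptions)
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs
# One NameNode should report "active" and the other "standby"
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```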
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
05-15-2018 12:23 PM (1 Kudo)
Note: First create your topology file; please find the attached example (knox-topology-file.xml — a hedged sketch of such a file also follows at the end of this post). The attached knox-ad-ldap-upgraded-docus.pdf covers all the practical concepts and some of the theory.

Step 1: Install Knox on an edge node or on any node in the cluster.

Step 2: Start the Knox service from Ambari, and make sure your Ambari Server is already synced with LDAP.

Step 3: Search your LDAP server with the commands below:

ldapsearch -W -H ldap://ad2012.ansari.net -D binduser@ansari.net -b "dc=ansari,dc=net"
ldapsearch -W -H ldaps://ad2012.ansari.net -D binduser@ansari.net -b "dc=ansari,dc=net"

Step 4: Create a master password for Knox (it is stored in /usr/hdp/current/knox-server/data/security/keystores/gateway.jks):

/usr/hdp/2.6.4.0-91/knox/bin/knoxcli.sh create-master --force

Enter the password, then verify it. Note: 2.6.4.0-91 is my HDP version; substitute your own version in /usr/hdp/XXXXXXX/.

Step 5: Validate your topology file (your cluster name and topology file name should be the same):

/usr/hdp/2.6.0.3-8/knox/bin/knoxcli.sh validate-topology --cluster walhdp

Step 6: Validate your auth users:

sudo /usr/hdp/2.6.4.0-91/knox/bin/knoxcli.sh --d system-user-auth-test --cluster walhdp

Step 7: Change all the properties below and restart the required services.

HDFS (core-site.xml):

hadoop.proxyuser.knox.groups=*
hadoop.proxyuser.knox.hosts=*

Hive:

webhcat.proxyuser.knox.groups=*
webhcat.proxyuser.knox.hosts=*
hive.server2.allow.user.substitution=true
hive.server2.transport.mode=http
hive.server2.thrift.http.port=10001
hive.server2.thrift.http.path=cliservice

Oozie:

oozie.service.ProxyUserService.proxyuser.knox.groups=*
oozie.service.ProxyUserService.proxyuser.knox.hosts=*

Step 8: Try to access the HDFS list status:

curl -vvv -i -k -u binduser -X GET https://hdp-node1.ansari.net:8443/gateway/walhdp/webhdfs/v1?op=LISTSTATUS
curl -vvv -i -k -u binduser -X GET https://namenodehost:8443/gateway/walhdp(clustername)/webhdfs/v1?op=LISTSTATUS

Step 9: Try to access Hive via Beeline:

!connect jdbc:hive2://hdp-node1.ansari.net:8443/;ssl=true;sslTrustStore=/home/faheem/gateway.jks;trustStorePassword=bigdata;transportMode=http;httpPath=gateway/walhdp/hive
Enter username: binduser
Enter password for binduser: XXXXXXXXXX

Step 10: Access the web UIs via Knox using the URLs below.

Ambari UI:
https://ambari-server-fqdn-or-ip:8443/gateway/walhdp/ambari/

HDFS UI:
https://namenode-fqdn:8443/gateway/walhdp/hdfs/

HBase UI:
https://hbase-master-fqdn:8443/gateway/walhdp/hbase/webui/

YARN UI:
https://yarn-master-fqdn:8443/gateway/walhdp/yarn/cluster/apps/RUNNING

Resource Manager:
https://resource-manager-fqdn:8443/gateway/walhdp/resourcemanager/v1/cluster

curl -ivk -u binduser:Ansari123 "https://hdp-node3.ansari.net:8443/gateway/walhdp/resourcemanager/v1/cluster"
curl -ivk -u binduser:Ansari123 "https://localhost:8443/gateway/walhdp/resourcemanager/v1/cluster"

Ranger UI:
https://ranger-admin-fqdn:8443/gateway/walhdp/ranger/index.html

Oozie UI:
https://oozie-server-fqdn:8443/gateway/walhdp/oozie/

Zeppelin UI:
https://zeppelin-fqdn:8443/gateway/walhdp/zeppelin/

Thanks,
Ansari Faheem Ahmed
HDPCA Certified
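Since the attached knox-topology-file.xml is not reproduced above, here is a minimal sketch of what such a topology might look like for this thread's AD/LDAP setup; the userDnTemplate, ports, and service URLs are assumptions inferred from the commands in the post, not the author's actual file:

```
# Hypothetical topology written on the Knox node; every value below is
# illustrative and must be adjusted to your environment.
cat > /usr/hdp/current/knox-server/conf/topologies/walhdp.xml <<'EOF'
<topology>
  <gateway>
    <provider>
      <role>authentication</role>
      <name>ShiroProvider</name>
      <enabled>true</enabled>
      <param><name>main.ldapRealm</name>
        <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value></param>
      <param><name>main.ldapRealm.userDnTemplate</name>
        <value>cn={0},cn=Users,dc=ansari,dc=net</value></param>
      <param><name>main.ldapRealm.contextFactory.url</name>
        <value>ldap://ad2012.ansari.net:389</value></param>
      <param><name>main.ldapRealm.contextFactory.authenticationMechanism</name>
        <value>simple</value></param>
      <param><name>urls./**</name><value>authcBasic</value></param>
    </provider>
    <provider>
      <role>identity-assertion</role>
      <name>Default</name>
      <enabled>true</enabled>
    </provider>
  </gateway>
  <!-- Service endpoints are assumptions; point them at your daemons. -->
  <service><role>WEBHDFS</role>
    <url>http://hdp-node1.ansari.net:50070/webhdfs</url></service>
  <service><role>HIVE</role>
    <url>http://hdp-node1.ansari.net:10001/cliservice</url></service>
</topology>
EOF
```

The file name (walhdp.xml) matches the cluster name passed to validate-topology above, which is exactly what Step 5 requires.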
						
					
				
			
			
			
			
			
			
			
			
			
		
		
			
				
						
	
			
    
	
		
		
04-03-2018 12:03 PM
Hello Kuldeep Kulkarni, I have followed all the steps you mentioned in the article, but the HDP installation is taking a long time; after one hour it is still in progress. Thanks, Ansari Faheem Ahmed
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
10-26-2017 10:08 AM
I have made the changes according to the answer by @Guilherme Braccialli, but after adding the jars and putting the following settings in place, there is no luck and the WebHCat Server is not starting.

Custom hive-env:
export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar

Custom hive-site:
export HIVE_AUX_JARS_PATH="${HIVE_AUX_JARS_PATH}:/usr/hdp/current/phoenix-client/phoenix-hive.jar"

ERROR from log:

log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
Exception in thread "main" java.lang.IllegalStateException: Variable substitution depth too large: 20 "${HIVE_AUX_JARS_PATH}:/usr/hdp/current/phoenix-client/phoenix-hive.jar"
        at org.apache.hadoop.conf.Configuration.substituteVars(Configuration.java:967)
        at org.apache.hadoop.conf.Configuration.get(Configuration.java:987)
        at org.apache.hadoop.hive.conf.HiveConfUtil.dumpConfig(HiveConfUtil.java:77)
        at org.apache.hadoop.hive.conf.HiveConfUtil.dumpConfig(HiveConfUtil.java:59)
        at org.apache.hive.hcatalog.templeton.AppConfig.dumpEnvironent(AppConfig.java:256)
        at org.apache.hive.hcatalog.templeton.AppConfig.init(AppConfig.java:198)
        at org.apache.hive.hcatalog.templeton.AppConfig.<init>(AppConfig.java:173)
        at org.apache.hive.hcatalog.templeton.Main.loadConfig(Main.java:97)
        at org.apache.hive.hcatalog.templeton.Main.init(Main.java:81)
        at org.apache.hive.hcatalog.templeton.Main.<init>(Main.java:76)
        at org.apache.hive.hcatalog.templeton.Main.main(Main.java:289)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
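The "Variable substitution depth too large" line points at the self-referencing ${HIVE_AUX_JARS_PATH} that was placed in hive-site: Hadoop's Configuration class tries to expand it as a configuration variable, finds no definition, and recurses until the 20-level limit shown in the trace. As a hedged sketch, not a confirmed fix for this thread, the append could be done entirely in hive-env instead, so hive-site never contains a ${...} token:

```
# In hive-env.sh: build the aux-jars path in shell, where ${...}
# expansion is well-defined (jar paths taken from the post above).
export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar
export HIVE_AUX_JARS_PATH="${HIVE_AUX_JARS_PATH}:/usr/hdp/current/phoenix-client/phoenix-hive.jar"
```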
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
10-25-2017 07:12 AM
Once I fire the command hdfs dfs -ls /user/, please check hdpuser1: why is it shown in double quotes? Please refer to the screenshot (user.jpg). Can anyone help me with how to remove the double quotes?
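Since the screenshot is not reproduced here, a hedged guess: the quotes may be literal characters in the directory name itself rather than something the shell adds. A quick way to check, and a rename if so (the quoted source path is an assumption about what was actually created):

```
# Show the listing with non-obvious characters made visible
hdfs dfs -ls /user/ | cat -A
# If the name really contains literal quote characters, rename it
hdfs dfs -mv '/user/"hdpuser1"' /user/hdpuser1
```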
						
					
				
			
			
			
			
			
			
			
			
			
		
		
			
				
						
Labels:
- Apache Hadoop
			
    
	
		
		
08-28-2017 03:39 PM
Thanks for the reply, but I want to change the SSH session. I configured SSH with the root account, but now I have to change to the centos account. Is it possible to change it or not?
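If the goal is to re-point passwordless SSH from root to the centos user, a minimal sketch (the user name comes from the post; the hostname is an assumption, and the ssh-copy-id step must be repeated for every cluster host):

```
# On the Ambari server, as the centos user, create a key pair and
# distribute the public key to each node.
su - centos
ssh-keygen -t rsa                          # accept the defaults
ssh-copy-id centos@hdp-node1.ansari.net    # repeat per host
ssh centos@hdp-node1.ansari.net hostname   # verify passwordless login
```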
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
			
    
	
		
		
08-09-2017 03:16 AM
Can someone provide the best settings for the Spark heap size? Much appreciated.
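There is no single best heap size; it depends on node memory and workload. As an illustration of where the settings live, a hedged spark-submit sketch (the numbers and the application name are assumptions, not recommendations from this thread):

```
# Illustrative memory flags; tune --executor-memory to what the YARN
# containers on your nodes can actually hold.
spark-submit \
  --master yarn \
  --driver-memory 4g \
  --executor-memory 8g \
  --num-executors 4 \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  my_app.py
```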
						
					
				
			
			
			
			
			
			
			
			
			
		
		
			
				
						
Labels:
- Apache Spark
			
    
	
		
		
07-29-2017 11:19 PM
Thanks a lot, Jay SenSharma.
						
					