Member since: 05-07-2019

20 Posts
1 Kudos Received
0 Solutions

02-02-2022 11:33 PM

@er_sharma_shant @jsensharma Can you please tell me how to resolve the HiveServer2 start issue by adding the znode name for hiveserver2 in the zkCli shell? The /hiveserver2 path exists in ZooKeeper, but it has no znode name registered under it, because of which HiveServer2 is failing to start.

Welcome to ZooKeeper!
2022-02-03 02:31:36,504 - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1013] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2022-02-03 02:31:36,592 - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@856] - Socket connection established, initiating session, client: /127.0.0.1:36762, server: localhost/127.0.0.1:2181
2022-02-03 02:31:36,609 - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1273] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x27ebd79aecc014c, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[cluster, registry, controller, brokers, storm, infra-solr, zookeeper, hbase-unsecure, hadoop-ha, tracers, admin, isr_change_notification, log_dir_event_notification, accumulo, controller_epoch, hiveserver2, hiveserver2-leader, druid, rmstore, atsv2-hbase-unsecure, consumers, ambari-metrics-cluster, latest_producer_id_block, config]
[zk: localhost:2181(CONNECTED) 1] ls /hiveserver2
[]
[zk: localhost:2181(CONNECTED) 2]

Any help would be much appreciated! Thank you.
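For reference, and only as a hedged assumption about what a healthy setup looks like (not something confirmed in this thread): with dynamic service discovery, HiveServer2 creates its own ephemeral child znode under /hiveserver2 when it starts successfully, so the empty listing above means the registration step is failing rather than the parent path being missing. On a working cluster, the same check would show something roughly like the following (host name and sequence number are only illustrative):

[zk: localhost:2181(CONNECTED) 0] ls /hiveserver2
[serverUri=hs2-host.example.com:10000;version=3.1.0.3.1.5.0-152;sequence=0000000000]

In other words, the child entry is normally created by HiveServer2 itself rather than by hand in zkCli, so the HiveServer2 log is usually the place to find the underlying start failure.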
						
					
02-02-2022 11:10 PM

I am facing an error during HiveServer2 start: "caught exception: ZooKeeper node /hiveserver2 is not ready yet". When I debug further, I see that there is no hiveserver2 instance registered in ZooKeeper.

Welcome to ZooKeeper!
2022-02-03 01:58:35,986 - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1013] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2022-02-03 01:58:36,072 - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@856] - Socket connection established, initiating session, client: /127.0.0.1:59736, server: localhost/127.0.0.1:2181
2022-02-03 01:58:36,093 - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1273] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x27ebd79aecc0141, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /hiveserver2
[]
[zk: localhost:2181(CONNECTED) 1] 
[zk: localhost:2181(CONNECTED) 2] ls /
[cluster, registry, controller, brokers, storm, infra-solr, zookeeper, hbase-unsecure, hadoop-ha, tracers, admin, isr_change_notification, log_dir_event_notification, accumulo, controller_epoch, hiveserver2, hiveserver2-leader, druid, rmstore, atsv2-hbase-unsecure, consumers, ambari-metrics-cluster, latest_producer_id_block, config]

Can someone tell me how to create the znode name if hiveserver2 is not registered to the znode?
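As a side note, and purely as a configuration sketch (the property names are standard Hive settings, but the values below are illustrative placeholders, not taken from this cluster): HiveServer2 only registers itself under the ZooKeeper namespace when dynamic service discovery is enabled, so these hive-site.xml properties are worth double-checking:

<!-- Illustrative hive-site.xml excerpt; quorum hosts are placeholders -->
<property>
  <name>hive.server2.support.dynamic.service.discovery</name>
  <value>true</value>
</property>
<property>
  <name>hive.server2.zookeeper.namespace</name>
  <value>hiveserver2</value>
</property>
<property>
  <name>hive.zookeeper.quorum</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>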
						
					
02-02-2022 10:03 PM

@Shelton Regarding the solution you mentioned in point 3, "There seems to be a problem with hiveserver creating a znode in zookeeper [caught exception: ZooKeeper node /hiveserver2 is not ready yet]": how can I create the hiveserver2 znode instance in ZooKeeper if it is not created?
						
					
02-02-2022 08:27 PM

Hi @Shelton, I am facing the same issue as mentioned above while starting HiveServer2. I followed your debug steps, and when I ran ls /hiveserver2 in the zkCli shell, I got the response below:

Welcome to ZooKeeper!
2022-02-02 23:11:02,741 - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1013] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2022-02-02 23:11:02,834 - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@856] - Socket connection established, initiating session, client: /127.0.0.1:58334, server: localhost/127.0.0.1:2181
2022-02-02 23:11:02,851 - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1273] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x27ebd79aecc0088, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /hiveserver2
[]
[zk: localhost:2181(CONNECTED) 1]

which means I don't have a hiveserver2 entry in my ZooKeeper. And when I checked hiveserver2.log under the /var/log/hive folder, I see the permission denied error below:

Caused by: org.apache.hadoop.ipc.RemoteException: Permission denied: user=hive, access=EXECUTE, inode="/tmp/hive"
        at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:457)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:604)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1858)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1876)

Please help me resolve this if you have come across this issue. Thanks.
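For context, and strictly as a sketch of a common remediation rather than a confirmed fix for this cluster: the stack trace says the hive user lacks EXECUTE permission on the HDFS path /tmp/hive, so one typical check is the ownership and mode of that directory (run as the hdfs superuser, and adjust only if it matches your security policy):

hdfs dfs -ls /tmp | grep hive           # inspect current owner and mode of /tmp/hive
hdfs dfs -chown hive:hdfs /tmp/hive     # commonly seen ownership for the Hive scratch dir
hdfs dfs -chmod 1777 /tmp/hive          # world-writable with sticky bit, a frequent default

Since the denial is coming through RangerHdfsAuthorizer, an alternative is to grant the hive user EXECUTE on /tmp/hive via a Ranger HDFS policy instead of changing HDFS permissions.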
						
					
02-02-2022 07:38 PM

@jsensharma I am facing an error while starting HiveServer2 on HDP 3.1.5. When I check the HiveServer2 logs under /var/log/hive, it says "Metrics source hiveserver2 already exists!".

2022-02-02T00:00:44,090 ERROR [main]: metrics2.CodahaleMetrics (:()) - Unable to instantiate using constructor(MetricRegistry, HiveConf) for reporter org.apache.hadoop.hive.common.metrics.metrics2.Metrics2Reporter from conf HIVE_CODAHALE_METRICS_REPORTER_CLASSES
java.lang.reflect.InvocationTargetException: null
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_112]
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_112]
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_112]
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_112]
        at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.initCodahaleMetricsReporterClasses(CodahaleMetrics.java:429) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
        at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.initReporting(CodahaleMetrics.java:396) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
        at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.<init>(CodahaleMetrics.java:196) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_112]
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_112]
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_112]
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_112]
        at org.apache.hadoop.hive.common.metrics.common.MetricsFactory.init(MetricsFactory.java:42) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
        at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:213) ~[hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
        at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1087) ~[hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
        at org.apache.hive.service.server.HiveServer2.access$1700(HiveServer2.java:137) ~[hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
        at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:1356) ~[hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
        at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:1200) ~[hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_112]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_112]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_112]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
        at org.apache.hadoop.util.RunJar.run(RunJar.java:318) ~[hadoop-common-3.1.1.3.1.5.0-152.jar:?]
        at org.apache.hadoop.util.RunJar.main(RunJar.java:232) ~[hadoop-common-3.1.1.3.1.5.0-152.jar:?]
Caused by: org.apache.hadoop.metrics2.MetricsException: Metrics source hiveserver2 already exists!
        at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152) ~[hadoop-common-3.1.1.3.1.5.0-152.jar:?]
        at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125) ~[hadoop-common-3.1.1.3.1.5.0-152.jar:?]
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229) ~[hadoop-common-3.1.1.3.1.5.0-152.jar:?]
        at com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.<init>(HadoopMetrics2Reporter.java:206) ~[dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar:?]
        at com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter.<init>(HadoopMetrics2Reporter.java:62) ~[dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar:?]
        at com.github.joshelser.dropwizard.metrics.hadoop.HadoopMetrics2Reporter$Builder.build(HadoopMetrics2Reporter.java:162) ~[dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar:?]
        at org.apache.hadoop.hive.common.metrics.metrics2.Metrics2Reporter.<init>(Metrics2Reporter.java:45) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
        ... 23 more
2022-02-02T00:00:44,090 WARN  [main]: server.HiveServer2 (HiveServer2.java:init(216)) - Could not initiate the HiveServer2 Metrics system.  Metrics may not be reported.
java.lang.reflect.InvocationTargetException: null
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_112]
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_112]
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_112]
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_112]
        at org.apache.hadoop.hive.common.metrics.common.MetricsFactory.init(MetricsFactory.java:42) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
        at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:213) [hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
        at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1087) [hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
        at org.apache.hive.service.server.HiveServer2.access$1700(HiveServer2.java:137) [hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
        at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:1356) [hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
        at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:1200) [hive-service-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_112]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_112]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_112]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
        at org.apache.hadoop.util.RunJar.run(RunJar.java:318) [hadoop-common-3.1.1.3.1.5.0-152.jar:?]
        at org.apache.hadoop.util.RunJar.main(RunJar.java:232) [hadoop-common-3.1.1.3.1.5.0-152.jar:?]
Caused by: java.lang.IllegalArgumentException: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.initCodahaleMetricsReporterClasses(CodahaleMetrics.java:437) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
        at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.initReporting(CodahaleMetrics.java:396) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]
        at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.<init>(CodahaleMetrics.java:196) ~[hive-common-3.1.0.3.1.5.0-152.jar:3.1.0.3.1.5.0-152]

Requesting your help to resolve this.
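As a hedged aside (an assumption based on the error text, not something stated in this thread): "Metrics source hiveserver2 already exists!" usually means the metrics source is being registered twice in the same metrics system, which commonly happens when a previous HiveServer2 process is still running on the host. A quick check before restarting might look like:

ps -ef | grep -i '[h]iveserver2'        # look for a leftover HiveServer2 JVM
netstat -tlnp | grep -E '10000|10002'   # default HS2 binary and web UI ports; adjust to your config

If a stale process is found, stopping it cleanly (or killing it as a last resort) before starting HiveServer2 from Ambari is the usual next step.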
						
					
04-22-2021 12:27 AM

@jijose If you are using Cloudera Manager, log in to the Cloudera Manager UI > Cluster > YARN > Actions > Add Role Instances. You will land on the Assign Roles page. Assign the host from which you want to run the job to the Gateway role, save the configuration, and deploy the Client Configuration. You can then try submitting a job from the newly added YARN Gateway host, for example as sketched below. Thanks
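A minimal smoke test from the newly added Gateway host, assuming a parcel-based installation (the examples jar path below is a typical parcel location and may differ on your cluster):

yarn jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 2 10

If the client configuration was deployed correctly, the job should be accepted by the ResourceManager and appear under YARN applications.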
						
					
07-14-2020 12:22 AM

Adding a bit of clarification to the above-mentioned solution. Find "Ranger External URL" under Ranger > Configs > Advanced > Ranger Settings. It will be something like "http://<ranger_admin_host>:6080". Copy this URL and update it in the particular service for which the Ranger plugin is enabled. For example, for HDFS: HDFS > Configs > Advanced > Advanced ranger-hdfs-security > ranger.plugin.hdfs.policy.rest.url. Usually this field is auto-populated with the Ranger External URL value; if it is not, it will look like "{{policy_mgr_url}}". Update this field with the Ranger External URL, as in the example below. Then restart the Ranger service, the Ranger KMS service, and all required services.
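A quick before/after sketch of that property, using a placeholder host name rather than a value from any real cluster:

Before (unresolved template):  ranger.plugin.hdfs.policy.rest.url = {{policy_mgr_url}}
After (explicit Ranger External URL):  ranger.plugin.hdfs.policy.rest.url = http://ranger-admin.example.com:6080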
						
					
05-31-2020 11:57 PM

@ccibi75 Thanks for the solution to resolve the Timeline Server v2.0 start issue in HDP 3.x. It worked!
						
					
05-07-2019 12:17 AM

Hi, thanks for the resolution. Is there any specific procedure to upgrade the krb packages to 1.15.1-19? Detailed steps for the upgrade, and for starting the Hadoop cluster post-upgrade, would be very helpful.
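Purely as an illustrative sketch, not a procedure from this thread (the package names are the standard MIT Kerberos RPMs on RHEL/CentOS; confirm the exact target version against your OS vendor's advisory first):

yum list available 'krb5*'                  # confirm 1.15.1-19 or later is in your repos
yum update -y krb5-libs krb5-workstation    # add krb5-server on the KDC host
rpm -qa | grep krb5                         # verify the installed versions

A rolling restart of the affected Hadoop services is the usual follow-up once the packages are updated.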
						
					
03-14-2019 07:51 AM

Run sudo yum install -y mysql-connector-java to install mysql-connector-java.jar, then check the path where mysql-connector-java is installed:

[root@c902f08x05 ~]# rpm -ql mysql-connector-java-*
/usr/share/doc/mysql-connector-java-5.1.25
/usr/share/doc/mysql-connector-java-5.1.25/CHANGES
/usr/share/doc/mysql-connector-java-5.1.25/COPYING
/usr/share/doc/mysql-connector-java-5.1.25/docs
/usr/share/doc/mysql-connector-java-5.1.25/docs/README.txt
/usr/share/doc/mysql-connector-java-5.1.25/docs/connector-j.html
/usr/share/doc/mysql-connector-java-5.1.25/docs/connector-j.pdf
/usr/share/java/mysql-connector-java.jar
/usr/share/maven-fragments/mysql-connector-java
/usr/share/maven-poms/JPP-mysql-connector-java.pom
[root@c902f08x05 ~]#

Then run ambari-server setup with the JDBC driver path:

ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

and retry the hive-client install; it should work.
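As an optional sanity check, and only as an assumption about typical Ambari behavior rather than something stated above: ambari-server setup normally copies the driver into the Ambari server's resources directory, so both locations can be confirmed before retrying the client install:

ls -l /usr/share/java/mysql-connector-java.jar
ls -l /var/lib/ambari-server/resources/ | grep -i mysql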
						
					
					... View more