Member since 04-25-2016
579 Posts | 609 Kudos Received | 111 Solutions
        My Accepted Solutions
| Title | Views | Posted | 
|---|---|---|
| | 2923 | 02-12-2020 03:17 PM |
| | 2136 | 08-10-2017 09:42 AM |
| | 12470 | 07-28-2017 03:57 AM |
| | 3409 | 07-19-2017 02:43 AM |
| | 2520 | 07-13-2017 11:42 AM |
			
    
	
		
		
06-19-2023 03:19 AM
I got the same problem. When I reinstalled Hive, everything worked.
						
					
			
    
	
		
		
03-25-2021 03:35 AM
		
					
Hi @rajkumar_singh, I'm getting the same issue. Just wondering if you were able to fix it? May I ask how you resolved it?
						
					
			
    
	
		
		
09-14-2020 12:16 AM
		
					
@RajaChintala You have to make the change below:

HADOOP_OPTS="-Dsun.security.krb5.debug=true"

For more details, follow these docs:
https://docs.cloudera.com/documentation/enterprise/5-10-x/topics/cdh_sg_debug_sun_kerberos_enable.html
https://spark.apache.org/docs/2.0.0/running-on-yarn.html#troubleshooting-kerberos
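A minimal sketch of using that flag from a shell session, assuming you want to enable it for one-off client commands rather than editing hadoop-env.sh permanently (the principal and path below are placeholders):

# enable JDK Kerberos debugging for Hadoop client JVMs started from this shell
export HADOOP_OPTS="$HADOOP_OPTS -Dsun.security.krb5.debug=true"
# any client command run now prints detailed Kerberos negotiation traces to the console
kinit myuser@EXAMPLE.COM
hdfs dfs -ls /user/myuser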
						
					
			
    
	
		
		
05-08-2020 07:47 PM
		
					
The full format of an HDFS URI is:

hdfs://NAMESERVICE/path/to/your/file
hdfs://NAMESERVICE/path/to/your/directory

or

hdfs://ANYNAMENODE:PORT/path/to/your/file
hdfs://ANYNAMENODE:PORT/path/to/your/directory

It is OK to omit the nameservice/namenode part (to use the defaultFS). To do it correctly, you need to keep the `hdfs://` prefix and an absolute path that starts with a `/`. That is:

hdfs:///path/to/your/file

(Note that it looks like `hdfs:` followed by three `/`s.)
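To illustrate, a short sketch with the hdfs CLI; the nameservice mycluster, the host namenode1.example.com, and the paths are placeholders rather than values from this thread:

# fully qualified with an HA nameservice
hdfs dfs -ls hdfs://mycluster/user/hive/warehouse
# fully qualified with a specific NameNode host and RPC port
hdfs dfs -ls hdfs://namenode1.example.com:8020/user/hive/warehouse
# scheme kept, authority omitted: falls back to fs.defaultFS (note the three slashes)
hdfs dfs -ls hdfs:///user/hive/warehouse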
						
					
			
    
	
		
		
02-19-2020 09:20 AM
		
					
Writing this so that it can help someone in the future: I was installing Hive and getting an error that the Hive Metastore wasn't able to connect, and I resolved it by recreating the Hive Metastore database.

Somehow the user that had been created in MySQL for the Hive Metastore wasn't working properly and could not authenticate. So I dropped the metastore DB, dropped the user, recreated the metastore DB, recreated the user, granted all privileges, and then it worked without issues.
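For anyone who wants a concrete starting point, a rough sketch of that sequence against a MySQL-backed metastore; the database name, user, and password below are placeholder assumptions, not values from this thread:

mysql -u root -p <<'SQL'
-- drop the broken metastore database and user
DROP DATABASE IF EXISTS metastore;
DROP USER IF EXISTS 'hive'@'%';
-- recreate them and grant privileges
CREATE DATABASE metastore;
CREATE USER 'hive'@'%' IDENTIFIED BY 'StrongPasswordHere';
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'%';
FLUSH PRIVILEGES;
SQL

# re-initialize the metastore schema afterwards, e.g. with Hive's schematool
schematool -dbType mysql -initSchema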
						
					
			
    
	
		
		
02-17-2020 08:21 AM
		
					
Hi @rajkumar_singh, thanks to your advice I found my hiveserver2Interactive.log file, and the error is in tasks 4-5. I noticed two things:
- First, unfortunately, before the first NPE in hiveserver2Interactive.log there is another NPE.
- Second, in task 3 we have "Cannot get a table snapshot for factmachine_mv", but I don't think that is the source of the error.

hiveserver2Interactive.log:

2020-02-17T15:11:13,525 INFO  [HiveServer2-Background-Pool: Thread-5750]: FileOperations (FSStatsAggregator.java:aggregateStats(101)) - Read stats for : dwh_dev.factmachine_mv/	numRows	1231370
2020-02-17T15:11:13,525 INFO  [HiveServer2-Background-Pool: Thread-5750]: FileOperations (FSStatsAggregator.java:aggregateStats(101)) - Read stats for : dwh_dev.factmachine_mv/	rawDataSize	552885130
2020-02-17T15:11:13,525 WARN  [HiveServer2-Background-Pool: Thread-5750]: metadata.Hive (Hive.java:alterTable(778)) - Cannot get a table snapshot for factmachine_mv
2020-02-17T15:11:13,609 INFO  [HiveServer2-Background-Pool: Thread-5750]: stats.BasicStatsTask (SessionState.java:printInfo(1227)) - Table dwh_dev.factmachine_mv stats: [numFiles=2, numRows=1231370, totalSize=1483151, rawDataSize=552885130]
2020-02-17T15:11:13,609 INFO  [HiveServer2-Background-Pool: Thread-5750]: stats.BasicStatsTask (BasicStatsTask.java:aggregateStats(271)) - Table dwh_dev.factmachine_mv stats: [numFiles=2, numRows=1231370, totalSize=1483151, rawDataSize=552885130]
2020-02-17T15:11:13,620 INFO  [HiveServer2-Background-Pool: Thread-5750]: mapred.FileInputFormat (FileInputFormat.java:listStatus(259)) - Total input files to process : 1
2020-02-17T15:11:14,881 INFO  [HiveServer2-Background-Pool: Thread-5750]: ql.Driver (Driver.java:launchTask(2710)) - Starting task [Stage-5:DDL] in serial mode
2020-02-17T15:11:14,902 INFO  [HiveServer2-Background-Pool: Thread-5750]: parse.CalcitePlanner (CalcitePlanner.java:genLogicalPlan(385)) - Starting generating logical plan
2020-02-17T15:11:14,904 INFO  [HiveServer2-Background-Pool: Thread-5750]: parse.CalcitePlanner (SemanticAnalyzer.java:genResolvedParseTree(12232)) - Completed phase 1 of Semantic Analysis
2020-02-17T15:11:14,904 INFO  [HiveServer2-Background-Pool: Thread-5750]: parse.CalcitePlanner (SemanticAnalyzer.java:getMetaData(2113)) - Get metadata for source tables
2020-02-17T15:11:14,904 INFO  [HiveServer2-Background-Pool: Thread-5750]: parse.CalcitePlanner (SemanticAnalyzer.java:getMetaData(2244)) - Get metadata for subqueries
2020-02-17T15:11:14,905 INFO  [HiveServer2-Background-Pool: Thread-5750]: parse.CalcitePlanner (SemanticAnalyzer.java:getMetaData(2113)) - Get metadata for source tables
2020-02-17T15:11:14,935 INFO  [HiveServer2-Background-Pool: Thread-5750]: parse.CalcitePlanner (SemanticAnalyzer.java:getMetaData(2244)) - Get metadata for subqueries
2020-02-17T15:11:14,936 INFO  [HiveServer2-Background-Pool: Thread-5750]: parse.CalcitePlanner (SemanticAnalyzer.java:getMetaData(2268)) - Get metadata for destination tables
2020-02-17T15:11:14,936 INFO  [HiveServer2-Background-Pool: Thread-5750]: parse.CalcitePlanner (SemanticAnalyzer.java:getMetaData(2268)) - Get metadata for destination tables
2020-02-17T15:11:14,944 INFO  [HiveServer2-Background-Pool: Thread-5750]: ql.Context (Context.java:getMRScratchDir(551)) - New scratch dir is hdfs://lvdcluster/tmp/hive/hive/c0988fe9-1a5e-4a6a-9ddd-7b782d0cdba1/hive_2020-02-17_15-11-14_897_3869104234201454658-21
2020-02-17T15:11:14,945 INFO  [HiveServer2-Background-Pool: Thread-5750]: parse.CalcitePlanner (SemanticAnalyzer.java:genResolvedParseTree(12237)) - Completed getting MetaData in Semantic Analysis
2020-02-17T15:11:14,945 INFO  [HiveServer2-Background-Pool: Thread-5750]: parse.BaseSemanticAnalyzer (CalcitePlanner.java:canCBOHandleAst(875)) - Not invoking CBO because the statement has lateral views
2020-02-17T15:11:14,946 ERROR [HiveServer2-Background-Pool: Thread-5750]: exec.TaskRunner (TaskRunner.java:runSequential(108)) - Error in executeTask
java.lang.NullPointerException: null
	at org.apache.calcite.plan.RelOptMaterialization.<init>(RelOptMaterialization.java:68) ~[calcite-core-1.16.0.3.1.4.0-315.jar:1.16.0.3.1.4.0-315]
	at org.apache.hadoop.hive.ql.metadata.HiveMaterializedViewsRegistry.addMaterializedView(HiveMaterializedViewsRegistry.java:235) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.ql.metadata.HiveMaterializedViewsRegistry.createMaterializedView(HiveMaterializedViewsRegistry.java:187) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.ql.exec.MaterializedViewTask.execute(MaterializedViewTask.java:59) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) ~[hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103) [hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2712) [hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2383) [hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2055) [hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1753) [hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1747) [hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) [hive-exec-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226) [hive-service-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87) [hive-service-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:324) [hive-service-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at java.security.AccessController.doPrivileged(Native Method) [?:1.8.0_112]
	at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_112]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) [hadoop-common-3.1.1.3.1.4.0-315.jar:?]
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:342) [hive-service-3.1.0.3.1.4.0-315.jar:3.1.0.3.1.4.0-315]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_112]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_112]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_112]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_112]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_112]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_112]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
2020-02-17T15:11:14,947 INFO  [HiveServer2-Background-Pool: Thread-5750]: reexec.ReOptimizePlugin (ReOptimizePlugin.java:run(70)) - ReOptimization: retryPossible: false
2020-02-17T15:11:14,948 ERROR [HiveServer2-Background-Pool: Thread-5750]: ql.Driver (SessionState.java:printError(1250)) - FAILED: Hive Internal Error: org.apache.hadoop.hive.ql.metadata.HiveException(Error while invoking FailureHook. hooks: java.lang.NullPointerException
	at org.apache.hadoop.hive.ql.reexec.ReExecutionOverlayPlugin$LocalHook.run(ReExecutionOverlayPlugin.java:45)
	at org.apache.hadoop.hive.ql.HookRunner.invokeGeneralHook(HookRunner.java:296)
	at org.apache.hadoop.hive.ql.HookRunner.runFailureHooks(HookRunner.java:283)
	at org.apache.hadoop.hive.ql.Driver.invokeFailureHooks(Driver.java:2664)
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2434)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2055)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1753)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1747)
	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226)
	at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:324)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:342)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
)

For the moment, my solution is to use a simple view, because I noticed that even if the materialized view doesn't work, a simple view works with this query.
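Purely as an illustration of that fallback, a rough sketch from beeline; the JDBC URL variable, view name, and query body are hypothetical placeholders, not the original DDL:

beeline -u "$HS2_JDBC_URL" -e "
  -- plain (non-materialized) view over the same query the materialized view used
  CREATE VIEW dwh_dev.factmachine_v AS
  SELECT machine_id, COUNT(*) AS event_count
  FROM dwh_dev.factmachine
  GROUP BY machine_id;
"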
						
					
			
    
	
		
		
02-13-2020 03:06 AM
		
					
Accidentally, I marked this answer as resolved. @rajkumar_singh I am getting the output below after executing the "hdfs groups <username>" command:

<username>@<kerberos principal> : domain users dev_sudo

As I am not very familiar with the cluster configuration, could you please help me understand the output of this command?
						
					
			
    
	
		
		
08-02-2017 08:15 AM

7 Kudos
					
Creating and running temporary functions is discouraged while running a query on LLAP for security reasons: since many users share the same LLAP instances, it can create conflicts. Still, you can create temp functions using add jar and hive.llap.execution.mode=auto (a rough sketch is included after the steps below). With the exclusive LLAP execution mode (hive.llap.execution.mode=only) you will run into a ClassNotFoundException; hive.llap.execution.mode=auto allows some parts of the query (the map tasks) to run in the Tez container.

Here are the steps to create a custom permanent function in LLAP (steps tested on HDP 2.6.0):

1. Create a jar for the UDF function (in this case I am using a simple UDF):

git clone https://github.com/rajkrrsingh/SampleCode
mvn clean package

2. Upload target/SampleCode.jar to the node where HSI is running (in my case I copied it to the /tmp directory).

3. Add the jar to hive_aux_jars (go to Ambari --> Hive --> Config --> hive-interactive-env template):

export HIVE_AUX_JARS_PATH=$HIVE_AUX_JARS_PATH:/tmp/SampleCode.jar

4. Add the jar to the Auxillary JAR list (go to Ambari --> Hive --> Config --> Auxillary JAR list):

Auxillary JAR list=/tmp/SampleCode.jar

5. Restart LLAP.

6. Create the permanent custom function; connect to HSI using beeline:

create FUNCTION CustomLength as 'com.rajkrrsingh.hiveudf.CustomLength';
describe function CustomLength;
select CustomLength(description) from sample_07 limit 1;

7. Check where SampleCode.jar was localized:

[root@hdp26 container_e06_1501140901077_0019_01_000002]# pwd
/hadoop/yarn/local/usercache/hive/appcache/application_1501140901077_0019/container_e06_1501140901077_0019_01_000002
[root@hdp26 container_e06_1501140901077_0019_01_000002]# find . -iname sample*
./app/install/lib/SampleCode.jar
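As promised above, a rough sketch of the session-level alternative (temporary function with add jar and hive.llap.execution.mode=auto); the JDBC URL variable is a placeholder, and the jar/class reuse the SampleCode example:

beeline -u "$HS2I_JDBC_URL" -e "
  set hive.llap.execution.mode=auto;
  add jar /tmp/SampleCode.jar;
  create temporary function CustomLengthTmp as 'com.rajkrrsingh.hiveudf.CustomLength';
  select CustomLengthTmp(description) from sample_07 limit 1;
"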
						
					
		
			
				
						
						
						
		
	
					
			
		
	
	
	
	
				
		
	
	
			
    
	
		
		
07-11-2017 02:22 AM
		
					
No problem, @Tomomichi Hirano. Feel free to accept the best answer in this discussion thread so that other users can benefit from it.
						
					
			
    
	
		
		
12-25-2016 08:16 AM
		
					
Problem Description: we often need to read and write underlying files with a user-defined reader and writer. If the custom reader and writer are written in Java or another language that runs on the JVM, we are fine adding them to hive_aux_jars, or we can add them with the add jar option at the session level. But if they are written in a native language and shipped as a *.so file, we will get java.lang.UnsatisfiedLinkError. We can work around this problem by adding the library to hive-env:

1. Open Ambari --> Hive --> Advanced --> Advanced hive-env --> hive-env template.

2. Modify:

{% if sqla_db_used or lib_dir_available %}
      export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:{{jdbc_libs_dir}}"
      export JAVA_LIBRARY_PATH="$JAVA_LIBRARY_PATH:{{jdbc_libs_dir}}"
      {% endif %}
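The post stops at the existing template block, so purely as a hedged sketch: if the native *.so files lived in a hypothetical directory such as /usr/local/custom-native/lib (a placeholder, not from the original post), the hive-env template could additionally export that directory so the HiveServer2 JVM can resolve the library, followed by a restart of the Hive services from Ambari:

# append the directory holding the custom .so files to the library search paths
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/custom-native/lib"
export JAVA_LIBRARY_PATH="$JAVA_LIBRARY_PATH:/usr/local/custom-native/lib"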
 
						
					
		
		
			
				
						
						
						
		
	
					
			
		
	
	
	
	
				
		
	
	
 
        













