Member since: 01-25-2020
6 Posts | 1 Kudos Received | 0 Solutions

07-23-2020 07:39 AM | 1 Kudo

Check the logs for more details; there the error shows as: "Unexpected end of input stream".

Get the HDFS LOCATION for the table by running the command below in Hue or the Hive shell:

  show create table <table-name>;

Check for zero-byte files at that HDFS location and remove them with:

  hdfs dfs -rm -skipTrash $(hdfs dfs -ls -R <hdfs_location> | grep -v "^d" | awk '{if ($5 == 0) print $8}')

Re-running the query then succeeded.
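Since -skipTrash deletes permanently, it can help to split this into a dry run and a delete step. A minimal sketch, assuming a GNU userland and the same <hdfs_location> placeholder as above:

  # Dry run: list the zero-byte files first so they can be reviewed.
  hdfs dfs -ls -R <hdfs_location> | grep -v "^d" | awk '$5 == 0 {print $8}' > zero_byte_files.txt
  cat zero_byte_files.txt

  # After review, delete them in batches; -skipTrash bypasses the trash, so this is irreversible.
  xargs -r -n 100 hdfs dfs -rm -skipTrash < zero_byte_files.txt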
						
					
07-23-2020 07:37 AM

We got the same issue and resolved it as below. The query failed with:

  Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

Checking the logs for more details showed the underlying error: "Unexpected end of input stream".

Get the HDFS LOCATION for the table by running the command below in Hue or the Hive shell:

  show create table <table-name>;

Check for zero-byte files at that location and remove them with:

  hdfs dfs -rm -skipTrash $(hdfs dfs -ls -R <hdfs_location> | grep -v "^d" | awk '{if ($5 == 0) print $8}')

Re-running the query then succeeded.
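The likely root cause is that a zero-byte .gz file has no gzip header or trailer, so the decompressor hits end-of-stream immediately. A quick local illustration with plain gzip, outside Hadoop (the file names here are made up):

  # A valid gzip file decompresses fine; a zero-byte one does not.
  echo hello | gzip > ok.gz
  touch empty.gz
  zcat ok.gz       # prints: hello
  zcat empty.gz    # typically fails with an "unexpected end of file" error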
						
					
07-23-2020 07:28 AM

Run the msck command for the table you want to truncate in the Hive shell:

  hive> use <database-name>;
  hive> msck repair table <table-name>;

If it shows any error, rectify it first. In our case one of the partitions was missing, so we created that partition directory at the HDFS location and re-ran the msck repair command; it then reported no issues.

Now the truncate command runs successfully:

  hive> truncate table <table-name>;

[NOTE: Please update the database and table name as per your requirement.]
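For example, if msck repair complains about a missing partition, recreating its directory and re-running the repair can be done in two commands. A minimal sketch; the warehouse path and the ds=20200722 partition are hypothetical placeholders:

  # Recreate the missing partition directory at the table's HDFS LOCATION,
  # then re-sync the metastore with what is actually on HDFS.
  hdfs dfs -mkdir -p /warehouse/<database-name>.db/<table-name>/ds=20200722
  hive -e "use <database-name>; msck repair table <table-name>;"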
						
					
06-27-2020 07:24 AM

							 "Cannot obtain block length for LocatedBlock" error comes because of file is still in being-written state. Run the fsck command to get more information about the error file.  $ hdfs fsck -blocks /user/bdas/warehouse/queue/input/apollo_log/ds=20200626/hr=21/DN27.Apollo.1593219600494.txt.gz  Connecting to namenode \ugi=hdfs&blocks=1&path=%2Fuser%2Fbdas%2Fwarehouse%2Fqueue%2Finput%2Fapollo_log%2Fds%3D20200626%2Fhr%3D21%2FDN27.Apollo.1593219600494.txt.gz  FSCK started by hdfs (auth:KERBEROS_SSL) from /10.40.29.101 for path /user/bdas/warehouse/queue/input/apollo_log/ds=20200626/hr=21/DN27.Apollo.1593219600494.txt.gz at Sat Jun 27 08:33:46 EDT 2020  Status: HEALTHY   Total size:    0 B (Total open files size: 21599 B)   Total dirs:    0   Total files:   0   Total symlinks:                      0 (Files currently being written: 1)   Total blocks (validated):       0 (Total open file blocks (not validated): 1)     We can run fsck for the full directory to check for all the file's status:  ~]$ hdfs fsck /user/bdas/warehouse/queue/input/apollo_log/ds=20200626/hr=21/ -files -openforwrite  Connecting to namenode \ugi=hdfs&files=1&openforwrite=1&path=%2Fuser%2Fbdas%2Fwarehouse%2Fqueue%2Finput%2Fapollo_log%2Fds%3D20200626%2Fhr%3D21  FSCK started by hdfs (auth:KERBEROS_SSL) from /10.47.27.101 for path /user/bdas/warehouse/queue/input/apollo_log/ds=20200626/hr=21 at Sat Jun 27 08:47:32 EDT 2020  /user/bdas/warehouse/queue/input/apollo_log/ds=20200626/hr=21 <dir>  /user/bdas/warehouse/queue/input/apollo_log/ds=20200626/hr=21/DN27.Apollo.1593219600494.txt.gz 21599 bytes, 1 block(s), OPENFORWRITE:  OK  /user/bdas/warehouse/queue/input/apollo_log/ds=20200626/hr=21/DN27.Apollo.1593220237244.txt.gz 20661944 bytes, 1 block(s):  OK  /user/bdas/warehouse/queue/input/apollo_log/ds=20200626/hr=21/DN27.Apollo.1593220269292.txt.gz 20857646 bytes, 1 block(s):  OK     In above output it is showing only 1-file is having issue. So run the below command to recover its lease.  $ hdfs debug recoverLease -path /user/bdas/warehouse/queue/input/apollo_log/ds=20200626/hr=21/DN27.Apollo.1593219600494.txt.gz -retries 3     Once succeeded, we will verfiy again with fsck command:  $ hdfs fsck /user/bdas/warehouse/queue/input/apollo_log/ds=20200626/hr=21/ -files -openforwrite  Connecting to namenode via ugi=hdfs&files=1&openforwrite=1&path=%2Fuser%2Fbdas%2Fwarehouse%2Fqueue%2Finput%2Fapollo_log%2Fds%3D20200626%2Fhr%3D21  FSCK started by hdfs (auth:KERBEROS_SSL) from /10.40.29.101 for path /user/bdas/warehouse/queue/input/apollo_log/ds=20200626/hr=21 at Sat Jun 27 08:49:09 EDT 2020  /user/bdas/warehouse/queue/input/apollo_log/ds=20200626/hr=21 <dir>  /user/bdas/warehouse/queue/input/apollo_log/ds=20200626/hr=21/DN27.Apollo.1593219600494.txt.gz 3528409 bytes, 1 block(s):  OK     Now the error is resolved. 
						
					
06-26-2020 09:20 AM

Hi,

We are also getting the same error almost every week, for any database. Can we get a permanent solution for this? After waiting for some time it resolves itself, but I want to know the root cause, and if a database or table gets locked, how I can release that lock for the affected DB and table in the MySQL database.
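As a starting point for the lock question, a hedged sketch of where to look, assuming Hive-level locks rather than MySQL-level ones (the metastore database name "metastore" and the MySQL user are placeholders):

  # In the Hive shell, list current locks on the database or table:
  hive -e "SHOW LOCKS DATABASE <database-name>;"
  hive -e "SHOW LOCKS <table-name> EXTENDED;"

  # If the cluster uses the DbTxnManager lock manager, locks are rows in the
  # metastore's HIVE_LOCKS table and can be inspected from MySQL:
  mysql -u hive -p -D metastore -e "SELECT HL_LOCK_EXT_ID, HL_DB, HL_TABLE, HL_LOCK_STATE FROM HIVE_LOCKS;"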
						
					