Member since 
    
	
		
		
		05-16-2016
	
	
	
	
	
	
	
	
	
	
	
	
	
	
			
      
- 270 Posts
- 18 Kudos Received
- 4 Solutions
My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 2258 | 07-23-2016 11:36 AM |
|  | 4108 | 07-23-2016 11:35 AM |
|  | 2122 | 06-05-2016 10:41 AM |
|  | 1513 | 06-05-2016 10:37 AM |
			
    
	
		
		
04-01-2017 06:25 AM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
Our server had to be restarted, so we are trying to start all of the processes. I keep getting this error. Why is that, and how do I fix it? It is urgent, as the NameNode was on the master node and the cluster is now not working. Please help.

```
Failed to start namenode.
java.io.IOException: NameNode is not formatted.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:212)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1063)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:767)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:609)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:670)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:838)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:817)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1538)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1606)
```
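"NameNode is not formatted" usually means the directory configured in `dfs.namenode.name.dir` is empty, typically because it points at the wrong path or at a volume that did not come back up after the restart. Before even considering `hdfs namenode -format` (which would wipe any existing metadata), it is worth checking whether the directory still contains a `current/VERSION` file. A small sketch of that check (the function name and return values are illustrative, not part of any Hadoop API):

```python
import os

def namenode_dir_status(name_dir: str) -> str:
    """Report whether an HDFS NameNode metadata directory looks formatted.

    A formatted NameNode metadata directory contains a 'current'
    subdirectory with a VERSION file; an empty or freshly mounted
    directory does not.
    """
    if not os.path.isdir(name_dir):
        return "missing"       # dir absent: wrong path or unmounted volume
    version_file = os.path.join(name_dir, "current", "VERSION")
    if os.path.isfile(version_file):
        return "formatted"     # metadata present: do NOT re-format
    return "unformatted"       # empty dir: check mounts before formatting
```

If the status is "unformatted" on a cluster that previously worked, the safest next step is to look for the real metadata on another mount rather than formatting.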
						
					
				
			
			
			
			
			
			
			
			
			
		
		
			
				
						
						
						
		
			
	
					
			
		
	
	
	
	
				
		
	
	
- Labels:
  - Apache Hadoop
			
    
	
		
		
03-22-2017 05:22 AM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
@Deepesh: How do I use that in a query scheduled through Oozie? I need something like this in Hive:

IF (lock exists on table) then UNLOCK TABLE, else do nothing

or

UNLOCK TABLE IF LOCK EXISTS tablename

This is because the query is going to be scheduled through Oozie, and I need the INSERT OVERWRITE to always execute and not fail because of any locks. One-liner answers like this barely help; please read the question carefully before answering.
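Hive has no UNLOCK TABLE IF EXISTS statement, so one workaround (a sketch, not something from this thread) is to have a wrapper script run SHOW LOCKS and only issue UNLOCK TABLE when the table actually appears in the output. The parsing below is a deliberately loose substring check, because the exact SHOW LOCKS output format varies across Hive versions:

```python
def needs_unlock(show_locks_output: str, table: str) -> bool:
    """Decide whether to issue UNLOCK TABLE, given raw SHOW LOCKS output.

    SHOW LOCKS <table> prints one line per held lock naming the table;
    an empty result means no lock is held. This substring match is a
    sketch and may need tightening for a specific Hive version.
    """
    return any(table.lower() in line.lower()
               for line in show_locks_output.splitlines())
```

The wrapper would then run `UNLOCK TABLE dbname.tablename` only when this returns True, before submitting the INSERT OVERWRITE.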
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
03-21-2017 12:58 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
I have INSERT OVERWRITE statements scheduled as Hive queries through Oozie. Before running the INSERT OVERWRITE, I want to check whether locks exist on the table, i.e. something like SHOW LOCKS dbname.tablename, but it should return something I can use to decide whether I have to run UNLOCK TABLE dbname.tblname or not. Or do we have something like UNLOCK TABLE IF EXISTS LOCK dbname.tblname? This is really important, as I have been trying to find a way to update my tables in Hive at the scheduled time, but Oozie gets stuck if a user was running a SELECT on the table when my scheduled queries tried to run at the same time. I want to make sure that the query runs at its scheduled time, even if the SELECT may or may not return the updated results.
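One way to script the whole sequence, sketched here under assumptions that are not part of the thread: the `run_hive` callable stands in for however one Hive statement is actually executed (for example by shelling out to beeline), and the INSERT OVERWRITE body is a placeholder. The wrapper checks SHOW LOCKS, unlocks only if a lock is reported, then runs the insert:

```python
from typing import Callable, List

def refresh_table(table: str, run_hive: Callable[[str], str]) -> List[str]:
    """Unlock `table` if SHOW LOCKS reports a lock, then run the insert.

    `run_hive` executes one HiveQL statement and returns its text output;
    in production it might invoke beeline (an assumption in this sketch).
    Returns the list of statements executed, for inspection.
    """
    executed = []
    locks = run_hive(f"SHOW LOCKS {table}")
    # Bare table name (after the db prefix) appearing in the output
    # is taken to mean a lock is held; tighten for your Hive version.
    if table.split(".")[-1].lower() in locks.lower():
        stmt = f"UNLOCK TABLE {table}"
        run_hive(stmt)
        executed.append(stmt)
    stmt = f"INSERT OVERWRITE TABLE {table} SELECT ..."  # placeholder query
    run_hive(stmt)
    executed.append(stmt)
    return executed
```

Making the runner injectable keeps the lock-handling logic testable without a live cluster.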
						
					
				
			
			
			
			
			
			
			
			
			
		
		
			
				
						
						
						
		
			
	
					
			
		
	
	
	
	
				
		
	
	
- Labels:
  - Apache Hive
  - Apache Oozie
			
    
	
		
		
12-14-2016 05:21 AM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
@alex: Yes, but how does that change anything? I am still grouping by ingrn.fab_id and summing ingrn.passed_qty.
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
12-14-2016 04:49 AM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
This is what I have so far. The value I get from SUM is much higher with the join than what I get without the join, and I can't figure out why.

```sql
SELECT SUM(ingrn.passed_qty)
FROM erp.fabric_grn ingrn
LEFT JOIN erp.fabric_outgrn outgrn
  ON UPPER(ingrn.fab_id) = outgrn.out_id
GROUP BY UPPER(ingrn.fab_id)
```

It gives a different value from:

```sql
SELECT SUM(ingrn.passed_qty)
FROM erp.fabric_grn ingrn
GROUP BY UPPER(ingrn.fab_id)
```
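The inflated SUM is the classic join fan-out: if a fab_id matches several rows in fabric_outgrn, the joined result contains one copy of the left row per match, and each copy contributes passed_qty to the SUM again. A tiny stand-alone simulation of that effect (the data is hypothetical, not the actual tables):

```python
# Left table: one row per grn, with a quantity to sum.
grn = [("A", 10), ("A", 5), ("B", 7)]   # (fab_id, passed_qty)
# Right table: "A" appears twice -> fan-out on the join key.
outgrn = ["A", "A"]                     # out_id values

def left_join_sum(left, right):
    """SUM(passed_qty) after a LEFT JOIN: each left row is emitted once
    per matching right row (or once if unmatched), so quantities on
    multiply-matched keys are counted multiple times."""
    total = 0
    for fab_id, qty in left:
        matches = sum(1 for out_id in right if out_id == fab_id)
        total += qty * max(matches, 1)  # LEFT JOIN keeps unmatched rows
    return total

plain_sum = sum(qty for _, qty in grn)   # 22: no join, no duplication
joined_sum = left_join_sum(grn, outgrn)  # 37: 10*2 + 5*2 + 7
```

Aggregating fabric_grn before joining, or joining against a de-duplicated (DISTINCT) projection of fabric_outgrn, removes the duplication.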
						
					
				
			
			
			
			
			
			
			
			
			
		
		
			
				
						
						
						
		
			
	
					
			
		
	
	
	
	
				
		
	
	
- Labels:
  - Apache Hive
			
    
	
		
		
11-04-2016 11:29 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
After adding a new host to the cluster, I get this error. How can I fix this?

```
2016-11-05 04:43:58,923 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1118010305-10.10.10.9-1463739087854 (Datanode Uuid null) service to warehouse.swtched.com/182.18.170.55:8022 beginning handshake with NN
2016-11-05 04:43:59,188 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool BP-1118010305-10.10.10.9-1463739087854 (Datanode Uuid null) service to warehouse.swtched.com/182.18.170.55:8022 Datanode denied communication with namenode because the host is not in the include-list: DatanodeRegistration(35.162.23.31, datanodeUuid=f7b0dcd2-c987-4710-a74d-5e5f57d1147b, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=cluster52;nsid=1790842125;c=0)
	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:934)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:5079)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1156)
	at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:96)
	at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:29184)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
2016-11-05 04:44:02,876 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 15: SIGTERM
2016-11-05 04:44:02,878 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
```
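The key line is "Datanode denied communication with namenode because the host is not in the include-list": when `dfs.hosts` is configured, the NameNode only accepts DataNodes listed in that include file. The usual fix is to add the new host's name or IP to the file and run `hdfs dfsadmin -refreshNodes`. A small sketch for verifying the entry before refreshing (the function is illustrative; read the file configured by `dfs.hosts` on your cluster):

```python
def host_in_include_list(include_file_text: str, host: str) -> bool:
    """Check whether `host` (name or IP) appears in a dfs.hosts include file.

    The include file lists one hostname or IP address per line; blank
    lines and surrounding whitespace are ignored.
    """
    entries = {line.strip() for line in include_file_text.splitlines()}
    entries.discard("")
    return host in entries
```

Note that the registration in the log arrived as 35.162.23.31, so the include file must contain the identity the NameNode actually resolves for the new host, not just the name you expect.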
 
						
					
				
			
			
			
			
			
			
			
			
			
		
		
			
				
						
						
						
		
			
	
					
			
		
	
	
	
	
				
		
	
	
- Labels:
  - Apache Hadoop
			
    
	
		
		
10-17-2016 05:58 AM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
Awesome! It worked perfectly. Thanks 🙂
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
10-16-2016 03:26 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
I can iterate over filterHeader using FOREACH, just as we usually do for a file loaded using PigStorage, right? There should be no difference?
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
10-16-2016 03:23 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
Thanks. Also, where can I find how exactly tail and head work here? It looks a little confusing to me. Any good resources?
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
10-16-2016 03:18 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
Also, I am going to be loading the file through a CSV loader, so fullfile here is fullfile = LOAD 'Path_to_File' USING PigStorage(',') ?
						
					