Member since 12-13-2016

21 Posts
1 Kudos Received
4 Solutions

My Accepted Solutions
| Title | Views | Posted | 
|---|---|---|
|  | 7543 | 08-27-2018 07:31 PM |
|  | 2582 | 08-27-2018 01:12 PM |
|  | 2712 | 08-27-2018 01:11 PM |
|  | 10269 | 01-10-2017 05:12 AM |

10-26-2018 07:40 AM

When I run the DROP DATABASE command within Impala it doesn't remove the corresponding HDFS files.

drop database if exists databasename cascade;

I want to drop a database and remove the corresponding HDFS files. Per the documentation:

DROP DATABASE Statement
Removes a database from the system. The physical operations involve removing the metadata for the database from the metastore, and deleting the corresponding *.db directory from HDFS.

The CASCADE parameter drops the tables within the database first.

I'm running CDH 5.13.3 and CM 5.14.3.
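For reference, here's a minimal sketch of what I'm running and how I'm checking the result; the database name and the default Hive warehouse path (/user/hive/warehouse) are placeholders for my environment:

```
# Drop the database and all of its tables from Impala (databasename is a placeholder)
impala-shell -q "drop database if exists databasename cascade;"

# Check whether the corresponding *.db directory is still present in HDFS
# (assumes the default warehouse location)
hdfs dfs -ls /user/hive/warehouse | grep databasename.db
```

After the drop, the databasename.db directory is still listed.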
						
					
Labels: Cloudera Manager

10-17-2018 07:26 AM

Were you able to resolve this issue? What did you do to fix the problem?
						
					
08-29-2018 06:44 AM

I have a further question. Is there a way to have the BDR job connect to a specific source server?
						
					
08-27-2018 07:31 PM

You're the hero. I pulled diagnostic data on one of the jobs and found a connection refused error when trying to access one of the files flagged with the missing blocks error. I tried connecting to that remote server and can't. Checking others, some I can connect to and others I can't. It's a large inventory of servers to work through.

I really appreciate your help with this. Thank you.
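For anyone hitting the same thing, this is roughly the reachability check I'm working through; the datanode hostnames are placeholders, and 1004 is the secure DataNode transfer port shown in my fsck output:

```
# Quick reachability check from a DR node to the source datanodes
# (hostnames are placeholders; 1004 is the secure DataNode port in my clusters)
for host in sourcedatanode1 sourcedatanode2 sourcedatanode3; do
  nc -vz -w 5 "$host" 1004 || echo "cannot reach $host:1004"
done
```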
						
					
08-27-2018 04:49 PM

Here's the error from the BDR job running on the DR system. Below that I've run an fsck on the file on the source system to show that it does exist on the source and has the same block number listed in the error. I've removed IP addresses and replaced the actual file name with "filename".

The BDR job error:

ERROR /path/filename org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1508298398-ipaddress-1406065203774:blk_2079737512_1100628731148 file=filename
  at org.apache.hadoop.hdfs.DFSInputStream.refetchLocations(DFSInputStream.java:1040)
  at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:1023)
  at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:1002)
  at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:642)
  at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:895)
  at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:954)
  at java.io.DataInputStream.read(DataInputStream.java:149)
  at java.io.BufferedInputStream.read1(BufferedInputStream.java:284)
  at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
  at java.io.FilterInputStream.read(FilterInputStream.java:107)
  at com.cloudera.enterprise.distcp.util.ThrottledInputStream.read(ThrottledInputStream.java:77)
  at com.cloudera.enterprise.distcp.mapred.RetriableFileCopyCommand.readBytes(RetriableFileCopyCommand.java:371)
  at com.cloudera.enterprise.distcp.mapred.RetriableFileCopyCommand.copyToFile(RetriableFileCopyCommand.java:345)
  at com.cloudera.enterprise.distcp.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:161)
  at com.cloudera.enterprise.distcp.util.RetriableCommand.execute(RetriableCommand.java:87)
  at com.cloudera.enterprise.distcp.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:617)
  at com.cloudera.enterprise.distcp.mapred.CopyMapper.map(CopyMapper.java:454)
  at com.cloudera.enterprise.distcp.mapred.CopyMapper.map(CopyMapper.java:69)
  at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:793)
  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
  at org.apache.hadoop

Here's the file on the source system:

hdfs fsck filename -files -blocks -locations
Connecting to namenode via http://prodservername:50070
FSCK started by hdfs (auth:KERBEROS_SSL) from /serveripaddress for path filename at Mon Aug 27 19:39:53 EDT 2018
filename 995352 bytes, 1 block(s):  OK
0. BP-1508298398-ipaddress-1406065203774:blk_2079737512_1100628731148 len=995352 Live_repl=3 [DatanodeInfoWithStorage[ipaddress:1004,DS-00246250-eef8-4c03-8ef7-c898594f960b,DISK], DatanodeInfoWithStorage[ipaddress:1004,DS-297b0420-a2a1-4418-8691-3ef9a374cc51,DISK], DatanodeInfoWithStorage[ipaddress:1004,DS-0ae9f985-a12a-4871-991b-d2e8017c4c4b,DISK]]
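For reference, a direct read of the same file from a DR node should exercise the same block-fetch path as the distcp mapper in the trace above; the source NameNode hostname, its RPC port (assuming the default 8020), and the path are placeholders for my environment:

```
# Run from a node in the DR cluster with a valid Kerberos ticket
# (prodservername, the default 8020 RPC port, and /path/filename are placeholders)
hdfs dfs -cat hdfs://prodservername:8020/path/filename > /dev/null \
  && echo "read OK" || echo "read failed"
```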
						
					
08-27-2018 01:35 PM

Thanks for your reply. The files and blocks reported missing by the BDR job running on the DR site do exist on the source system.
						
					
08-27-2018 01:24 PM

I have a DR site and run replication from prod to the DR site. My BDR jobs are failing with a missing blocks error. The files and blocks that are reported missing are in the source prod system, so I'm not sure why the jobs are failing and not copying them over.
						
					
Labels: Cloudera Manager

08-27-2018 01:11 PM

The fix for this was to upgrade CDH from 5.5.1 to 5.13.3.
						
					
08-22-2018 10:10 AM

What is the best way to stop a BDR job? Does abort do this safely without causing problems? How about in YARN, using the kill option?
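For context, this is the YARN route I'm asking about; the application ID below is a placeholder, and I'm not sure whether killing the underlying distcp application leaves the replication schedule in a clean state:

```
# List running applications to find the distcp job backing the BDR replication
yarn application -list -appStates RUNNING

# Kill it by application ID (the ID below is a placeholder)
yarn application -kill application_1234567890123_0042
```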
						
					
Labels: Cloudera Manager