Member since 07-17-2019
      
Posts: 738
Kudos Received: 433
Solutions: 111

My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 3473 | 08-06-2019 07:09 PM |
|  | 3665 | 07-19-2019 01:57 PM |
|  | 5188 | 02-25-2019 04:47 PM |
|  | 4663 | 10-11-2018 02:47 PM |
|  | 1767 | 09-26-2018 02:49 PM |
			
    
	
		
		
05-24-2016 04:40 AM (1 Kudo)
Increase the value of hbase.client.scanner.timeout.period, and make sure that hbase.rpc.timeout is at least as large as hbase.client.scanner.timeout.period. Most likely, the client took too long rendering the data in your terminal, so the scanner held in the HBase RegionServer's memory was closed to free resources. When the client went to fetch the next batch of records, the RegionServer informed it that the scanner it was trying to use had already been closed.
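As a sketch, that pairing of properties lives in hbase-site.xml; the 120-second values below are illustrative, not recommendations:

```xml
<!-- hbase-site.xml: illustrative values, tune for your workload -->
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>120000</value> <!-- ms the server keeps an idle scanner open -->
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <value>120000</value> <!-- keep at least as large as the scanner timeout -->
</property>
```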
						
					
    
	
		
		
05-23-2016 09:52 PM
It sounds like the initial start of HBase just didn't actually happen successfully, but Ambari didn't notice it right away: it thought the processes started, but when it ran a smoke test to verify that HBase is up and running, the test failed. This error indicates the Master is not actually running, as Ted hints.
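A quick way to confirm that diagnosis is to check on the Master host whether an HMaster process actually exists and what its log says; the log path below is an assumption from a stock HDP layout, so adjust it to your install:

```shell
# Look for a running HMaster JVM; prints a message instead of failing when absent
ps aux | grep '[H]Master' || echo "no HMaster process found"

# Tail the master log for startup errors (path is an assumption)
tail -n 50 /var/log/hbase/hbase-*-master-*.log 2>/dev/null || true
```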
						
					
    
	
		
		
05-19-2016 08:34 PM (1 Kudo)
The crash report:

```
Current thread (0x00007f2e7c01d000):  JavaThread "Unknown thread" [_thread_in_vm, id=42067, stack(0x00007f2e83bc8000,0x00007f2e83cc9000)]
Stack: [0x00007f2e83bc8000,0x00007f2e83cc9000],  sp=0x00007f2e83cc72f0,  free space=1020k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0xaaca9a]  VMError::report_and_die()+0x2ba
V  [libjvm.so+0x4f333b]  report_vm_out_of_memory(char const*, int, unsigned long, VMErrorType, char const*)+0x8b
V  [libjvm.so+0x90e8c3]  os::Linux::commit_memory_impl(char*, unsigned long, bool)+0x103
V  [libjvm.so+0x90ee19]  os::pd_commit_memory(char*, unsigned long, unsigned long, bool)+0x29
V  [libjvm.so+0x90877a]  os::commit_memory(char*, unsigned long, unsigned long, bool)+0x2a
V  [libjvm.so+0xaa8749]  VirtualSpace::expand_by(unsigned long, bool)+0x1c9
V  [libjvm.so+0xaa921e]  VirtualSpace::initialize(ReservedSpace, unsigned long)+0xee
V  [libjvm.so+0x5eda21]  CardGeneration::CardGeneration(ReservedSpace, unsigned long, int, GenRemSet*)+0xf1
V  [libjvm.so+0x4cb3fb]  ConcurrentMarkSweepGeneration::ConcurrentMarkSweepGeneration(ReservedSpace, unsigned long, int, CardTableRS*, bool, FreeBlockDictionary<FreeChunk>::DictionaryChoice)+0x4b
V  [libjvm.so+0x5eead2]  GenerationSpec::init(ReservedSpace, int, GenRemSet*)+0xf2
V  [libjvm.so+0x5dd77e]  GenCollectedHeap::initialize()+0x1ee
V  [libjvm.so+0xa75bab]  Universe::initialize_heap()+0xfb
V  [libjvm.so+0xa75f1e]  universe_init()+0x3e
V  [libjvm.so+0x62f665]  init_globals()+0x65
V  [libjvm.so+0xa5a12e]  Threads::create_vm(JavaVMInitArgs*, bool*)+0x23e
V  [libjvm.so+0x6c3274]  JNI_CreateJavaVM+0x74
C  [libjli.so+0x745e]  JavaMain+0x9e
C  [libpthread.so.0+0x7aa1]  start_thread+0xd1
```

is telling us that the process failed while it was still trying to create the JVM. The report also lists the JVM arguments:

```
jvm_args: -Dproc_regionserver -XX:OnOutOfMemoryError=kill -9 %p -Dhdp.version=2.4.2.0-258 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log -Djava.io.tmpdir=/tmp -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc.log-201605191913 -Xmn1024m -XX:CMSInitiatingOccupancyFraction=70 -Xms16384m -Xmx16384m -Dhbase.log.dir=/var/log/hbase -Dhbase.log.file=hbase-root-regionserver-ip-172-31-19-88.log -Dhbase.home.dir=/usr/hdp/current/hbase-regionserver/bin/.. -Dhbase.id.str=root -Dhbase.root.logger=DEBUG,RFA -Djava.library.path=:/usr/hdp/2.4.2.0-258/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.2.0-258/hadoop/lib/native -Dhbase.security.logger=INFO,RFAS
```

You set the young generation size to 1 GB and the min and max heap size to 16 GB; these seem fine. I apparently misread your free memory earlier: you have 40 GB free and 20 GB in caches, so you should have plenty of free memory for HBase. Can you also please share the output of `ulimit -a` as the hbase user, along with your hbase-site.xml?
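For reference, the limits check being requested can be run like this on the RegionServer host (run it as the user that launches the RegionServer, typically the `hbase` service account); the memory check alongside it assumes a Linux /proc filesystem:

```shell
# Show resource limits for the current user
ulimit -a

# Show total, free, and cached memory
grep -E 'MemTotal|MemFree|Cached' /proc/meminfo
```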
						
					
    
	
		
		
05-19-2016 06:56 PM
Also, can you share that JVM crash report? It probably has some clues about what was attempting that allocation.
						
					
    
	
		
		
05-19-2016 05:49 PM (1 Kudo)
That error message says the JVM failed to allocate ~20 GB of memory via the Linux mmap system call. That is very close to the total unused memory on your system, which is likely why the call failed: the operating system couldn't actually provide that much memory to your process. Try setting -Xmx16G in the HBASE_REGIONSERVER_OPTS variable in hbase-env.sh instead of 20G.
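In hbase-env.sh that change looks something like the following sketch; the matching -Xms value mirrors the min=max heap settings seen in your crash report, and anything else already in the variable is left in place:

```shell
# hbase-env.sh: cap the RegionServer heap at 16 GB instead of 20 GB
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xms16g -Xmx16g"
```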
						
					
    
	
		
		
05-18-2016 03:23 PM (1 Kudo)
Have you read the documentation on HBaseStorage from the Apache Pig website? http://pig.apache.org/docs/r0.15.0/func.html#HBaseStorage You should be able to simply not specify the column family in the columns definition (e.g. a colon with no text before it). If that doesn't work, the parsing logic probably needs to be updated.
						
					
    
	
		
		
05-16-2016 07:38 PM
What do you mean by "But when I try browsing on that port locally"? This isn't a service you can access with your web browser; it requires a Thrift client speaking a specific schema to interact with it.
						
					
    
	
		
		
05-16-2016 06:45 PM
							 Yes, the default port is 9090 (the --help option would show you this too). I'm not sure what ports NiFi uses. 
						
					
    
	
		
		
05-16-2016 06:36 PM (4 Kudos)
Use the -p option:

```
hbase thrift -p 12345
```
						
					
    
	
		
		
05-16-2016 01:18 PM (1 Kudo)
Please be aware that removing the write-ahead logs causes data loss. Removing them is almost never a good idea (unless there is no data in HBase).
						
					