Member since 03-14-2016
4721 Posts · 1111 Kudos Received · 874 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2493 | 04-27-2020 03:48 AM |
|  | 4963 | 04-26-2020 06:18 PM |
|  | 4044 | 04-26-2020 06:05 PM |
|  | 3279 | 04-13-2020 08:53 PM |
|  | 4994 | 03-31-2020 02:10 AM |
02-12-2017 04:14 AM
@ilhyung cho Good to know that it works now. It would be great if you could mark this thread's answer as accepted (by clicking the accept link).
02-12-2017 03:09 AM · 1 Kudo
@ilhyung cho In your command you are not using the correct quotation marks.

Incorrect:

```
javac -cp 'hadoop classpath' HadoopDFSFileReadWrite.java
```

Correct:

```
javac -cp `hadoop classpath` HadoopDFSFileReadWrite.java
```

Everything you type between backticks is evaluated (executed) by the shell before the main command runs, whereas nothing inside single quotation marks is evaluated. For example:

```
[root@sandbox ~]# echo 'hadoop classpath'
hadoop classpath
[root@sandbox ~]# echo `hadoop classpath`
/usr/hdp/2.5.0.0-1245/hadoop/conf:/usr/hdp/2.5.0.0-1245/hadoop/lib/*:/usr/hdp/2.5.0.0-1245/hadoop/.//*:/usr/hdp/2.5.0.0-1245/hadoop-hdfs/./:/usr/hdp/2.5.0.0-1245/hadoop-hdfs/lib/*:/usr/hdp/2.5.0.0-1245/hadoop-hdfs/.//*:/usr/hdp/2.5.0.0-1245/hadoop-yarn/lib/*:/usr/hdp/2.5.0.0-1245/hadoop-yarn/.//*:/usr/hdp/2.5.0.0-1245/hadoop-mapreduce/lib/*:/usr/hdp/2.5.0.0-1245/hadoop-mapreduce/.//*::jdbc-mysql.jar:mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.37.jar:mysql-connector-java.jar:/usr/hdp/2.5.0.0-1245/tez/*:/usr/hdp/2.5.0.0-1245/tez/lib/*:/usr/hdp/2.5.0.0-1245/tez/conf
```
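As a side note (my addition, not from the original reply): the POSIX `$(...)` form of command substitution behaves the same as backticks and is usually preferred; a minimal sketch:

```
# $(...) runs `hadoop classpath` first and splices its output into the
# javac command line; quoting also keeps the shell from glob-expanding
# the lib/* entries in the returned classpath
javac -cp "$(hadoop classpath)" HadoopDFSFileReadWrite.java
```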
02-11-2017 06:00 AM
@Subramanian Santhanam Additionally, it looks like your browser is blocking JavaScript from executing; you should enable JavaScript execution in the browser. Please refer to the following link to check whether your browser has JavaScript enabled, and if it is disabled, follow the instructions at the same link to fix it: http://enable-javascript.com/
02-11-2017 05:51 AM
@shyam gurram Try the following steps (which are a kind of hack):

1. Download https://github.com/OpenTSDB/opentsdb/releases/download/v2.3.0/opentsdb-2.3.0.tar.gz
2. Extract it:

```
tar xvzf opentsdb-2.3.0.tar.gz
```

3. Now change directory to "opentsdb-2.3.0". The hack is then to copy the "third_party" directory into the "build" directory yourself, as in the consolidated sketch below:

```
# cd opentsdb-2.3.0
# mkdir build
# cp -r third_party ./build
# ./build.sh
```
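For convenience, here is the whole sequence as one copy-pasteable block (my addition; it assumes wget is available for the download step):

```
# fetch, unpack, seed build/ with third_party, then build
wget https://github.com/OpenTSDB/opentsdb/releases/download/v2.3.0/opentsdb-2.3.0.tar.gz
tar xvzf opentsdb-2.3.0.tar.gz
cd opentsdb-2.3.0
mkdir build              # build.sh expects third_party inside build/
cp -r third_party ./build
./build.sh
```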
02-10-2017 11:16 AM · 1 Kudo
@Alexander E As you mentioned, ZooKeeper, the NameNode, the DataNode, and the NodeManager on Node0 are up, and the nodes can reach each other.

- But is the NameNode healthy? I mean, do you see any errors in the NameNode log?
- Sometimes, even though the NameNode process is running, it has run out of memory (or hit OS resource limits, like too many open sockets, etc.) and hence is not able to respond. So it is better to check the NameNode log; see the sketch below.
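A quick way to scan for the two failure modes mentioned above (my addition; the log path is an assumption for a typical HDP layout, adjust it to your installation):

```
# scan the NameNode log for memory exhaustion and for socket /
# file-descriptor exhaustion; the path is hypothetical
grep -iE "OutOfMemoryError|Too many open files" \
    /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log | tail -n 20
```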
02-10-2017 09:35 AM
@Huahua Wei Ideally, "MaxMetaspaceSize" has no upper limit. Please see http://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/considerations.html :

"The amount of native memory that can be used for class metadata is by default unlimited. Use the option MaxMetaspaceSize to put an upper limit on the amount of native memory used for class metadata."

Regarding the tuning parameters, something you should try is the following:

```
ams-hbase-env::hbase_master_heapsize        1152 MB ===>> 8192 MB
ams-hbase-env::hbase_master_maxperm_size     128 MB ===>> 128 MB (or 256 MB)
ams-hbase-env::hbase_regionserver_heapsize   768 MB ===>> 8192 MB
ams-hbase-env::regionserver_xmn_size         128 MB ===>> 1280 MB to 1536 MB
```

In JDK 1.8, PermGen space is replaced with Metaspace. It is always better to set "MaxMetaspaceSize", so that if there is a classloader leak the Metaspace cannot grow beyond the MaxMetaspaceSize boundary; otherwise a leak can cause huge system memory utilization. Also disable (exclude) the HBase per-region metrics to avoid data flooding.
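For illustration (my addition, not an ams-hbase-env property from the thread; HBASE_MASTER_OPTS is the usual hbase-env.sh hook), capping Metaspace comes down to two JVM flags:

```
# sketch: cap class-metadata memory so a classloader leak fails fast
# (OutOfMemoryError: Metaspace) instead of consuming native memory unbounded
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=256m"
```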
02-10-2017 08:16 AM
@Huahua Wei Yes, it is right there in your output:

```
MetaspaceSize            = 21807104 (20.796875MB)
CompressedClassSpaceSize = 1073741824 (1024.0MB)
MaxMetaspaceSize         = 17592186044415 MB
```

Metaspace is not displayed the way "PermGen" used to be in the Java 7 (and earlier) jmap output, but you can use a profiler to monitor it. Alternatively, enable garbage collection logging on your JVM; the GC log also shows the current Metaspace usage. Example GC log:

```
[GC (Allocation Failure)
  1.055: [DefNew: 861K->64K(960K), 0.0008510 secs]
  1.056: [Tenured: 1226K->1279K(1280K), 0.0009817 secs]
  1303K->1290K(2240K),
  [Metaspace: 44K->44K(4480K)],     ------> NOTICE
  0.0019995 secs]
```
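These are the Java 8 flags that produce a log like the one above (my addition; the log path is an assumption):

```
# sketch: enable GC logging; each GC record then includes a
# [Metaspace: used->used(reserved)] line as highlighted above
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -verbose:gc -XX:+PrintGCDetails \
  -XX:+PrintGCTimeStamps -Xloggc:/var/log/ams-hbase/gc.log"
```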
02-10-2017 03:52 AM
@Huahua Wei Good to know that you are no longer getting the alert frequently. However, I would like to point out one eye-catching setting you have made:

```
ams-hbase-env::hbase_master_maxperm_size 128 MB ===>> 8192 MB   (too high a value)
```

PermGen stands for the memory allocated to the permanent generation: interned Java objects live there, such as Strings created from literals or with the String.intern() method, as does the data for classes loaded into memory. Basically, class definitions are loaded in this area of the JVM. 8192 MB is a very high value; the AMS HBase master does not consume that much memory, so you can reduce that value to at least one-eighth of that 🙂

If you want to check how much PermGen is utilized at any point in time, you can do the following (see the PID sketch after the output):

```
# su - ams
# ps -ef | grep ^ams | grep HMaster
# $JAVA_HOME/bin/jmap -heap $PID_HMASTER
```

OUTPUT (example):

```
PS Perm Generation
   capacity = 57671680 (55.0MB)
   used     = 41699008 (39.76727294921875MB)
   free     = 15972672 (15.23272705078125MB)
   72.30413263494319% used
```

NOTE: From JDK 1.8 onwards, the "PermGen" space is replaced with "Metaspace".
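$PID_HMASTER above is a placeholder; a minimal sketch to populate it (my addition; the grep -v grep keeps the grep process itself out of the match):

```
# capture the HMaster PID into the placeholder variable used above
PID_HMASTER=$(ps -ef | grep '^ams' | grep HMaster | grep -v grep | awk '{print $2}')
"$JAVA_HOME"/bin/jmap -heap "$PID_HMASTER"
```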
02-10-2017 03:37 AM
@Huahua Wei That is not a trap, actually, but designed behaviour: once you move the AMS collector to a new host, the other services need to be restarted so that they send their sink data to the correct Metrics Collector host. The document also says so in step 6:

"For every service, use Ambari Web > Service Actions > Restart All to start sending metrics to the new collector."

Because most of the services send data to AMS, they need to be made aware of this config change, and hence need to be restarted.
02-09-2017 01:36 PM
@Huahua Wei Please first disable auto-start for AMS by commenting out the following lines in the file "/etc/ambari-server/conf/ambari.properties", then restart the Ambari server (a sketch follows below):

```
recovery.type=AUTO_START
recovery.enabled_components=METRICS_COLLECTOR
```

The above will help us understand why AMS went down (a memory issue, overload, etc.).

Also, please try to disable (exclude) the HBase per-region metrics to avoid data flooding. That can be done by explicitly adding the following lines to the end of the metrics configuration file described in the link below:

```
*.source.filter.class=org.apache.hadoop.metrics2.filter.GlobFilter
hbase.*.source.filter.exclude=*Regions*
```

For more information, please refer to https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.0.0/bk_ambari_reference_guide/content/_enabling_hbase_region_and_table_metrics.html
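A hedged sketch of the first step (the sed patterns are my addition; the property names are quoted verbatim from above, and ambari-server restart is the standard CLI command):

```
# comment out the two AMS auto-recovery lines, then restart Ambari server
sed -i -e 's/^recovery.type=AUTO_START/#&/' \
       -e 's/^recovery.enabled_components=METRICS_COLLECTOR/#&/' \
       /etc/ambari-server/conf/ambari.properties
ambari-server restart
```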