Member since: 09-12-2016

39 Posts | 45 Kudos Received | 4 Solutions

        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3134 | 09-20-2016 12:17 PM |
|  | 20602 | 09-19-2016 11:18 AM |
|  | 2708 | 09-15-2016 09:54 AM |
|  | 4668 | 09-15-2016 07:39 AM |

09-28-2016 07:24 AM | 1 Kudo

@Muthyalapaa, you can follow this link for tuning YARN: http://crazyadmins.com/tag/tuning-yarn-to-get-maximum-performance/

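If you want a quick way to look at (or adjust) the most common YARN memory knobs from the command line before digging into the article, here is a minimal sketch using Ambari's configs.sh helper; ADMIN_PASSWORD, CLUSTER_NAME and the 8192/1024 MB values are placeholders, not tuning recommendations:

    # show the current yarn-site configuration
    /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p ADMIN_PASSWORD -port 8080 \
      get localhost CLUSTER_NAME yarn-site

    # example: memory available to containers on each NodeManager, plus the
    # maximum/minimum size of a single container (values are illustrative only)
    /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p ADMIN_PASSWORD -port 8080 \
      set localhost CLUSTER_NAME yarn-site yarn.nodemanager.resource.memory-mb 8192
    /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p ADMIN_PASSWORD -port 8080 \
      set localhost CLUSTER_NAME yarn-site yarn.scheduler.maximum-allocation-mb 8192
    /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p ADMIN_PASSWORD -port 8080 \
      set localhost CLUSTER_NAME yarn-site yarn.scheduler.minimum-allocation-mb 1024

YARN needs to be restarted from Ambari after a change like this for the new values to take effect.
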
09-23-2016 10:13 AM | 1 Kudo

@Jasper, you can use:

    docker run -v hadoop:/hadoop --memory="8g" --name sandbox --hostname "sandbox.hortonworks.com" --privileged -d \

or:

    docker run -v hadoop:/hadoop -m 8g --name sandbox --hostname "sandbox.hortonworks.com" --privileged -d \

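If you want to confirm the limit actually took effect on the running container, a quick check (assuming the container is named sandbox as in the commands above):

    docker inspect --format '{{.HostConfig.Memory}}' sandbox   # limit in bytes; 8g shows up as 8589934592
    docker stats --no-stream sandbox                           # one-off snapshot of usage against that limit
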
09-23-2016 10:11 AM | 1 Kudo

@Akhil, OpenJDK 64-Bit Server VM (Java 1.7.0_101) is used in HDP-2.5.

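If you want to double-check what a given node is actually running, the usual check is below; the output shown is only an illustration of the expected format, not a captured log:

    java -version
    # java version "1.7.0_101"
    # OpenJDK Runtime Environment ...
    # OpenJDK 64-Bit Server VM (build ..., mixed mode)
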
09-20-2016 12:58 PM

You might have added multiple repositories in the Advanced repo section of step 4 (Select Stack). There you have to add only the single repo for the stack you are going to install with Ambari.

09-20-2016 12:55 PM

Are you using SLES 12 for the installation of HDP-2.5, or a different OS?

09-20-2016 12:40 PM | 2 Kudos

@Dheeraj, we can't run a snapshot in multiple iterations, but we can use CopyTable to copy data from one timestamp to another: http://hbase.apache.org/0.94/book/ops_mgt.html#copytable

CopyTable is a utility that can copy part of, or all of, a table, either to the same cluster or to another cluster. The usage is as follows:

    $ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable [--starttime=X] [--endtime=Y] [--new.name=NEW] [--peer.adr=ADR] tablename

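As a concrete sketch, copying only the cells written inside a given time window to another cluster could look like the following; the table name, epoch-millisecond timestamps and ZooKeeper quorum are made-up placeholders:

    # copy cells of 'mytable' with timestamps in [starttime, endtime) to the peer cluster
    $ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
        --starttime=1474329600000 --endtime=1474416000000 \
        --peer.adr=dest-zk1,dest-zk2,dest-zk3:2181:/hbase \
        mytable

Running this repeatedly with adjacent time windows gives the iterative, timestamp-to-timestamp copy the question was asking about.
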
09-20-2016 12:17 PM | 2 Kudos

@Balkrishna, the problem is with special characters in the KAFKA service metrics file. We use this file as part of the stack_advisor calculations for the AMS split points. The following grep for non-ASCII characters reveals the problem:

    grep --color='auto' -P -n "[\x80-\xFF]" /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/package/files/service-metrics/KAFKA.txt

This will show the non-ASCII characters present in the file. Output:

    43:��kafka.network.RequestMetrics.RequestsPerSec.request.OffsetFetch.count��
    45:��kafka.network.RequestMetrics.RequestsPerSec.request.OffsetCommit.count
    47:kafka.network.RequestMetrics.RequestsPerSec.request.LeaderAndIsr.1MinuteRate��

Use /var/lib/ambari-server/resources/scripts/configs.sh to get and modify values on the Ambari server:

    /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p ADMIN_PASSWORD -port 8080 get localhost CLUSTER_NAME ams-site

Check the values of "timeline.metrics.cluster.aggregate.splitpoints" and "timeline.metrics.host.aggregate.splitpoints" and look for special non-ASCII characters, for example:

    "dfs.datanode.ReplaceBlockOpAvgTime,kafka.network.RequestMetrics.RequestsPerSec.request.JoinGroup.1MinuteRate ,master.Master.ProcessCallTime_num_ops,regionserver.Server.blockCacheEvictionCount"

Here, after 1MinuteRate there is a space, which will be shown as a special character in the browser through an API call:

    http://AMBARI_SERVER_HOSTS:8080/api/v1/clusters/CLUSTER_NAME/configurations?type=ams-site

Take the latest tag from the bottom of that page and open it in the browser; you will see the special character:

    "timeline.metrics.cluster.aggregate.splitpoints":"dfs.datanode.ReplaceBlockOpAvgTime,kafka.network.RequestMetrics.RequestsPerSec.request.JoinGroup.1MinuteRate
    ,master.Master.ProcessCallTime_num_ops,regionserver.Server.blockCacheEvictionCount"

To resolve this issue, set the property without the special character:

    /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p ADMIN_PASSWORD -port 8080 set localhost CLUSTER_NAME ams-site timeline.metrics.cluster.aggregate.splitpoints dfs.datanode.ReplaceBlockOpAvgTime,kafka.network.RequestMetrics.RequestsPerSec.request.JoinGroup.1MinuteRate,master.Master.ProcessCallTime_num_ops,regionserver.Server.blockCacheEvictionCount

To change the second property parameter, use:

    /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p ADMIN_PASSWORD -port 8080 get localhost CLUSTER_NAME ams-site

    /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p ADMIN_PASSWORD -port 8080 set localhost CLUSTER_NAME ams-site timeline.metrics.host.aggregate.splitpoints EventTakeSuccessCount,cpu_idle,dfs.FSNamesystem.ExcessBlocks,dfs.datanode.ReadBlockOpNumOps,disk_total,jvm.JvmMetrics.LogError,kafka.controller.ControllerStats.LeaderElectionRateAndTimeMs.99percentile,kafka.network.RequestMetrics.RequestsPerSec.request.FetchFollower.5MinuteRate,kafka.network.RequestMetrics.RequestsPerSec.request.UpdateMetadata.1MinuteRate,kafka.server.BrokerTopicMetrics.FailedFetchRequestsPerSec.meanRate,master.AssignmentManger.ritCount,master.FileSystem.MetaHlogSplitTime_95th_percentile,mem_shared,proc_total,regionserver.Server.Append_median,regionserver.Server.Replay_95th_percentile,regionserver.Server.totalRequestCount,rpcdetailed.rpcdetailed.GetBlockLocationsAvgTime,write_bps

Finally, restart the Ambari Metrics service.

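If you prefer to do that final restart from the command line rather than the Ambari UI, here is a rough sketch using the standard Ambari services REST API (AMBARI_SERVER_HOST, ADMIN_PASSWORD and CLUSTER_NAME are the same placeholders as above):

    # stop the Ambari Metrics service...
    curl -u admin:ADMIN_PASSWORD -H 'X-Requested-By: ambari' -X PUT \
      -d '{"RequestInfo":{"context":"Stop AMS"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
      http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/AMBARI_METRICS

    # ...then start it again
    curl -u admin:ADMIN_PASSWORD -H 'X-Requested-By: ambari' -X PUT \
      -d '{"RequestInfo":{"context":"Start AMS"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
      http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/AMBARI_METRICS
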
09-20-2016 11:46 AM

Then snapshot the best among them.

09-20-2016 10:31 AM | 2 Kudos

I am getting an error in HBase while starting it in HDP. I am using HDP-2.3.2.

    2016-09-17 04:47:58,238 FATAL [master:hb-qa:60000] master.HMaster: Master server abort: loaded coprocessors are: []
    2016-09-17 04:47:58,239 FATAL [master:hb-qa:60000] master.HMaster: Unhandled exception. Starting shutdown.
    org.apache.hadoop.hbase.TableExistsException: hbase:namespace
        at org.apache.hadoop.hbase.master.handler.CreateTableHandler.prepare(CreateTableHandler.java:133)
        at org.apache.hadoop.hbase.master.TableNamespaceManager.createNamespaceTable(TableNamespaceManager.java:232)
        at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:86)
        at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1046)
        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:925)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:605)
        at java.lang.Thread.run(Thread.java:745)

Can someone help me with this?

Labels: Apache HBase