Member since 09-24-2015

816 Posts · 488 Kudos Received · 189 Solutions

        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3103 | 12-25-2018 10:42 PM |
| | 13984 | 10-09-2018 03:52 AM |
| | 4683 | 02-23-2018 11:46 PM |
| | 2401 | 09-02-2017 01:49 AM |
| | 2822 | 06-21-2017 12:06 AM |
			
    
	
		
		
**09-02-2017 01:49 AM** · 1 Kudo
Hi @Aishwarya Dixit, `pe`, or PerformanceEvaluation, is a MapReduce-based tool to test reads and writes to HBase. `nclients` means that 10*nclients mappers will be started to run the supplied `pe` command. Example:

```
hbase pe randomWrite 2
...
2017-09-02 01:31:17,681 INFO  [main] mapreduce.JobSubmitter: number of splits:20
```

This starts an MR job with 20 mappers and 1 reducer. So you can start with a small number like 1-3 to make sure HBase works as expected, and then increase it to roughly the maximum number of mappers you can run on your cluster divided by 10. You can of course use a larger number, but then the mappers will run in multiple "waves".
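The sizing rule above can be sketched as a one-liner; `max_mappers=200` is an assumed example figure for a cluster, not taken from the original post:

```shell
# 10*nclients mappers are launched, so pick nclients = max concurrent mappers / 10.
max_mappers=200                      # assumed cluster capacity for illustration
nclients=$(( max_mappers / 10 ))
echo "hbase pe randomWrite $nclients"   # prints: hbase pe randomWrite 20
```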
						
					
			
    
	
		
		
**08-23-2017 12:48 AM**
The same problem here @Dongjoon Hyun: the cluster is not connected to the Internet, and browsing http://repo.hortonworks.com/content/groups/public/com/hortonworks/spark/spark-llap-assembly_2.11/1.1.3-2.1/ returns no jars, only the pom for Maven.
						
					
			
    
	
		
		
**06-21-2017 01:11 AM**
You will need to tell Ambari about hostname changes. Try `ambari-server update-host-names` (details in the Ambari docs). Without the update, other services won't work well either; for example, HBase will most likely list twice as many RegionServers, etc.
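A hedged sketch of what the rename input looks like; the cluster and host names below are hypothetical, so check your Ambari version's documentation for the exact format. Stop the Ambari server first, then pass a JSON file mapping old FQDNs to new ones:

```
# host_names_changes.json (hypothetical cluster and host names)
{
  "c1" : {
    "old-host.example.com" : "new-host.example.com"
  }
}
```

Then run `ambari-server update-host-names host_names_changes.json` and start the server again.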
						
					
			
    
	
		
		
**06-21-2017 12:06 AM**
You either use the public repo pointing to public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.1.0 (that is, ambari.repo as you downloaded it, without any changes), or you download the tar file, untar it, and change ambari.repo so that `baseurl` points to http://your-apache-server/ambari/centos6 and `gpgkey` points to http://your-apache-server/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins. Either should work.
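For reference, a minimal sketch of the edited local-mirror ambari.repo; `your-apache-server` is a placeholder from the post, and the section name and exact fields may differ slightly by Ambari version:

```
[ambari-2.5.1.0]
name=ambari Version - ambari-2.5.1.0
baseurl=http://your-apache-server/ambari/centos6
gpgcheck=1
gpgkey=http://your-apache-server/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
```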
						
					
			
    
	
		
		
**06-13-2017 05:05 AM**
This works after replacing "global" with "cluster-env"; quotes around the new value are not required. For example: `./configs.sh set h10.example.com c1 cluster-env ignore_groupsusers_create true`
						
					
			
    
	
		
		
**06-01-2017 09:34 AM**
`auto.create.topics.enable=true` means that Kafka will create a topic automatically when you send messages to a non-existing topic, so in production you usually set it to false. It doesn't mean that your Kafka instance won't store any topics; and even if you could achieve that somehow, the broker would be running in vain, wasting resources. If you want automation, I'd go with rsync. You can call it a primitive solution, but it works great.
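For a production broker, the relevant server.properties fragment would be just the following line; the rest of the broker configuration is unchanged, and topics are then created explicitly (e.g. with the kafka-topics tool):

```
# Disable implicit topic creation; create topics explicitly instead.
auto.create.topics.enable=false
```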
						
					
			
    
	
		
		
**06-01-2017 08:47 AM** · 1 Kudo
Well, no. The best you can do is to install Kafka on your node (`yum install kafka`) and copy the /etc/kafka/conf files from one of your brokers. Then, whenever you change your Kafka configuration, you need to update the conf files on your node. If that doesn't happen often you can do it manually, or you can configure rsync to do it for you.
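A hedged sketch of the rsync automation; the paths follow the post, while `broker1.example.com` and the hourly schedule are assumptions to adapt:

```
# crontab entry on the client node: pull Kafka conf from a broker every hour
0 * * * * rsync -az broker1.example.com:/etc/kafka/conf/ /etc/kafka/conf/
```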
						
					
			
    
	
		
		
**05-31-2017 08:21 AM**
This worked for me as well (when replacing Spark-1 with a private build based on Apache Spark).
						
					
			
    
	
		
		
**05-21-2017 11:11 PM** · 1 Kudo
Either one will work, but on a long-running cluster where you rarely restart YARN, set it to the FQDN of rm1, and also make rm1 the active RM. That's because the discovery of the active RM is done in sequence, rm1 -> rm2.
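For context, a sketch of the relevant yarn-site.xml properties in an RM HA setup; the rm1/rm2 ids follow the post, and the hostnames are placeholders:

```
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>rm1-host.example.com</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>rm2-host.example.com</value>
</property>
```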
						
					
			
    
	
		
		
**05-19-2017 02:02 PM** · 2 Kudos
Hi @james.jones The answer is yes to both your questions. Regarding the Spark user and group, in the "spark-env" block of "configurations" you can set exactly what you said:

```
"spark_user" : "svcspark",
"spark_group" : "svcspark"
```

and yes, Spark will run as svcspark. Regarding part 2, those settings can be provided in the "cluster-env" block. The property names and defaults are:

```
"ignore_groupsusers_create" : "false",
"override_uid" : "true",
"sysprep_skip_create_users_and_groups" : "false"
```

The best way to get familiar with these and other "obscure" properties is to export a blueprint from an existing cluster and explore cluster-env and the other config blocks. HTH.
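To export a blueprint from an existing cluster, the Ambari REST call looks roughly like this; `admin:admin`, `ambari-host`, and the cluster name `c1` are placeholders for your environment:

```
curl -u admin:admin -H 'X-Requested-By: ambari' \
  'http://ambari-host:8080/api/v1/clusters/c1?format=blueprint'
```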
						
					