Member since 05-03-2016
18 Posts | 5 Kudos Received | 1 Solution

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3619 | 08-31-2016 01:50 PM |
06-23-2017 01:17 PM
We had Kerberized the HDP cluster and the Storm UI was not opening, so we needed to troubleshoot the Storm UI issue. To enable DEBUG for the Storm UI, follow these steps:

In Ambari > Storm > Configs, in the Storm UI Server section, change:

ui.childopts (original) = -Xmx768m _JAAS_PLACEHOLDER
ui.childopts = -Xmx768m _JAAS_PLACEHOLDER -Dsun.security.krb5.debug=true

We wanted to troubleshoot ZooKeeper as well. To enable DEBUG for ZooKeeper, follow these steps:

In Ambari > ZooKeeper > Advanced zookeeper-log4j, change:

#log4j.rootLogger=INFO,CONSOLE                 <-- comment this line
log4j.rootLogger=DEBUG,CONSOLE,ROLLINGFILE     <-- uncomment this line

The log file is generated at /home/zookeeper/zookeeper.log by default if the property below is left at its default value:

log4j.appender.ROLLINGFILE.File=zookeeper.log
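The comment/uncomment edit above can be scripted. A minimal sketch, run against a local scratch copy of the log4j properties rather than the real Ambari-managed file (the two rootLogger lines are taken from the post; everything else here is a stand-in):

```shell
# Make a scratch copy of the two relevant zookeeper-log4j lines
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
log4j.rootLogger=INFO,CONSOLE
#log4j.rootLogger=DEBUG,CONSOLE,ROLLINGFILE
EOF

# Comment the INFO line and uncomment the DEBUG line, as described above
sed -i \
  -e 's/^log4j\.rootLogger=INFO,CONSOLE$/#&/' \
  -e 's/^#log4j\.rootLogger=DEBUG,CONSOLE,ROLLINGFILE$/log4j.rootLogger=DEBUG,CONSOLE,ROLLINGFILE/' \
  "$cfg"

cat "$cfg"
```

In a real cluster the edit should still go through Ambari so it survives config pushes; this only illustrates the text transformation.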
10-26-2018 09:23 AM
@Abhilash Chandrasekharan, were you able to enable HA for the MySQL database? If yes, could you please help us by posting the steps you followed?
11-03-2016 03:22 PM

1 Kudo
The Ranger KMS logs are present at:

cd /usr/hdp/current/ranger-kms/ews
lrwxrwxrwx 1 kms kms 19 Sep 23 20:33 logs -> /var/log/ranger/kms

We would like to change the Ranger KMS log location from /var/log/ranger/kms to /hadoop/log/ranger/kms.

mkdir /hadoop/log/ranger/kms
chmod 755 /hadoop/log/ranger/kms
chown kms:kms /hadoop/log/ranger/kms

Remove the symbolic link for logs, then edit the ranger-kms script:

cd /usr/hdp/current/ranger-kms/
vi ranger-kms

Search for the word "TOMCAT_LOG_DIR" and replace its value with the new location:

TOMCAT_LOG_DIR=/hadoop/log/ranger/kms

In the Ambari web UI, update the log directory for Ranger KMS (the kms_log_dir property), then restart KMS.
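The steps above can be sketched end to end. This runs in a scratch directory rather than against /usr/hdp (the directory layout and the chown to kms:kms are stand-ins for the real paths in the post):

```shell
# Scratch stand-in for the ranger-kms install tree
root=$(mktemp -d)
mkdir -p "$root/var/log/ranger/kms" "$root/ews"
ln -s "$root/var/log/ranger/kms" "$root/ews/logs"
printf 'TOMCAT_LOG_DIR=/var/log/ranger/kms\n' > "$root/ranger-kms"

# 1. Create the new log directory with the right permissions
mkdir -p "$root/hadoop/log/ranger/kms"
chmod 755 "$root/hadoop/log/ranger/kms"

# 2. Remove the old logs symlink
rm "$root/ews/logs"

# 3. Point TOMCAT_LOG_DIR in the ranger-kms script at the new location
sed -i 's|^TOMCAT_LOG_DIR=.*|TOMCAT_LOG_DIR=/hadoop/log/ranger/kms|' "$root/ranger-kms"

grep TOMCAT_LOG_DIR "$root/ranger-kms"
```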
09-30-2016 05:00 PM
					
Not sure of your exact question, but typically it is a good idea to compress the output of the map step in MapReduce jobs. This is because that data is written to disk and then sent across the cluster to the reducers (the shuffle), and the overhead of compressing/decompressing is almost always small compared to the large gains from sending significantly lower data volumes over the wire. To set this for all of your jobs, use these configs in mapred-site.xml:

<property>
  <name>mapred.compress.map.output</name>
  <value>true</value>
</property>

<property>
  <name>mapred.map.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>

You can of course set the first value to false in mapred-site.xml and override it for each job (e.g. as a parameter on the command line, or set at the top of a Pig script). See this link for details: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_hdfs_admin_tools/content/ch04.html
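For the per-job override mentioned above, the equivalent settings at the top of a Pig script would look like this (a sketch only; the property names are the ones from the XML above):

```pig
-- Enable map-output compression for this job only
set mapred.compress.map.output true;
set mapred.map.output.compression.codec org.apache.hadoop.io.compress.SnappyCodec;
```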
09-09-2016 04:51 PM
					
Thanks @Artem Ervits, it worked! We were using HDP 2.2.

hadoop daemonlog -setlevel <resourcemanager-host>:8088 org.apache.hadoop.yarn.server.resourcemanager DEBUG

We used the above command to turn DEBUG on, and it did work.
09-07-2016 07:35 PM
					
@jigar.patel Can you accept this answer to close this post? Thanks.
09-05-2016 03:50 PM

1 Kudo
					
							 # Check Commands
# --------------
# Spark Scala
# -----------
# Optionally export Spark Home
export SPARK_HOME=/usr/hdp/current/spark-client
# Spark submit example in local mode
spark-submit --class org.apache.spark.examples.SparkPi --driver-memory 512m --executor-memory 512m --executor-cores 1 $SPARK_HOME/lib/spark-examples*.jar 10
# Spark submit example in client mode
spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 3 --driver-memory 512m --executor-memory 512m --executor-cores 1 $SPARK_HOME/lib/spark-examples*.jar 10
# Spark submit example in cluster mode
spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster --num-executors 3 --driver-memory 512m --executor-memory 512m --executor-cores 1 $SPARK_HOME/lib/spark-examples*.jar 10
# Spark shell with yarn client
spark-shell --master yarn-client --num-executors 3 --driver-memory 512m --executor-memory 512m --executor-cores 1
# Pyspark
# -------
# Optionally export Hadoop conf dir and PySpark Python
export HADOOP_CONF_DIR=/etc/hadoop/conf
export PYSPARK_PYTHON=/path/to/bin/python
# PySpark submit example in local mode
spark-submit --verbose /usr/hdp/2.3.0.0-2557/spark/examples/src/main/python/pi.py 100
# PySpark submit example in client mode
spark-submit --verbose --master yarn-client /usr/hdp/2.3.0.0-2557/spark/examples/src/main/python/pi.py 100
# PySpark submit example in cluster mode
spark-submit --verbose --master yarn-cluster /usr/hdp/2.3.0.0-2557/spark/examples/src/main/python/pi.py 100
# PySpark shell with yarn client
pyspark --master yarn-client
  @jigar.patel 
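For context, the pi.py example submitted above estimates pi by Monte Carlo sampling: it throws random points into a square and counts how many land inside the unit circle. A minimal awk sketch of the same computation, no Spark required (the sample count and seed are arbitrary choices here):

```shell
# Monte Carlo estimate of pi, mirroring what Spark's pi.py computes
pi=$(awk 'BEGIN {
  srand(1); n = 100000; c = 0
  for (i = 0; i < n; i++) {
    x = 2 * rand() - 1; y = 2 * rand() - 1   # random point in [-1,1] x [-1,1]
    if (x * x + y * y <= 1) c++              # inside the unit circle?
  }
  printf "%.4f", 4 * c / n                   # area ratio * 4 approximates pi
}')
echo "$pi"
```

Spark's version distributes the same loop across executors, which is why the examples above pass a partition count (the trailing 10 or 100) to spark-submit.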