Member since 04-25-2016
      
579 Posts · 609 Kudos Received · 111 Solutions
        My Accepted Solutions
| Title | Views | Posted | 
|---|---|---|
| | 2925 | 02-12-2020 03:17 PM |
| | 2136 | 08-10-2017 09:42 AM |
| | 12471 | 07-28-2017 03:57 AM |
| | 3410 | 07-19-2017 02:43 AM |
| | 2522 | 07-13-2017 11:42 AM |
			
    
	
		
		
10-09-2016 11:02 PM
@Fabien could you please try these steps to log in to Ambari as admin? Open a terminal and run `ssh root@127.0.0.1 -p 2222`, then `ambari-admin-password-reset` to update the admin password, then `ambari-agent restart`. Now open the Ambari UI and log in as admin with the password you set.
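The same sequence as a copy-paste block (port 2222 is the sandbox's SSH port; the password reset prompt is interactive):

```
# from the host machine, SSH into the sandbox
ssh root@127.0.0.1 -p 2222
# update the admin password (interactive prompt)
ambari-admin-password-reset
# restart the agent, then log in to the Ambari UI as admin
ambari-agent restart
```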
						
					
09-21-2016 02:39 PM
3 Kudos
Env: HDP-2.3.4.0-3485, Java 8.

The attached code contains:
- pom.xml: manages all the dependencies
- HiveClientSecure.java: the Oozie Java action to be configured in workflow.xml
- jaas.conf: Oozie uses a JAAS configuration for the Kerberos login
- log4j.properties: to capture logs

jaas.conf: modify the principal name and keytab location accordingly and place it on each node of the cluster. I placed it at /tmp/jaas/jaas.conf for testing purposes.

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  useTicketCache=true
  principal="ambari-qa-hbase234@HWXBLR.COM"
  keyTab="/etc/security/keytabs/smokeuser.headless.keytab"
  debug="true"
  doNotPrompt=true;
};
```

workflow.xml:

```
<workflow-app xmlns="uri:oozie:workflow:0.2" name="java-main-wf">
    <start to="java-node"/>
    <action name="java-node">
        <java>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <main-class>HiveJdbcClientSecure</main-class>
            <arg>jdbc:hive2://hb-n2.hwxblr.com:10000/;principal=hive/hb-n2.hwxblr.com@HWXBLR.COM</arg>
            <arg>ambari-qa-hbase234@HWXBLR.COM</arg>
            <arg>/etc/security/keytabs/smokeuser.headless.keytab</arg>
        </java>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Java failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
```

Sample application build and run instructions:

1. Extract the attached archive.
2. `cd HiveServer2JDBCSample`
3. `mvn clean package` (this creates a fat jar with all the dependencies in it).
4. Upload the jar to HDFS (in my case as the ambari-qa user, which maps to the principal defined in workflow.xml): `hadoop fs -put target/HiveServer2JDBCTest-jar-with-dependencies.jar examples/apps/java-main/lib`
5. Upload the workflow.xml: `hadoop fs -put /tmp/workflow.xml examples/apps/java-main/`

Run through Oozie:

```
source /etc/oozie/conf/oozie-env.sh ; /usr/hdp/current/oozie-client/bin/oozie job -oozie http://hb-n2.hwxblr.com:11000/oozie -config /usr/hdp/current/oozie-client/doc/examples/apps/java-main/job.properties -run
```

hiveserver2oozieaction.tar.gz
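The attached HiveClientSecure.java is not reproduced above; as a rough sketch of my own (not the attachment's actual contents), a client matching the three workflow arguments might look like this, using Hadoop's UserGroupInformation for the keytab login (the attached class may instead rely on the jaas.conf shown above):

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

// Hypothetical sketch; the attached class may differ.
public class HiveJdbcClientSecure {
    public static void main(String[] args) throws Exception {
        String jdbcUrl = args[0];   // jdbc:hive2://...;principal=hive/...
        String principal = args[1]; // e.g. ambari-qa-hbase234@HWXBLR.COM
        String keytab = args[2];    // e.g. /etc/security/keytabs/smokeuser.headless.keytab

        // Log in from the keytab so the JDBC connection can authenticate via Kerberos.
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(principal, keytab);

        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection con = DriverManager.getConnection(jdbcUrl);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("show tables")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```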
						
					
09-19-2016 05:20 PM
2 Kudos
@srinivasa rao I guess you read that when you perform a "select * from <tablename>", Hive fetches the whole data from the file as a FetchTask rather than a MapReduce task; it just dumps the data as-is without doing anything to it, similar to "hadoop dfs -text <filename>". However, that does not take advantage of true parallelism. In your case, with 1 GB, it will not make a difference, but imagine a 100 TB table where you use a single-threaded task in a cluster with 1000 nodes: a FetchTask is not a good use of parallelism. Tez provides options to split the data set to allow true parallelism: tez.grouping.max-size and tez.grouping.min-size are the split parameters.
Ref: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_installing_manually_book/content/ref-ffec9e6b-41f4-47de-b5cd-1403b4c4a7c8.1.html
If any of the responses was helpful, please don't forget to vote/accept the answer.
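As an illustrative sketch (the byte values and table name here are placeholders of mine, not recommendations), those parameters can be set per session from beeline, along with disabling the fetch-task shortcut so a real Tez job runs:

```
# illustrative values only: 128 MB / 1 GB split bounds, hypothetical table name
beeline -u "jdbc:hive2://<hs2-host>:10000/" \
  --hiveconf hive.fetch.task.conversion=none \
  --hiveconf tez.grouping.min-size=134217728 \
  --hiveconf tez.grouping.max-size=1073741824 \
  -e "select * from my_table"
```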
						
					
09-26-2016 06:01 PM
@Rajkumar Singh thanks. Good stuff.
						
					
01-11-2019 02:36 PM
@Constantin Stanca Hi, could you please explain why there could be a split-brain situation when the number of ZooKeeper nodes is even? Thanks!
						
					
07-18-2016 05:39 PM
It worked! Even though it did not resolve my underlying problem, I was able to restart Hive. Thank you for the help!
						
					
01-09-2018 11:44 PM
4 Kudos
By connecting to the web UI over port 10002, one can check the number of live sessions on that instance of HiveServer2.
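For example, assuming the default hive.server2.webui.port of 10002 and a placeholder host name:

```
# open the HiveServer2 web UI in a browser, or probe it from a shell
curl http://<hs2-host>:10002/
```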
						
					
07-05-2016 02:30 PM
					
Good to hear, thanks for the follow-up information @Ravikumar Kumashi.
						
					
06-19-2016 03:02 PM
Normally mappers don't fail with OOM, and 8192M is pretty good. I suspect you may have some big records while reading from the CSV; are you doing some memory-intensive operation inside the mapper? Could you please share the task log for attempt attempt_1466342436828_0001_m_000008_2?
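A sketch of one way to pull that log, assuming YARN log aggregation is enabled (the attempt id belongs to application_1466342436828_0001):

```
# fetch all aggregated logs for the application
yarn logs -applicationId application_1466342436828_0001 > app_logs.txt
# then locate the failing attempt
grep -A 50 "attempt_1466342436828_0001_m_000008_2" app_logs.txt
```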
						
					
06-14-2016 08:57 AM
Thanks @Pierre Villard, this is what I was expecting. Thanks once again!
						
					