Member since 06-09-2016
529 Posts | 129 Kudos Received | 104 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1730 | 09-11-2019 10:19 AM |
| | 9315 | 11-26-2018 07:04 PM |
| | 2475 | 11-14-2018 12:10 PM |
| | 5303 | 11-14-2018 12:09 PM |
| | 3137 | 11-12-2018 01:19 PM |

Posted 08-19-2019 09:59 AM

To be sure that your user has access to the warehouse location, please try running something like the following:

```scala
import java.io.File
import org.apache.spark.sql.SparkSession

// Resolve an absolute path for the Spark SQL warehouse directory
val warehouseLocation = new File("spark-warehouse").getAbsolutePath

// Build a SparkSession with Hive support, pointing at that warehouse location
val spark = SparkSession
  .builder()
  .appName("Spark Hive Example")
  .config("spark.sql.warehouse.dir", warehouseLocation)
  .enableHiveSupport()
  .getOrCreate()
```

Regards,

Bart
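
If the session comes up with Hive support, a quick sanity check (continuing from the session built above; `my_table` is a placeholder for a table that actually exists in your metastore) is:

```scala
// List the databases visible through the configured metastore
spark.sql("SHOW DATABASES").show()

// Query a table to confirm read access through the warehouse location;
// `my_table` is a hypothetical table name
spark.sql("SELECT * FROM my_table LIMIT 5").show()
```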

Posted 11-27-2018 08:10 AM

Hi @Felix Albani, thanks for your response. I referred to this site, and I think my issue is related to the story it describes.

Cheers,

MJ

Posted 10-29-2018 03:13 PM

HDP 3.0 has a different way of integrating Apache Hive with Apache Spark, using the Hive Warehouse Connector. The article below explains the steps: https://community.hortonworks.com/content/kbentry/223626/integrating-apache-hive-with-apache-spark-hive-war.html
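
As a rough sketch of what that integration looks like in Scala (assuming the Hive Warehouse Connector jar is on the classpath, `spark.sql.hive.hiveserver2.jdbc.url` is configured, and `mydb.mytable` is a placeholder table):

```scala
import com.hortonworks.hwc.HiveWarehouseSession

// Build a HiveWarehouseSession on top of an existing SparkSession `spark`
val hive = HiveWarehouseSession.session(spark).build()

// Run a query through HiveServer2 via the connector;
// `mydb.mytable` is a hypothetical table name
val df = hive.executeQuery("SELECT * FROM mydb.mytable")
df.show()
```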

Posted 11-07-2018 01:25 PM

@Jonathan Sneep Could you please check whether the metrics queries below are correct?

Driver Memory
- Driver Heap Usage: `aliasByNode($application.driver.jvm.heap.usage, 1)`
- Driver JVM Memory Pools Usage: `aliasByNode($application.driver.jvm.pools.*.used, 4)`

Executor & Driver Memory Used
- `aliasByNode($application.*.jvm.heap.used, 1)`

Executor Memory Used
- `aliasByNode(exclude($application.*.jvm.heap.used, '.driver.jvm.heap'), 1)`
- `alias(sumSeries(exclude($application.*.jvm.heap.used, '.driver.jvm.heap')), 'total')`

Task Executor
- Active Tasks Per Executor: `aliasByNode(summarize($application.*.executor.threadpool.activeTasks, '10s', 'sum', false), 1)`
- Completed Tasks Per Executor: `aliasByNode($application.*.executor.threadpool.completeTasks, 1)`
- Completed Tasks/Minute Per Executor: `aliasByNode(nonNegativeDerivative(summarize($application.*.executor.threadpool.completeTasks, '1m', 'avg', false)), 1)`

Read/Write IOPS
- Read IOPS: `alias(perSecond(sumSeries($application.*.executor.filesystem.hdfs.read_ops)), 'total')` and `aliasByNode(perSecond($application.*.executor.filesystem.hdfs.read_ops), 1)`
- Write IOPS: `alias(perSecond(sumSeries($application.*.executor.filesystem.hdfs.write_ops)), 'total')` and `aliasByNode(perSecond($application.*.executor.filesystem.hdfs.write_ops), 1)`

HDFS Bytes Read/Written Per Executor
- Executor HDFS Bytes Read: `aliasByMetric($application.*.executor.filesystem.hdfs.read_bytes)`
- Executor HDFS Bytes Written: `aliasByMetric($application.*.executor.filesystem.hdfs.write_bytes)`

Also, please let me know the queries for the following:

HDFS Read/Write Byte Rate
- HDFS Read Rate/Sec
- HDFS Write Rate/Sec

Looking forward to your update regarding the same.
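
For context, all of these Graphite functions assume the Spark application is publishing metrics through Spark's Graphite sink. A minimal `metrics.properties` sketch (the host, port, and prefix are placeholders for your environment):

```properties
# Send all Spark metrics to a Graphite/Carbon endpoint
*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=graphite.example.com
*.sink.graphite.port=2003
*.sink.graphite.period=10
*.sink.graphite.unit=seconds
*.sink.graphite.prefix=spark

# Expose JVM source metrics (heap, memory pools) for driver and executors
driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource
executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource
```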

Posted 09-12-2018 09:50 PM

Thanks, this did work for me! Is there a way to configure the Hadoop cluster to use a specific installed version of Python?
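
One common way to do this for PySpark (a sketch, not the only option; the interpreter path is a placeholder) is to point Spark at a specific interpreter in `spark-env.sh`:

```bash
# In spark-env.sh: use a specific Python for the PySpark driver and executors
export PYSPARK_PYTHON=/usr/local/bin/python3.6
export PYSPARK_DRIVER_PYTHON=/usr/local/bin/python3.6
```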

Posted 09-07-2018 12:49 PM

@Michael Bronson In YARN mode, executors run inside YARN containers. Spark launches an ApplicationMaster that is responsible for negotiating the containers with YARN. That said, only nodes running a NodeManager are eligible to run executors.

First question: the executor logs you are looking for will be part of the YARN application logs, under the container running on the specific node (`yarn logs -applicationId <appId>`).

Second question: the executor will log a notification if a heartbeat fails to reach the driver because of a network problem or timeout, so this should also appear in the executor log, which is part of the application logs.

HTH

*** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
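
As a concrete illustration (the application and container IDs below are placeholders; on older Hadoop releases you may also need to pass `-nodeAddress` alongside `-containerId`):

```bash
# All logs for the application (every container, including the driver/AM)
yarn logs -applicationId application_1234567890123_0001

# Narrow the output to a single executor's container
yarn logs -applicationId application_1234567890123_0001 \
          -containerId container_1234567890123_0001_01_000002
```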

Posted 09-05-2018 10:47 AM

@felix Albani

Thanks for the video, and sorry to reply after such a long time.

I have watched it and checked, but I still don't know what I have misconfigured. Before reinstalling HDF and configuring it again, there are some questions I would like to ask.

In nifi-app.log:

2018-09-05 17:54:07,793 WARN [Thread-22] o.a.r.admin.client.RangerAdminRESTClient Error getting policies. secureMode=false, user=nifi (auth:SIMPLE), response={"httpStatusCode":400,"statusCode":0}, serviceName=hdf_nifi

Do I need to resolve this WARN message in nifi-app.log?

[Error getting policies. secureMode=false, user=nifi (auth:SIMPLE) user=nifi]

Both NiFi and Ranger have been enabled in SSL mode, but getting policies does not seem to run in secure mode.

I have three NiFi Ranger plugin certificates with DNs [CN=ambari01.test.com, OU=NiFi; CN=ambari02.test.com, OU=NiFi; CN=ambari03.test.com, OU=NiFi]. A nifi user was manually created in the Ranger admin UI as an internal user.

The following images are my Ranger/Ambari screenshots, and my questions are:

1. Does the nifi user need a certificate too?
2. Is the nifi user an OS user on the NiFi host, or also a NiFi application user?

#nifi user in Ranger admin

#ranger_nifi_policymgr

Thanks for your help.

Posted 08-27-2018 06:36 PM

Hi again @Felix Albani. I use IntelliJ IDEA. I put the arguments into Run -> Edit Configurations -> Program arguments, as shown below, but it didn't work.

Posted 08-25-2018 06:40 AM

Hi @Felix Albani, yes, your suggestion works fine. I think I have to extend my object with App to make it work. Thanks, bro.
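
For reference, "extending the object with App" refers to Scala's App trait, which supplies the main method. A minimal sketch (the object name and body are hypothetical):

```scala
// The App trait provides a main method, so the top-level statements
// in the object body run when the program starts
object Main extends App {
  println("Spark job entry point goes here")
}
```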

Posted 08-21-2018 12:03 PM

@Sudharsan Ganeshkumar If the above has helped, please take a moment to log in and click the "accept" link on the answer.