Member since 09-29-2015
	
	
	
	
	
	
	
	
	
	
	
	
	
	
			
      
286 Posts
601 Kudos Received
60 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 12842 | 03-21-2017 07:34 PM |
|  | 3758 | 11-16-2016 04:18 AM |
|  | 2142 | 10-18-2016 03:57 PM |
|  | 5096 | 09-12-2016 03:36 PM |
|  | 8426 | 08-25-2016 09:01 PM |
			
    
	
		
		
11-06-2024 12:29 AM
1 Kudo
		
	
				
		
	
		
					
Hi, I am trying to connect to a Hive database from Node.js using the hive-driver npm package. In that code, the session cannot be established to access the Hive database. I added a console log for each variable, but the log is printed before the session-establishing code runs. Please, can anyone help me with this? Please refer to the screenshot below.
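A likely cause of the "log prints before the session is established" symptom is that session creation is asynchronous: hive-driver's connect/open-session calls return promises, so any code that does not await them runs first. The sketch below does not use the real hive-driver API; `openSessionMock` is a hypothetical stand-in for an async session-opening call, used only to illustrate the ordering issue and the `await` fix.

```javascript
// openSessionMock is a hypothetical stand-in for an async call such as
// client.openSession() in hive-driver; it resolves on a later event-loop turn.
function openSessionMock() {
  return new Promise(resolve => setImmediate(() => resolve('session-1')));
}

// Correct pattern: await the promise so nothing runs before the session exists.
async function connectAndQuery() {
  const session = await openSessionMock(); // waits for the session
  return `connected with ${session}`;
}

// Buggy pattern: the returned promise is ignored, so this log fires
// before the session is actually established.
openSessionMock();
console.log('this log runs before the session is ready');

// Fixed pattern: work is ordered after the session is ready.
connectAndQuery().then(msg => console.log(msg));
```

If the surrounding code follows the buggy pattern, moving the query and logging inside an `async` function and awaiting each driver call (or chaining `.then()`) should make the logs appear in the expected order.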
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
04-04-2024 05:07 AM
3 Kudos
		
	
				
		
	
		
					
Connecting Salesforce with Hortonworks DataFlow (powered by Apache NiFi) unlocks powerful data integration. Here's how:

- NiFi processors: use processors like InvokeHTTP to call Salesforce APIs and retrieve data.
- Real-time or batch: move data bi-directionally (Salesforce to NiFi or vice versa) in real time or in batches.
- Data transformation: cleanse, transform, and enrich data with NiFi's processors before storing it in your data lake.

This Salesforce integration helps you leverage valuable Salesforce data for analytics, reporting, and deeper customer insights.
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
10-13-2021 04:17 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
Do you have any reference on how to enable Ranger for Kafka sitting in a separate cluster in CDP?
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
06-14-2021 08:13 AM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
A bit late to the party, but I hope the following helps.

Calling the main functions of the classes UnixUserGroupBuilder, PolicyMgrUserGroupBuilder, or LdapUserGroupBuilder is not going to work, since the main methods of these classes only initialize them. To start the actual sync, the updateSink function needs to be called. During startup this is handled by the class org.apache.ranger.usergroupsync.UserGroupSync, so calling its main function triggers the sync using the configuration set in your cluster.

A complete example for triggering the usersync manually:

For HDP:

java -Dlogdir=/var/log/ranger/usersync -cp "/usr/hdp/current/ranger-usersync/dist/unixusersync-1.2.0.3.1.5.135-2.jar:/usr/hdp/current/ranger-usersync/lib/*:/etc/ranger/usersync/conf" org.apache.ranger.usergroupsync.UserGroupSync

For CDP:

java -Dlogdir=/var/log/ranger/usersync -cp "/opt/cloudera/parcels/CDH/lib/ranger-usersync/dist/unixusersync-2.1.7.1.7.0-460.jar:/opt/cloudera/parcels/CDH/lib/ranger-usersync/lib/*:/etc/ranger/usersync/conf" org.apache.ranger.usergroupsync.UserGroupSync
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
04-13-2021 02:22 AM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
Hello, do you have any test to be sure the cluster is working fine after all those steps? I moved one node out of three. I put some files on HDFS, and I don't see the file system in dfs.journalnode.edits.dir, for either the old or the new JournalNode.

Best regards,
Abdou
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
06-30-2020 01:06 AM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
I see that you use Active Directory. Did you use the property below?

<property>
  <name>hive.server2.authentication.ldap.Domain</name>
  <value>AD_Domain</value>
</property>
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
02-19-2020 10:49 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
With newer versions of Spark, the sqlContext is not loaded by default; you have to create it explicitly:

scala> val sqlContext = new org.apache.spark.sql.SQLContext(sc)
warning: there was one deprecation warning; re-run with -deprecation for details
sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@6179af64

scala> import sqlContext.implicits._
import sqlContext.implicits._

scala> sqlContext.sql("describe mytable")
res2: org.apache.spark.sql.DataFrame = [col_name: string, data_type: string ... 1 more field]

I'm working with Spark 2.3.2.
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
12-17-2019 07:48 AM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
Hi all, here are more details about the above:

https://community.cloudera.com/t5/Support-Questions/HDInsight-Vs-HDP-Service-on-Azure-Vs-HDP-on-Azure-IaaS/m-p/166424

Thanks,
HadoopHelp
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
11-28-2019 12:05 AM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
Hi, the link seems broken. Can you share the working one with us? Thanks.
						
					
				
			
			
			
			
			
			
			
			
			
		 
         
					
				













