Member since 09-17-2015
      
436 Posts
736 Kudos Received
81 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5086 | 01-14-2017 01:52 AM |
| | 7348 | 12-07-2016 06:41 PM |
| | 8721 | 11-02-2016 06:56 PM |
| | 2810 | 10-19-2016 08:10 PM |
| | 7088 | 10-19-2016 08:05 AM |
			
    
	
		
		
10-30-2015 02:58 PM
Thanks @George Vetticaden! As @Jonas Straub and @Andrew Grande mentioned, in this example I used cloud mode (notice that Solr was started with the -c and -z arguments), but you can easily change the Solr processor to point to a standalone Solr instance too.
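For reference, a minimal sketch of the two startup modes (host names and ports below are placeholders, not from the original setup):

```bash
# Cloud mode: -c starts Solr in SolrCloud mode, -z points it at ZooKeeper;
# the NiFi Solr processor would then be given the ZK connect string
bin/solr start -c -z zk-host:2181

# Standalone mode: no -c/-z; point the processor at the instance URL
# instead, e.g. http://solr-host:8983/solr
bin/solr start -p 8983
```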
						
					
10-30-2015 12:12 AM
3 Kudos
For Ranger YARN policies to take effect, you can turn off YARN ACLs: in Ambari > YARN > Custom ranger-yarn-security, add the property below and restart YARN.

ranger.add-yarn-authorization = false

https://github.com/abajwa-hw/security-workshops/blob/master/Setup-ranger-23.md#yarn-audit-exercises-in-ranger

I looked in the official docs but did not see this mentioned.
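If you prefer to script the change, Ambari ships a configs.sh helper that can do the same thing (exact flags may vary by Ambari version; the host, credentials, and cluster name below are placeholders):

```bash
# Set ranger.add-yarn-authorization=false in ranger-yarn-security,
# then restart YARN from Ambari for it to take effect
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
  set ambari.example.com MyCluster ranger-yarn-security \
  ranger.add-yarn-authorization "false"
```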
						
					
10-29-2015 04:26 PM
8 Kudos
I just tried this recently. A couple of options:

1. Compile from source on HDP 2.3:

```bash
# Install Maven, then build Flink against the HDP Hadoop version
curl -o /etc/yum.repos.d/epel-apache-maven.repo https://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo
yum -y install apache-maven-3.2*
git clone https://github.com/apache/flink.git
cd flink
mvn clean install -DskipTests -Dhadoop.version=2.7.1.2.3.2.0-2950 -Pvendor-repos
```

2. Download the prebuilt tarball and start a YARN session:

```bash
wget http://www.gtlib.gatech.edu/pub/apache/flink/flink-0.9.1/flink-0.9.1-bin-hadoop27.tgz
tar xvzf flink-0.9.1-bin-hadoop27.tgz
cd flink-0.9.1
export HADOOP_CONF_DIR=/etc/hadoop/conf
# One TaskManager, 768 MB JobManager heap, 1024 MB TaskManager heap
./bin/yarn-session.sh -n 1 -jm 768 -tm 1024
```

3. Experimental Ambari service: https://github.com/abajwa-hw/ambari-flink-service

Once it's up, run the word count example:

```bash
export HADOOP_CONF_DIR=/etc/hadoop/conf
./bin/flink run ./examples/flink-java-examples-0.9.1-WordCount.jar
```
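Whichever option you use, a quick way to confirm the Flink YARN session actually came up is the stock YARN CLI:

```bash
# The Flink session should show up as a running YARN application
yarn application -list
```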
 
						
					
10-28-2015 09:58 PM
13 Kudos
Looking for input from the field teams on sample code (preferably Java) for interacting with HDP components on a Kerberos-enabled cluster: Kafka, Storm, HBase, Solr, Hive, Knox, YARN, HDFS, ...

Here are some of the code resources I have found so far:

- Hive code: http://community.hortonworks.com/questions/1807/connecting-to-kerberos-enabled-hive-via-jdbc-direc.html
- HBase code: http://community.hortonworks.com/articles/1452/sample-application-to-write-to-a-kerberised-hbase.html

I found other resources, but they cover configuration of components in a kerberized environment rather than client code:

- Kafka config: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_secure-kafka-ambari/content/ch_secure-kafka-overview.html
- Storm config: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_secure-storm-ambari/content/ch_secure-storm-overview.html and http://community.hortonworks.com/questions/721/acessing-the-storm-ui-with-hdp-23-kerberized-clust.html
- Solr config: https://cwiki.apache.org/confluence/display/solr/Kerberos+Authentication+Plugin

If you have any useful resources, please post them here so we can have a consolidated list. A trivial shell-level smoke test is sketched below.
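Not Java, but as a minimal sanity check that Kerberos auth works against HDFS from the shell (keytab, principal, and NameNode host are placeholders):

```bash
# Authenticate with a keytab, then hit WebHDFS using SPNEGO
kinit -kt /etc/security/keytabs/myuser.keytab myuser@EXAMPLE.COM
curl --negotiate -u : "http://namenode.example.com:50070/webhdfs/v1/tmp?op=LISTSTATUS"
```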
						
					
10-27-2015 08:50 PM
In case it helps, we were able to access data from Phoenix as a Spark RDD using an early version of HDP 2.3 (Spark 1.3). See steps here.
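For context, one common way to experiment with this at the time was to put the Phoenix client jar on the Spark classpath before querying Phoenix tables (the jar path is an HDP convention and an assumption here, not taken from the linked steps):

```bash
# Launch spark-shell with the Phoenix client jar available to the driver
# and executors; adjust the path/version for your install
spark-shell --jars /usr/hdp/current/phoenix-client/phoenix-client.jar
```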
						
					
10-26-2015 10:05 PM
Thanks @Alex Miller, this was exactly the issue we were debugging today.
						
					
10-26-2015 04:29 PM
2 Kudos
							 I'm surprised it wasn't @bbende@hortonworks.com who wrote this article 😉 
						
					
10-25-2015 10:15 AM
The Zeppelin Ambari service has been updated to install the updated TP Zeppelin bits for Spark 1.4.1 and 1.3.1. The update for 1.5.1 will be made this week after the TP is out. The Magellan notebook has also been updated with documentation and can now run standalone on 1.4.1.
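For anyone installing the service fresh, community Ambari services of this era generally install by cloning into the stack definitions and restarting Ambari (the repo URL and stack path below are assumptions based on that pattern, not from this post):

```bash
# Drop the service definition into the HDP 2.3 stack, restart Ambari,
# then add Zeppelin via "Add Service" in the Ambari UI
sudo git clone https://github.com/hortonworks-gallery/ambari-zeppelin-service.git \
  /var/lib/ambari-server/resources/stacks/HDP/2.3/services/ZEPPELIN
sudo service ambari-server restart
```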
						
					
10-24-2015 05:54 PM
Installed this through the Ambari service for testing, and basic Spark, SparkSQL, and PySpark seem OK. A couple of issues:

1. Tried out the Magellan blog notebook (after modifying it to include the %dep from the blog) and the UberRecord cell errors out. From the log:

```
15/10/24 10:48:06 INFO SchedulerFactory: Job remoteInterpretJob_1445708886505 started by scheduler org.apache.zeppelin.spark.SparkInterpreter313266037
15/10/24 10:48:06 ERROR Job: Job failed
scala.reflect.internal.Types$TypeError: bad symbolic reference. A signature in Shape.class refers to term geometry
in value com.core which is not available.
It may be completely missing from the current classpath, or the version on
the classpath might be incompatible with the version used when compiling Shape.class.
 at scala.reflect.internal.pickling.UnPickler$Scan.toTypeError(UnPickler.scala:847)
 at scala.reflect.internal.pickling.UnPickler$Scan$LazyTypeRef.complete(UnPickler.scala:854)
 at scala.reflect.internal.pickling.UnPickler$Scan$LazyTypeRef.load(UnPickler.scala:863)
 at scala.reflect.internal.Symbols$Symbol.typeParams(Symbols.scala:1489)
 at scala.tools.nsc.transform.SpecializeTypes$$anonfun$scala$tools$nsc$transform$SpecializeTypes$$normalizeMember$1.apply(SpecializeTypes.scala:798)
 at scala.tools.nsc.transform.SpecializeTypes$$anonfun$scala$tools$nsc$transform$SpecializeTypes$$normalizeMember$1.apply(SpecializeTypes.scala:798)
 at scala.reflect.internal.SymbolTable.atPhase(SymbolTable.scala:207)
 at scala.reflect.internal.SymbolTable.beforePhase(SymbolTable.scala:215)
 at scala.tools.nsc.transform.SpecializeTypes.scala$tools$nsc$transform$SpecializeTypes$$norma
```

(Side note: this notebook doesn't seem to have much documentation on what it's doing, unlike the other one; would be good to add.)

2. The blog currently says "This technical preview can be installed on any HDP 2.3.x cluster." However, 2.3.0 comes with Spark 1.3.1, which will not work unless users manually install the Spark 1.4.1 TP, so either:

a) we may want to include steps for those users too (especially since the current version of the sandbox comes with 1.3.1), or
b) explicitly ask users to try the Zeppelin TP with 2.3.2.
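Either way, a quick check users can run to see which Spark version their cluster actually has before trying the TP (a minimal sketch, not from the original post):

```bash
# Prints the Spark version banner; the TP needs 1.4.1+,
# while HDP 2.3.0 ships 1.3.1
spark-submit --version 2>&1 | head -5
```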
						
					