Member since 09-17-2015

436 Posts | 736 Kudos Received | 81 Solutions
My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
| | 5103 | 01-14-2017 01:52 AM |
| | 7361 | 12-07-2016 06:41 PM |
| | 8799 | 11-02-2016 06:56 PM |
| | 2816 | 10-19-2016 08:10 PM |
| | 7162 | 10-19-2016 08:05 AM |
			
    
	
		
		
11-05-2015 03:11 AM

It's on the roadmap for 0.6.0, apparently: https://cwiki.apache.org/confluence/display/ZEPPELIN/Zeppelin+Roadmap
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
11-05-2015 02:35 AM (3 Kudos)

Not yet. See this JIRA for more details: https://issues.apache.org/jira/browse/ZEPPELIN-156
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
11-04-2015 10:26 PM

I would recommend asking this on internal channels (the sme-security email list or HipChat channel). We cannot discuss customers on public forums.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
11-04-2015 08:56 PM (1 Kudo)

We do have customers running kerberized HDP with IPA (with IPA support coming from Red Hat) using the manual option of the Ambari Kerberos wizard. There is a JIRA logged for Ambari to officially support IPA as one of the options (as of Ambari 2.1.x the options are AD, MIT KDC, and manual). The idea is for Ambari to help automate principal/keytab creation and distribution, similar to how it does for AD/KDC. See https://issues.apache.org/jira/browse/AMBARI-6432 for more details.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
11-04-2015 07:17 PM (3 Kudos)

I believe it's the same. You just need to make sure that you don't already have a Spark application master up when you run the Zeppelin cell that declares the %dep (otherwise it will not get loaded). If needed, you can stop an existing Spark application master by restarting the Spark interpreter via the Interpreter tab in the Zeppelin UI. More details on Zeppelin dependency loading are in the docs: https://zeppelin.incubator.apache.org/docs/interpreter/spark.html#dependencyloading
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
11-04-2015 07:06 PM

@azeltov@hortonworks.com Yes, you can delete it: that zip is just the package downloaded by the Ambari service, containing demo notebooks from https://github.com/hortonworks-gallery/zeppelin-notebooks
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
11-04-2015 03:04 PM (1 Kudo)

Not yet. We have been exporting notebooks by zipping up the 'notebook' subdir under the Zeppelin folder (e.g. /opt/incubator-zeppelin). To import on a separate cluster, drop the dir into the same location and restart Zeppelin.
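The manual export/import flow described above can be sketched in shell. The paths and the note-id directory name below are illustrative stand-ins (a real install would sit at e.g. /opt/incubator-zeppelin); temp dirs are used here so the sketch is self-contained:

```shell
# Sketch of the manual notebook export/import described above.
# SRC_HOME/DEST_HOME and the note id are illustrative stand-ins for
# the Zeppelin install locations on the source and target clusters.
SRC_HOME=$(mktemp -d)    # e.g. /opt/incubator-zeppelin on the source cluster
mkdir -p "$SRC_HOME/notebook/2A94M5J1Z"
echo '{"name":"demo note"}' > "$SRC_HOME/notebook/2A94M5J1Z/note.json"

# Export: zip up the 'notebook' subdir
tar czf /tmp/zeppelin-notebooks.tgz -C "$SRC_HOME" notebook

# Import: drop the dir into the same location on the other cluster,
# then restart Zeppelin so it picks up the notes
DEST_HOME=$(mktemp -d)
tar xzf /tmp/zeppelin-notebooks.tgz -C "$DEST_HOME"
ls "$DEST_HOME/notebook"
```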
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
11-03-2015 04:19 AM (14 Kudos)
		
	
				
		
	
		
					
Exploring Apache Flink with HDP

Apache Flink is an open source platform for distributed stream and batch data processing. More details on Flink and how it is being used in the industry today are available here: http://flink-forward.org/?post_type=session. There are a few ways you can explore Flink on HDP 2.3:

1. Compilation on HDP 2.3.2

To compile Flink from source on HDP 2.3, you can use these commands:

curl -o /etc/yum.repos.d/epel-apache-maven.repo https://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo
yum -y install apache-maven-3.2*
git clone https://github.com/apache/flink.git
cd flink
mvn clean install -DskipTests -Dhadoop.version=2.7.1.2.3.2.0-2950 -Pvendor-repos

Note that with this option I ran into a classpath bug and raised it here: https://issues.apache.org/jira/browse/FLINK-3032

2. Run using the precompiled tarball

wget http://www.gtlib.gatech.edu/pub/apache/flink/flink-0.9.1/flink-0.9.1-bin-hadoop27.tgz
tar xvzf flink-0.9.1-bin-hadoop27.tgz
cd flink-0.9.1
export HADOOP_CONF_DIR=/etc/hadoop/conf
./bin/yarn-session.sh -n 1 -jm 768 -tm 1024

3. Using the Ambari service (demo purposes only for now)

The Ambari service lets you easily install/compile Flink on HDP 2.3.
Features:
- By default, downloads a prebuilt package of Flink 0.9.1, but also gives the option to build the latest Flink from source instead
- Exposes flink-conf.yaml in the Ambari UI

Setup:
- Download the HDP 2.3 sandbox VM image (Sandbox_HDP_2.3_1_VMware.ova) from the Hortonworks website
- Import Sandbox_HDP_2.3_1_VMware.ova into VMware and set the VM memory size to 8GB
- Now start the VM
- After it boots up, find the IP address of the VM and add an entry to your machine's hosts file, for example (replace the IP with your own VM's):

192.168.191.241 sandbox.hortonworks.com sandbox

- Connect to the VM via SSH (password hadoop):

ssh root@sandbox.hortonworks.com

- To download the Flink service folder, run the below:

VERSION=`hdp-select status hadoop-client | sed 's/hadoop-client - \([0-9]\.[0-9]\).*/\1/'`
sudo git clone https://github.com/abajwa-hw/ambari-flink-service.git /var/lib/ambari-server/resources/stacks/HDP/$VERSION/services/FLINK
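As a sanity check on the VERSION one-liner above: the sed expression trims hdp-select's "hadoop-client - <full version>" output down to the major.minor stack version used in the stacks path. The sample status line below is illustrative, not captured from a live cluster:

```shell
# Illustrative sample of `hdp-select status hadoop-client` output;
# the sed pattern keeps only the major.minor stack version.
status='hadoop-client - 2.3.2.0-2950'
VERSION=`echo "$status" | sed 's/hadoop-client - \([0-9]\.[0-9]\).*/\1/'`
echo "$VERSION"    # 2.3
```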
  
- Restart Ambari:

#sandbox
service ambari restart
#non-sandbox
sudo service ambari-server restart
Then you can click on 'Add Service' from the 'Actions' dropdown menu in the bottom left of the Ambari dashboard: Actions -> Add Service -> check Flink -> Next -> Next -> change any config you like (e.g. install dir, memory sizes, number of containers, or values in flink-conf.yaml) -> Next -> Deploy

By default:
- Container memory is 1024 MB
- Job manager memory is 768 MB
- Number of YARN containers is 1
On successful deployment you will see the Flink service as part of the Ambari stack and will be able to start/stop the service from there. You can see the parameters you configured under the 'Configs' tab. One benefit of wrapping the component in an Ambari service is that you can now monitor/manage the service remotely via the REST API:

export SERVICE=FLINK
export PASSWORD=admin
export AMBARI_HOST=localhost
#detect name of cluster
output=`curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' http://$AMBARI_HOST:8080/api/v1/clusters`
CLUSTER=`echo $output | sed -n 's/.*"cluster_name" : "\([^\"]*\)".*/\1/p'`
#get service status
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X GET http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE
#start service
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Start $SERVICE via REST"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE
#stop service
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Stop $SERVICE via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE
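To see what the cluster-name detection above is doing, here is the same sed expression run against a trimmed-down sample of the /api/v1/clusters JSON. The sample response body is illustrative; the pattern relies on Ambari pretty-printing spaces around the colon, as assumed here:

```shell
# Illustrative sample of the /api/v1/clusters response body (not a live
# capture); the sed expression extracts the cluster_name value from it.
output='{ "items" : [ { "Clusters" : { "cluster_name" : "Sandbox", "version" : "HDP-2.3" } } ] }'
CLUSTER=`echo $output | sed -n 's/.*"cluster_name" : "\([^\"]*\)".*/\1/p'`
echo "$CLUSTER"    # Sandbox
```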
  
You can also install via Blueprint. See example here on how to deploy custom services via Blueprints.

Use Flink

Run the word count job:

su flink
export HADOOP_CONF_DIR=/etc/hadoop/conf
cd /opt/flink
./bin/flink run ./examples/flink-java-examples-0.9.1-WordCount.jar
This should generate a series of word counts. Open the YARN ResourceManager UI and notice Flink is running on YARN. Click the ApplicationMaster link to access the Flink web UI. Use the History tab to review details of the job that ran, and view metrics in the Task Manager tab.

Other things to try:
- Apache Zeppelin now also supports Flink. You can also install it via the Zeppelin Ambari service for visualization.
- More details on Flink and how it is being used in the industry today are available here: http://flink-forward.org/?post_type=session

Remove service
To remove the Flink service:
- Stop the service via Ambari
- Unregister the service:

export SERVICE=FLINK
export PASSWORD=admin
export AMBARI_HOST=localhost
#detect name of cluster
output=`curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' http://$AMBARI_HOST:8080/api/v1/clusters`
CLUSTER=`echo $output | sed -n 's/.*"cluster_name" : "\([^\"]*\)".*/\1/p'`
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X DELETE http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE
#if above errors out, run below first to fully stop the service
#curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Stop $SERVICE via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE

- Remove artifacts:

rm -rf /opt/flink*
rm /tmp/flink.tgz
						
					
				
			
			
			
			
			
			
			
			
			
		
		
			
				
						
						
						
		
	
					
			
		
	
	
	
	
				
		
	
	
			
    
	
		
		
11-02-2015 09:09 PM

@Randy Gelhausen recently was able to get this to work after messing with the classpath:

HADOOP_CLASSPATH=/usr/hdp/current/hbase-client/lib/hbase-protocol.jar:/etc/hbase/conf hadoop jar /usr/hdp/current/phoenix-client/phoenix-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool --table test --input /user/root/test --zookeeper localhost:2181:/hbase-unsecure
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
10-31-2015 05:11 AM (2 Kudos)

Configuration groups are what you need: http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.0/bk_Ambari_Users_Guide/content/_using_host_config_groups.html