Member since: 09-17-2015

Posts: 436
Kudos Received: 736
Solutions: 81
My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 5085 | 01-14-2017 01:52 AM |
|  | 7348 | 12-07-2016 06:41 PM |
|  | 8711 | 11-02-2016 06:56 PM |
|  | 2807 | 10-19-2016 08:10 PM |
|  | 7075 | 10-19-2016 08:05 AM |
12-01-2015 06:48 PM
Exactly, it's basically the content of these two fields from the Ambari UI. As you pointed out, the actual root cause is usually available in the individual service log under /var/log/...
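For example, something like the below usually surfaces the underlying error quickly (the log path shown is only an illustration; substitute the log of whichever service failed):

    # skim the tail of a service log for errors (path is illustrative)
    tail -n 500 /var/log/ambari-agent/ambari-agent.log | grep -iE 'error|exception'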
						
					
    
	
		
		
04-21-2016 12:43 PM
@Ali Bajwa A simplified approach. On the Ambari Server:

    yum -y install git
    git clone https://github.com/seanorama/ambari-bootstrap
    cd ambari-bootstrap
    export ambari_server_custom_script=${ambari_server_custom_script:-~/ambari-bootstrap/ambari-extras.sh}
    export install_ambari_server=true
    ./ambari-bootstrap.sh

Then deploy the cluster. The "extras" script above takes care of all the tedious stuff automatically (cloning Zeppelin, the blueprint defaults, the role command order, ...):

    yum -y install python-argparse
    cd deploy
    export ambari_services="HDFS MAPREDUCE2 YARN ZOOKEEPER HIVE SPARK ZEPPELIN"
    bash ./deploy-recommended-cluster.bash
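Once the deploy script has submitted the blueprint, you can watch the request progress from the command line; a minimal sketch, assuming default admin/admin credentials, Ambari on port 8080, and a placeholder cluster name:

    export AMBARI_HOST=localhost
    export CLUSTER=Sandbox   # placeholder; use your actual cluster name
    # list deployment requests and their status
    curl -u admin:admin -H 'X-Requested-By: ambari' \
      "http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/requests?fields=Requests/request_status,Requests/progress_percent"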
						
					
    
	
		
		
02-10-2016 09:25 PM
@Ali Bajwa Is this still an issue with the latest Spark and Sandbox? I have a user with the exact same issue and no means to fix it yet. He claims he already tried fixing this using this article.
						
					
    
	
		
		
12-18-2015 06:26 PM (1 Kudo)
Just to add to this article: sandbox.hortonworks.com needs to be mapped to the IP address of the sandbox virtual machine. Out of the box, the VirtualBox version typically uses the loopback IP 127.0.0.1, whereas the VMware image gets an IP that depends on the configured VM network settings. Thus, if you don't have sandbox.hortonworks.com in your hosts file on your machine, use the IP address instead, e.g. http://127.0.0.1:4200
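For reference, the corresponding hosts file entry for the VMware case would look something like the line below (the IP is just an example; use whatever your VM actually reports):

    # /etc/hosts on your host machine (C:\Windows\System32\drivers\etc\hosts on Windows)
    192.168.191.241   sandbox.hortonworks.com sandbox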
						
					
    
	
		
		
12-03-2015 07:59 PM
Also see:

- Practical Data Science with Apache Spark & Apache Zeppelin
  https://hadoopsummit.uservoice.com/forums/332055-data-science-applications-for-hadoop/suggestions/10847007-practical-data-science-with-apache-spark-apache

- Running Spark in Production
  https://hadoopsummit.uservoice.com/forums/332061-hadoop-governance-security-deployment-and-operat/suggestions/10848240-running-spark-in-production
  (covers Spark performance tuning, security & Spark on YARN)

Please consider voting if you want to hear more on these topics.
						
					
    
	
		
		
11-23-2015 05:57 AM (6 Kudos)
Use the OpenTSDB Ambari service to store/visualize stock data on the HDP sandbox

Goal:

OpenTSDB (a scalable time series DB) allows you to store and serve massive amounts of time series data without losing granularity (more details here). In this tutorial we will install it on HBase on the HDP sandbox using the Ambari service, and use it to import and visualize stock data.

Steps:

Setup VM and install Ambari service

- Download the latest HDP sandbox VM image (.ova file) from the Hortonworks website
- Import the ova file into VMware and ensure the VM memory size is set to at least 8GB
- Now start the VM
- After it boots up, find the IP address of the VM and add an entry to your machine's hosts file, e.g.

    192.168.191.241 sandbox.hortonworks.com sandbox

- Connect to the VM via SSH (password hadoop):

    ssh root@sandbox.hortonworks.com

- Start the HBase service from Ambari and ensure HBase is up and root has authority to create tables. You can do this by trying to create a test table:

    hbase shell
    create 't1', 'f1', 'f2', 'f3'

- If this fails with the error below, you will need to provide appropriate access via Ranger (http://sandbox.hortonworks.com:6080):

    ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions for user 'root (auth:SIMPLE)' (global, action=CREATE)
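Alternatively, on a throwaway sandbox you could grant the permissions directly from the HBase shell as the hbase superuser; a rough sketch (adjust the permission set as needed):

    # open the HBase shell as the hbase superuser...
    su - hbase -c 'hbase shell'
    # ...then, inside the shell, give root global read/write/exec/create/admin rights
    grant 'root', 'RWXCA'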
- To deploy the OpenTSDB service, run the following:

    VERSION=`hdp-select status hadoop-client | sed 's/hadoop-client - \([0-9]\.[0-9]\).*/\1/'`
    sudo git clone https://github.com/hortonworks-gallery/ambari-opentsdb-service.git /var/lib/ambari-server/resources/stacks/HDP/$VERSION/services/OPENTSDB

- Restart Ambari:

    #on sandbox
    sudo service ambari restart

    #on non-sandbox clusters
    sudo service ambari-server restart
    sudo service ambari-agent restart
- Then you can click on 'Add Service' from the 'Actions' dropdown menu in the bottom left of the Ambari dashboard:

    On bottom left -> Actions -> Add Service -> check OpenTSDB server -> Next -> Next -> Customize as needed -> Next -> Deploy

  You can customize the port, ZK quorum, and ZK dir in the start command. Note that HBase must be started if the option to automatically create the OpenTSDB schema is selected.

- On successful deployment you will see the OpenTSDB service as part of the Ambari stack and will be able to start/stop the service from there. You can see the parameters you configured under the 'Configs' tab.

- One benefit of wrapping the component in an Ambari service is that you can now automate its deployment via Ambari blueprints or monitor/manage the service remotely via the REST API:

    export SERVICE=OPENTSDB
    export PASSWORD=admin
    export AMBARI_HOST=sandbox.hortonworks.com
    export CLUSTER=Sandbox

    #get service status
    curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X GET http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE

    #start service
    curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Start $SERVICE via REST"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE

    #stop service
    curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Stop $SERVICE via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE
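As a rough sketch of the blueprint route (the blueprint name and JSON file below are placeholders for your own blueprint that lists OPENTSDB among its services), registration goes through the same REST API:

    # register a blueprint; blueprint.json is a file you author yourself
    curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X POST \
      -d @blueprint.json http://$AMBARI_HOST:8080/api/v1/blueprints/opentsdb-blueprint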
- To remove the OpenTSDB service:

  1. Stop the service via Ambari.
  2. Delete the service:

    #Ambari password
    export PASSWORD=admin
    #Ambari host
    export AMBARI_HOST=localhost
    export SERVICE=OPENTSDB

    #detect name of cluster
    output=`curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari'  http://$AMBARI_HOST:8080/api/v1/clusters`
    CLUSTER=`echo $output | sed -n 's/.*"cluster_name" : "\([^\"]*\)".*/\1/p'`

    curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Stop $SERVICE via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE
    curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X DELETE http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE

  3. Remove artifacts:

    rm -rf /root/opentsdb
    rm -rf /var/lib/ambari-server/resources/stacks/HDP/2.2/services/opentsdb-service/
Import stock data

- Use the sample code below (taken from here) to pull 30 days of intraday stock prices for a few securities, in both OpenTSDB and csv formats:

    cd
    /bin/rm -f prices.csv
    /bin/rm -f opentsd.input
    wget https://raw.githubusercontent.com/abajwa-hw/opentsdb-service/master/scripts/google_intraday.py
    python google_intraday.py AAPL > prices.csv
    python google_intraday.py GOOG >> prices.csv
    python google_intraday.py HDP >> prices.csv
    python google_intraday.py ORCL >> prices.csv
    python google_intraday.py MSFT >> prices.csv

- Review opentsd.input, which contains the stock prices in OpenTSDB-compatible format:

    tail opentsd.input
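Each line of opentsd.input follows OpenTSDB's standard import format of metric name, Unix timestamp, value, and tags; roughly like the lines below (the values shown are made up for illustration):

    # metric  unix-timestamp  value  tag=value ...
    volume 1447286400 1234500 symbol=AAPL
    close 1447286400 120.57 symbol=AAPL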
- Import the data from opentsd.input into OpenTSDB:

    /root/opentsdb/build/tsdb import opentsd.input --zkbasedir=/hbase-unsecure --zkquorum=localhost:2181 --auto-metric

Open WebUI and query stock data

- The OpenTSDB webUI login page should be at the link below (or whichever port you configured): http://sandbox.hortonworks.com:9999

- Query the data in the OpenTSDB webUI by entering values for:

  - From: pick a date from 3 weeks ago
  - To: pick today's date
  - Check Autoreload
  - Metric: e.g. volume
  - Tags: e.g. symbol GOOG

  You can similarly create multiple tabs:

  - Tags: symbol ORCL
  - Tags: symbol AAPL
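If you prefer the command line, the same query can be issued against OpenTSDB's HTTP API; a sketch assuming OpenTSDB 2.x and the port configured above:

    # last 3 weeks of GOOG trading volume as JSON (-g disables curl's brace globbing)
    curl -g "http://sandbox.hortonworks.com:9999/api/query?start=3w-ago&m=sum:volume{symbol=GOOG}"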
- To make the charts smoother:

  - Under the Style tab, check the 'Smooth' checkbox
  - Under the Axes tab, check the 'Log scale' checkbox

- You can also open it from within Ambari via the iFrame view.
						
					
    
	
		
		
03-01-2016 04:28 PM
Hello, nice tutorial 🙂 Deploying on NiFi 0.4.1 or 0.5.0, the Maven build produces the target NAR file (i.e. nifi-network-nar-1.0-SNAPSHOT.nar) and NiFi starts, but I cannot instantiate the processor from the UI. I have the following trace in the logs, with a WARN:

    nifi-app.log:2016-03-01 16:12:26,797 WARN [main] org.apache.nifi.nar.NarClassLoader ./work/nar/extensions/nifi-network-nar-1.0-SNAPSHOT.nar-unpacked does not contain META-INF/bundled-dependencies!
    nifi-app.log:2016-03-01 16:12:26,797 INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /home/cloud/fxd/Nifi/nifi-0.5.0/./work/nar/extensions/nifi-network-nar-1.0-SNAPSHOT.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[./work/nar/extensions/nifi-network-nar-1.0-SNAPSHOT.nar-unpacked]

Phil
Best regards
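P.S. For reference, a NAR is just a zip archive, so one way to check whether the bundled dependencies were actually packaged (assuming unzip is available on the box):

    # list the NAR contents and look for the bundled-dependencies directory
    unzip -l nifi-network-nar-1.0-SNAPSHOT.nar | grep 'META-INF/bundled-dependencies'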
						
					
    
	
		
		
12-31-2015 03:03 AM
You can install the latest HDP 2.3.4 using Ambari 2.2.0.0: it comes with Spark 1.5.2, and it's integrated with ATS.
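As a quick sanity check after the install (assuming you are on a node with the HDP Spark client), you could confirm the Spark build with:

    # confirm which Spark version HDP laid down
    hdp-select status spark-client
    spark-submit --version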
						
					
    
	
		
		
11-11-2015 12:14 AM
							 That worked! Thanks @Ali Bajwa 
						
					
        













