Member since: 09-17-2015
Posts: 436
Kudos Received: 736
Solutions: 81
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5200 | 01-14-2017 01:52 AM |
| | 7537 | 12-07-2016 06:41 PM |
| | 8962 | 11-02-2016 06:56 PM |
| | 2871 | 10-19-2016 08:10 PM |
| | 7352 | 10-19-2016 08:05 AM |
04-15-2016
08:28 PM
If the curl request errors out with a 500 error, run the command below (from the same README) to fully stop the service, then retry the original curl request:
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Stop $SERVICE via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE
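Once the stop request completes, the original request should go through. For reference, starting the service again is the same PUT with the desired state set back to STARTED (a sketch following the same README pattern; $PASSWORD, $SERVICE, $AMBARI_HOST and $CLUSTER as above):

# start the service again by setting its desired state to STARTED
curl -u admin:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Start $SERVICE via REST"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE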
04-15-2016
06:05 PM
@sree balaji The Ambari service does not support upgrade, so a reinstall is needed. As Andrew mentioned, if you have already installed NiFi, you should use the instructions on GitHub to first remove the service and the NiFi installation. Then follow the instructions I provided above to update the service definition and reinstall NiFi.
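If you'd rather script the removal, something like the below should work against the Ambari REST API (a minimal sketch, not the exact GitHub instructions; admin credentials and the usual $AMBARI_HOST/$CLUSTER variables are assumed):

# stop the NIFI service first (desired state INSTALLED), then delete it from the cluster
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context":"Stop NIFI"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/NIFI
curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/NIFI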
04-15-2016
04:25 PM
5 Kudos
If you haven't already installed NiFi on the cluster, you can update the NiFi service definition in Ambari on the HDP 2.4 sandbox with the steps below:

rm -rf /var/lib/ambari-server/resources/stacks/HDP/2.4/services/NIFI
sudo git clone https://github.com/abajwa-hw/ambari-nifi-service.git /var/lib/ambari-server/resources/stacks/HDP/2.4/services/NIFI
service ambari restart

Once Ambari has restarted, install NiFi using the 'Add Service' wizard as usual.
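If NiFi doesn't show up in the wizard, check that the clone actually populated the service definition folder (a quick optional sanity check):

# the folder should contain metainfo.xml and the service scripts
ls /var/lib/ambari-server/resources/stacks/HDP/2.4/services/NIFI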
04-04-2016
09:18 PM
Usually this happens when Zeppelin takes longer to come up (e.g. if your VM is running low on resources). Give it a few minutes and see if the status of Zeppelin turns green in Ambari.
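If you prefer checking from the command line, the Ambari REST API can report the service state (a sketch; the cluster name "Sandbox" and admin/admin credentials are assumptions for the default sandbox):

# prints the current state of the ZEPPELIN service, e.g. "STARTED" once it's up
curl -u admin:admin "http://localhost:8080/api/v1/clusters/Sandbox/services/ZEPPELIN?fields=ServiceInfo/state"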
04-04-2016
08:11 PM
1 Kudo
Big Data Wrangling on HDP with Trifacta - How to Get Started

Data preparation is a constant challenge for any enterprise, and the speed, diversity, and volume of data introduced by Big Data amplify this problem substantially. Trifacta with HDP introduces a new approach to organizing, cleansing, enriching, and structuring your data: Data Wrangling, where business users can connect and engage with the data to produce high-quality data sets for analytics.

Step 1: Download VM

Trifacta would like to provide Hortonworks partners and SIs with an opportunity to test drive Data Wrangling on HDP. Here is a link to a pre-configured virtual machine containing Trifacta Enterprise and HDP 2.3. Feel free to download and try Wrangling today:

ftp://download.trifacta.com/Hortonworks/Trifacta_3.0_HDP_2.3.2_sandbox.ova (user: hortonworks, pw: wrangler)

Step 2: Start VM and access consoles

The Trifacta Enterprise Wrangler on HDP is built on HDP 2.3 and the demo/sandbox instance of CentOS. To access the instance:

When using VMware Desktop/Fusion, the VM is configured to share networking with your host (NAT), so the IP issued to your VM should be something like 172.16.238.133 and the Ambari login is http://172.16.238.133:8080. Log in to Ambari (admin/admin) to ensure your HDP services are running. NOTE: If using VirtualBox, port forwarding will allow you to access these services on the same ports, but through localhost: http://127.0.0.1:8080

To access the VM via ssh: ssh root@172.16.238.133 (pw: Wrangler!123)
To start the Trifacta service: service trifacta start
To access the Trifacta UI: http://172.16.238.133:3005 (user: admin@trifacta.local, pw: admin)

Step 3: Try out the demos

The demo instance comes configured with 11 canned Trifacta demos; the datasets for these are available for use immediately:

CPG_CrossSell
IoT_CityBike
CPG_InventoryPlanning
Pharmacovigilance_DrugSafety
ClickStream_WeblogAnalytics
SIEM_CyberSecurity
DemoContentOverview.pdf
SalesDashboard_For_Executives
FinServ_TraderFraud
TelcoChurn_4MinuteDemo
TelcoChurn_Customer360
Insurance_CrossSell

Data Wrangling allows a business user to discover, register, transform, structure, and publish high-quality analytic data sets in a matter of minutes. Register on the Trifacta Partner Portal for more information on these demos: https://trifacta.channeltivity.com/

For more, visit http://www.trifacta.com.
03-27-2016
03:52 AM
@Maeve Ryan It might be a good idea to go through this tutorial: http://hortonworks.com/hadoop-tutorial/learning-the-ropes-of-the-hortonworks-sandbox/
03-18-2016
11:45 PM
3 Kudos
Background: Starting with HDP 2.2, which is based on Hadoop 2.6, Hortonworks added support for rolling upgrades (detailed description available here: http://hortonworks.com/blog/introducing-rolling-upgrades-downgrades-apache-hadoop-yarn-cluster/). A fundamental assumption made by rolling upgrades is that jobs should not rely implicitly on the current version of artifacts such as jar files and native libraries, since these could change in the middle of a rolling upgrade while a job is executing. Instead, the system is configured to require a particular value for hdp.version at the time of job submission.

Solution:

1. One option is to modify mapred-site.xml to replace the hdp.version property with the right value for your cluster. CAUTION: if you modify mapred-site.xml on a cluster node, this will break rolling upgrades in certain scenarios, because a program like Oozie submitting a job from that node will use the hardcoded version instead of the version specified by the client.

2. The better option is to:

a) Create a file called java-opts containing the following config value: -Dhdp.version=2.3.4.0-3485. You can also specify the same value using SPARK_JAVA_OPTS, i.e. export SPARK_JAVA_OPTS="-Dhdp.version=2.3.4.0-3485"

b) Modify /usr/hdp/current/spark-client/conf/spark-defaults.conf and add the lines below:

spark.driver.extraJavaOptions -Dhdp.version=2.3.4.0-3485
spark.yarn.am.extraJavaOptions -Dhdp.version=2.3.4.0-3485
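If you'd rather not change spark-defaults.conf globally, the same two settings can be passed per job on the spark-submit command line (a sketch using the bundled SparkPi example; the jar path and version string are assumptions to adjust for your cluster):

# pass hdp.version to both the driver and the YARN application master for this job only
spark-submit --master yarn-client \
  --conf "spark.driver.extraJavaOptions=-Dhdp.version=2.3.4.0-3485" \
  --conf "spark.yarn.am.extraJavaOptions=-Dhdp.version=2.3.4.0-3485" \
  --class org.apache.spark.examples.SparkPi \
  /usr/hdp/current/spark-client/lib/spark-examples*.jar 10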
03-10-2016
08:34 PM
1 Kudo
Try restarting Hive from Ambari. If it still doesn't work, try connecting with --verbose to get more error logging, and check the HiveServer2 log (/var/log/hive/hiveserver2.log) and the client log (/tmp/root/hive.log). Tried it on mine and it seemed to work fine:

# beeline --verbose -u 'jdbc:hive2://localhost:10000' -n mktg1 -p mktg1
WARNING: Use "yarn jar" to launch YARN applications.
issuing: !connect jdbc:hive2://localhost:10000 mktg1 [passwd stripped]
Connecting to jdbc:hive2://localhost:10000
Connected to: Apache Hive (version 1.2.1000.2.4.0.0-169)
Driver: Hive JDBC (version 1.2.1000.2.4.0.0-169)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.1000.2.4.0.0-169 by Apache Hive
0: jdbc:hive2://localhost:10000> show databases;
Getting log thread is interrupted, since query is done!
+----------------+--+
| database_name |
+----------------+--+
| default |
| xademo |
+----------------+--+
2 rows selected (0.206 seconds)
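If the connection still fails, the server and client logs mentioned above are the first place to look:

# server-side errors
tail -n 50 /var/log/hive/hiveserver2.log
# client-side errors
tail -n 50 /tmp/root/hive.log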
03-10-2016
04:10 AM
1 Kudo
Hmm, it seems to work on this end. Maybe try restarting the VM?
03-09-2016
05:54 PM
2 Kudos
You are invoking the API to stop the NodeManager (not put it in maintenance mode). To put it in maintenance mode, try the below:

curl -u admin:OpsAm-iAp1Pass -H "X-Requested-By: ambari" -i -X PUT -d '{"RequestInfo":{"context":"Turn On Maintenance Mode For NodeManager"}, "Body":{"HostRoles":{"maintenance_state":"ON"}}}' http://viceroy10:8080/api/v1/clusters/et_cluster/hosts/serf120int.etops.tllsc.net/host_components/NODEMANAGER
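To take the NodeManager back out of maintenance mode afterwards, the same call with the state set to OFF should work:

curl -u admin:OpsAm-iAp1Pass -H "X-Requested-By: ambari" -i -X PUT -d '{"RequestInfo":{"context":"Turn Off Maintenance Mode For NodeManager"}, "Body":{"HostRoles":{"maintenance_state":"OFF"}}}' http://viceroy10:8080/api/v1/clusters/et_cluster/hosts/serf120int.etops.tllsc.net/host_components/NODEMANAGER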