Member since: 01-04-2016
Posts: 55
Kudos Received: 100
Solutions: 14
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 823 | 03-15-2017 06:42 AM |
| | 790 | 09-26-2016 04:30 PM |
| | 1453 | 09-21-2016 04:04 PM |
| | 706 | 09-20-2016 04:34 PM |
| | 5690 | 08-10-2016 07:16 PM |
03-17-2017
08:41 AM
2 Kudos
@vkumar : As far as I know, this setting can't be configured through Ambari, either before or after the deploy. From what I can tell, it can be changed in the Zeppelin UI, and therefore via the Zeppelin REST API, after the deploy is completed. (The full steps for configuring the Livy interpreter through the Zeppelin UI are here: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_zeppelin-component-guide/content/config-livy-interp.html.) By default, zeppelin.livy.url is defined as: {"envName":"ZEPPELIN_LIVY_HOST_URL","propertyName":"zeppelin.livy.url","defaultValue":"http://localhost:8998","description":"The URL for Livy Server."} The full documentation for creating and updating Zeppelin interpreter settings after deploy is here: https://zeppelin.apache.org/docs/0.5.5-incubating/rest-api/rest-interpreter.html. Hope this helps!
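As a rough sketch of that REST route (not from the original answer): the Zeppelin host/port and the interpreter setting id below are placeholders, and some Zeppelin versions expect the complete setting object returned by the GET rather than just the changed property, so check the GET output first and adjust the payload accordingly.

```bash
# List interpreter settings and note the id of the Livy setting (host/port are placeholders).
curl -s http://<zeppelin-host>:9995/api/interpreter/setting

# Update zeppelin.livy.url on that setting id. If your Zeppelin version requires the
# full setting object, copy it from the GET response above and edit the property there.
curl -X PUT -H "Content-Type: application/json" \
     http://<zeppelin-host>:9995/api/interpreter/setting/<SETTING_ID> \
     -d '{"properties": {"zeppelin.livy.url": "http://<livy-host>:8998"}}'
```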
03-16-2017
04:38 AM
2 Kudos
@Apoorva Teja Vanam : It doesn't look like there is a straightforward approach to this. Have you checked: http://stackoverflow.com/questions/37017366/how-can-i-make-spark1-6-saveastextfile-to-append-existing-file
03-15-2017
06:47 AM
4 Kudos
@shi cheng : In your older post you mentioned using the following call to install the component:
curl --user shicheng:123456 -H "X-Requested-By: ambari" -i -X POST http://localhost:8080/api/v1/clusters/ChorusCluster/RANGER/components/RANGER_ADMIN
You are missing /services in the URL. It should be:
curl --user shicheng:123456 -H "X-Requested-By: ambari" -i -X POST http://localhost:8080/api/v1/clusters/ChorusCluster/services/RANGER/components/RANGER_ADMIN
Hope this helps!
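The typical follow-up steps with the Ambari REST API look roughly like this (not part of the original answer; the host FQDN and credentials are placeholders):

```bash
# Map the RANGER_ADMIN component to a host (host name is a placeholder).
curl --user admin:admin -H "X-Requested-By: ambari" -i -X POST \
  http://localhost:8080/api/v1/clusters/ChorusCluster/hosts/<host-fqdn>/host_components/RANGER_ADMIN

# Install the service (transitions its components to the INSTALLED state).
curl --user admin:admin -H "X-Requested-By: ambari" -i -X PUT \
  -d '{"RequestInfo":{"context":"Install Ranger"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://localhost:8080/api/v1/clusters/ChorusCluster/services/RANGER
```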
03-15-2017
06:42 AM
4 Kudos
@joe john
Have you tried running wget http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0/hdp.repo from the server? It could be a firewall issue. Also, can you post the error shown next to the red exclamation mark?
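For example, a quick connectivity check from the affected host (curl shown as an alternative in case wget is not installed):

```bash
# Fetch the repo file; a timeout or connection error here points to a firewall/proxy problem.
wget -O /tmp/hdp.repo http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0/hdp.repo

# Alternative if wget is not installed: just check the HTTP response headers.
curl -fI http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0/hdp.repo
```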
09-26-2016
05:29 PM
2 Kudos
@samuel sayag Is the Ambari Infra service installed and started?
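One way to check (not part of the original answer) is through the Ambari REST API; the credentials, Ambari host, and cluster name below are placeholders:

```bash
# Query the Ambari Infra service state (the service name is AMBARI_INFRA in HDP 2.5).
curl -u admin:admin -H "X-Requested-By: ambari" \
  "http://<ambari-host>:8080/api/v1/clusters/<CLUSTER_NAME>/services/AMBARI_INFRA?fields=ServiceInfo/state"
```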
09-26-2016
04:30 PM
4 Kudos
@Anas A
1) HDP is a stack maintained by Hortonworks: a collection of services, at specific versions, certified by Hortonworks to work together as a Hadoop system. With a given version of the HDP stack, you get a recommended set of service versions to install. You can see the growth of the HDP stack in the diagram titled "Ongoing innovation in Apache" here: http://hortonworks.com/products/data-center/hdp/
2) You don't need an enterprise license to use the HDP repo. HDP is completely open source.
3) Before starting on a production system, you may want to try installing the sandbox and get familiar with HDP: http://hortonworks.com/hadoop-tutorial/learning-the-ropes-of-the-hortonworks-sandbox/ and then look at: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/ch_getting_ready_chapter.html
As a starting point into the HDP docs, see http://hortonworks.com/downloads/#data-platform and http://docs.hortonworks.com/index.html, which has docs for every version of HDP and Ambari.
09-21-2016
04:04 PM
2 Kudos
@Ludovic Janssens For #1, please refer to the following docs: https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Ambari_Users_Guide/content/_decommissioning_masters_and_slaves_.html and https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_Sys_Admin_Guides/content/ref-b50b4ee6-0d7b-4b86-a06f-8e7bac00810f.1.html
For #2: yes, the physical data will remain on the worker node (unless you delete the node). You will need to rebalance once you recommission the node; refer to point #7 here: https://acadgild.com/blog/commissioning-and-decommissioning-of-datanode-in-hadoop/
Hope this helps!
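For the rebalancing step, the usual command looks like this (the threshold value is just an example):

```bash
# Run the HDFS balancer as the hdfs user; -threshold is the allowed percentage
# deviation of each DataNode's utilization from the cluster average.
sudo -u hdfs hdfs balancer -threshold 10
```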
09-20-2016
04:34 PM
1 Kudo
Hi @Andrew Watson, please refer to the following community question on the same topic: https://community.hortonworks.com/questions/49340/how-do-i-change-namenode-and-datanode-dir-for-an-e.html#comment-49804. It has an accepted answer. You can also check: https://community.hortonworks.com/articles/2308/how-to-move-or-change-the-hdfs-datanode-directorie.html Hope this helps!
09-13-2016
05:31 PM
1 Kudo
@Hammad Ali This most definitely looks like an agent issue. Can you check whether:
1. There are stale agent processes
2. The agent is up and running (and not shutting down shortly after starting for some reason)
To confirm both, you can use: ps -ef | grep "ambari_agent"
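A few more checks along the same lines (the log path is the default agent log location):

```bash
# Look for stale or duplicate agent processes (the [a] trick excludes the grep itself).
ps -ef | grep "[a]mbari_agent"

# Check agent status and restart it if it is down.
ambari-agent status
ambari-agent restart

# The agent log usually shows why it exited shortly after starting.
tail -n 50 /var/log/ambari-agent/ambari-agent.log
```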
08-24-2016
06:51 AM
2 Kudos
@Roberto Sancho The second issue is caused by the Ambari repo file name. Please ensure the repo file is named ambari.repo.
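A quick way to verify this on the affected host (the source name in the rename is a placeholder; use whatever the file is actually called):

```bash
# See what the Ambari repo file is currently called.
ls /etc/yum.repos.d/ | grep -i ambari

# Rename it if it is not ambari.repo.
mv /etc/yum.repos.d/<current-ambari-repo-file> /etc/yum.repos.d/ambari.repo
```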