Member since
01-04-2016
55
Posts
100
Kudos Received
14
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 741 | 03-15-2017 06:42 AM |
| | 723 | 09-26-2016 04:30 PM |
| | 1321 | 09-21-2016 04:04 PM |
| | 632 | 09-20-2016 04:34 PM |
| | 5299 | 08-10-2016 07:16 PM |
03-17-2017
08:41 AM
2 Kudos
@vkumar: As far as I know, this setting can't be configured through Ambari before or after the deploy. From some research, I understand it can be done via the Zeppelin UI, and therefore via the Zeppelin REST API, AFTER the deploy is completed. (You can see the full steps to install the interpreter with the help of Ambari here: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_zeppelin-component-guide/content/config-livy-interp.html. The config is done on the Zeppelin UI page.) By default, zeppelin.livy.url is set to: {"envName":"ZEPPELIN_LIVY_HOST_URL","propertyName":"zeppelin.livy.url","defaultValue":"http://localhost:8998","description":"The URL for Livy Server."} Here is the full documentation on how to get/create/update Zeppelin interpreter settings after deploy: https://zeppelin.apache.org/docs/0.5.5-incubating/rest-api/rest-interpreter.html. Hope this helps!
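As a sketch of the REST route described above: once the deploy is done, the interpreter setting can be updated with a PUT to the Zeppelin interpreter API. The host/ports and the interpreter id below are placeholders, not values from this thread (list the real id first with GET /api/interpreter/setting). The snippet only builds and sanity-checks the payload; the actual curl is left commented.

```shell
# Placeholders -- adjust to your environment.
ZEPPELIN_URL="http://localhost:9995"
LIVY_URL="http://livy-host:8998"

# Build the properties payload for zeppelin.livy.url.
PAYLOAD="{\"properties\": {\"zeppelin.livy.url\": \"${LIVY_URL}\"}}"

# Sanity-check that the payload is well-formed JSON before sending it.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"

# Hypothetical interpreter id; find the real one via GET /api/interpreter/setting.
# curl -X PUT -d "$PAYLOAD" "${ZEPPELIN_URL}/api/interpreter/setting/2C48QZNJX_livy"
```

Depending on the Zeppelin version, the PUT may expect the full setting object rather than just the changed property, so check the linked REST docs first.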
03-16-2017
04:38 AM
2 Kudos
@Apoorva Teja Vanam: It doesn't look like there is a straightforward approach to this. Have you checked: http://stackoverflow.com/questions/37017366/how-can-i-make-spark1-6-saveastextfile-to-append-existing-file
03-15-2017
06:47 AM
4 Kudos
@shi cheng: I see your older post where you mentioned you used the following URL to install the component:

[root@bj-rc-dptd-ambari-sr-1-v-test-1 RANGER]# curl --user shicheng:123456 -H "X-Requested-By: ambari" -i -X POST http://localhost:8080/api/v1/clusters/ChorusCluster/RANGER/components/RANGER_ADMIN

You are missing /services in the URL. It should be:

[root@bj-rc-dptd-ambari-sr-1-v-test-1 RANGER]# curl --user shicheng:123456 -H "X-Requested-By: ambari" -i -X POST http://localhost:8080/api/v1/clusters/ChorusCluster/services/RANGER/components/RANGER_ADMIN

Hope this helps!
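To make the path fix concrete, here is a minimal sketch that assembles the component URL from its parts, using the cluster/service names from the thread; the only point is the /services segment between the cluster and the service:

```shell
AMBARI_HOST="localhost:8080"
CLUSTER="ChorusCluster"
SERVICE="RANGER"
COMPONENT="RANGER_ADMIN"

# The /services segment is required between the cluster name and the service name.
COMPONENT_URL="http://${AMBARI_HOST}/api/v1/clusters/${CLUSTER}/services/${SERVICE}/components/${COMPONENT}"
echo "$COMPONENT_URL"
```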
03-15-2017
06:42 AM
4 Kudos
@joe john
Have you tried running wget http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0/hdp.repo from the server? It could be a firewall issue. Also, can you post the error shown next to the red exclamation mark?
09-26-2016
05:29 PM
2 Kudos
@samuel sayag Is the Ambari Infra service installed and started?
09-26-2016
04:30 PM
4 Kudos
@Anas A
1) HDP is a stack maintained by Hortonworks. It is a collection of services, and versions of those services, certified by Hortonworks to work together as a Hadoop system. With a given version of the HDP stack, you get a recommended set of service versions installed. You can see the growth of the HDP stack in the diagram titled "Ongoing innovation in Apache" here: http://hortonworks.com/products/data-center/hdp/
2) You don't need an enterprise license to use the HDP repo. HDP is completely open source.
3) Before starting on a production system, you may want to try an install using the sandbox and get familiar with HDP: http://hortonworks.com/hadoop-tutorial/learning-the-ropes-of-the-hortonworks-sandbox/ and then go ahead and look at: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/ch_getting_ready_chapter.html
For a starting point into the HDP docs, see: http://hortonworks.com/downloads/#data-platform and http://docs.hortonworks.com/index.html -- the latter has docs for every version of HDP and Ambari.
09-21-2016
04:04 PM
2 Kudos
@Ludovic Janssens To understand #1, please refer to the following docs: https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Ambari_Users_Guide/content/_decommissioning_masters_and_slaves_.html and https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_Sys_Admin_Guides/content/ref-b50b4ee6-0d7b-4b86-a06f-8e7bac00810f.1.html To answer #2: yes, the physical data will remain on the worker node (unless you delete the node). You will need to rebalance once you recommission the node; refer to point #7 here: https://acadgild.com/blog/commissioning-and-decommissioning-of-datanode-in-hadoop/ Hope this helps!
09-20-2016
04:34 PM
1 Kudo
Hi @Andrew Watson, please refer to the following community question on the same topic: https://community.hortonworks.com/questions/49340/how-do-i-change-namenode-and-datanode-dir-for-an-e.html#comment-49804. It has an accepted answer. You can also check: https://community.hortonworks.com/articles/2308/how-to-move-or-change-the-hdfs-datanode-directorie.html Hope this helps!
09-13-2016
05:31 PM
1 Kudo
@Hammad Ali This most definitely looks like an agent issue. Can you check whether:
1. there are stale agent processes, and
2. the agent is up and running (and not shutting down for some reason after starting)?
To confirm both, you can use: ps -ef | grep "ambari_agent"
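A small sketch of both checks, run against a canned ps listing so the pipeline itself can be seen (the sample process lines are invented; on the host, feed it `ps -ef` directly):

```shell
# Invented sample of `ps -ef` output from an agent host.
PS_OUT=$(cat <<'EOF'
root      1201     1  0 10:01 ?  00:00:04 /usr/bin/python /usr/lib/ambari-agent/AmbariAgent.py
root      1345  1201  0 10:01 ?  00:00:12 /usr/bin/python /usr/lib/ambari-agent/main.py start
user      2999  2871  0 10:30 pts/0  00:00:00 grep ambari_agent
EOF
)

# Count agent processes, excluding the grep itself; a count far above the
# expected pair of processes may indicate stale agents.
AGENT_COUNT=$(printf '%s\n' "$PS_OUT" | grep -i 'ambari.agent' | grep -v 'grep' | wc -l)
echo "agent processes: $AGENT_COUNT"
```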
08-24-2016
06:51 AM
2 Kudos
@Roberto Sancho The second issue is caused by the Ambari repo file name. Please ensure the repo file is named ambari.repo
08-10-2016
07:16 PM
2 Kudos
@Zach Kirsch: The problem with the script could be that the wait between stopping all services and starting them is not long enough. Starting immediately after the stop would result in something like: {
"status" : 500,
"message" : "org.apache.ambari.server.controller.spi.SystemException: An internal system exception occurred: Invalid transition for servicecomponenthost, clusterName=cl1, clusterId=2, serviceName=HDFS, componentName=SECONDARY_NAMENODE, hostname=nat-r6-dtxs-ambari-hosts-4-4.openstacklocal, currentState=STOPPING, newDesiredState=STARTED"
} Instead, you could parse the response of the call that puts services into the INSTALLED state and wait until that request is completed. Code here (assuming you have ambari.props set up as in https://community.hortonworks.com/questions/29439/ambari-api-to-restart-all-the-services-with-stale.html):

curl -u $AMBARI_ADMIN_USER:$AMBARI_ADMIN_PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context": "put services into STOPPED STATE"},"Body":{"ServiceInfo": {"state" : "INSTALLED"}}}' "$URL" > /tmp/response.txt
newURL=`grep -o '"href" : [^, }]*' /tmp/response.txt | sed 's/^.*: //' | tr -d '"'`
echo newURL=$newURL
request_status=""
while [ "$request_status" != "COMPLETED" ];
do
  curl -u $AMBARI_ADMIN_USER:$AMBARI_ADMIN_PASSWORD -i -X GET "$newURL" > /tmp/new_response.txt
  request_status=`grep -o '"request_status" : [^, }]*' /tmp/new_response.txt | sed 's/^.*: //' | tr -d '"'`
  echo $request_status
done
curl -u $AMBARI_ADMIN_USER:$AMBARI_ADMIN_PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context": "put services into STARTED state"},"Body":{"ServiceInfo": {"state" : "STARTED"}}}' "$URL"

NOTE: This will fail if the services are already all stopped, or if stopping the services fails (you will also need to check in the while loop whether "$request_status" = "FAILED" and abort the script). These scripts are the bare minimum to get things working; extra checks need to be added to make them fault tolerant (especially to timing issues).
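Here is a sketch of the FAILED check the NOTE calls for. The Ambari polling call is simulated by stepping through a canned status sequence, so the control flow is visible end to end; in a real script, each status would come from the curl + grep shown in the answer above:

```shell
# Simulated request lifecycle; a real run would poll Ambari for each status.
STATUSES="PENDING IN_PROGRESS COMPLETED"

request_status=""
for request_status in $STATUSES; do
  echo "request_status=$request_status"
  if [ "$request_status" = "FAILED" ]; then
    # Abort the script here instead of looping forever on a failed request.
    echo "Stop request failed; aborting before starting services." >&2
    break
  fi
  if [ "$request_status" = "COMPLETED" ]; then
    break
  fi
done

# Only proceed to the START call if the stop actually completed.
if [ "$request_status" = "COMPLETED" ]; then
  echo "safe to start services"
fi
```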
08-09-2016
06:24 PM
2 Kudos
@Gulshad Ansari: Please check http://stackoverflow.com/questions/19205057/how-to-fix-corrupt-hadoop-hdfs. It has a clear tutorial on how to locate the corrupted blocks. Once you locate a corrupted file, removing it is a simple hdfs dfs -rm command.
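A sketch of the locate-then-remove flow, run against canned fsck output so the parsing is visible (the block IDs and paths below are invented; on a cluster, pipe in `hdfs fsck / -list-corruptfileblocks` instead of the inline sample):

```shell
# Invented sample of `hdfs fsck / -list-corruptfileblocks` output.
FSCK_OUT=$(cat <<'EOF'
The list of corrupt files under path '/' are:
blk_1073741825	/data/part-00000
blk_1073741830	/data/part-00003
The filesystem under path '/' has 2 CORRUPT files
EOF
)

# Extract just the file paths (second field of the blk_ lines).
CORRUPT_FILES=$(printf '%s\n' "$FSCK_OUT" | awk '/^blk_/ {print $2}')
printf '%s\n' "$CORRUPT_FILES"

# Each path can then be removed with: hdfs dfs -rm <path>
```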
08-08-2016
05:51 PM
4 Kudos
@Deepak k: There are times when a customer needs services beyond the HDP stack. In such scenarios, they may choose to add custom services. I found an interesting one here: https://github.com/hortonworks-gallery/ambari-vnc-service It describes itself as: "An Ambari Stack service package for VNC Server with the ability to install developer tools like Eclipse/IntelliJ/Maven as well to 'remote desktop' to the sandbox and quickly start developing on HDP Hadoop". As you can see, this gives the customer a lot of flexibility to write custom code on top of HDP to fit their individual use case, keeping Hadoop as open as possible 🙂
08-06-2016
02:42 AM
2 Kudos
@Zach Kirsch 1) I'm not sure what you mean by AMBARI_HOST -- is it where your Ambari server is installed? From any of the agent hosts, you can find the server host in /etc/ambari-agent/conf/ambari-agent.ini; look for the entry:
[server]
hostname=$SERVER_HOST
2) Are you OK using the API? You can get the cluster name by running: curl --user admin:admin http://$AMBARI_HOST:8080/api/v1/clusters/ This will return a response of the form: { "href" : "http://$AMBARI_HOST:8080/api/v1/clusters/", "items" : [ { "href" : "http://$AMBARI_HOST:8080/api/v1/clusters/clustername", "Clusters" : { "cluster_name" : "clustername", "version" : "version" } } ]} You can then extract the cluster name. Hope this helps!
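The extraction step can be sketched as below. A sample response is inlined so the parsing can be seen without a live server; in practice, pipe the curl output in instead (the cluster name "mycluster" is made up):

```shell
# Invented sample of the /api/v1/clusters response.
RESPONSE='{ "href" : "http://ambari:8080/api/v1/clusters/", "items" : [ { "href" : "http://ambari:8080/api/v1/clusters/mycluster", "Clusters" : { "cluster_name" : "mycluster", "version" : "HDP-2.5" } } ] }'

# Pull out the value of the cluster_name field.
CLUSTER_NAME=$(printf '%s' "$RESPONSE" | grep -o '"cluster_name" : "[^"]*"' | sed 's/.*: "//; s/"$//')
echo "$CLUSTER_NAME"
```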
08-05-2016
11:40 PM
6 Kudos
There are frequent questions about how to add/delete users and groups in Ambari. This document describes how to add/delete Ambari's local users and groups (delete applies to LDAP users/groups as well), and should give a consolidated list of REST calls to manage users/groups. Following are the steps for end-to-end creation/updating of users/groups using the Ambari REST API.

1) Create a user: curl -iv -u admin:admin -H "X-Requested-By: ambari" -X POST -d '{"Users/user_name":"{USER}","Users/password":"{PASSWORD}","Users/active":"{ISACTIVE}","Users/admin":"{ISADMIN}"}' http://ambari-server:8080/api/v1/users
2) Change a user's password: curl -iv -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"Users/password":"{UPDATED_PASSWORD}"}' http://ambari-server:8080/api/v1/users/{USER}
3) Toggle a user's admin status: curl -iv -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"Users/admin":"{true/false}"}' http://ambari-server:8080/api/v1/users/{USER}
4) Create a group: curl -iv -u admin:admin -H "X-Requested-By: ambari" -X POST -d '{"Groups/group_name":"{GROUP_NAME}"}' http://ambari-server:8080/api/v1/groups
5) Set the members of a (new) group: curl -iv -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '[{"MemberInfo/user_name":"{USER_NAME}","MemberInfo/group_name":"{GROUP_NAME}"}]' http://ambari-server:8080/api/v1/groups/{GROUP_NAME}/members Note that PUT replaces the group's membership, so if the group already exists, any existing members not listed in the body are removed.
6) Add a user to a group's member list: curl -iv -u admin:admin -H "X-Requested-By: ambari" -X POST -d '[{"MemberInfo/user_name":"{USER_NAME}","MemberInfo/group_name":"{GROUP_NAME}"}]' http://ambari-server:8080/api/v1/groups/{GROUP_NAME}/members
7) Get all users: curl -iv -u admin:admin -X GET http://ambari-server:8080/api/v1/users
8) Get all groups: curl -iv -u admin:admin -X GET http://ambari-server:8080/api/v1/groups
9) Get the members of a group: curl -iv -u admin:admin -X GET http://ambari-server:8080/api/v1/groups/{GROUP_NAME}/members
10) Get user info: curl -iv -u admin:admin -X GET http://ambari-server:8080/api/v1/users/{USER_NAME}
11) Get group info: curl -iv -u admin:admin -X GET http://ambari-server:8080/api/v1/groups/{GROUP_NAME}
12) Remove a user from a group's member list: to remove a user from a group, issue a PUT whose request body lists the users that should remain in the group (excluding the ones to be removed): curl -iv -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '[{"MemberInfo/user_name":"{USER_NAME1}","MemberInfo/group_name":"{GROUP_NAME}"},{"MemberInfo/user_name":"{USER_NAME2}","MemberInfo/group_name":"{GROUP_NAME}"}]' http://ambari-server:8080/api/v1/groups/{GROUP_NAME}/members
13) Delete a user: curl -iv -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://ambari-server:8080/api/v1/users/{USER_NAME}
14) Delete a group: curl -iv -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://ambari-server:8080/api/v1/groups/{GROUP_NAME}
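Since the PUT body in step 12 must list every user that stays in the group, it can be generated rather than hand-written. A sketch (the group and user names are placeholders; feed in the real remaining members from step 9):

```shell
# Placeholder group and the users that should REMAIN in it; any current
# member omitted from this list is removed by the PUT.
GROUP="devs"
REMAINING_USERS="alice bob"

# Build the JSON array of MemberInfo entries.
BODY="["
sep=""
for u in $REMAINING_USERS; do
  BODY="${BODY}${sep}{\"MemberInfo/user_name\":\"${u}\",\"MemberInfo/group_name\":\"${GROUP}\"}"
  sep=","
done
BODY="${BODY}]"
echo "$BODY"

# curl -iv -u admin:admin -H "X-Requested-By: ambari" -X PUT -d "$BODY" http://ambari-server:8080/api/v1/groups/${GROUP}/members
```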
08-05-2016
12:55 PM
1 Kudo
My assumption was this was a new group. Yes, POST would work for existing groups.
08-05-2016
11:34 AM
1 Kudo
Sorry about that. I have updated the answer with the right URI to add user to groups : http://ambari-server:8080/api/v1/groups/{GROUP_NAME}/members
08-05-2016
11:25 AM
2 Kudos
Hi @Savanna Endicott: As most people suggested in the post, this looks like a network issue. I faced the exact same issue today, and it turned out my VM had problems connecting to the proxy. It had nothing to do with the Ambari repo (I disabled the Ambari repo and still hit the same issue). What helped was to bypass the proxy and run: yum clean all; yum update. To bypass your proxy, follow this post: https://community.hortonworks.com/questions/26872/forbidden-403-error-on-hdp-24-installation.html At the same time, you can ask your IT folks to check why the proxy settings are not working on your node (maybe iptables, SELinux, etc. are misconfigured). Hope this helps!
08-05-2016
09:53 AM
2 Kudos
Hi @marko, you can add a user to a group using the following steps:
1) Add a user via the API: curl -iv -u admin:admin -H "X-Requested-By: ambari" -X POST -d '{"Users/user_name":"{USER}","Users/password":"{PASSWORD}","Users/active":"{ISACTIVE}","Users/admin":"{ISADMIN}"}' http://ambari-server:8080/api/v1/users
2) Add a group via the API: curl -iv -u admin:admin -H "X-Requested-By: ambari" -X POST -d '{"Groups/group_name":"{GROUP_NAME}"}' http://ambari-server:8080/api/v1/groups
3) Add the user to the group: curl -iv -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '[{"MemberInfo/user_name":"{USER_NAME}","MemberInfo/group_name":"{GROUP_NAME}"}]' http://ambari-server:8080/api/v1/groups/{GROUP_NAME}/members
Replace {USER_NAME} with the user name you want and {GROUP_NAME} with the group you want. Hope this helps!
08-05-2016
06:39 AM
2 Kudos
@Muthukumar S: The issue is the double quotes you used around X-Requested-By: ambari. When I copy-pasted the command you posted, I got the same error. I tried with plain quotes, like: curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE -d '{"RequestInfo":{"state":"INSTALL_FAILED"}}' http://172.22.127.69:8080/api/v1/clusters/cl1/services/HIVE and it worked. Have you used an editor that applies formatting? It looks like the quotes you used were "smart" quotes, which led to the header not being read by curl. Here is a link on how to delete a service; note that you will not be able to delete the service if it still has components installed (the link has commands to delete components as well): https://cwiki.apache.org/confluence/display/AMBARI/Using+APIs+to+delete+a+service+or+all+host+components+on+a+host PS: When you copy/paste, make sure you retype the quotes in the terminal or use a reliable editor.
08-05-2016
06:16 AM
2 Kudos
Hi @Kumar Veerappan, you can find the NameNode via the command line using: hdfs dfsadmin -report or hdfs getconf -namenodes (you can also use getconf to get the secondary NameNode/backup node, etc.)
From Ambari, go to the services page, open the service you want, and click on the component link to find the component's host. For example, to find the NameNode, go to the HDFS service page and click on the NameNode link. Alternatively, go to the hosts page and filter by component (see attached screenshot: screen-shot-2016-08-05-at-114146-am.png). The filter page may vary based on the version you are using, but every version has a filter on component type which will give you the host name. Hope this helps!
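A third option is to read the NameNode host straight out of the client config. A minimal sample hdfs-site.xml is inlined here so the extraction can be seen without a cluster (the host name is invented); on a real node, point the cat at /etc/hadoop/conf/hdfs-site.xml instead:

```shell
# Invented minimal hdfs-site.xml fragment.
HDFS_SITE=$(cat <<'EOF'
<configuration>
  <property>
    <name>dfs.namenode.rpc-address</name>
    <value>nn-host.example.com:8020</value>
  </property>
</configuration>
EOF
)

# Grab the value line that follows the rpc-address name, strip the tags,
# and drop the port to leave just the host.
NN_HOST=$(printf '%s\n' "$HDFS_SITE" \
  | grep -A1 '<name>dfs.namenode.rpc-address</name>' \
  | grep -o '<value>[^<]*</value>' \
  | sed -e 's/<value>//' -e 's|</value>||' \
  | cut -d: -f1)
echo "$NN_HOST"
```

Note that HA clusters use dfs.namenode.rpc-address.<nameservice>.<nn-id> keys instead of the plain key shown here.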
08-04-2016
04:27 PM
2 Kudos
@Dinesh E Here is an article on why this can happen; it explains the cause and the steps to take when such a scenario occurs: https://community.hortonworks.com/articles/18088/ambari-shows-hdp-services-to-be-down-whereas-they.html Hope this helps!
08-04-2016
09:55 AM
2 Kudos
Hi @Kyle Dunn, a similar issue is listed here: https://community.hortonworks.com/questions/17201/registered-version-hdp-2340-is-not-listed.html. You can see the DB changes needed to get this working. From the link:
1 - Access the Ambari database
2 - ambari=> select * from stack;
Compare this table between the two clusters to figure out what needs to be added. Please let me know if this helps!
08-04-2016
09:21 AM
3 Kudos
Hi @Matthias Rueling I can see a similar question asked here: https://community.hortonworks.com/questions/17201/registered-version-hdp-2340-is-not-listed.html. It looks like Alessio Ubaldi faced the exact same issue, and the comment says "I rebooted my sandbox vm. After that the wizard worked fine. Tks". You may not have to restart your VM; I think restarting ambari-server and the ambari-agents should help. Can you try that?
07-28-2016
10:05 AM
1 Kudo
Can you check if this helps? https://community.hortonworks.com/questions/18727/how-do-i-add-filesjars-through-hive-view.html
07-28-2016
10:02 AM
3 Kudos
Hi @Michael Dennis "MD" Uanang: You can check http://stackoverflow.com/questions/14326308/how-to-include-hbase-site-xml-in-the-classpath to ensure your hbase-site.xml changes are picked up. Alternatively, if you are using Ambari, you can add the property in the HBase service: go to HBASE -> Configs -> Custom hbase-site and add the property phoenix.query.dateFormatTimeZone with value GMT+08:00. You may have to restart dependent services.
07-28-2016
09:38 AM
1 Kudo
@Muthukumar S Yes, that is correct; you will not have to do all of that. HDFS is built to be fault tolerant, so it should work seamlessly. But I would still prefer the second method if it is a production box. Do update your findings and upvote the answer if it works. Thanks!
07-28-2016
09:30 AM
1 Kudo
@Muthukumar S NameNode failover is handled automatically by HDFS when automatic failover is configured. If you want to make your standby NameNode the active one, you can stop the current active NameNode (ensuring the current standby is alive), and the standby will take over as the active NameNode. See also: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html#Automatic_Failover To change it manually, check: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html#Administrative_commands
07-21-2016
06:04 AM
2 Kudos
@ARUN Try the command: hadoop fs -du -h / This gives you the space occupied by each directory in HDFS. To drill down, change / to the directory you want to check.
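To find the biggest consumers quickly, the du output can be sorted by size. Sample output is inlined here (the sizes and paths are invented) so the pipeline can be seen; on a cluster, replace the sample with the non-human-readable form `hadoop fs -du /` so the byte counts sort numerically:

```shell
# Invented sample of `hadoop fs -du /` output (bytes, then path).
DU_OUT=$(cat <<'EOF'
1024	/tmp
5368709120	/user
734003200	/apps
EOF
)

# Sort numerically, largest first, and keep the top entries.
TOP=$(printf '%s\n' "$DU_OUT" | sort -rn | head -n 2)
printf '%s\n' "$TOP"
```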
06-07-2016
03:06 PM
2 Kudos
Some of the custom commands I know of are DECOMMISSION/RECOMMISSION, and CLEAN for Hive. Is there an API to get all custom commands, or a doc with details of the custom commands that can be run for each service?