Member since: 01-04-2016
Posts: 55
Kudos Received: 100
Solutions: 14

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1629 | 03-15-2017 06:42 AM |
| | 1382 | 09-26-2016 04:30 PM |
| | 2173 | 09-21-2016 04:04 PM |
| | 1344 | 09-20-2016 04:34 PM |
| | 8559 | 08-10-2016 07:16 PM |
03-16-2017
04:38 AM
2 Kudos
@Apoorva Teja Vanam: It doesn't look like there is a straightforward approach to this. Have you checked this thread: http://stackoverflow.com/questions/37017366/how-can-i-make-spark1-6-saveastextfile-to-append-existing-file
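One workaround discussed in threads like that one: since saveAsTextFile refuses to write into an existing path, write each run's output to a fresh subdirectory and have consumers read all runs back with a glob. A rough shell sketch of the layout (plain local directories stand in for HDFS paths here, and the run_... naming is my own):

```shell
# Each run writes its output under a fresh, uniquely named directory
# instead of trying to append to an existing one.
base="/tmp/append_demo/output"
run_dir="$base/run_$(date +%s)_$$"     # unique per run (naming is illustrative)
mkdir -p "$run_dir"
echo "records from this run" > "$run_dir/part-00000"

# Readers treat the union of all runs as one dataset via a glob,
# which is effectively an "append" across runs:
cat "$base"/run_*/part-*
```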
03-15-2017
06:42 AM
4 Kudos
@joe john
Have you tried running wget http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0/hdp.repo from the server? It could be a firewall issue. Also, can you post the error shown next to the red exclamation mark?
09-26-2016
05:29 PM
2 Kudos
@samuel sayag Is the Ambari Infra service installed and started?
09-26-2016
04:30 PM
4 Kudos
@Anas A
1) HDP is a stack maintained by Hortonworks: a collection of services, at versions certified by Hortonworks to work together as a Hadoop system. With a given version of the HDP "stack", you get a recommended set of service versions. You can see the growth of the HDP stack in the diagram titled "Ongoing innovation in Apache" here: http://hortonworks.com/products/data-center/hdp/
2) You don't need an enterprise license to use the HDP repo; HDP is completely open source.
3) Before starting on a production system, you may want to install the sandbox and get familiar with HDP: http://hortonworks.com/hadoop-tutorial/learning-the-ropes-of-the-hortonworks-sandbox/ and then go ahead and look at: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/ch_getting_ready_chapter.html
As a starting point into the HDP docs, see http://hortonworks.com/downloads/#data-platform and http://docs.hortonworks.com/index.html, which has docs for every version of HDP and Ambari.
09-21-2016
04:04 PM
2 Kudos
@Ludovic Janssens
For #1, please refer to the following docs: https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Ambari_Users_Guide/content/_decommissioning_masters_and_slaves_.html and https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_Sys_Admin_Guides/content/ref-b50b4ee6-0d7b-4b86-a06f-8e7bac00810f.1.html
To answer #2: yes, the physical data will remain on the worker node (unless you delete the node). You will need to rebalance once you recommission the node; refer to point #7 here: https://acadgild.com/blog/commissioning-and-decommissioning-of-datanode-in-hadoop/
Hope this helps!
09-20-2016
04:34 PM
1 Kudo
Hi @Andrew Watson, please refer to the following community question on the same topic: https://community.hortonworks.com/questions/49340/how-do-i-change-namenode-and-datanode-dir-for-an-e.html#comment-49804. It has an accepted answer. You can also check: https://community.hortonworks.com/articles/2308/how-to-move-or-change-the-hdfs-datanode-directorie.html
Hope this helps!
09-13-2016
05:31 PM
1 Kudo
@Hammad Ali This most definitely looks like an agent issue. Can you check:
1. whether there are stale agent processes, and
2. whether the agent is up and running (and not shutting down shortly after starting for some reason)?
To confirm both, you can use: ps -ef | grep "ambari_agent"
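Those two checks are easy to script; here is a small sketch using pgrep -f, which matches against the full command line much like ps -ef | grep does (the helper name is my own):

```shell
# check_process prints "running" if any process whose command line
# matches the given pattern exists, and "not running" otherwise.
check_process() {
    if pgrep -f "$1" > /dev/null; then
        echo "running"
    else
        echo "not running"
    fi
}

# On an Ambari host you would call:
check_process "ambari_agent"
```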
08-24-2016
06:51 AM
2 Kudos
@Roberto Sancho The second issue is caused by the Ambari repo file name. Please ensure the repo file is named ambari.repo.
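A quick way to check and fix this, sketched against a scratch directory (on a real host the directory would be /etc/yum.repos.d):

```shell
# Ensure the Ambari repo file is named exactly ambari.repo.
repo_dir="/tmp/repos.d"                  # stand-in for /etc/yum.repos.d
mkdir -p "$repo_dir"
touch "$repo_dir/ambari-2.2.2.0.repo"    # an example of a mis-named repo file

if [ ! -f "$repo_dir/ambari.repo" ]; then
    # Rename whichever ambari*.repo file is present
    for f in "$repo_dir"/ambari*.repo; do
        [ -f "$f" ] && mv "$f" "$repo_dir/ambari.repo"
        break
    done
fi
ls "$repo_dir"
```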
08-10-2016
07:16 PM
2 Kudos
@Zach Kirsch: The problem in the script could be that the wait between stopping all services and starting them is not long enough. Starting immediately after the stop request would result in something like:
{
  "status" : 500,
  "message" : "org.apache.ambari.server.controller.spi.SystemException: An internal system exception occurred: Invalid transition for servicecomponenthost, clusterName=cl1, clusterId=2, serviceName=HDFS, componentName=SECONDARY_NAMENODE, hostname=nat-r6-dtxs-ambari-hosts-4-4.openstacklocal, currentState=STOPPING, newDesiredState=STARTED"
}
Instead, parse the response of the call that puts services into the INSTALLED state, and poll until that request has completed. Code here (assuming you have ambari.props set up as in https://community.hortonworks.com/questions/29439/ambari-api-to-restart-all-the-services-with-stale.html):

# Stop all services and capture the response
curl -u $AMBARI_ADMIN_USER:$AMBARI_ADMIN_PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context": "put services into STOPPED state"},"Body":{"ServiceInfo": {"state" : "INSTALLED"}}}' "$URL" > /tmp/response.txt

# Pull the request URL out of the response
newURL=`grep -o '"href" : [^, }]*' /tmp/response.txt | sed 's/^.*: //' | tr -d '"'`
echo newURL=$newURL

# Poll until the stop request completes
request_status=""
while [ "$request_status" != "COMPLETED" ]; do
    curl -u $AMBARI_ADMIN_USER:$AMBARI_ADMIN_PASSWORD -i -X GET "$newURL" > /tmp/new_response.txt
    request_status=`grep -o '"request_status" : [^, }]*' /tmp/new_response.txt | sed 's/^.*: //' | tr -d '"'`
    echo $request_status
    sleep 5    # don't hammer the API while waiting
done

# Now start all services
curl -u $AMBARI_ADMIN_USER:$AMBARI_ADMIN_PASSWORD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context": "put services into STARTED state"},"Body":{"ServiceInfo": {"state" : "STARTED"}}}' "$URL"

NOTE: This will fail if the services are already all stopped, or if the stop itself fails (you should also check in the while loop whether "$request_status" = "FAILED" and abort the script if it is). These scripts are the bare minimum to get things working; extra checks need to be added to make them fault tolerant (especially to timing issues).
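The FAILED check the note mentions can be folded into the same grep/sed parsing. A sketch against a canned response body (the JSON below is illustrative, not a captured Ambari reply):

```shell
# extract_status pulls "request_status" out of a saved Ambari API response,
# using the same grep/sed pattern as the polling loop.
extract_status() {
    grep -o '"request_status" : [^, }]*' "$1" | sed 's/^.*: //' | tr -d '"'
}

# Illustrative response body; a real one comes from the GET in the loop.
cat > /tmp/sample_response.txt <<'EOF'
{ "Requests" : { "id" : 42, "request_status" : "FAILED" } }
EOF

status=$(extract_status /tmp/sample_response.txt)
if [ "$status" = "FAILED" ]; then
    echo "stop request failed; abort before issuing the start call"
fi
```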
08-09-2016
06:24 PM
2 Kudos
@Gulshad Ansari: Please check http://stackoverflow.com/questions/19205057/how-to-fix-corrupt-hadoop-hdfs. It has a clear walkthrough of how to locate the corrupted blocks. Once you locate the affected file, a simple hdfs dfs -rm removes it along with its corrupted blocks.
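For reference, the sequence usually looks like the sketch below. The hdfs commands are shown as comments since they need a live cluster, and the sample fsck output line is illustrative, not captured output:

```shell
# On a live cluster:
#   hdfs fsck / -list-corruptfileblocks   # list corrupt block id / file path pairs
#   hdfs dfs -rm /path/to/corrupt/file    # remove a file you can re-create
#   hdfs fsck / -delete                   # or drop every corrupt file at once
#
# The listing prints block id and path per line; pulling the path
# out of an illustrative tab-separated line:
line=$(printf 'blk_1073741825\t/user/foo/data/part-00000')
corrupt_path=$(printf '%s\n' "$line" | awk -F'\t' '{print $2}')
echo "$corrupt_path"
```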