Member since: 09-23-2015
Posts: 151
Kudos Received: 110
Solutions: 50
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 242 | 08-12-2016 04:05 AM |
| | 542 | 08-07-2016 03:58 AM |
| | 234 | 07-27-2016 06:24 PM |
| | 455 | 07-20-2016 03:14 PM |
| | 344 | 07-18-2016 12:54 PM |
01-13-2016
03:36 PM
Thanks for pointing that out @Daniel Hendrix - we will get the PDF fixed right away. -Rich
01-13-2016
03:32 PM
Just FYI: WebHDFS is not a part of the HDPCD exam.
01-13-2016
03:31 PM
The two best resources to prepare for the exam are:
1. Going through the list of exam objectives and making sure you know how to perform each task: http://hortonworks.com/training/class/hdp-certified-developer-hdpcd-exam/
2. Working through the practice exam.
-Rich
01-13-2016
03:28 PM
Hi Daniel,
There are no WebHDFS questions on the exam. The exam objectives are listed here: http://hortonworks.com/training/class/hdp-certified-developer-hdpcd-exam/
Have you tried the practice exam? It contains tasks similar to the real exam, and more importantly you get to see what the exam environment is like: http://hortonworks.com/wp-content/uploads/2015/02/HDPCD-PracticeExamGuide1.pdf
Thanks,
Rich Raposa
Certification Manager
01-12-2016
10:07 PM
2 Kudos
OK - I think I answered my own question; sorry for the confusion. Ambari 2.2.x appears to be smarter than its predecessors. My initial REST request "failed" because HDFS was already running, so Ambari simply ignored the request. REST API requests used to always show up in the "Background Operations" window, but now they do not if the service does not need starting. That behavior seems fine to me - it just caught me off guard! Thanks for providing feedback. -Rich
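For anyone hitting the same thing, you can confirm a service's current state before issuing the PUT with a simple GET - a sketch, reusing the cluster name "horton" and host "namenode" from the commands in this thread:

# Ask Ambari for the current state of HDFS (e.g. STARTED or INSTALLED)
curl -u admin:admin 'http://namenode:8080/api/v1/clusters/horton/services/HDFS?fields=ServiceInfo/state'

If the state is already STARTED, Ambari 2.2 quietly ignores a request to start it.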
01-12-2016
10:02 PM
If the request is not being denied, then my question should be rephrased: the REST request above used to work, but on Ambari 2.2 it does not have any effect. Here is something else that is interesting - when I try to start all services using the following request:

curl -u admin:admin -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Start all services"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' http://namenode:8080/api/v1/clusters/horton/services?ServiceInfo

the request is accepted by Ambari, but all that happens is that Pig, Slider, Sqoop, and the Tez client get installed (the attached screenshot is not cropped - that is everything). What I expected to see was all of my services starting. Again, this is the same curl command I used regularly with Ambari 2.1.
... View more
01-12-2016
09:58 PM
Changing "services" to "requests" resulted in a bad request - maybe I need to change something else? I have never started a service using "requests" before: HTTP/1.1 400 Bad Request
{
"status" : 400,
"message" : "org.apache.ambari.server.controller.spi.UnsupportedPropertyException: The properties [ServiceInfo/state] specified in the request or predicate are not supported for the resource type Request."
}
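The error makes sense in hindsight: the requests resource does not accept a ServiceInfo body. It is a POST endpoint driven by a command rather than a desired state. A rough sketch of that style of call - treat the payload shape, and whether START is accepted there, as assumptions to verify against the Ambari API docs for your version:

# POST a command-style request instead of PUTting a ServiceInfo state
curl -u admin:admin -i -H 'X-Requested-By: ambari' -X POST -d '{"RequestInfo": {"command": "START", "context": "Start HDFS via requests"}, "Requests/resource_filters": [{"service_name": "HDFS"}]}' http://namenode:8080/api/v1/clusters/horton/requests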
01-12-2016
09:43 PM
1 Kudo
I am trying to start services using the REST API on a newly installed cluster named "horton" running Ambari 2.2.0.0. The following command used to work on prior versions of Ambari, but now it does not appear to do anything:

$ curl -u admin:admin -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Start HDFS"}, "Body": {"ServiceInfo": {"state": "STARTED"}}}' http://namenode:8080/api/v1/clusters/horton/services/HDFS

The response I get back is:

HTTP/1.1 200 OK
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
User: admin
Set-Cookie: AMBARISESSIONID=zs3kuvep5dte6gie9q6lb5u5;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 0
Server: Jetty(8.1.17.v20150415)
Any help would be greatly appreciated. Thanks!
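One way to see whether Ambari actually queued an operation is to list recent requests, which is what the "Background Operations" window displays - a sketch using the same cluster name:

# List recent operations and their status for cluster "horton"
curl -u admin:admin 'http://namenode:8080/api/v1/clusters/horton/requests?fields=Requests/request_context,Requests/request_status'

If nothing new shows up after the PUT, Ambari decided no work was needed.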
01-04-2016
09:11 PM
1 Kudo
You need to read that page carefully - the current exam is on HDP 2.2 with Ambari 1.7: http://hortonworks.com/training/class/hdp-certified-developer-hdpcd-exam/
That can change at any time, but for now you should be prepared for that environment. The Sandbox can be helpful for practicing and learning Pig and Hive, but I highly recommend you attempt the practice exam. It is on the same environment as the real exam and contains similar tasks.
01-04-2016
08:41 PM
1 Kudo
How you run the script is irrelevant on the exam. If you can get it to work from the command line, then that is perfect.
01-04-2016
05:35 PM
Thanks. I logged in to it and everything looks fine. MySQL is running on the namenode, as expected, and I can run the mysql client to view the data in it. Let me know if you need any further assistance.
01-04-2016
05:01 AM
1 Kudo
Are you running Pig (or starting the Grunt shell) with the -useHCatalog flag?
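For anyone unsure what that looks like, the flag is passed straight to the pig command - a quick sketch (the script name is hypothetical):

# Run a script with HCatalog support
pig -useHCatalog myscript.pig

# Or start the Grunt shell with HCatalog support
pig -useHCatalog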
01-03-2016
03:22 AM
1 Kudo
Because you are using the wrong class name for HCatLoader. It should be: org.apache.hive.hcatalog.pig.HCatLoader Let me know if that fixes your issue.
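A minimal Pig statement using that class looks like this (the database and table names are hypothetical):

A = LOAD 'default.mytable' USING org.apache.hive.hcatalog.pig.HCatLoader();
DUMP A;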
01-03-2016
03:22 AM
2 Kudos
Because you are using the wrong class name for HCatLoader. It should be: org.apache.hive.hcatalog.pig.HCatLoader Let me know if that fixes your issue.
01-03-2016
02:31 AM
1 Kudo
@Vidya SK: You do not need to use the REGISTER command for any of the standard libraries, which include the HCatLoader and HCatStorer classes.
01-02-2016
07:19 PM
Let me make a comment for anyone reading this who is attempting the practice exam: mysqld is already running on the namenode and you do not need to start it. If you want to connect to mysql, follow the instructions I provided below. It is possible that mysqld may have stopped, but very unlikely. If you are having problems with the Sqoop tasks, make sure your connection URL is correct. Look in /home/horton/solutions for the correct URL.
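As a hedged illustration of what a correct connection looks like on the Sqoop side (the database and table names here are hypothetical - the real URL is in /home/horton/solutions):

sqoop import --connect jdbc:mysql://namenode/mydb --username root -P --table mytable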
01-02-2016
06:07 PM
Send me the public DNS of your instance - email it to certification@hortonworks.com and I'll take a look.
12-31-2015
06:47 PM
What errors are you getting? And what is the command you are using to run the code? Perhaps you could share your code also. Thanks.
12-31-2015
04:05 AM
You need to ssh onto the namenode first:

ssh root@namenode

(The password is "hadoop".) There is no need to start mysqld on the namenode - it is already running. You can view what is in the database by running the mysql client:

# mysql --user root -p

The password is "hadoop". Let me know if that works.
12-28-2015
02:27 AM
No worries. I am just trying to make it clear for anyone in the future who is preparing for our exams. The HCatLoader and HCatStorer classes have different package names, so the subject of this post is misleading and has nothing to do with missing JAR files. If you use the correct class name then everything works fine.
12-28-2015
02:11 AM
That issue with Ambari has nothing to do with the environment of the practice exam. The package name for HCatLoader and HCatStorer changed in HDP 2.0. Using the proper class name (shown above) is the answer to this question.
12-26-2015
04:46 PM
Can you share the error? I am guessing it is either not an error (there is a warning you can ignore), or you are using the wrong package name for HCatLoader. The class name to use is: org.apache.hive.hcatalog.pig.HCatLoader Let me know if you get it to work.
12-22-2015
01:42 AM
Yes - restart the ambari-agent and then run start_all_services.sh. Let me know if that fixes the issue. If not, I can login to your instance and see what the issue is.
12-21-2015
06:09 PM
How large is your dataset? The number of mappers is based on your Input Splits, not the number of buckets. If you have a large amount of data, then 1 bucket may require multiple mappers. Based on your question though I wonder if your buckets might not have been created properly. It sounds like you have a bucket per client id. Are you able to share any of the CREATE TABLE code?
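For reference, bucketing is declared when the table is created and the bucket count is fixed there - a hedged sketch (table and column names hypothetical):

CREATE TABLE clients (id INT, name STRING)
CLUSTERED BY (id) INTO 16 BUCKETS;

At read time the number of mappers still follows the input splits, not the 16 buckets.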
12-16-2015
05:37 PM
In the situations I have seen when using XML and Hive, each record of XML has to be on a single line. Also, if you use a custom SerDe designed for XML then you normally don't worry about the line terminator. How are you processing the records now? This might help: https://github.com/dvasilen/Hive-XML-SerDe/wiki/XML-data-sources
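As a rough illustration of that SerDe in use, based on the project's wiki (the column name and XPath here are hypothetical):

CREATE TABLE xml_records (name STRING)
ROW FORMAT SERDE 'com.ibm.spss.hive.serde2.xml.XmlSerDe'
WITH SERDEPROPERTIES ("column.xpath.name" = "/record/name/text()")
STORED AS
INPUTFORMAT 'com.ibm.spss.hive.serde2.xml.XmlInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat'
TBLPROPERTIES ("xmlinput.start" = "<record", "xmlinput.end" = "</record>");

The xmlinput.start/xmlinput.end properties are what let the input format find record boundaries even when the XML spans multiple lines.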
12-16-2015
05:19 PM
Are you talking about running a Pig UDF written in Python? You need to make sure you are using streaming_python instead of jython. What does your REGISTER command look like?
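For comparison, a streaming_python registration looks roughly like this (the file and namespace names are hypothetical):

REGISTER 'my_udfs.py' USING streaming_python AS my_udfs;
B = FOREACH A GENERATE my_udfs.my_func(field);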
12-16-2015
05:08 PM
1 Kudo
If you do an "INSERT OVERWRITE" then all the files in the table's LOCATION will be deleted and replaced with the new data.
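For example (table names hypothetical):

INSERT OVERWRITE TABLE target
SELECT * FROM source;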
12-16-2015
02:21 AM
You can set properties using -D, but you need a space after it. For example:

oozie job -D property=value -run ...

I am not sure whether that works for log4j properties, though.
12-16-2015
02:00 AM
2 Kudos
I think you answered your own question: you did not use OVERWRITE on the second load command, so you added the records twice. If you want to start over with all new data in the table, run the load command with OVERWRITE.
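For example (the path and table name are hypothetical):

LOAD DATA INPATH '/user/horton/newdata' OVERWRITE INTO TABLE mytable;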
12-16-2015
12:57 AM
Make sure you are not using the free EC2 instance type - it is not big enough to run HDP. Log in, restart Ambari, and then run the start_all_services.sh script:

# ssh namenode "ambari-server restart"
# /root/start_all_services.sh

Let me know if that works for you.