Running a MapReduce job on an HDP cluster of four computers

New Contributor

I have created a cluster of four computers with HDP on Ubuntu 16.04, but I am clueless about how to run MapReduce jobs on this cluster. Please help me.

1 ACCEPTED SOLUTION

Guru

Hi @Vikas Malviya,

I would recommend you get started by looking at the tutorials provided on the Hortonworks website. While they are designed for the HDP Sandbox, you can likely still follow them on your own cluster.

https://hortonworks.com/tutorials/

Good luck on your big data journey!
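
If you want a quick smoke test before working through the tutorials, HDP ships a jar of example MapReduce jobs. A minimal sketch, assuming the standard HDP client layout (verify the jar path on your install; the HDFS paths below are placeholders):

    # Estimate pi with 10 mappers, 100 samples each
    yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar pi 10 100

    # Classic wordcount: stage some input in HDFS, run the job, read the output
    hdfs dfs -mkdir -p /tmp/wc-in
    hdfs dfs -put /etc/hosts /tmp/wc-in/
    yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar wordcount /tmp/wc-in /tmp/wc-out
    hdfs dfs -cat /tmp/wc-out/part-r-00000

If the pi job completes and prints an estimate, YARN and MapReduce are wired up correctly on your cluster.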


4 REPLIES


New Contributor

Thanks, sonu, for your answer. I have one more question: an 'ambari' user account was created automatically on my Ubuntu system, and I am not able to log in to that account. How can I log in?

Guru

Hi @Vikas Malviya,

The default is usually admin/admin, but if that is not working for you, you can SSH in as a superuser and run 'ambari-admin-password-reset' to reset the Ambari admin password.
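
A rough sketch of that session, assuming ambari-server is installed on the node you SSH into (the host name is a placeholder):

    # Log in to the Ambari server node as a superuser
    ssh root@ambari-server-host

    # Reset the Ambari web UI admin password; the tool prompts for a new one
    ambari-admin-password-reset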

New Contributor

No, actually that user account was created automatically on my Ubuntu system, not in Hortonworks/Ambari.
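
For what it's worth, the Ambari installer can create a local 'ambari' service account for running its daemons, and service accounts like that are generally not meant for interactive login. A minimal way to check, assuming a standard Ubuntu setup:

    # Show the account's passwd entry; service accounts often end in
    # /usr/sbin/nologin or /bin/false, which blocks normal login
    getent passwd ambari

    # If you need a shell as that user anyway, switch from a sudo-capable account
    sudo -u ambari -H bash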