Member since: 08-23-2016
Posts: 261
Kudos Received: 201
Solutions: 106
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1756 | 01-26-2018 07:28 PM |
| | 1399 | 11-29-2017 04:02 PM |
| | 35335 | 11-29-2017 03:56 PM |
| | 3516 | 11-28-2017 01:01 AM |
| | 955 | 11-22-2017 04:08 PM |
02-22-2017
09:13 PM
1 Kudo
Hi @Vladislav Falfushinsky, Hortonworks is aiming for early Q2 for the Ambari 2.5 GA release! Very exciting.
02-22-2017
06:56 PM
1 Kudo
Hi, is there a reason you wish to add them using the Sandbox? The Sandbox, being a specialized product, likely adds unnecessary complexity. It might be easier to avoid the complications of the Sandbox and just do a fresh Ambari-based install.
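For reference, a fresh Ambari-based install on a CentOS/RHEL host follows roughly this shape (the repository URL is a placeholder; get the exact one for your Ambari version from the install guide):

```shell
# Add the Ambari repository (placeholder URL; take the real one from the Ambari install guide)
sudo wget -nv <ambari-repo-url> -O /etc/yum.repos.d/ambari.repo

# Install and configure the Ambari server (-s accepts the defaults non-interactively)
sudo yum install -y ambari-server
sudo ambari-server setup -s

# Start the server, then browse to http://<host>:8080 to run the cluster install wizard
sudo ambari-server start
```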
02-21-2017
10:07 PM
You can also use the shell-in-a-box included in the Sandbox to remove ambiguity: http://127.0.0.1:4200/
01-26-2017
11:27 PM
3 Kudos
@diegoavella if you are having trouble with the Ambari admin credentials, you can always log in to the sandbox via SSH as the root user and use an included command-line tool to reset the password: `ssh root@127.0.0.1 -p 2222`, then run `ambari-admin-password-reset`. There is a good explanation with screenshots in Step 2.2 of the following tutorial that might be a useful reference for you: http://hortonworks.com/hadoop-tutorial/learning-the-ropes-of-the-hortonworks-sandbox/
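The two steps, run from the host machine (these are the same commands mentioned above; the Sandbox maps guest port 22 to host port 2222):

```shell
# SSH into the Sandbox VM as root, via the forwarded port
ssh root@127.0.0.1 -p 2222

# Inside the sandbox, reset the Ambari admin password interactively
ambari-admin-password-reset
```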
12-16-2016
05:08 PM
With VirtualBox, it is a good idea to have a second adapter enabled and configured as a host-only adapter, as shown in the screenshot here.
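If you prefer the command line to the GUI, the same setup can be sketched with VBoxManage (the VM name "Hortonworks Sandbox" is an example; substitute your own):

```shell
# Create a host-only interface (typically named vboxnet0 on first create)
VBoxManage hostonlyif create

# Attach it as the VM's second adapter; the VM must be powered off first
VBoxManage modifyvm "Hortonworks Sandbox" --nic2 hostonly --hostonlyadapter2 vboxnet0
```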
12-15-2016
09:34 PM
@Arsalan Siddiqi can you post screenshots of the VirtualBox network settings (specifically, adapter 1 and adapter 2), and also the output of ifconfig?
12-14-2016
05:04 PM
2 Kudos
Each use case may have different requirements. I know of several organizations using Hadoop that simply have not encountered a need for YARN node labels yet, but they are all using queues and the Capacity Scheduler heavily.
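As a sketch of the queue setup mentioned above, a capacity-scheduler.xml might split capacity between two queues like this (the queue names "etl" and "adhoc" and the 70/30 split are hypothetical examples):

```xml
<!-- capacity-scheduler.xml: a hypothetical two-queue layout under the root queue -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>etl,adhoc</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.etl.capacity</name>
  <value>70</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.adhoc.capacity</name>
  <value>30</value>
</property>
```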
12-14-2016
05:02 PM
2 Kudos
I agree on using the Ambari Files View for navigating around the HDP cluster when learning; it makes life a little easier 🙂 To answer your question, the *-site.xml files for the core Hadoop components on an HDP cluster are located in /etc/hadoop/conf. Other frameworks/technologies will likely have their own folders, e.g. /etc/pig/conf for Pig.
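If you want to read a property out of one of those *-site.xml files by hand, a small grep/sed pipeline works; here a toy core-site.xml is created in a temp directory to stand in for /etc/hadoop/conf/core-site.xml (the fs.defaultFS value is just an example):

```shell
# Create a stand-in conf dir with a minimal core-site.xml
CONF_DIR=$(mktemp -d)
cat > "$CONF_DIR/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://sandbox.hortonworks.com:8020</value>
  </property>
</configuration>
EOF

# Pull out the value of fs.defaultFS: grab the <name> line plus the one after it,
# then strip the <value> tags
grep -A1 '<name>fs.defaultFS</name>' "$CONF_DIR/core-site.xml" \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
# prints hdfs://sandbox.hortonworks.com:8020
```

On a real cluster you would point the same pipeline at /etc/hadoop/conf/core-site.xml instead.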
11-29-2016
10:43 PM
1 Kudo
Hi Omkar, I don't believe it will work directly with HBase today. I have heard that this may come in the future (possibly via Apache Phoenix), but there is no ETA.
11-25-2016
02:55 PM
2 Kudos
Hi, one way might be to mount the Windows shared folder on your Linux machine, then use the GetFile processor configured to look at the mounted path.
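A minimal sketch of the mount step, assuming a CIFS/SMB share (the host name, share name, mount point, and credentials below are all placeholders):

```shell
# Mount the Windows share on the Linux box (read-only here; adjust options as needed)
sudo mkdir -p /mnt/winshare
sudo mount -t cifs //windows-host/shared /mnt/winshare \
    -o username=winuser,password=secret,ro

# Then point NiFi's GetFile processor at the mounted path:
#   Input Directory: /mnt/winshare
```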