Member since: 07-31-2019
Posts: 346
Kudos Received: 259
Solutions: 62
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2534 | 08-22-2018 06:02 PM |
| | 1526 | 03-26-2018 11:48 AM |
| | 3732 | 03-15-2018 01:25 PM |
| | 4791 | 03-01-2018 08:13 PM |
| | 1314 | 02-20-2018 01:05 PM |
12-27-2016
12:47 PM
Hi all,
Does anyone have a workaround for this problem? I have exactly the same case.
I see similar issues on the Sandbox 2.5 (VirtualBox-5.1.12-112440-Win - HDP_2.5_virtualbox).
I killed the jobs over PuTTY as root with yarn application -kill application_1482410373661_0002, but they are still shown as running in Ambari:
[root@sandbox ~]# yarn application -kill application_1482410373661_0002
16/12/24 12:26:40 INFO impl.TimelineClientImpl: Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
16/12/24 12:26:40 INFO client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/172.17.0.2:8050
16/12/24 12:26:40 INFO client.AHSProxy: Connecting to Application History server at sandbox.hortonworks.com/172.17.0.2:10200
16/12/24 12:26:44 WARN retry.RetryInvocationHandler: Exception while invoking ApplicationClientProtocolPBClientImpl.getApplicationReport over null. Not retrying because try once and fail.
org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application with id 'application_1482410373661_0002' doesn't exist in RM.
I found a corresponding issue: "Tez client keeps trying to talk to RM even if RM does not know about the application" (https://issues.apache.org/jira/browse/TEZ-3156). This patch should be included, as it was fixed for version 0.7.1.
In the log of the Ambari query I can read, 993 times:
INFO : Map 1: 0/1 Reducer 2: 0/2
The query is the one proposed in the tutorial (http://fr.hortonworks.com/hadoop-tutorial/hello-world-an-introduction-to-hadoop-hcatalog-hive-and-pig/#section_4):
SELECT truckid, avg(mpg) avgmpg FROM truck_mileage GROUP BY truckid;
Any idea how to clear the history and restart without the stuck running state? Thanks in advance.
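For reference, a minimal sketch (using the application ID from the log above) of how to confirm whether the ResourceManager still tracks the application at all; the idea that restarting the YARN App Timeline Server refreshes the stale UI entry is an assumption, not a confirmed fix:

```bash
# Check whether the ResourceManager still knows the application in any state.
yarn application -list -appStates ALL 2>/dev/null \
  | grep application_1482410373661_0002 \
  || echo "application no longer known to the RM"

# If the RM has forgotten it but Ambari still shows it as running, the stale
# entry is usually cached history/UI state; restarting the YARN App Timeline
# Server from Ambari (YARN > Service Actions) and reloading the view is one
# way to get the display back in sync.
```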
03-26-2016
05:43 PM
1 Kudo
I found the problem. I must have missed the step "chown -R solr:solr /opt/lucidworks-hdpsearch/solr". Once I did this, the query worked, but I still did not see tweets in the dashboard. I deleted the collection and reloaded it; after that, data started to appear.
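For reference, a rough sketch of the steps described above, assuming the tutorial's collection is named "tweets" and Solr lives under /opt/lucidworks-hdpsearch/solr (both assumptions based on the HDP Search tweet tutorial):

```bash
# The missed step: give the solr user ownership of the HDP Search install.
chown -R solr:solr /opt/lucidworks-hdpsearch/solr

# Then drop and recreate the collection so it is reindexed from scratch.
# "tweets" is an assumed collection name; adjust to whatever the dashboard uses.
su - solr -c '/opt/lucidworks-hdpsearch/solr/bin/solr delete -c tweets'
su - solr -c '/opt/lucidworks-hdpsearch/solr/bin/solr create -c tweets -d data_driven_schema_configs -s 1 -rf 1'
```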
06-15-2016
03:43 AM
@Scott Shaw: Hi Scott, I just wanted to know whether I can access the Ambari REST API. Is there a way I could run further analysis on the data gathered by the collector?
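Not part of the original reply, but a minimal sketch of what pulling that data out over REST can look like; the sandbox host name, ports, admin credentials, and metric/appId names are all assumed defaults:

```bash
# Ambari REST API: list the clusters Ambari manages (default sandbox credentials assumed).
curl -u admin:admin 'http://sandbox.hortonworks.com:8080/api/v1/clusters'

# Ambari Metrics Collector API: fetch a host metric series straight from the
# collector (port 6188; the metric name and appId are assumptions).
curl 'http://sandbox.hortonworks.com:6188/ws/v1/timeline/metrics?metricNames=cpu_user&appId=HOST&hostname=sandbox.hortonworks.com'
```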
03-14-2016
11:16 PM
1 Kudo
Hello Michael, for now an alternative way to access Zeppelin and Storm is through their web addresses: the Storm UI at http://127.0.0.1:8744/ and the Zeppelin Notebook at http://127.0.0.1:9995/#/. Note: the latest sandbox refresh will be available soon.
06-08-2016
12:02 PM
Hue is not included with the current version of the sandbox. All activities are done either through Ambari or from the OS prompt. If you want to use Hue, you would have to "side load" it onto your sandbox. I am sure there are instructions as to how to do that out on the Internet. I did not do that. We want to stay "stock" Hortonworks.
11-26-2017
03:43 PM
I clicked the Quick Links box under Advanced HDP and found that the username and password were both 'raj_ops'.
02-22-2016
09:32 PM
2 Kudos
Scott, there are two layers of memory settings you need to be aware of: the NodeManager and the containers. The NodeManager advertises the total memory it can hand out to containers, and you generally want more containers with a decent amount of memory each. A rule of thumb is 2048 MB of memory per container, so if you have 53 GB of available memory per node, you have roughly 26 containers available per node to do the job. 8 GB of memory per container is, in my opinion, too big.

We don't know how many disks the SAN storage exposes to Hadoop, but you can disregard the disks in the equation, since that part of the formula is typically meant for on-premise clusters with local disks. You can still run a manual calculation of the memory settings, because you already have the containers-per-node and memory-per-container values (26 and 2048 MB respectively). You can use the formula below; just replace the number of containers per node and the RAM per container with your own values.

Please note that 53 GB of available RAM per VM is too high given that the VM only has 54 GB of RAM in total. Typically you would set aside about 8 GB for other processes (OS, HBase, etc.), which leaves about 46 GB of available memory per node. Hope this helps.
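For reference, a worked sketch of that formula using the numbers above (26 containers, 2048 MB per container); the property names are the usual YARN/MapReduce memory settings and the 0.8 heap factor is the common rule of thumb, so treat the output as a starting point rather than final values:

```bash
#!/usr/bin/env bash
# Assumed inputs from the discussion above; replace with your own node's values.
CONTAINERS=26            # containers per node
RAM_PER_CONTAINER=2048   # MB per container

echo "yarn.nodemanager.resource.memory-mb  = $(( CONTAINERS * RAM_PER_CONTAINER ))"
echo "yarn.scheduler.minimum-allocation-mb = ${RAM_PER_CONTAINER}"
echo "yarn.scheduler.maximum-allocation-mb = $(( CONTAINERS * RAM_PER_CONTAINER ))"
echo "mapreduce.map.memory.mb              = ${RAM_PER_CONTAINER}"
echo "mapreduce.reduce.memory.mb           = $(( 2 * RAM_PER_CONTAINER ))"
echo "mapreduce.map.java.opts              = -Xmx$(( RAM_PER_CONTAINER * 8 / 10 ))m"
echo "mapreduce.reduce.java.opts           = -Xmx$(( 2 * RAM_PER_CONTAINER * 8 / 10 ))m"
```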
02-17-2016
01:29 AM
2 Kudos
@Jeremy Salazar, since the error states that the user is "ambari", you will need to add the following values to the HDFS custom core-site configuration: hadoop.proxyuser.ambari.groups=* and hadoop.proxyuser.ambari.hosts=*. Once that's done, you'll need to follow @Neeraj Sabharwal's step and create your home directory and assign access.
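A minimal sketch of the home-directory step mentioned above; the user name "admin" is an assumption, so substitute the account that actually hits the error:

```bash
# Run as the hdfs superuser: create the user's home directory and hand it over.
# "admin" is an assumed user name; use the account from the error message.
sudo -u hdfs hdfs dfs -mkdir -p /user/admin
sudo -u hdfs hdfs dfs -chown admin:hdfs /user/admin
```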
02-16-2016
01:42 AM
1 Kudo
I got it! At last I could install HDP 2.3.4.0 with MSSQL 2012. After creating the users "hive_user" and "oozie_user" with the passwords "hive_password" and "oozie_password" in MSSQL, I had to log in to MSSQL as those new users to verify them. Then I could install HDP 2.3.4.0 with MSSQL 2012. Thank you for your advice, Scott Shaw and Neeraj Sabharwal. By the way, the reason I decided to use MSSQL is that Derby is not stable with HDP on Windows Server 2012, and if Derby is unstable, the Oozie service is not stable either. Has anyone seen the same phenomenon?