Member since: 01-22-2016
Posts: 15
Kudos Received: 13
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
| 5627 | 01-23-2016 08:45 PM
| 2241 | 01-23-2016 08:14 AM
02-03-2016
07:37 AM
1 Kudo
@Niall Moran In my case the configuration changes outlined above did solve the problem. I first reverted to the original Sandbox configs for all of the components, then committed the changes exactly as suggested by Paul Hargis. (Note: when using Ambari via Internet Explorer, my queries often hung in the interface but were still processed in the background. I don't have this problem when using Firefox.)
01-24-2016
10:02 AM
Sure, it is in the comments on this page: http://hortonworks.com/hadoop-tutorial/hello-world-an-introduction-to-hadoop-hcatalog-hive-and-pig/#section_1
01-23-2016
08:45 PM
6 Kudos
I just solved this Azure Sandbox issue based on a comment by Paul Hargis that I found on one of the tutorial pages:

Workaround for Hive query OutOfMemory errors: Please note that in some cases (such as when running the Hortonworks Sandbox on a Microsoft Azure VM with an 'A4' machine size), some Hive queries will produce OutOfMemory (Java heap) errors. As a workaround, you can adjust some Hive-Tez config parameters using the Ambari console. Go to the Services -> Hive page, click on the 'Configs' tab, and make the following changes:

1) Scroll down to the Optimization section and increase the Tez Container Size from 200 to 512:
Param: "hive.tez.container.size" Value: 512

2) Click on the "Advanced" tab to show extra settings, scroll down to find the parameter "hive.tez.java.opts", and increase the Java heap max size from 200 MB to 512 MB:
Param: "hive.tez.java.opts" Value: "-server -Xmx512m -Djava.net.preferIPv4Stack=true"
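For what it's worth, the same two parameters can also be overridden per session from the Hive shell instead of through Ambari. A minimal sketch, assuming the cluster allows session-level overrides of these settings:

hive> -- per-session equivalents of the Ambari changes above
hive> set hive.tez.container.size=512;
hive> set hive.tez.java.opts=-server -Xmx512m -Djava.net.preferIPv4Stack=true;
hive> -- then re-run the failing query from the tutorial
hive> select max(mpg) from truck_mileage;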
01-23-2016
08:23 PM
1 Kudo
Thanks a lot Neeraj.
01-23-2016
05:03 PM
1 Kudo
Update: I reverted all configs to their default Sandbox state and ran the query again. The result is now: failed with java.lang.OutOfMemoryError: Java heap space.

ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1453566734706_0004_1_00, diagnostics=[Task failed, taskId=task_1453566734706_0004_1_00_000000, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space

(On my local VM the query runs without problems, so this happens only on the Azure Sandbox.)
01-23-2016
12:18 PM
1 Kudo
Strangely, checking the YARN scheduler via Ambari, it turns out that the job (004) finished successfully after 31 minutes. That seems like a very long time, though. I also notice in 'cluster metrics' that it reports total memory = 2.20 GB. Could this be the problem? The VM has 28 GB, so something seems wrong with the memory allocation. (I also tried to check the application log, but the 'logs' link points to an invalid URL.)
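(Aside, not from the original thread: the 'total memory' figure in the YARN cluster metrics is normally the sum of yarn.nodemanager.resource.memory-mb across the NodeManagers, not the VM's physical RAM, and the Sandbox image ships with a small default. A sketch of the relevant yarn-site.xml entry follows; the value is hypothetical, chosen only to match the ~2.20 GB reported:)

<!-- yarn-site.xml sketch: this property caps how much memory YARN will
     schedule, independent of the VM's 28 GB of physical RAM.
     The 2250 MB value is hypothetical. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2250</value>
</property>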
01-23-2016
12:06 PM
Thanks Neeraj
01-23-2016
12:02 PM
1 Kudo
It is the only job running... Thanks for the link to the tuning guide. I was wondering, though: could something more basic be wrong, i.e. something that doesn't require sophisticated tuning? It is a standard Sandbox on an off-the-shelf Azure VM, and I am just following the simple instructions from the Hello World tutorial. The tutorial doesn't mention any specific configuration changes (apart from having enough resources, which an A6 should have?). Thanks for your help.
01-23-2016
11:16 AM
Thanks Kuldeep. I tried, but as soon as the progress bar is shown, no additional debug info is produced; all that happens after that point is the elapsed time increasing.
01-23-2016
11:07 AM
1 Kudo
Hi, after experiencing very slow performance on various VMs, I am now using the HDP Sandbox on an Azure A6 configuration (4 cores, 28 GB memory). I assume this should be enough for reasonable performance. Yet I get a Java heap OutOfMemoryError while running simple queries from the Hello World tutorial (e.g. SELECT max(mpg) FROM truck_mileage). Increasing the Tez container size to 2048 MB solves the OutOfMemoryError, but now the query stalls. What is going wrong, and are there any parameters that should be set differently? Thanks in advance.
hive> select max(mpg) from truck_mileage;
Query ID = hive_20160123104344_a796f4ba-2fa0-4272-b4e8-42720fd32417
Total jobs = 1
Launching Job 1 out of 1
Status: Running (Executing on YARN cluster with App id application_1453543525470_0004)
--------------------------------------------------------------------------------
VERTICES STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
--------------------------------------------------------------------------------
Map 1 INITED 1 0 0 1 0 0
Reducer 2 INITED 1 0 0 1 0 0
--------------------------------------------------------------------------------
VERTICES: 00/02 [>>--------------------------] 0% ELAPSED TIME: 1236.84 s
--------------------------------------------------------------------------------
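One sanity check worth noting here (an editorial sketch, not part of the original post): in the Hive CLI, issuing set with just a parameter name prints its current value, so you can confirm which settings the stalled query actually picked up:

hive> set hive.tez.container.size;
hive> set hive.tez.java.opts;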
Labels:
- Apache Hive