Member since: 08-15-2016
Posts: 189
Kudos Received: 63
Solutions: 22
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5659 | 01-02-2018 09:11 AM |
| | 2993 | 12-04-2017 11:37 AM |
| | 2144 | 10-03-2017 11:52 AM |
| | 21559 | 09-20-2017 09:35 PM |
| | 1600 | 09-12-2017 06:50 PM |
09-22-2016
10:59 PM
Hi, I have insufficient memory on my Docker Sandbox. HiveServer2 cannot start because of it:

```
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f74b44d3000, 3183083520, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 3183083520 bytes for committing reserved memory.
```

I tried allocating 8 GB to the container by altering the start_sandbox.sh script:

```bash
docker run -v hadoop:/hadoop -m 8G --name sandbox --hostname "sandbox.hortonworks.com" --privileged -d \
```

But I still have only 2 GB in the container:

```
top - 22:58:30 up 18 min, 2 users, load average: 0.26, 1.47, 1.25
Tasks: 39 total, 1 running, 38 sleeping, 0 stopped, 0 zombie
Cpu(s): 5.1%us, 3.5%sy, 0.0%ni, 67.8%id, 23.6%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 2047164k total, 1973328k used, 73836k free, 400k buffers
Swap: 4093948k total, 3181344k used, 912604k free, 22904k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
608 hdfs 20 0 982m 132m 5372 S 25.0 6.6 0:07.20 java
606 hdfs 20 0 1022m 146m 5304 S 3.7 7.3 0:12.34 java
```

How can I solve this?
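One way to confirm what limit the Docker daemon actually applied is to ask it directly. A minimal sketch, assuming the container is named sandbox as in the command above:

```bash
# Memory limit the daemon applied, in bytes (0 means unlimited)
docker inspect --format '{{.HostConfig.Memory}}' sandbox

# One-shot view of current usage versus the limit
docker stats --no-stream sandbox
```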
Labels:
- Hortonworks Data Platform (HDP)
09-22-2016
09:55 PM
2 Kudos
Well, I turned to the Docker version instead. That one is working!
09-21-2016
09:17 PM
I have 8 GB allocated. The longest I let the startup run was well over 12 minutes.
... View more
09-21-2016
08:54 PM
3 Kudos
Hi, today I downloaded the Sandbox from http://hortonworks.com/downloads/#sandbox When I start it, though, it seems like some but not all components are launched. First, the VirtualBox screen is stuck in this mode (screenshot omitted). When I hit a button I get more info (screenshot omitted): the time counter just runs forever. By this time the Ambari login screen is up, and the YARN app WebUI on 8088 as well, but that is it. It is stuck on startup. What is going on?

UPDATE: After 36 minutes it is still doing something.
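One way to see which component the startup is stuck on is the Ambari REST API. A minimal sketch, assuming the sandbox defaults of admin/admin credentials and a cluster named Sandbox:

```bash
# List every service with its current state (STARTED, STARTING, INSTALLED, ...)
curl -s -u admin:admin \
  "http://sandbox.hortonworks.com:8080/api/v1/clusters/Sandbox/services?fields=ServiceInfo/state"
```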
Labels:
- Hortonworks Data Platform (HDP)
09-21-2016
04:12 PM
@anjul tiwari Have you tried changing the hbase tablespace to '<host>:2181/hbase-unsecure' instead of /hbase?
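For reference, a connection attempt against the unsecured znode could look like this. A sketch only; <host> stands for your ZooKeeper host, and the sqlline.py path is the usual HDP location:

```bash
# Phoenix sqlline, pointing at the unsecured znode rather than /hbase
/usr/hdp/current/phoenix-client/bin/sqlline.py <host>:2181:/hbase-unsecure
```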
09-20-2016
03:05 PM
@Chris Tarnas You are right, on HDP 2.4 this is broken. But it still works as long as you don't go for the DataFrame option; that one fails. It needs Phoenix 4.7.0.0 to work, which will be available in HDP 2.5. The phoenix-spark connector for getting HBase data into an RDD (not a DataFrame!) works for me. An update on @Guilherme Braccialli's excellent guidance, to make the phoenix-spark integration work for the RDD option, but on the 2.4 Sandbox:

```bash
spark-shell --master yarn-client \
  --jars /usr/hdp/2.4.0.0-169/phoenix/phoenix-4.4.0.2.4.0.0-169-client.jar,/usr/hdp/2.4.0.0-169/phoenix/lib/phoenix-spark-4.4.0.2.4.0.0-169.jar \
  --conf "spark.executor.extraClassPath=/usr/hdp/2.4.0.0-169/phoenix/lib/phoenix-spark-4.4.0.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/phoenix/phoenix-4.4.0.2.4.0.0-169-client.jar"
```
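A quick way to verify the wiring is to load a Phoenix table as an RDD from that session. A sketch, assuming a Phoenix table TABLE1 with a column ID exists and ZooKeeper runs on the sandbox host:

```bash
# Smoke test: feed the REPL via a heredoc; TABLE1, ID and the zkUrl are assumptions.
PHX=/usr/hdp/2.4.0.0-169/phoenix
spark-shell --master yarn-client \
  --jars $PHX/phoenix-4.4.0.2.4.0.0-169-client.jar,$PHX/lib/phoenix-spark-4.4.0.2.4.0.0-169.jar \
  --conf "spark.executor.extraClassPath=$PHX/lib/phoenix-spark-4.4.0.2.4.0.0-169.jar:$PHX/phoenix-4.4.0.2.4.0.0-169-client.jar" <<'EOF'
import org.apache.phoenix.spark._
sc.phoenixTableAsRDD("TABLE1", Seq("ID"), zkUrl = Some("sandbox.hortonworks.com:2181:/hbase-unsecure")).count()
EOF
```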
09-19-2016
04:48 PM
@srinivasa rao Play with the threshold value; set it to a higher value (e.g. 2 GB).
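From a shell, raising it to roughly 2 GB for a single session could look like this. A sketch; mytable and col1 are placeholders:

```bash
# 2147483648 bytes = 2 GB; applies only to this hive invocation
hive -e "SET hive.fetch.task.conversion.threshold=2147483648; SELECT col1 FROM mytable LIMIT 10;"
```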
09-19-2016
02:39 PM
@srinivasa rao If you have the HDFS block size set to 2 MB, then the split size will also be 2 MB; these two entities are connected, as sketched below.
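For the connection between the two, FileInputFormat derives each split size from the block size. A sketch of the formula and one way to check a file's block size (the path is a placeholder):

```bash
# splitSize = max(mapreduce.input.fileinputformat.split.minsize,
#                 min(mapreduce.input.fileinputformat.split.maxsize, blockSize))
# so with the default min/max values the split size follows the block size.
# Check the block size of an existing file (%o prints it in bytes):
hdfs dfs -stat "block size: %o" /path/to/file
```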
09-19-2016
02:35 PM
2 Kudos
@srinivasa rao This behaviour is governed by some of the Hive performance tuning settings in the hive.fetch.* family. They decide whether a shortcut, going straight at the (table) file in HDFS without any MR/Tez job, is wanted and/or feasible. There are a few of them:

- hive.fetch.task.conversion
- hive.fetch.task.conversion.threshold
- hive.fetch.task.aggr

The default is hive.fetch.task.conversion=more, which means that going straight at the data (without spinning up mappers) is the default behaviour. It works even if you query only 1 column out of many. If it is set to none or minimal, then you probably need to add a LIMIT x clause to get the same bypass of any map functions. I think your environment either does not have it set to more, or the threshold value is too low. There is some more info about these settings here; a sketch of how to check follows.
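To see the effect directly, compare the query plans. A sketch; mytable and col1 are placeholders:

```bash
hive -e "SET hive.fetch.task.conversion;"   # print the current value
hive -e "SET hive.fetch.task.conversion=more; EXPLAIN SELECT col1 FROM mytable LIMIT 5;"
# With 'more' the plan is a single Fetch stage; with 'none' an MR/Tez stage appears instead.
```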
09-07-2016
10:44 PM
1 Kudo
Well, it turns out that this is the minimum required:

```
spark.driver.extraClassPath /usr/hdp/2.4.0.0-169/phoenix/lib/phoenix-spark-4.4.0.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/phoenix/lib/hbase-client.jar:/usr/hdp/2.4.0.0-169/phoenix/lib/hbase-common.jar:/usr/hdp/2.4.0.0-169/phoenix/lib/phoenix-core-4.4.0.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/phoenix/lib/hbase-protocol.jar:/usr/hdp/2.4.0.0-169/zeppelin/lib/interpreter/phoenix/hbase-server-1.0.1.jar
```

Use the same value for spark.executor.extraClassPath. All of these jars are directly available on the HDP 2.4 sandbox.
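To persist this across sessions, one option is appending both keys to spark-defaults.conf. A sketch; /etc/spark/conf is the usual HDP location and may differ on your install:

```bash
# Same jar list as above, joined into one classpath string
CP=/usr/hdp/2.4.0.0-169/phoenix/lib/phoenix-spark-4.4.0.2.4.0.0-169.jar:\
/usr/hdp/2.4.0.0-169/phoenix/lib/hbase-client.jar:\
/usr/hdp/2.4.0.0-169/phoenix/lib/hbase-common.jar:\
/usr/hdp/2.4.0.0-169/phoenix/lib/phoenix-core-4.4.0.2.4.0.0-169.jar:\
/usr/hdp/2.4.0.0-169/phoenix/lib/hbase-protocol.jar:\
/usr/hdp/2.4.0.0-169/zeppelin/lib/interpreter/phoenix/hbase-server-1.0.1.jar
echo "spark.driver.extraClassPath $CP"   >> /etc/spark/conf/spark-defaults.conf
echo "spark.executor.extraClassPath $CP" >> /etc/spark/conf/spark-defaults.conf
```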