Member since: 10-27-2014
Posts: 38
Kudos Received: 0
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2937 | 04-07-2015 01:13 AM |
| | 16590 | 12-18-2014 02:21 AM |
02-02-2015
07:00 PM
Thanks for the reply. I've increased the datanode heap size to 1 GB and my datanode has been working well so far, but there is one more thing: I uploaded data to my cluster with the -put command (2736 folders with 200 files each, about 15 kB per file), and each node went from 350k to over 700k blocks; then the "too many blocks" warning appeared. I really don't understand why there are so many blocks, because the total size of the data is only about 5 GB. Regards, Tu Nguyen
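For context on the numbers above: HDFS allocates at least one block per file regardless of file size, so 2736 folders of 200 small files is roughly 547,000 new blocks before replication, even though the data is only about 5 GB. A minimal sketch for confirming this and for packing the small files into a Hadoop Archive; the /user/tu/uploads paths are assumptions, not from the post:

```
# Count the blocks actually allocated under the upload directory (path is illustrative):
hdfs fsck /user/tu/uploads -files -blocks | grep -c "blk_"

# Pack the small files into a Hadoop Archive, which stores them in a few large
# part files instead of one block per file (the originals can then be removed);
# archive name and paths are illustrative:
hadoop archive -archiveName uploads.har -p /user/tu uploads /user/tu/archived
```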
01-29-2015
05:45 AM
Thanks for the reply. I have 3 datanodes; the one that shut down is on the master host. Here is the information:
00master - blocks: 342823 - block pool used: 53.95 GB (6.16%)
01slave - blocks: 346297 - block pool used: 54.38 GB (12.46%)
02slave - blocks: 319262 - block pool used: 48.39 GB (33.23%)
And these are my heap settings:
DataNode Default Group / Resource Management: 186 MB
DataNode Group 1 / Resource Management: 348 MB
Regards, Tu Nguyen
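The same kind of per-datanode figures can also be pulled from the command line if the web UI is unavailable; a quick sketch, run as a user with HDFS superuser rights (typically hdfs):

```
# Per-datanode capacity and usage, as reported by the namenode:
sudo -u hdfs hdfs dfsadmin -report

# Cluster-wide totals; the summary at the end includes "Total blocks (validated)":
sudo -u hdfs hdfs fsck / | tail -n 20
```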
01-28-2015
01:23 AM
Hi, I'm using CDH 5.3. I have a cluster with 3 hosts: 1 master host running the namenode and a datanode, and 2 hosts running only a datanode. Everything ran fine until recently: when I ran a Hive job, the datanode on the master shut down and I got missing block and under-replicated block errors. Here is the error from the master's datanode:

3:35:09.545 PM ERROR org.apache.hadoop.hdfs.server.datanode.DirectoryScanner
Error compiling report
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:545)
at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:422)
at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:403)
at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:359)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space

3:35:09.553 PM INFO org.apache.hadoop.hdfs.server.datanode.DataNode
opWriteBlock BP-993220972-192.168.0.140-1413974566312:blk_1074414393_678864 received exception java.io.IOException: Premature EOF from inputStream

3:35:09.553 PM ERROR org.apache.hadoop.hdfs.server.datanode.DirectoryScanner
Exception during DirectoryScanner execution - will continue next cycle
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:549)
at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:422)
at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:403)
at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:359)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:545)
... 10 more
Caused by: java.lang.OutOfMemoryError: Java heap space

3:35:09.553 PM ERROR org.apache.hadoop.hdfs.server.datanode.DataNode
00master.mabu.com:50010:DataXceiver error processing WRITE_BLOCK operation src: /192.168.6.10:48911 dst: /192.168.6.10:50010
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:468)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:772)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:724)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:126)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:72)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:226)
at java.lang.Thread.run(Thread.java:745)

Can someone help me fix this? Thanks!
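The java.lang.OutOfMemoryError raised inside DirectoryScanner is consistent with a DataNode heap that is too small for the number of block replicas on the host (the later replies in this thread show roughly 340k blocks per node on a 186 MB heap). In a Cloudera Manager-managed cluster the heap is raised under the DataNode group's Resource Management settings; for an unmanaged, package-based install, a minimal sketch is the hadoop-env.sh override below. The 1 GB value and the service name are illustrative assumptions, not tuned recommendations:

```
# hadoop-env.sh (unmanaged installs only; CM-managed clusters set this in the UI):
export HADOOP_DATANODE_OPTS="-Xmx1g ${HADOOP_DATANODE_OPTS}"

# Restart the DataNode on the affected host afterwards (CDH package service name):
sudo service hadoop-hdfs-datanode restart
```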
Labels:
- Apache Hadoop
- Apache Hive
- HDFS
01-07-2015
07:13 PM
I'm using CM 5.2. When I changed the IP addresses on my cluster and then restarted CM, it would not detect my new IP address. I tried to fix it by editing /etc/cloudera-scm-agent/config.ini, but that still didn't work. Up to that point I could still access Cloudera Manager. Then I tried to upgrade Cloudera Manager to 5.3, hoping the upgrade would automatically reconfigure the IP; this is when I got a whole new error. I can access CM, but the HOME, CLUSTERS and HOSTS tabs won't show anything, and I get this error:
A server error has occurred. Send the following information to Cloudera.
Path: http://192.168.6.10:7180/cmf/parcel/topLevelCount
Version: Cloudera Express 5.3.0 (#166 built by jenkins on 20141218-0505 git: 9ec4939d0a7b563597da611c675725916369a60d)
javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: Could not open connection
at AbstractEntityManagerImpl.java line 1387 in org.hibernate.ejb.AbstractEntityManagerImpl convert()
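Two separate things are usually involved after an address change like this. First, each agent has to be re-pointed at the Cloudera Manager server and restarted; a minimal sketch (the address 192.168.6.10 is taken from the error path above and may not be the right value for every host):

```
# On every managed host: point the agent at the CM server's current address,
# then restart it (default agent config path on CDH):
sudo sed -i 's/^server_host=.*/server_host=192.168.6.10/' /etc/cloudera-scm-agent/config.ini
sudo service cloudera-scm-agent restart
```

Second, the GenericJDBCException: Could not open connection reported after the upgrade points at the CM server failing to reach its backing database, so the connection settings in /etc/cloudera-scm-server/db.properties, and the database service itself, are worth checking before anything else.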
Labels:
- Cloudera Manager
12-18-2014
02:21 AM
Hello masfworld, I've found my solution here; I hope it helps you too: http://community.cloudera.com/t5/Cloudera-Search-Apache-SolrCloud/Solr-Server-not-starting/m-p/4839#M97
12-17-2014
02:07 AM
Hello, I'm currently using CDH 5.2. I have sample data like this: 0 18.6 36.1 53.7 86.5. But when I upload it to Hive with the float type, I get this: 0.0 18.6000003815 36.0999984741 53.7000007629 86.5. I don't understand why 18.6 becomes 18.6000003815 while 86.5 stays the same; every other value changes except the .0 and .5 values. Can someone explain this to me? I'd really appreciate the help, thanks!
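What the poster is seeing is single-precision rounding rather than a Hive bug: 18.6 has no exact binary representation, so a FLOAT stores the nearest representable value (about 18.60000038), while 0.0 and 86.5 are exact sums of powers of two and survive unchanged; the extra digits appear when the FLOAT is widened to DOUBLE for display. A small sketch that reproduces it from the hive CLI (assuming a Hive version that accepts SELECT without a FROM clause; otherwise select from any one-row table):

```
# Widening a FLOAT literal to DOUBLE exposes the stored single-precision value;
# a DOUBLE (or DECIMAL) column keeps 18.6 as expected:
hive -e "SELECT CAST(CAST(18.6 AS FLOAT) AS DOUBLE), CAST(CAST(86.5 AS FLOAT) AS DOUBLE), CAST(18.6 AS DOUBLE)"
```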
11-19-2014
09:01 PM
Hi romain, I'm using CDH 5.2. I've added "[desktop] app_blacklist=" to the Service-Wide configuration of Hue in Cloudera Manager, but Spark still does not show up in Hue. I also tried editing hue.ini in /etc/hue/conf, but nothing happened either; the desktop configuration page in Hue still shows this: app_blacklist spark (Comma separated list of apps to not load at server startup. Default: ). Can you provide any help? Thanks!
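For anyone hitting the same thing: on a Cloudera Manager-managed cluster, the files under /etc/hue/conf are regenerated by CM, so direct edits are overwritten. The override normally goes into Hue's advanced configuration snippet (safety valve) for hue_safety_valve.ini in CM, with the section header and the key on their own lines; the exact snippet field name here is from memory and should be checked against your CM version:

```
# Contents for the Hue safety valve (hue_safety_valve.ini), not shell commands:
[desktop]
app_blacklist=
```

After saving, the Hue service has to be restarted; note that the Spark app in this Hue release also depends on a running Spark Job Server, so clearing the blacklist alone may not be enough.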
11-19-2014
01:46 AM
Thanks again owen, The example went well and I can see the Pi result now, but I still get some errors:
WARN YarnClientClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
ERROR ConnectionManager: Corresponding SendingConnection to ConnectionManagerId(03slave.mabu.com,42930) not found
WARN ConnectionManager: All connections not cleaned up
I don't know whether it's because of a poor connection or the amount of RAM on my cluster, but this is still a good start for me anyway. By the way, do you know where I can find more information about how Spark works (its operation, when to use yarn-cluster vs. yarn-client ...)? Thanks a lot!
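The "Initial job has not accepted any resources" warning generally means YARN could not place executors of the requested size on the available NodeManagers. One sketch of a smaller request for Spark 1.1 on CDH; the examples jar path is the usual parcel location and, like the memory numbers, is an assumption rather than a tuned recommendation:

```
# Ask YARN for small executors so they fit on low-RAM nodes (values illustrative):
spark-submit --class org.apache.spark.examples.SparkPi \
  --master yarn-client \
  --num-executors 2 \
  --executor-memory 512m \
  --executor-cores 1 \
  /opt/cloudera/parcels/CDH/lib/spark/lib/spark-examples.jar 10
```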
11-19-2014
12:18 AM
Thanks for your reply sowen, I just tried another link: https://spark.apache.org/docs/1.1.0/running-on-yarn.html and it worked. I got this result:
14:52:41 INFO Client: Application report from ResourceManager:
application identifier: application_1416365742014_0003
appId: 3
clientToAMToken: null
appDiagnostics:
appMasterHost: 01slave.mabu.com
appQueue: root.root
appMasterRpcPort: 0
appStartTime: 1416383498088
yarnAppState: FINISHED
distributedFinalState: SUCCEEDED
appTrackingUrl: http://00master.mabu.com:8088/proxy/application_1416365742014_0003/history/spark-pi-1416383528301
appUser: root
The problem is that I can't find the actual Pi result, like when running the Pi example on Hadoop (which prints the result, 3.14333...). Where can I find it? Thanks!
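In yarn-cluster mode the driver runs inside the YARN ApplicationMaster, so SparkPi's output line goes to the AM container's stdout rather than to the submitting shell. It can be read from the appTrackingUrl above, or pulled with the YARN CLI once the application has finished; this assumes log aggregation is enabled, and the application id below is the one from the report:

```
# Fetch the aggregated container logs and look for SparkPi's output line:
yarn logs -applicationId application_1416365742014_0003 | grep "Pi is roughly"
```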
11-18-2014
07:22 PM
Thanks for the help. I followed the instructions and got this error: Error: Cannot load main class from JAR: file:/var/lib/hadoop-hdfs/class Can you give me any advice? Thanks!
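That error usually means spark-submit received the literal word "class" where it expected the application JAR, for example because the --class option lost its leading dashes or the arguments were reordered. The expected shape of the command, sketched with the usual CDH parcel path for the examples jar (an assumption):

```
# --class names the main class; the JAR path comes after all the options:
spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster \
  /opt/cloudera/parcels/CDH/lib/spark/lib/spark-examples.jar 10
```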