Member since: 02-11-2015 · Posts: 5 · Kudos Received: 2 · Solutions: 0
03-20-2015 02:04 AM
I have around 300,000 (3 lakh) files stored in a directory on HDFS. When I try to copy that folder to the local filesystem using the following command:

    hadoop fs -copyToLocal /user/docsearch/data/DiscardedAttachments /opt/

I get the following error:

    Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:64)
        at java.lang.StringBuffer.<init>(StringBuffer.java:108)
        at java.net.URI.decode(URI.java:2756)
        at java.net.URI.getPath(URI.java:1318)
        at org.apache.hadoop.fs.Path.isUriPathAbsolute(Path.java:210)
        at org.apache.hadoop.fs.Path.isAbsolute(Path.java:223)
        at org.apache.hadoop.fs.Path.makeQualified(Path.java:335)
        at org.apache.hadoop.hdfs.DistributedFileSystem.makeQualified(DistributedFileSystem.java:373)
        at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:445)
        at org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:213)
        at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:337)
        at org.apache.hadoop.fs.shell.CommandWithDestination.recursePath(CommandWithDestination.java:193)
        at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
        at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:147)
        at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
        at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:124)
        at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:254)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:304)

I have also increased the heap size of the HDFS daemons, assigning 4 GB to the NameNode and all DataNodes, but I am still facing the same issue. Need urgent help.
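For reference, the stack trace shows the OutOfMemoryError occurs inside the FsShell client JVM while it lists the directory contents, not inside the NameNode or DataNode daemons, so raising the daemon heaps does not help. A minimal sketch of a workaround, assuming a standard Hadoop install where the hadoop launcher script honors HADOOP_CLIENT_OPTS (the 4g value is an illustrative assumption; size it to your file count), is to raise the client heap before rerunning the command:

    # raise the heap of the "hadoop fs" client JVM, where the OOM actually occurs
    export HADOOP_CLIENT_OPTS="-Xmx4g"
    hadoop fs -copyToLocal /user/docsearch/data/DiscardedAttachments /opt/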
Labels:
- Apache Hadoop
- HDFS
02-11-2015 03:59 AM
2 Kudos
Sometimes just deleting the lock file doesn't work; you also need to restart the cloudera-server service. Thanks
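For example, a minimal sketch, assuming the Cloudera Manager server service, which is typically named cloudera-scm-server on Linux (adjust the name if your installation differs):

    # restart the Cloudera Manager server so it re-acquires its lock cleanly
    sudo service cloudera-scm-server restart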