Member since: 01-24-2014
Posts: 101
Kudos Received: 32
Solutions: 18
My Accepted Solutions
Title | Views | Posted
---|---|---
| 28051 | 02-06-2017 11:23 AM
| 6972 | 11-30-2016 12:56 AM
| 7871 | 11-29-2016 11:57 PM
| 3692 | 08-16-2016 11:45 AM
| 3711 | 05-10-2016 01:55 PM
08-23-2016
03:06 PM
I would say you should work with your cluster administrator to update the permissions, since your user will not be able to create the subfolder that YARN is trying to create for you either.
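As a rough sketch of what the administrator might run, assuming the standard /user/<name> home-directory layout in HDFS (the user name here is a placeholder, so adjust to your cluster):

sudo -u hdfs hdfs dfs -mkdir -p /user/yourname                 # create the user's HDFS home directory
sudo -u hdfs hdfs dfs -chown yourname:yourname /user/yourname  # hand ownership to the user so YARN can create subfolders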
08-16-2016
11:45 AM
1 Kudo
Ah, I see. Then yes, I agree: replication does impact the fsimage and memory footprint, and that documentation should be updated to reflect that.
08-11-2016
03:51 PM
Hi Tomas, I think this would be best in the Cloudera Manager forum [1]. I agree with you; this seems to be a Cloudera Manager issue rather than an HBase one.
[1] https://community.cloudera.com/t5/Cloudera-Manager-Installation/bd-p/CMInstall
08-11-2016
03:48 PM
Thanks for the post; it made me re-examine my assumptions based on the topics that come up in a search. The number I had always heard is 150 bytes for every block, file, and directory [1][2]. That, combined with a heavy dose of "back of the envelope" math and multiplying it all by 2 as a safety factor, has worked well for me in the past for a quick estimate of the amount of JVM heap you need.

[1] http://blog.cloudera.com/blog/2009/02/the-small-files-problem/
[2] http://www.mail-archive.com/core-user@hadoop.apache.org/msg02835.html

If the above is actually true, then the size of the files, the block size, and the number of directories all matter for your raw heap calculations. Keep in mind that all of the handlers use heap as well, so this won't be 100% accurate for predicting the NameNode heap necessary for x many files, directories, and blocks.

If you are interested in performing an experiment to find out what it really is, not just what it is theoretically by looking at the individual methods, I think you could find out by starting a test cluster, taking a heap dump, writing a single file smaller than the block size to HDFS / with replication 1, taking another heap dump, and then comparing. Then you can repeat the experiment for each variable (i.e. create the file in a directory, create a file bigger than one block, create a file with more than one replica). Good luck, and let us all know what you find! -Ben
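As a rough sketch of that experiment, assuming jmap is on the path, NN_PID holds the NameNode's process id, and the file names and paths are placeholders:

jmap -dump:live,format=b,file=/tmp/nn-before.hprof $NN_PID  # heap dump before (live triggers a full GC first)
hdfs dfs -D dfs.replication=1 -put small.txt /small.txt     # write one sub-block-size file with a single replica
jmap -dump:live,format=b,file=/tmp/nn-after.hprof $NN_PID   # heap dump after
# diff the two dumps in a heap analyzer (e.g. Eclipse MAT) and look at the
# delta in INodeFile / BlockInfo instances to see the real per-file cost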
08-11-2016
12:00 PM
You shouldn't have to; when the JournalNode starts up, it should recognize that it is behind and then sync up with the other JournalNodes. (Something similar happens when you stop a JournalNode and then restart it later.)
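If you want to confirm it caught up, one rough check is to compare the newest edit segment on each JournalNode (assuming a typical edits directory; the path and <nameservice> below are placeholders, see dfs.journalnode.edits.dir for yours):

ls -lt /hadoop/hdfs/journal/<nameservice>/current | head  # newest segments first; compare across JournalNodes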
05-25-2016
08:28 AM
Okay, looking through the errors:

16/05/25 11:37:48 ERROR orm.CompilationManager: Could not make directory: /home/bigdata/.

It looks like /home in HDFS may not have the correct permissions to allow users to create their home directory if it doesn't exist already.

Error: java.lang.RuntimeException: java.lang.RuntimeException: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection

This is again indicative of a network issue; this time it's complaining about the network adapter. This is a total guess, but could it be because of the tnsnames.ora or listener.ora configuration?
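For the first error, something along these lines run by an HDFS superuser would create the missing directory (a sketch only; the user/group names and the /home path come from the error message, so adjust to your cluster):

sudo -u hdfs hdfs dfs -mkdir -p /home/bigdata               # create the missing directory
sudo -u hdfs hdfs dfs -chown bigdata:bigdata /home/bigdata  # give the user ownership of it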
05-24-2016
02:09 PM
Maybe make sure that 1521 is open from node01 to 10.7.48.240? Here is an example using netcat, run from node01:

nc -z 10.7.48.240 1521

If you don't see "Connection to 10.7.48.240 port 1521 [tcp/rfb] succeeded!" or something similar, that means either 10.7.48.240 is not listening on 1521, or there is something blocking your way, either on the host OS itself or something in between the hosts like a router ACL.
05-24-2016
12:41 PM
1 Kudo
Glad to hear it!
05-10-2016
01:55 PM
1 Kudo
Short answer: No, to my knowledge it is not possible out of the box today to balance with respect to CPU/memory resources.

Longer answer: You can write a custom balancer, either external to HBase or using the HBase balancer protocol. Using region_mover.rb as an example, you can write your own JRuby that can be run by the shell.

Unfortunately, you will ultimately likely be better off without the underpowered nodes in the cluster than with them in it. Perhaps keep them in for HDFS storage and run just YARN there?
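As a sketch of the manual alternative, you can pin regions yourself from the HBase shell with the move command (the region and server names below are placeholders):

hbase shell
move 'ENCODED_REGION_NAME', 'host.example.com,16020,1431025003000'  # move one region to a beefier server
balance_switch false  # keep the built-in balancer from moving it right back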