Member since: 06-07-2016
Posts: 923
Kudos Received: 322
Solutions: 115

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4082 | 10-18-2017 10:19 PM |
| | 4336 | 10-18-2017 09:51 PM |
| | 14836 | 09-21-2017 01:35 PM |
| | 1838 | 08-04-2017 02:00 PM |
| | 2418 | 07-31-2017 03:02 PM |
08-17-2016
05:50 AM
@Tech Guy your latest error is simply due to an incompatible Java version. The number in the error message is the major version of the class file format being used:
Java SE 9 = 53 (0x35 hex),
Java SE 8 = 52 (0x34 hex),
Java SE 7 = 51 (0x33 hex),
Java SE 6.0 = 50 (0x32 hex),
Java SE 5.0 = 49 (0x31 hex),
JDK 1.4 = 48 (0x30 hex),
JDK 1.3 = 47 (0x2F hex),
JDK 1.2 = 46 (0x2D hex). Wait, correction:
JDK 1.2 = 46 (0x2E hex),
JDK 1.1 = 45 (0x2D hex). Update your Java to JDK 1.8. This should resolve it. Also look at the following link: http://stackoverflow.com/questions/22489398/unsupported-major-minor-version-52-0
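If it helps to confirm which JDK a particular class was actually compiled for, here is a minimal Python sketch (the file path is just an example) that reads the major version straight from the class file header, matching the table above:

```python
import struct
import sys

# Map class-file major versions to Java releases (values from the table above).
MAJOR_TO_JAVA = {
    53: "Java SE 9", 52: "Java SE 8", 51: "Java SE 7", 50: "Java SE 6.0",
    49: "Java SE 5.0", 48: "JDK 1.4", 47: "JDK 1.3", 46: "JDK 1.2", 45: "JDK 1.1",
}

def class_file_version(path):
    """Return (minor, major) version numbers of a compiled .class file."""
    with open(path, "rb") as f:
        # Class file layout: u4 magic (0xCAFEBABE), u2 minor, u2 major, big-endian.
        magic, minor, major = struct.unpack(">IHH", f.read(8))
    if magic != 0xCAFEBABE:
        raise ValueError("%s is not a valid .class file" % path)
    return minor, major

if __name__ == "__main__":
    minor, major = class_file_version(sys.argv[1])
    print("major=%d minor=%d -> compiled for %s"
          % (major, minor, MAJOR_TO_JAVA.get(major, "unknown")))
```

Running it against one of the offending classes should print major=52 if it was compiled for Java 8, which is what the "unsupported major.minor version 52.0" error is complaining about.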
08-17-2016
03:02 AM
Can you try removing Python and then reinstalling it?
sudo dpkg -P python2.7
sudo apt-get install python2.7
http://askubuntu.com/questions/426664/how-do-i-reinstall-python-on-my-ubuntu-12-04-server-using-apt-get
08-17-2016
02:21 AM
1 Kudo
@narender pasunooti Please check the following code (line 81). Basically your Python version is evaluating to less than 2.6; it is definitely not evaluating as 2.7.5. https://github.com/apache/ambari/blob/trunk/ambari-server/sbin/ambari-server What is your PYTHON home? Can you run "/usr/bin/python -V" and share the result? If you still see the message, then based on the following link you can try with "--skip-broken": https://issues.apache.org/jira/browse/AMBARI-6845
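If it is unclear which interpreter Ambari is actually picking up, a tiny Python sketch along these lines (an assumed, simplified version of the check, not the actual ambari-server code) shows what the default python on the PATH reports:

```python
import sys

# Emulate the kind of check the startup script performs: require at least Python 2.6.
MIN_VERSION = (2, 6)

print("Interpreter: %s" % sys.executable)
print("Version: %s" % sys.version.split()[0])

if sys.version_info[:2] < MIN_VERSION:
    print("Python is older than %d.%d -- the version check would fail" % MIN_VERSION)
else:
    print("Version looks fine; the problem is likely which python is on the PATH")
```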
08-17-2016
12:15 AM
@narender pasunooti I will start with something very basic. After installing Python, did you start a new ssh window or PuTTY session? If not, can you please try that first? Also run "python -V" to find the Python version.
08-16-2016
07:05 AM
@Fasil Ahamed "If, say for example, the kind of input file that we want to process is immensely huge such that the input splits generated for that file are accordingly huge in number, and the resulting metadata information for the storage references of the entire input splits in HDFS grows beyond the RAM capacity." Dude. Calm down :)... seriously, that is some imagination, I must give you that. (At this point I am assuming your question is more academic than work related - I could of course be wrong.)

One namenode object uses about 150 bytes to store metadata information. Assume a 128 MB block size (you should increase the block size for the case you describe) and a file of 150 MB. The file will be split into two blocks: the first block with 128 MB and the second block with 22 MB. For this file the Namenode stores 1 file inode and 2 blocks, that is 3 namenode objects, which take about 450 bytes on the namenode.

By contrast, at a 1 MB block size the same file would have 150 blocks. We would have one inode plus 150 block entries in the namenode, i.e. 151 namenode objects for the same data: 151 x 150 bytes = 22650 bytes. Even worse would be 150 files of 1 MB each: 150 inodes and 150 blocks = 300 x 150 bytes = 45000 bytes. See how this all changes? That's why we don't recommend small files for Hadoop.

Now, assuming 128 MB file blocks, on average 1 GB of memory is required per 1 million blocks. Let's do this calculation at PB scale. Assume 6000 TB of data - that's a lot; I am sure your large file is less than that. Imagine 30 TB capacity for each node; this will require 200 nodes, at a 128 MB block size and a replication factor of 3. Cluster capacity in MB = 30 x 1000 (convert to GB) x 1000 (convert to MB) x 200 nodes = 6,000,000,000 MB (6000 TB). How many blocks can we store in this cluster? 6,000,000,000 MB / 128 MB = 46,875,000 (that's about 47 million blocks). At 1 GB of memory per million blocks, you need a mere 46,875,000 blocks / 1,000,000 blocks per GB = about 47 GB of memory. Namenodes with 64-128 GB of memory are quite common.

You can do a few things here.
1. Increase the block size to 256 MB; that will save you quite a bit of namenode space. At the scale you are talking about, you should do that regardless - maybe even 384-512 MB.
2. Get more memory for the namenode - probably 256 GB or even 512 GB servers.

Finally, read the following: https://issues.apache.org/jira/browse/HADOOP-1687 and also the link below (notice that for 40-50 million files only 24 GB is recommended - about half of our calculation, probably because the block size assumed at that scale is 256 MB rather than 128 MB). https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_installing_manually_book/content/ref-80953924-1cbf-4655-9953-1e744290a6c3.1.html
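To replay the arithmetic above with different assumptions, here is a rough Python sketch; the constants (150 bytes per namenode object, 1 GB of heap per million blocks) are just the rules of thumb used in this post, not an official sizing formula:

```python
# Rough namenode-memory estimates using the rules of thumb from the post above.
BYTES_PER_NAMENODE_OBJECT = 150   # approximate size of one inode/block object
GB_PER_MILLION_BLOCKS = 1.0       # rule of thumb for namenode heap

def blocks_for_file(file_mb, block_mb=128):
    """Number of HDFS blocks needed for a single file (ceiling division)."""
    return -(-file_mb // block_mb)

def namenode_bytes_for_file(file_mb, block_mb=128):
    """1 inode object + 1 object per block, ~150 bytes each."""
    return (1 + blocks_for_file(file_mb, block_mb)) * BYTES_PER_NAMENODE_OBJECT

def namenode_heap_gb(cluster_tb, block_mb=128):
    """Heap needed if the whole raw capacity were filled with blocks."""
    cluster_mb = cluster_tb * 1000 * 1000
    blocks = cluster_mb / block_mb
    return blocks / 1_000_000 * GB_PER_MILLION_BLOCKS

print(namenode_bytes_for_file(150))        # 150 MB file, 128 MB blocks -> 450 bytes
print(namenode_bytes_for_file(150, 1))     # same file, 1 MB blocks     -> 22650 bytes
print(round(namenode_heap_gb(6000)))       # 6000 TB cluster            -> ~47 GB
```

Swapping block_mb to 256 in the last call shows why bumping the block size roughly halves the namenode heap requirement.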
08-16-2016
04:07 AM
@Antonio Ye Can you please share your spark-submit command? You do have SPARK_HOME set from where you are launching the job, right?
08-16-2016
04:05 AM
Thanks @Sunile Manjee. This helps. I was wondering how to make sure my ReplaceText processor does not replace everything that matched with an empty string. Apparently it takes care of that, but it also gets rid of the curly braces as well as the double quotes that are part of the JSON. I would like to keep those. I will update here once I figure it out.
08-15-2016
11:40 PM
1 Kudo
Hi, I have a JSON file which has some special characters like the "$" and "@" symbols. I would like to get rid of these characters while keeping everything else the way it is. So, for example, I have "$type"; this should become "type", or "@version" should become "version". The way I am currently doing it is using the ReplaceText processor twice with a literal replace. It works and solves my problem. However, I would prefer to use a regex. I have tried \$ but that doesn't work because the string has a lot more than just the \$ symbol. I am very bad with regular expressions, so I need help figuring out the regex to solve this.
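For what it's worth, here is a sketch of the kind of pattern that could strip a leading "$" or "@" from key names only; the sample JSON is made up, and while NiFi's ReplaceText uses Java regular expressions, the same pattern should apply there with $1 as the replacement group:

```python
import re

sample = '{"$type": "event", "@version": "1", "message": "hello"}'

# Match a quoted key that starts with $ or @ and is followed by a colon,
# capture the rest of the key name, and keep only the captured part.
cleaned = re.sub(r'"[$@](\w+)"\s*:', r'"\1":', sample)

print(cleaned)  # {"type": "event", "version": "1", "message": "hello"}
```

Anchoring the match to a quoted key followed by a colon is what keeps the surrounding braces and double quotes of the JSON intact.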
Labels:
- Apache NiFi
08-15-2016
11:33 PM
@sujitha sanku Set the new password for the root user using the following command:
mysqladmin -u root -h localhost password 'newpassword'
08-15-2016
06:57 PM
Can you please share the code without the collect?