Member since: 06-07-2016
Posts: 923
Kudos Received: 322
Solutions: 115
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3989 | 10-18-2017 10:19 PM
 | 4253 | 10-18-2017 09:51 PM
 | 14627 | 09-21-2017 01:35 PM
 | 1769 | 08-04-2017 02:00 PM
 | 2356 | 07-31-2017 03:02 PM
04-26-2017
08:56 PM
@Anishkumar Valsalam Once you have developed a custom NAR file, you simply have to drop it into the /usr/hdf/current/nifi/lib/ folder and restart NiFi. Once NiFi is restarted, you should be able to see your new custom processor along with the other processors.
04-25-2017
05:57 PM
@PPR Reddy When you say the schema was dropped, do you mean you dropped the table, so the metadata is gone but your data still exists? If so, you just have to run a CREATE TABLE again. Don't you have the statement saved somewhere already? Why not just recreate the table? If not, then, assuming your metastore database is MySQL, you can use the following method: https://twindb.com/recover-after-drop-table-innodb_file_per_table-is-off/ The good news is that you have not lost the data, so writing the CREATE TABLE statement again, even if it is 100 lines long, is not as bad as if your data had been in MySQL and you had dropped it without a backup. That would have been a much bigger issue.
04-25-2017
04:52 AM
1 Kudo
@Tech Gig
Which version of Hadoop are you using? In an HA cluster, between the NameNode and the Standby NameNode sits the Quorum Journal Manager (usually three JournalNodes, one disk each). Assume everything is up to date. Now, if a namespace change occurs, the active NameNode writes that same change to the Quorum Journal Manager. The Standby NameNode is also watching the Quorum Journal Manager and promptly applies the changes to its own copy of the namespace. To ensure fast failover, DataNodes are configured with the location of both the active and standby NameNodes, and they send block reports and heartbeats to both (although only one is active and the other is standby). For additional protection against data corruption, administrators use a fencing mechanism to prevent what is called a "split-brain" scenario. One way to achieve this is to have the JournalNodes accept writes from only one NameNode at a time. When your active NameNode goes down, your standby becomes the writer to the JournalNodes. If the standby is down, then when it comes back up it reads the JournalNodes to bring itself up to date.
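For reference, here is a minimal sketch of the hdfs-site.xml properties behind that behavior, expressed as Hadoop Configuration calls. The nameservice name, host names, and ports are hypothetical placeholders, not values from your cluster; treat it as an illustration of where the shared edits directory and fencing method are configured, not as a drop-in config.

```java
import org.apache.hadoop.conf.Configuration;

public class HaConfigSketch {
    public static Configuration haConfig() {
        Configuration conf = new Configuration();

        // Logical nameservice with two NameNodes (hypothetical names and hosts).
        conf.set("dfs.nameservices", "mycluster");
        conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.mycluster.nn1", "nn1.example.com:8020");
        conf.set("dfs.namenode.rpc-address.mycluster.nn2", "nn2.example.com:8020");

        // Quorum Journal Manager: both NameNodes point at the same three JournalNodes;
        // only the active NameNode is allowed to write edits here.
        conf.set("dfs.namenode.shared.edits.dir",
                 "qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster");

        // Fencing method used during failover to guard against a split-brain scenario.
        conf.set("dfs.ha.fencing.methods", "sshfence");
        return conf;
    }
}
```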
04-17-2017
09:58 PM
@ed day If you followed the process in that link, then you shouldn't be running into this issue. You don't really need to download the Ambari jar separately. Can you please confirm that you have the repo file for your operating system? For example, if you are on CentOS 7, is your repo file the CentOS 7 one? http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.0.1/bk_ambari-upgrade/content/upgrade_ambari.html
04-17-2017
04:45 AM
@bhavik shah Are you willing to write a MapReduce job? You can use mapreduce.input.fileinputformat.split.minsize to control the split size of your ORC file. The split size is calculated as max(mapreduce.input.fileinputformat.split.minsize, min(mapreduce.input.fileinputformat.split.maxsize, dfs.blocksize)). Set mapreduce.input.fileinputformat.split.minsize to 50 MB and then send the output of each mapper to S3. Writing a MapReduce job would be the right way to do it (a driver sketch follows below). If you don't want to write a MapReduce job and would rather use Hive, then you will have to create a new table with the same data and use an INSERT ... SELECT to populate the new table:
set hive.merge.mapredfiles=true;
set hive.merge.mapfiles=true;
set hive.merge.smallfiles.avgsize=51200000;
set hive.merge.size.per.task=51200000;
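If you go the MapReduce route, this is a minimal, hypothetical driver sketch showing where the split-size setting goes. The input/output paths, job name, and class name are placeholders, and the mapper plus the ORC input/output format classes are omitted for brevity; it is a sketch of the setting, not a complete job.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SplitSizeDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Ask the framework for ~50 MB minimum splits; the effective split size is
        // max(split.minsize, min(split.maxsize, dfs.blocksize)).
        conf.setLong("mapreduce.input.fileinputformat.split.minsize", 50L * 1024 * 1024);

        Job job = Job.getInstance(conf, "resize-orc-splits");
        job.setJarByClass(SplitSizeDriver.class);
        // Map-only job: each mapper writes its splits output directly.
        job.setNumReduceTasks(0);

        FileInputFormat.addInputPath(job, new Path(args[0]));    // e.g. the ORC source dir
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // e.g. an s3a:// target
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```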
04-14-2017
07:52 PM
@Karan Alang You are getting an invalid URL error. Shouldn't the URL start with '!connect jdbc:hive2://<host>:<port>/<db>'?
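For comparison, here is a minimal sketch of the same connection made programmatically through the Hive JDBC driver, using the same URL shape beeline expects after "!connect". The host, port, database, and credentials are hypothetical placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcSketch {
    public static void main(String[] args) throws Exception {
        // jdbc:hive2://<host>:<port>/<db> -- placeholder values below
        String url = "jdbc:hive2://hive-host.example.com:10000/default";

        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(url, "hive", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```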
04-14-2017
06:34 PM
@dominic lodento This is coming from CentOS. How much memory do you have? For Ambari, can you please check Ambari logs under /var/log?
04-13-2017
11:54 PM
@John T I am a little confused, since you say the file name is "file123.txt" but then say you don't know how the file name starts or what the extension is (it seems the extension is .txt). Remote File Name supports expression language, so you should be able to use "contains" to match the file.
04-13-2017
07:55 PM
1 Kudo
@hduraiswamy You are not missing anything. Starting with HDP 2.5, HDP-Search can be managed from Ambari, which makes it easier to administer. Before that, it was not managed by Ambari. Under the hood, HDP-Search is Solr 5.5.2 and Banana 1.6.0. Please check the following link: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_solr-search-installation/content/ch_hdp-search.html
04-11-2017
10:35 PM
1 Kudo
@Stephen knott Are the filter columns part of the scan? If the filter columns are not part of the scan, then the filters are ignored. Check the following link: https://issues.apache.org/jira/browse/HBASE-4364
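To illustrate, here is a minimal sketch with the HBase Java client; the table, column family, qualifier, and value names are hypothetical. The point is that the column the filter inspects must also be among the scan's selected columns, otherwise the filter has nothing to evaluate and is effectively ignored.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilterColumnInScan {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("my_table"))) {

            Scan scan = new Scan();
            // The column the filter evaluates must be part of the scan's selected
            // columns, otherwise the filter is ignored (see HBASE-4364).
            scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("status"));
            scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("payload"));

            SingleColumnValueFilter filter = new SingleColumnValueFilter(
                    Bytes.toBytes("cf"), Bytes.toBytes("status"),
                    CompareOp.EQUAL, Bytes.toBytes("ACTIVE"));
            filter.setFilterIfMissing(true);  // skip rows that lack the column entirely
            scan.setFilter(filter);

            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result r : scanner) {
                    System.out.println(Bytes.toString(r.getRow()));
                }
            }
        }
    }
}
```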