Member since: 06-17-2015
Posts: 61
Kudos Received: 20
Solutions: 4

My Accepted Solutions
Title | Views | Posted
---|---|---
| 1922 | 01-21-2017 06:18 PM
| 2322 | 08-19-2016 06:24 AM
| 1694 | 06-09-2016 03:23 AM
| 2774 | 05-27-2016 08:27 AM
09-19-2016
10:41 AM
1 Kudo
Please see the two options below. NOTE: for both options (CopyTable and Export/Import), since the cluster is up there is a risk that edits could be missed in the export process.

http://hbase.apache.org/0.94/book/ops_mgt.html#copytable

CopyTable is a utility that can copy part or all of a table, either to the same cluster or to another cluster. The usage is as follows:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable [--starttime=X] [--endtime=Y] [--new.name=NEW] [--peer.adr=ADR] tablename

http://hbase.apache.org/0.94/book/ops_mgt.html#export

14.1.7. Export: Export is a utility that will dump the contents of a table to HDFS in a sequence file. Invoke via:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]

Note: caching for the input Scan is configured via hbase.client.scanner.caching in the job configuration.

14.1.8. Import: Import is a utility that will load data that has been exported back into HBase. Invoke via:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>
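To make that concrete, a minimal round trip would look like the following (the table name, ZooKeeper quorum and HDFS path are placeholders of mine, not from the docs):

$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable --peer.adr=zk1,zk2,zk3:2181:/hbase mytable

$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export -D hbase.client.scanner.caching=100 mytable /backups/mytable
$ bin/hbase org.apache.hadoop.hbase.mapreduce.Import mytable /backups/mytable

Note that for Import the target table must already exist with the same column families.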
08-24-2016
09:20 AM
@da li hey, please have a look at the link below, and if it helps, accept the answer: https://community.hortonworks.com/questions/153/impersonation-error-while-trying-to-access-ambari.html

You need to create the proxy settings for 'root', since Ambari runs as root. This allows it to impersonate the user in HDFS. You need to do a similar thing for the oozie user, like it is done for root:

hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*
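So for oozie the corresponding entries in core-site.xml would look like this (the wildcards are simply the most permissive setting; narrow them to specific hosts and groups if your policy requires it):

hadoop.proxyuser.oozie.groups=*
hadoop.proxyuser.oozie.hosts=*

Restart HDFS and the dependent services after changing these, so the NameNode picks up the new proxyuser settings.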
08-22-2016
04:16 PM
@Scott Shaw Thanks, but please see the questions below:
1. Can I get the same performance that I get in my optimized, purpose-built HDP cluster? The data lake is central, so can I still tune it specifically for one application?
2. How can I manage different HDP versions in a data lake?
3. If something goes wrong with security or configuration because of one application, will my whole data lake be impacted?
08-22-2016
03:54 PM
1 Kudo
Hi, I have a small application that generates some reports without using any MapReduce code, and I want to understand the real benefits of using a data lake. I think it is useful for an enterprise when many products write data to various Hadoop clusters, in order to have a unified view of the various issues and a common data store. Apart from this, what are the other real benefits?

How does a data lake work if I want a particular HDP version? I think it is easier to switch to a particular HDP version in a separate cluster from Ambari, but what about a data lake? Also, if multiple applications use the data lake and just one application requires frequent changes, like an HBase coprocessor for testing various things, is it advisable to go for a data lake? We get HA in a cluster as well, so what are the main technical advantages if we don't consider cost?
Labels:
- Apache Hadoop
- Apache HBase
08-19-2016
11:02 AM
1 Kudo
Hi Team, is anyone aware of issues during installation? Why do we get so many broken-symlink issues during installation? I faced this issue on HDP 2.3.4 with ambari-2.2.2.0; please see below: https://community.hortonworks.com/questions/33492/hdp-234-failed-parent-directory-usrhdpcurrenthadoo.html

I was installing a 3-node HDP 2.4.0.0 cluster. At the "Install, Start and Test" step, installation went fine on one node, but on the other two nodes there were random symlink issues. I had to fix the broken symlinks manually most of the time, and after spending so much time I was finally able to install HDP 2.4.0.0 successfully. The issues looked like the lines below and as shown in the attached image:

2016-08-18 21:20:17,474 - Directory['/etc/hive'] {'mode': 0755}
2016-08-18 21:20:17,474 - Directory['/usr/hdp/current/hive-client/conf'] {'owner': 'hive', 'group': 'hadoop', 'recursive': True}
2016-08-18 21:20:17,474 - Creating directory Directory['/usr/hdp/current/hive-client/conf'] since it doesn't exist

I had the proper prerequisites in place before starting the installation, as given in http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.2.0/bk_Installing_HDP_AMB/content/_hdp_24_repositories.html, and random retries eventually make it work 😞 Please advise if you think I am doing something wrong, or share any best practices for installation and debugging. Thanks,
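For reference, the manual fix I keep applying looks roughly like this (a sketch; the version string 2.4.0.0-169 is what my repo installed and may differ in yours):

# list dangling symlinks under /usr/hdp/current
$ find /usr/hdp/current -xtype l

# re-point all component symlinks at the installed version
$ hdp-select set all 2.4.0.0-169

# confirm no component is still unresolved
$ hdp-select status | grep None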
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Hive
08-19-2016
06:24 AM
2 Kudos
@Ted Yu @emaxwell @Josh Elser thanks all for your confirmation; that's why I asked if the RPM is relocatable 🙂 So the bottom line is that the Hortonworks installation directories cannot be changed: all binary and config files of HDP go in /usr and /etc, since that is hardcoded in the RPM and the RPM is not relocatable. I will close this thread.

But I believe it should support relocation from a corporate IT policy point of view, since we often have issues putting files in /usr and /etc. I also suggest that at RPM creation time Hortonworks should make the RPM relocatable, in order to allow installing binary and config files in directories other than /usr and /etc. I understand HDP consists of other software, but ultimately Hortonworks can customize this bundle to support user-specific needs. I should open this as an idea, WDYT?
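For anyone who wants to verify this themselves, the RPM header makes it explicit (output abbreviated; the exact wording may vary with your rpm version):

$ rpm -qi hdp-select-2.4.2.0-258.el6.noarch | grep -i relocations
Relocations : (not relocatable)

A relocatable package would instead accept a prefix override at install time, e.g. rpm -ivh --prefix=/opt/hdp <package>.rpm, which fails for these packages.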
08-19-2016
06:23 AM
@Ted Yu @emaxwell @Josh Elser thanks all for your confirmation; that's why I asked if the RPM is relocatable 🙂 I will close this thread. But I believe it should support this from a corporate IT policy point of view, since we often have issues putting files in /usr and /etc. I should open this as an idea, WDYT?
08-18-2016
07:19 PM
1 Kudo
Hi team, I see that HDP stores its lib files and packages in /usr/hdp and maintains different versions there. Can we control the HDP installation packages or RPMs and make the installation relocatable to other directories like /opt? If my IT team does not permit installation inside /usr, then what should I do?

# ls /usr/hdp/
2.4.0.0-169  2.4.2.0-258  current

Please advise.

$ rpm -ql hdp-select-2.4.2.0-258.el6.noarch
/usr/bin/conf-select
/usr/bin/hdp-select
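One workaround I am considering (my own assumption, not an officially supported option) is to keep the /usr/hdp path but move the actual storage with a bind mount before installing:

# host the bits under /opt while the RPMs still see /usr/hdp
$ mkdir -p /opt/hdp /usr/hdp
$ mount --bind /opt/hdp /usr/hdp
$ echo '/opt/hdp  /usr/hdp  none  bind  0 0' >> /etc/fstab    # persist across reboots

This satisfies a "nothing stored in /usr" policy at the filesystem level, though the paths the packages reference are unchanged.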
Labels:
- Apache Hadoop
- Apache HBase
08-18-2016
06:35 AM
2 Kudos
Hi Team, we see that the logs of the various Hadoop services are stored under /var/log. Can we change this to a customized location, if we don't want to store logs in the locations below?

/var/log/ambari-agent/
/var/log/ambari-metrics-monitor/
/var/log/ambari-server/
/var/log/hbase
/var/log/zookeeper

I see that changing the log location is disabled in Ambari?
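From what I have found so far, the per-component log directories seem to live in the *-env configuration types, so something like the following might work even when the UI field is greyed out (the property names and host/cluster values are my assumptions from a typical HDP stack):

$ /var/lib/ambari-server/resources/scripts/configs.sh set localhost mycluster hbase-env hbase_log_dir /data/logs/hbase
$ /var/lib/ambari-server/resources/scripts/configs.sh set localhost mycluster zookeeper-env zk_log_dir /data/logs/zookeeper

The ambari-server and ambari-agent log locations do not appear to be stack configs at all; those seem to be set in /etc/ambari-server/conf/log4j.properties and /etc/ambari-agent/conf/ambari-agent.ini respectively.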
Labels:
- Apache Hadoop
- Apache HBase
08-03-2016
06:26 AM
1 Kudo
Hi Team, my Hadoop NameNode servers are without HBAs, but the servers do have RAID 10. Do I also need an NFS mount point to save the NameNode metadata (fsimage, edits, etc.), given that I have an active NameNode in the cluster as well? Also, if my hardware has no HBA storage but has RAID 10, can I connect to an NFS mount point from such hardware? Basically, what are the recommendations for NameNode HA?
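For context, these are the hdfs-site.xml properties I am asking about (host names and paths are placeholders); my question is essentially whether the NFS directory in dfs.namenode.name.dir is still needed once QJM-based HA is in place:

# more than one local directory can hold the fsimage; the NFS entry is the part in question
dfs.namenode.name.dir=/hadoop/hdfs/namenode,/mnt/nfs/namenode

# with QJM-based HA, edits go to the JournalNode quorum rather than to NFS
dfs.namenode.shared.edits.dir=qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster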
Labels:
- Apache Hadoop
- Apache HBase