Member since: 01-19-2017
Posts: 3620
Kudos Received: 599
Solutions: 360

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1081 | 04-06-2023 12:49 PM
 | 513 | 10-26-2022 12:35 PM
 | 1036 | 09-27-2022 12:49 PM
 | 1188 | 05-27-2022 12:02 AM
 | 1026 | 05-26-2022 12:07 AM
06-08-2019
06:39 PM
@Nani Bigdata Could you please try this different approach: invoke beeline as the hive user.

$ beeline
Beeline version 0.14.0.2.2.7.1-10 by Apache Hive
beeline> !connect jdbc:hive2://headnodehost:10001/;transportMode=http admin

Hope that helps
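For reference, the same HTTP-transport connection can also be made non-interactively; a minimal sketch, assuming the headnodehost/10001 values from the snippet above and the admin user (credentials are illustrative):

```shell
# Build the HTTP-mode JDBC URL used with !connect above;
# host/port are the values from this thread -- adjust for your cluster.
HOST=headnodehost
PORT=10001
JDBC_URL="jdbc:hive2://${HOST}:${PORT}/;transportMode=http"

# Non-interactive equivalent of the !connect step (not run here):
#   beeline -u "$JDBC_URL" -n admin -p '<password>'
echo "$JDBC_URL"
```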
06-08-2019
05:53 AM
@Adil BAKKOURI When I looked at @Jay's answer, something struck me: this is the same question I responded to last night. This is a duplicate thread; please refrain from opening multiple threads, as members won't have the history needed to rule out answers already provided. https://community.hortonworks.com/questions/247498/errno-111-connexion-failed-between-hosts-only-hdfs.html For example, Jay could have built on my previous answer to offer alternative answers.
06-07-2019
06:11 PM
@kailash salton That is not the backup command I am interested in. Those commands are for listing and checking the backup. I mean a command of this form:

$ hbase backup create "full" hdfs://xxxxxxxxxxxxxxxxxxx

Show the command you ran and the location of the backup directory.
06-07-2019
03:53 PM
@kailash salton Can you share the exact backup command? My interest is the backup directory!
06-07-2019
03:48 PM
@dalin qin According to reliable sources from both HWX and CDH, now that the merger is sealed the new platform will be called Cloudera Data Platform (CDP), a combination of HDP and CDH. Hortonworks will not release a new major version, or any version, between now and then. The last HDP release is and will remain the 3.x line, which is being used as the base for integration with CDH. Having said that, you should familiarise yourself with the two versions, HDP 3.x and CDH 5, because the knowledge you acquire will be transferable to the new CDP; I am sure that for stability there won't be much new fancy stuff outside these two versions. Rumour has it that Ambari will disappear in favor of Cloudera Manager (CM), but I pray they put Sentry to sleep, as Ranger is far better, and keep Atlas. Having said that, there are too many moving parts to have a final list of components of the new CDP... and the Cloudera CEO is gone just too soon...
06-06-2019
05:13 PM
@kailash salton Can you share the output of the successful backup? Try to display the contents first as the hbase user:

$ bin/hbase backup show

Then, as the hdfs user, depending on the location of your backup directory in HDFS, run:

$ hdfs dfs -ls /$backup-dir

HTH
06-06-2019
12:37 PM
@Aishwarya Dixit To disable HTTPS for the Ranger UI from Ambari, go to Ambari UI --> Ranger --> Configs and filter for HTTPS Settings.

Older HDP versions:
External URL: https://<hostname>:6182
HTTPS enabled: un-check

HDP 2.6.x, in Advanced ranger-admin-site:
ranger.service.https.attrib.ssl.enabled = false

Hope that helps
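For HDP 2.6.x, the resulting configuration entries look roughly like this (a sketch: the HTTP port 6080 is the usual Ranger Admin non-SSL default, and the `ranger.externalurl` property name and its exact location should be verified against your version in Ambari):

```properties
# Advanced ranger-admin-site (HDP 2.6.x): turn SSL off for the Ranger UI
ranger.service.https.attrib.ssl.enabled=false

# Then point Ambari's "External URL" field back at plain HTTP, e.g.
# (property name/location varies by version -- verify in your Ambari):
# ranger.externalurl=http://<hostname>:6080
```

After saving the change, restart the Ranger service from Ambari for it to take effect.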
06-05-2019
10:01 PM
1 Kudo
@kailash salton I have tried as much as possible to explain the process of doing a successful HBase backup. The key is to enable HBase backup by adding the parameters documented below. There are a couple of other things to do, like copying core-site.xml to the hbase/conf directory, etc. I hope this process helps you achieve your target. I have not included the incremental backup; that documentation you can easily find.

Check the directories in HDFS:

[hbase@nanyuki ~]$ hdfs dfs -ls /
Found 12 items
drwxrwxrwx - yarn hadoop 0 2018-12-17 13:07 /app-logs
drwxr-xr-x - hdfs hdfs 0 2018-09-24 00:22 /apps
.......
drwxr-xr-x - hdfs hdfs 0 2019-01-29 06:06 /test
drwxrwxrwx - hdfs hdfs 0 2019-06-04 23:14 /tmp
drwxr-xr-x - hdfs hdfs 0 2018-12-17 13:04 /user

Created a backup directory in HDFS:

[root@nanyuki ~]# su - hdfs
Last login: Wed Jun 5 20:47:02 CEST 2019
[hdfs@nanyuki ~]$ hdfs dfs -mkdir /backup
[hdfs@nanyuki ~]$ hdfs dfs -chown hbase /backup

Validate that the backup directory was created with the correct permissions:

[hdfs@nanyuki ~]$ hdfs dfs -ls /
Found 13 items
drwxrwxrwx - yarn hadoop 0 2018-12-17 13:07 /app-logs
drwxr-xr-x - hdfs hdfs 0 2018-09-24 00:22 /apps
drwxr-xr-x - yarn hadoop 0 2018-09-24 00:12 /ats
drwxr-xr-x - hbase hdfs 0 2019-06-05 21:11 /backup
.......
drwxr-xr-x - hdfs hdfs 0 2018-12-17 13:04 /user

Invoked the hbase shell to check the tables:

[hbase@nanyuki ~]$ hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.2.2.6.5.0-292, r897822d4dd5956ca186974c10382e9094683fa29, Fri May 11 08:00:59 UTC 2018
hbase(main):001:0> list_namespace
NAMESPACE
PDFTable
default
hbase
3 row(s) in 4.6610 seconds

Check the tables in the hbase namespace:

hbase(main):002:0> list_namespace_tables 'hbase'
TABLE
acl
meta
namespace
3 row(s) in 0.1970 seconds

HBase needs a table called "backup" in the hbase namespace, which was missing. This table is created once HBase backup is enabled (see below), so I had to add the parameters below in Custom hbase-site, enabling HBase backup at the same time:

hbase.backup.enable=true
hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner
hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager
hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager
hbase.coprocessor.region.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.BackupObserver

After adding the above properties in Custom hbase-site and restarting HBase, the backup table was magically created!

[hbase@nanyuki ~]$ hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.2.2.6.5.0-292, r897822d4dd5956ca186974c10382e9094683fa29, Fri May 11 08:00:59 UTC 2018
hbase(main):001:0> list_namespace_tables 'hbase'
TABLE
acl
backup
meta
namespace
4 row(s) in 0.3230 seconds

Error with the backup command:

[hbase@nanyuki ~]$ hbase backup create "full" hdfs://nanyuki.kenya.ke:8020/backup -set texas
2019-06-05 22:28:37,475 ERROR [main] util.AbstractHBaseTool: Error running command-line tool
java.io.IOException: Backup set 'texas' is either empty or does not exist
at org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:182)
at org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:111)
at org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:126)
at org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:131)

To resolve the error "Backup set 'texas' is either empty or does not exist", I pre-emptively created the file:

[hdfs@nanyuki ~]$ hdfs dfs -touchz /backup/texas
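For what it's worth, the `-set` flag refers to a named backup set maintained by HBase itself rather than a path, so (assuming the `backup set` subcommands shipped with this backup tooling, and with placeholder table names) an alternative to the empty-file workaround is to define the set explicitly first; a sketch:

```shell
# Cluster commands (run as the hbase user; table names are placeholders
# for illustration, not taken from this cluster's required set):
#   hbase backup set add texas PDFTable,testtable3
#   hbase backup set list
#   hbase backup create "full" hdfs://nanyuki.kenya.ke:8020/backup -set texas

# Locally, just sanity-check the comma-separated table-list argument:
TABLES="PDFTable,testtable3"
COUNT=$(printf '%s' "$TABLES" | awk -F',' '{print NF}')
echo "$COUNT tables"
```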
[hdfs@nanyuki ~]$ hdfs dfs -ls /backup
Found 1 items
-rw-r--r-- 3 hdfs hdfs 0 2019-06-05 22:51 /backup/texas

Check that core-site.xml is in the /.../hbase/conf directory; if not, copy it over as I did below:

[root@nanyuki ~]# cp /etc/hadoop/2.6.5.0-292/0/core-site.xml /etc/hbase/conf/

Validate the copy:

[root@nanyuki conf]# ls -lrt /etc/hbase/conf/
total 60
-rw-r--r-- 1 root root 4537 May 11 2018 hbase-env.cmd
-rw-r--r-- 1 hbase hadoop 367 Sep 23 2018 hbase-policy.xml
-rw-r--r-- 1 hbase hadoop 4235 Sep 23 2018 log4j.properties
-rw-r--r-- 1 hbase root 18 Oct 1 2018 regionservers
............
-rw-r--r-- 1 root root 3946 Jun 5 22:38 core-site.xml

Relaunched the HBase backup; it seemed frozen for a while, but at last I got a "SUCCESS":

[hbase@nanyuki ~]$ hbase backup create "full" hdfs://nanyuki.kenya.ke:8020/backup
2019-06-05 22:52:57,024 INFO [main] util.BackupClientUtil: Using existing backup root dir: hdfs://nanyuki.kenya.ke:8020/backup
Backup session backup_1559767977522 finished. Status: SUCCESS

Checked the YARN UI: a backup process was running (see screenshot hbase_backup.PNG), and after the successful backup see screenshot hbase_backup2.PNG.

Validate that the backup was done; this gives details like time, type "FULL", etc.:

[hbase@nanyuki hbase-client]$ bin/hbase backup show
Unsupported command for backup: show
[hbase@nanyuki hbase-client]$ hbase backup history
ID : backup_1559767977522
Type : FULL
Tables : ATLAS_ENTITY_AUDIT_EVENTS,jina,atlas_titan,PDFTable:DOCID001,PDFTable,testtable3
State : COMPLETE
Start time : Wed Jun 05 22:52:57 CEST 2019
End time : Wed Jun 05 23:14:20 CEST 2019
Progress : 100

The backup in the backup directory in HDFS:

[hdfs@nanyuki backup]$ hdfs dfs -ls /backup
Found 2 items
drwxr-xr-x - hbase hdfs 0 2019-06-05 23:11 /backup/backup_1559767977522

Happy hadooping!

Reference: HBase backup commands
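The `hbase backup history` output above also lends itself to a quick scripted check; a minimal sketch that parses the sample output pasted in this post (in real use you would feed it the live output of `hbase backup history` instead of the pasted text):

```shell
# Sample "hbase backup history" output, pasted from the run above.
history_output='ID       : backup_1559767977522
Type     : FULL
State    : COMPLETE
Progress : 100'

# Pull out the State field; expect COMPLETE for a finished backup.
state=$(printf '%s\n' "$history_output" | awk -F' *: *' '$1 ~ /^State/ {print $2}')
echo "State: $state"
```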
06-03-2019
05:54 AM
@Anurag Mishra I think this is your best option: the HBase region normalizer. https://community.hortonworks.com/questions/87231/reduce-existing-hbase-table-regions.html#