Member since: 08-04-2017
Posts: 5 · Kudos Received: 1 · Solutions: 0
03-25-2019 04:35 PM
Dear all, can we restore a folder deleted with -skipTrash? We stopped HDFS 5 minutes after the deletion. Our cluster is in HA mode and we don't understand how to use the fsimage file for recovery. We tried these guides without success: https://community.hortonworks.com/articles/26181/how-to-recover-accidentally-deleted-file-in-hdfs.html http://www.openkb.info/2014/06/how-to-recover-namenode-from-secondary.html https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html
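For reference, the first guide above describes the offline-edits approach: stop the NameNode before a checkpoint merges the delete into fsimage, then strip the delete record from the edit log. A rough sketch with placeholder file names (`hdfs oev` is the real Offline Edits Viewer; the actual `edits_*` segment must be identified on your NameNode, and this is not a verified recovery procedure):

```
# Dump the binary edit-log segment to XML (segment name is a placeholder)
hdfs oev -i edits_0000000000000000001-0000000000000000042 -o edits.xml -p XML
# Manually remove the OP_DELETE record for the folder from edits.xml,
# then convert back to binary before restarting the NameNode
hdfs oev -i edits.xml -o edits_0000000000000000001-0000000000000000042 -p binary
```

In HA mode the edit log is shared via JournalNodes, which complicates this, and is likely why guides written for a single NameNode did not work directly.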
04-03-2018 11:29 AM
Did you solve it?
04-03-2018 11:28 AM
We have the same problem after removing Kerberos authentication.
08-04-2017 03:44 PM
1 Kudo
When I use com.hortonworks.shc-core from Scala like this:

spark.write.options(Map(
    HBaseTableCatalog.tableCatalog -> catalog,
    HBaseTableCatalog.newTable -> "5",
    HBaseTableCatalog.avro -> ""))
  .format("org.apache.spark.sql.execution.datasources.hbase")
  .save()
I get this exception:
17/08/04 18:29:46 INFO ClientCnxn: Session establishment complete on server domain.company.com/11.111.111.111:2181, sessionid = 0x15c7820075c1bbd, negotiated timeout = 40000
17/08/04 18:29:46 WARN RpcControllerFactory: Cannot load configured "hbase.rpc.controllerfactory.class" (org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory) from hbase-site.xml, falling back to use default RpcControllerFactory
17/08/04 18:29:46 WARN RpcControllerFactory: Cannot load configured "hbase.rpc.controllerfactory.class" (org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory) from hbase-site.xml, falling back to use default RpcControllerFactory
Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the locations
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:312)
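For context, shc-core expects `catalog` to be a JSON table definition. A minimal sketch with made-up table and column names (not the poster's actual schema):

```scala
// Hypothetical SHC catalog; the table name "mytable" and columns are illustrative only.
val catalog = """{
  "table": {"namespace": "default", "name": "mytable"},
  "rowkey": "key",
  "columns": {
    "col0": {"cf": "rowkey", "col": "key", "type": "string"},
    "col1": {"cf": "cf1", "col": "value", "type": "string"}
  }
}"""
```

That said, a RetriesExhaustedException: Can't get the locations usually points at HBase/ZooKeeper connectivity or a zookeeper.znode.parent mismatch rather than at the catalog itself.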
08-04-2017 03:34 PM
This solution is incorrect. With a wrong zookeeper.znode.parent (e.g. "/hbase"), the exception looks like this:

INFO ZooKeeper: Initiating client connection ... baseZNode=/hbase
...
ERROR ConnectionManager$HConnectionImplementation: The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.

So this answer does not solve the problem with:
... RetriesExhaustedException: Can't get the locations
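For anyone hitting this mismatch: on HDP the znode parent is typically /hbase-unsecure on non-Kerberized clusters and /hbase-secure with Kerberos, but verify it against the HBase master's own configuration. A sketch of the client-side hbase-site.xml entry:

```
<property>
  <!-- Must match the value the HBase master uses; /hbase-unsecure is the
       usual HDP default without Kerberos -->
  <name>zookeeper.znode.parent</name>
  <value>/hbase-unsecure</value>
</property>
```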