Member since: 09-08-2016
Posts: 3
Kudos Received: 3
Solutions: 0
08-11-2018 02:16 AM
Since it wasn't really described how exactly you resolved it... The point is that on the client side (it's important that it's not on the server side), you set "dfs.client.use.datanode.hostname" to "true" in the org.apache.hadoop.conf.Configuration object. (The similarly named "dfs.datanode.use.datanode.hostname" is the server-side counterpart.) If the Configuration object isn't created by your code (like if Spark creates it, in my case), then it depends on what creates it... see its documentation. But some guesses (rough sketches of each follow below):

Attempt 1: Set it inside $HADOOP_HOME/etc/hadoop/hdfs-site.xml. The Hadoop command-line tools use that; your Java application, though... maybe not.

Attempt 2: Put $HADOOP_HOME/etc/hadoop/ on the Java classpath (or pack hdfs-site.xml into your project under /src/main/resources/, but that's kind of dirty...). This works with Spark.

Spark only: SparkSession.builder().config("spark.hadoop.dfs.client.use.datanode.hostname", "true").[...]

Of course, you may also need to add the host names of the DataNodes (as the NameNode knows them) to /etc/hosts on the computer running your application.
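If your code builds the Configuration itself, it's a single setter call. A minimal sketch, assuming the hadoop-client libraries are on the classpath; the NameNode URI is a placeholder:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsHostnameExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Tell the HDFS client to connect to DataNodes by host name instead of
        // the (possibly unroutable) internal IPs the NameNode hands back.
        conf.set("dfs.client.use.datanode.hostname", "true");

        // Placeholder NameNode URI; adjust host and port to your cluster.
        try (FileSystem fs = FileSystem.get(new URI("hdfs://namenode-host:8020"), conf)) {
            System.out.println("Root exists: " + fs.exists(new Path("/")));
        }
    }
}
```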
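For Attempt 1, the entry in the client-side hdfs-site.xml would look something like this (a sketch of a standard Hadoop property entry, not a complete file):

```xml
<configuration>
  <property>
    <name>dfs.client.use.datanode.hostname</name>
    <value>true</value>
  </property>
</configuration>
```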
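And the Spark-only variant spelled out as a runnable sketch. Spark copies "spark.hadoop."-prefixed properties into the Hadoop Configuration it builds internally, which is why this reaches the HDFS client; the app name is a placeholder:

```java
import org.apache.spark.sql.SparkSession;

public class SparkHdfsHostnameExample {
    public static void main(String[] args) {
        // "spark.hadoop.*" properties are forwarded into the Hadoop
        // Configuration that Spark creates for HDFS access.
        SparkSession spark = SparkSession.builder()
                .appName("hdfs-hostname-example") // placeholder app name
                .config("spark.hadoop.dfs.client.use.datanode.hostname", "true")
                .getOrCreate();

        spark.stop();
    }
}
```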
09-20-2016 06:00 AM
Can you tell me how to connect through the Java API to an HBase cluster (a 4-node cluster running in VMs)? That is, from Windows to a distribution running in server VMs.
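For reference, a minimal sketch of what the client side of such a connection typically looks like with the HBase Java API, assuming the hbase-client library is on the classpath; the ZooKeeper quorum hosts and table name are placeholders, and those host names must resolve from the Windows machine (e.g. via its hosts file):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseConnectExample {
    public static void main(String[] args) throws Exception {
        // Client-side HBase configuration; the quorum lists the ZooKeeper
        // hosts of the cluster (placeholder names, adjust to your VMs).
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "vm-node1,vm-node2,vm-node3");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("my_table"))) {
            Result result = table.get(new Get(Bytes.toBytes("row1")));
            System.out.println("Row found: " + !result.isEmpty());
        }
    }
}
```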