Member since: 01-03-2018
Posts: 11
Kudos Received: 0
Solutions: 0
04-18-2018 04:10 AM
Hi @Harald Berghoff, thanks for the information.
04-17-2018 09:51 AM
Hello all, my understanding was that Spark is an alternative to Hadoop. However, when I try to install Spark, the installation page asks for an existing Hadoop installation, and I cannot find anything that clarifies the relationship between the two. Secondly, Spark apparently has good connectivity to Cassandra and Hive, both of which offer an SQL-style interface, yet Spark also has its own SQL. Why would one use Cassandra/Hive instead of Spark's native SQL, assuming this is a brand-new project with no existing installation? Help me out. Thanks
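On the first question: Spark replaces MapReduce as the processing engine but commonly reuses Hadoop's storage (HDFS) and resource manager (YARN), which is why installers ask about an existing Hadoop setup. On the second, Spark's own SQL layer can query Hive tables directly when Spark is built with Hive support, so the choice is less either/or than it may appear. A minimal PySpark sketch, assuming a Hive-enabled Spark build and a hypothetical Hive table named web_logs (the table name and the logs.json file are illustrative, not from the post):

from pyspark.sql import SparkSession

# Hive-enabled session; requires a Spark build with Hive support
spark = (SparkSession.builder
         .appName("spark-sql-example")
         .enableHiveSupport()
         .getOrCreate())

# Spark SQL can query an existing Hive table directly...
hive_df = spark.sql("SELECT status, COUNT(*) AS hits FROM web_logs GROUP BY status")

# ...or run the same SQL over a plain file, with no Hive involved
logs = spark.read.json("logs.json")  # hypothetical input file
logs.createOrReplaceTempView("logs")
spark.sql("SELECT status, COUNT(*) AS hits FROM logs GROUP BY status").show()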
Labels:
- Apache Hadoop
- Apache Spark
12-07-2017 06:45 AM
After installing Hadoop, when I try to run start-dfs.sh it shows the following error message. I have searched a lot and found that the WARN appears because I am running 64-bit Ubuntu while the bundled Hadoop native library is compiled for 32-bit, so that part is not a blocker. But the "Incorrect configuration" message worries me, and the primary and secondary namenodes will not start.

sameer@sameer-Compaq-610:~$ start-dfs.sh
15/07/27 07:47:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
localhost: ssh: connect to host localhost port 22: Connection refused
localhost: ssh: connect to host localhost port 22: Connection refused
Starting secondary namenodes [0.0.0.0]
0.0.0.0: ssh: connect to host 0.0.0.0 port 22: Connection refused
15/07/27 07:47:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

My current configuration:

hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/sameer/mydata/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/sameer/mydata/hdfs/datanode</value>
</property>
</configuration>
core-site.xml
<configuration>
<property>
<name>fs.default.name </name>
<value> hdfs://localhost:9000 </value>
</property>
</configuration>
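One thing worth checking in the core-site.xml above: there is a trailing space inside the <name> element ("fs.default.name ") and spaces around the value, and Hadoop does not reliably trim these, which can produce exactly the "dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured" error. A cleaned-up sketch of that file, using the fs.defaultFS key that supersedes the deprecated fs.default.name (the localhost:9000 value is carried over from the post; this is a likely fix, not a guaranteed one):

<configuration>
  <property>
    <!-- fs.defaultFS supersedes the deprecated fs.default.name;
         note: no stray whitespace inside <name> or <value> -->
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>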
yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>

Please point out what I am doing wrong, in the configuration or elsewhere. Thanks, Nicolewells
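The "port 22: Connection refused" lines are a separate issue: start-dfs.sh uses ssh to launch the daemons, so an SSH server must be running and reachable on localhost, with passwordless login set up. A typical sequence on Ubuntu would be something like the following (assuming the apt package manager; adapt as needed):

# install and start the OpenSSH server
sudo apt-get install openssh-server
sudo service ssh start

# set up passwordless ssh to localhost so start-dfs.sh can log in
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

ssh localhost   # should now connect without prompting for a password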
Labels:
- Apache Hadoop