@Anurag Mishra
fs.defaultFS
fs.defaultFS makes HDFS a filesystem abstraction over the cluster, so its root is not the same as the local system's root. You need to set this value in core-site.xml to create the distributed filesystem: it tells the DataNodes (and clients) the address of the NameNode. A DataNode looks here for the NameNode's address and contacts it over RPC.
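As a minimal sketch, the entry in core-site.xml could look like this (the hostname namenode.example.com and port 8020 are placeholders; substitute your NameNode's host and RPC port):
<property>
  <name>fs.defaultFS</name>
  <!-- placeholder host:port; point this at your NameNode's RPC address -->
  <value>hdfs://namenode.example.com:8020</value>
</property>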
Without setting fs.defaultFS, the command
$ hdfs dfs -ls /
would initially show the local root filesystem, as below:
$ hdfs dfs -ls /
Warning: fs.defaultFS is not set when running "ls" command.
Found 21 items
dr-xr-xr-x - root root 4096 2017-05-16 20:03 /boot
drwxr-xr-x - root root 3040 2017-06-07 18:31 /dev
drwxr-xr-x - root root 8192 2017-06-10 07:22 /etc
drwxr-xr-x - root root 56 2017-06-10 07:22 /home
................
.............
drwxr-xr-x - root root 167 2017-06-07 19:43 /usr
drwxr-xr-x - root root 4096 2017-06-07 19:46 /var
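Once fs.defaultFS points at the NameNode and HDFS is running, the same command lists the HDFS root instead. The exact contents vary by cluster; a typical listing looks something like:
$ hdfs dfs -ls /
Found 2 items
drwxrwxrwt   - hdfs supergroup          0 2017-06-10 07:30 /tmp
drwxr-xr-x   - hdfs supergroup          0 2017-06-10 07:30 /user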
dfs.namenode.http-address
This property sets the address (host:port) of the NameNode web UI in the hdfs-site.xml configuration file, e.g.:
<property>
  <name>dfs.namenode.http-address</name>
  <value>node1.texas.us:50070</value>
  <final>true</final>
</property>
The NameNode HTTP server address is controlled by the configuration property dfs.namenode.http-address in hdfs-site.xml. Typically this specifies a hostname or IP address that maps to a single network interface, as above, but you can tell the server to bind to all network interfaces by setting dfs.namenode.http-bind-host to 0.0.0.0 (the wildcard address, matching all network interfaces).
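A sketch of that override in hdfs-site.xml (dfs.namenode.http-address still supplies the port; only the bind address changes):
<property>
  <name>dfs.namenode.http-bind-host</name>
  <!-- wildcard address: the HTTP server listens on every interface -->
  <value>0.0.0.0</value>
</property>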
This is the base port the DFS NameNode web UI listens on. It's good to make the NameNode HTTP server listen on all interfaces by setting dfs.namenode.http-bind-host to 0.0.0.0; note that this change requires a restart of the NameNode.
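One way to restart the NameNode after the change, assuming a plain Hadoop 2.x install with the sbin scripts on the PATH (adjust for your distribution or cluster manager, e.g. Ambari or Cloudera Manager):
$ hadoop-daemon.sh stop namenode
$ hadoop-daemon.sh start namenode
# verify the web UI answers on the configured address (host/port from your config)
$ curl -s -o /dev/null -w "%{http_code}\n" http://node1.texas.us:50070/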
Hope that clarifies the difference for you.