Archives of Support Questions (Read Only)

This is an archived board for historical reference. Information and links may no longer be available or relevant.

Error "JAVA_HOME is not set and could not be found", even though JAVA_HOME is set

Not applicable

I am trying to run HDP inside an Ubuntu 14.04 Docker image. HDP 2.3.2 was installed from the official repos.

I also installed Oracle Java 8 using the WebUpd8 PPA: https://launchpad.net/~webupd8team/+archive/ubuntu...

JAVA_HOME is also set correctly:

	root@3b2af516c1ce:/# echo $JAVA_HOME
	/usr/lib/jvm/java-8-oracle
	root@3b2af516c1ce:/# su hdfs
	hdfs@3b2af516c1ce:/$ echo $JAVA_HOME
	/usr/lib/jvm/java-8-oracle

However, when I try to start the NameNode, I get an error saying JAVA_HOME could not be found:

root@3b2af516c1ce:/# sudo -u hdfs /usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode

Error: JAVA_HOME is not set and could not be found.
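
Note that sudo on Ubuntu applies env_reset by default, so JAVA_HOME from the calling shell is typically dropped before the command runs. A quick way to confirm (the output shown is the expected result, not a capture from this container):

    # illustrative check, output assumed: sudo's default env_reset drops JAVA_HOME
    # unless it is listed in env_keep in /etc/sudoers
    root@3b2af516c1ce:/# sudo -u hdfs sh -c 'echo "JAVA_HOME=[$JAVA_HOME]"'
    JAVA_HOME=[]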

My hadoop-env.sh has not been modified in any way:

# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.
# export JAVA_HOME=${JAVA_HOME}

# The jsvc implementation to use. Jsvc is required to run secure datanodes
# that bind to privileged ports to provide authentication of data transfer
# protocol.  Jsvc is not required if SASL is configured for authentication of
# data transfer protocol using non-privileged ports.
#export JSVC_HOME=${JSVC_HOME}

#export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}

# Extra Java CLASSPATH elements.  Automatically insert capacity-scheduler.
#for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
#  if [ "$HADOOP_CLASSPATH" ]; then
#    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
#  else
#    export HADOOP_CLASSPATH=$f
#  fi
#done

# The maximum amount of heap to use, in MB. Default is 1000.
#export HADOOP_HEAPSIZE=
#export HADOOP_NAMENODE_INIT_HEAPSIZE=""

# Extra Java runtime options.  Empty by default.
#export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

# Command specific options appended to HADOOP_OPTS when specified
#export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
#export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"
#export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"
#export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
#export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"

# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
#export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"

# On secure datanodes, user to run the datanode as after dropping privileges.
# This **MUST** be uncommented to enable secure HDFS if using privileged ports
# to provide authentication of data transfer protocol.  This **MUST NOT** be
# defined if SASL is configured for authentication of data transfer protocol
# using non-privileged ports.
#export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}

# Where log files are stored.  $HADOOP_HOME/logs by default.
#export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER

# Where log files are stored in the secure data environment.
#export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}

###
# HDFS Mover specific parameters
###
# Specify the JVM options to be used when starting the HDFS Mover.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_MOVER_OPTS=""

###
# Advanced Users Only!
###
# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by
#       the user that will run the hadoop daemons.  Otherwise there is the
#       potential for a symlink attack.
#export HADOOP_PID_DIR=${HADOOP_PID_DIR}
#export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}

# A string representing this instance of hadoop. $USER by default.
#export HADOOP_IDENT_STRING=$USER

Why is this happening?

1 ACCEPTED SOLUTION


Please uncomment and configure the JAVA_HOME parameter in hadoop-env.sh.

Make sure you have configured the minimal set of parameters according to this documentation: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_installing_manually_book/content/ch_setti...

Start HDFS by using the instructions explained here: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_installing_manually_book/content/format_a...

I think you might have forgotten step one:

  1. Modify the JAVA_HOME value in the hadoop-env.sh file:
    export JAVA_HOME=/usr/java/default
  2. Execute the following commands on the NameNode host machine:
    su - $HDFS_USER
    /usr/hdp/current/hadoop-hdfs-namenode/../hadoop/bin/hdfs namenode -format
    /usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode
  3. Execute the following commands on the SecondaryNameNode:
    su - $HDFS_USER
    /usr/hdp/current/hadoop-hdfs-secondarynamenode/../hadoop/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start secondarynamenode
  4. Execute the following commands on all DataNodes:
    su - $HDFS_USER
    /usr/hdp/current/hadoop-hdfs-datanode/../hadoop/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start datanode

Here are the first lines of hadoop-env.sh from one of my Ambari-installed HDP clusters:

# The java implementation to use.  Required.
export JAVA_HOME=/usr/jdk64/jdk1.8.0_40
export HADOOP_HOME_WARN_SUPPRESS=1

# Hadoop home directory
export HADOOP_HOME=${HADOOP_HOME:-/usr/hdp/current/hadoop-client}
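
Applied to the container in the question, the edit would presumably look like this (the JDK path is taken from the echo $JAVA_HOME output above):

    # in /etc/hadoop/conf/hadoop-env.sh -- path taken from the question's echo $JAVA_HOME
    export JAVA_HOME=/usr/lib/jvm/java-8-oracle

    # then retry the original start command
    sudo -u hdfs /usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode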


4 REPLIES


Not applicable

Should I be using su on Ubuntu rather than sudo -u?


If you are the root user, you can use su to start the services in the hdfs user's environment.
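
For example (a sketch, not verified against this container; the JAVA_HOME path is the one from the question):

    # as root: run the daemon with the hdfs user's login environment
    su - hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode"

    # or keep sudo, but hand JAVA_HOME to the command explicitly via env(1)
    sudo -u hdfs env JAVA_HOME=/usr/lib/jvm/java-8-oracle \
        /usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode

Either way, the cleanest fix is still to set JAVA_HOME in hadoop-env.sh, since the daemon scripts source that file regardless of how they are launched.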

New Member

@Jonas Straub @Vedant Jain Hi! I'm having similar issues. I have a cluster running on 8 EC2 instances. I had to change the 'login as' user to 'ec2-user' instead of 'root' when I set up with Ambari Server, since SSH as root was giving me errors. Now I have a few questions:

- Since Ambari did not use root, does that mean some environment variables were not set by Ambari? (I had no errors installing everything.)

- I can see the jdk64 folder installed, but it's not on my PATH, so I cannot use the 'java' command. Does that mean I have to go to every instance and do `export JAVA_HOME=/usr/jdk64/jdk1.8.0_40/bin/java`? 😞

- Even after I set JAVA_HOME, I am unable to use 'jre', which I need for executing Storm code. What should I do?
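
For what it's worth, JAVA_HOME conventionally points at the JDK directory rather than at the java binary, and it is $JAVA_HOME/bin that goes on PATH. A minimal sketch, assuming the /usr/jdk64/jdk1.8.0_40 location mentioned above:

    # sketch, assuming the Ambari-installed JDK path mentioned above
    export JAVA_HOME=/usr/jdk64/jdk1.8.0_40      # the JDK directory, not .../bin/java
    export PATH=$JAVA_HOME/bin:$PATH             # makes java (and the bundled JRE tools) available
    java -version                                # quick sanity check

To make this persistent, the two exports would typically go in a profile script on each instance.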