Member since
09-15-2015
457
Posts
507
Kudos Received
90
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 15665 | 11-01-2016 08:16 AM |
| | 11082 | 11-01-2016 07:45 AM |
| | 8568 | 10-25-2016 09:50 AM |
| | 1918 | 10-21-2016 03:50 AM |
| | 3828 | 10-14-2016 03:12 PM |
12-14-2015
05:55 AM
If you are logged in as the root user, you can use su to start the services in the hdfs user's environment.
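As a rough sketch of that pattern (the service user name `hdfs` and the daemon script name are assumptions; adjust them to your installation), the helper below only prints the `su` command so it can be dry-run safely:

```shell
# Hypothetical helper: build the su command that runs a given script in the
# hdfs user's login environment ("-" loads that user's profile, JAVA_HOME etc.).
run_as_hdfs() {
  echo su - hdfs -c "'$1'"
}

# Example invocation (printed, not executed):
run_as_hdfs "hadoop-daemon.sh start namenode"
```

In a real session you would drop the `echo` so the command actually executes under the hdfs account.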
12-14-2015
05:41 AM
4 Kudos
Please uncomment and configure the JAVA_HOME parameter in hadoop-env.sh, and make sure you have configured the minimal set of parameters according to this documentation: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_installing_manually_book/content/ch_setting_up_hadoop_configuration_chapter.html
Then start HDFS using the instructions explained here: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_installing_manually_book/content/format_and_start_hdfs.html
I think you might have forgotten step one:
Modify the JAVA_HOME value in the hadoop-env.sh file:
export JAVA_HOME=/usr/java/default
Execute the following commands on the NameNode host machine:
su - $HDFS_USER
/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/bin/hdfs namenode -format
/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode
Execute the following commands on the SecondaryNameNode:
su - $HDFS_USER
/usr/hdp/current/hadoop-hdfs-secondarynamenode/../hadoop/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start secondarynamenode
Execute the following commands on all DataNodes:
su - $HDFS_USER
/usr/hdp/current/hadoop-hdfs-datanode/../hadoop/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start datanode
Here are the first lines from the hadoop-env.sh of one of my Ambari-installed HDP clusters:
# The java implementation to use. Required.
export JAVA_HOME=/usr/jdk64/jdk1.8.0_40
export HADOOP_HOME_WARN_SUPPRESS=1
# Hadoop home directory
export HADOOP_HOME=${HADOOP_HOME:-/usr/hdp/current/hadoop-client}
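To confirm the setting actually took effect, here is a minimal check (the hadoop-env.sh path in the usage comment is the usual HDP location and is an assumption):

```shell
# Sketch: verify that a hadoop-env.sh exports an uncommented JAVA_HOME line.
check_java_home() {
  if grep -q '^export JAVA_HOME=' "$1"; then
    echo "JAVA_HOME configured"
  else
    echo "JAVA_HOME missing or commented out"
  fi
}

# Typical usage on an HDP node (path is an assumption):
# check_java_home /etc/hadoop/conf/hadoop-env.sh
```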
12-13-2015
10:00 PM
3 Kudos
@Davide Isoardi I was able to fix your problem; please try the following solution:
1) Create a JAAS file called jaas.conf. This file can be empty; that doesn't really matter, since your environment is not kerberized.
2) Start your job with the following command:
hadoop jar /opt/lucidworks-hdpsearch/job/lucidworks-hadoop-job-2.0.3.jar com.lucidworks.hadoop.ingest.IngestJob -Dlww.commit.on.close=true -Dlww.jaas.file=jaas.conf -cls com.lucidworks.hadoop.ingest.DirectoryIngestMapper --collection test -i file:///data/* -of com.lucidworks.hadoop.io.LWMapRedOutputFormat --zkConnect horton01.example.com:2181,horton02.example.com:2181,horton03.example.com:2181/solr
The order of the parameters needs to be the same as in the command above, otherwise the job might not work. I believe this is a bug; could you please report this issue to Lucidworks? Thanks.
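Step 1 as commands, for completeness (a sketch: the file is created in the current directory, which is assumed to be where you launch the job from; in a non-kerberized environment it is never parsed):

```shell
# Create the empty JAAS file referenced by -Dlww.jaas.file=jaas.conf.
touch jaas.conf

# Confirm it exists before launching the job.
ls -l jaas.conf
```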
12-11-2015
06:03 AM
Could you elaborate a bit more on what you are trying to do? Are you trying to access the HBase shell, or trying to start the HBase service? Which HDP version is this? Which KDC is this (MIT, ...)?
12-10-2015
08:22 PM
What KDC are you using, MIT or AD? What's your JDK version? There could be multiple reasons for that; here are some pointers:
1) Validate the generated keytabs; this will tell you right away whether something is wrong with your keytab files:
kinit -kt /<path to keytabs>/<keytab file> <principal>
Then check whether a valid ticket was created via klist.
2) Validate the JCE files: are they available (/<jdk path>/jre/lib/security/...)? Do you need the Unlimited Strength JCEs?
3) Check the permissions of the generated keytab files. For example, the hdfs-headless keytab should belong to hdfs:hadoop with permissions set to 0400.
4) Validate the krb5.conf file (usually under /etc/krb5.conf); make sure it's available and sound.
What are the results of the above? You might also want to read through this great guide => https://github.com/steveloughran/kerberos_and_hado...
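Point 3 can be scripted. A minimal sketch (GNU coreutils `stat` syntax assumed; the keytab path in the usage comment is hypothetical):

```shell
# Sketch: report owner, group and mode of a keytab, and warn when the mode
# is not the expected 0400.
check_keytab_perms() {
  local f="$1"
  local mode owner group
  mode=$(stat -c '%a' "$f")
  owner=$(stat -c '%U' "$f")
  group=$(stat -c '%G' "$f")
  echo "$f: owner=$owner group=$group mode=$mode"
  if [ "$mode" != "400" ]; then
    echo "WARN: expected mode 400"
  fi
}

# Hypothetical usage:
# check_keytab_perms /etc/security/keytabs/hdfs.headless.keytab
```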
12-10-2015
02:34 PM
3 Kudos
Is your cluster kerberized? I saw this error a couple of days ago, and until now an important piece was missing from the Solr documentation. Your launch command should look similar to this:
hadoop jar /opt/lucidworks-hdpsearch/job/lucidworks-hadoop-job-2.0.3.jar com.lucidworks.hadoop.ingest.IngestJob -Dlww.commit.on.close=true -Dlww.jaas.file=/opt/lucidworks-hdpsearch/solr/bin/jaas.conf -cls com.lucidworks.hadoop.ingest.DirectoryIngestMapper --collection MyCollection -i hdfs://hortoncluster/data/* -of com.lucidworks.hadoop.io.LWMapRedOutputFormat --zkConnect horton01.example.com:2181,horton02.example.com:2181,horton03.example.com:2181/solr
Make sure you include the JAAS option in a kerberized environment: -Dlww.jaas.file=/opt/lucidworks-hdpsearch/solr/bin/jaas.conf
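For reference, a jaas.conf for a kerberized setup typically uses the standard Java Krb5LoginModule. The keytab path and principal below are placeholders, not values from your cluster; substitute the Solr service keytab and principal of your environment:

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/solr.service.keytab"
  principal="solr/horton01.example.com@EXAMPLE.COM";
};
```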
12-10-2015
12:21 PM
Could you share the code from the com.myCompany.Main class?