Member since: 02-09-2016
Posts: 12
Kudos Received: 12
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2790 | 06-05-2016 06:59 AM |
| | 36242 | 02-17-2016 08:05 PM |
06-05-2016
06:59 AM
1 Kudo
The command worked! I added the lib dir of Pig to the HADOOP_CLASSPATH in hadoop-env.sh.
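For reference, the change amounts to one extra line in hadoop-env.sh; a minimal sketch, assuming hadoop-env.sh lives under $HADOOP_HOME/etc/hadoop as in a standard Hadoop 2.7.x layout and using the Pig path from the original question:

```shell
# In $HADOOP_HOME/etc/hadoop/hadoop-env.sh: append Pig's lib jars
# (including antlr-runtime-3.4.jar) to Hadoop's classpath
export HADOOP_CLASSPATH="${HADOOP_CLASSPATH}:/home/hadoop1/Pig/lib/*"
```

After editing, start a fresh shell session (or re-run the launcher) so the pig script picks up the new classpath.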
06-03-2016
02:06 PM
It exists in the lib directory: /home/hadoop1/Pig/lib/antlr-runtime-3.4.jar
06-03-2016
02:05 PM
/home/hadoop1/Pig/lib/antlr-runtime-3.4.jar
The line above appears in the output generated while running the given command.
06-03-2016
12:48 PM
We recently installed Pig 0.15.0 on our system, which already has Hadoop 2.7.1 installed. But when we run the pig command we get the following exception:
[hadoop1@hp_proliant ~]$ pig
Exception in thread "main" java.lang.NoClassDefFoundError: org/antlr/runtime/RecognitionException
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
at java.lang.Class.getMethod0(Class.java:3018)
at java.lang.Class.getMethod(Class.java:1784)
at org.apache.hadoop.util.RunJar.run(RunJar.java:215)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.ClassNotFoundException: org.antlr.runtime.RecognitionException
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 7 more
Following is the content of our bashrc file for reference on the path settings:
export JAVA_HOME=/opt/jdk1.8.0_65
export HADOOP_HOME=/home/hadoop1/hadoop1
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HBASE_HOME=/home/hadoop1/hbase-1.1.4
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$HBASE_HOME/lib/*
export CLASSPATH=$CLASSPATH:/home/hadoop1/hbase-1.1.4/lib/*
export PIG_HOME=/home/hadoop1/Pig
export PATH=$PATH:/home/hadoop1/Pig/bin
export PIG_CLASSPATH=$CLASSPATH:/home/hadoop1/Pig/lib/*
What could possibly be causing this exception?
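As a first check, it may help to confirm that the ANTLR runtime jar actually ships with this Pig install and sits where the classpath expects it; a minimal sketch, assuming the Pig layout from the bashrc above:

```shell
# Look for the ANTLR runtime jar in Pig's lib directory; the
# NoClassDefFoundError suggests Hadoop's launcher cannot see it
ls /home/hadoop1/Pig/lib/ | grep -i antlr
```

If the jar is present, the usual suspect is that HADOOP_CLASSPATH (which the hadoop launch scripts consult) does not include Pig's lib directory, even when PIG_CLASSPATH does.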
Labels:
- Apache Hadoop
- Apache Pig
03-03-2016
04:14 PM
2 Kudos
We tried running teragen on a 5-node cluster, this time using Hadoop 2.7.1. The task is stuck at map 50%, reduce 0%. Viewing the logs for this job on a datanode showed this error:
[ContainerLauncher #1] org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:57252. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-03-03 21:33:12,426 INFO [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:57252. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-03-03 21:33:13,430 INFO [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:57252. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-03-03 21:33:14,437 INFO [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:57252. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-03-03 21:33:15,447 INFO [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:57252. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-03-03 21:33:16,448 INFO [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:57252. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-03-03 21:33:27,450 INFO [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:57252. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-03-03 21:33:28,452 INFO [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:57252. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-03-03 21:33:29,456 INFO [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:57252. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-03-03 21:33:30,463 INFO [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:57252. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-03-03 21:33:31,465 INFO [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:57252. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-03-03 21:33:32,476 INFO [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:57252. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-03-03 21:33:33,485 INFO [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:57252. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-03-03 21:33:34,493 INFO [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:57252. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-03-03 21:33:35,493 INFO [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:57252. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-03-03 21:33:36,494 INFO [ContainerLauncher #1] org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:57252. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
The yarn-site.xml on all datanodes is as follows:
<?xml version="1.0"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop-master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoop-master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hadoop-master:8088</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoop-master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hadoop-master:8033</value>
</property>
</configuration>
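Since the retries target localhost/127.0.0.1 rather than a cluster hostname, one thing worth checking (a hedged suggestion, not a confirmed diagnosis) is how each node resolves its own hostname; if a worker's hostname maps to 127.0.0.1 in /etc/hosts, other daemons can be handed a loopback address:

```shell
# On each node: check what the local hostname resolves to.
# Seeing 127.0.0.1 for the cluster hostname is a common cause of
# "Retrying connect to server: localhost/127.0.0.1:<port>" loops.
hostname
getent hosts "$(hostname)"
```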
Labels:
- Apache Hadoop
- Apache YARN
02-17-2016
08:05 PM
3 Kudos
Thanks everyone, the problem is solved. The namenode was listening on localhost:9000 while the datanode tried to connect to hadoop-master:9000, hence the connectivity error. Changing the namenode's listening IP:port to hadoop-master:9000 fixed it.
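For anyone hitting the same symptom: the namenode binds to the address that fs.default.name (fs.defaultFS in newer configs) resolves to on the master, so the core-site.xml entry on the master must name the cluster host, not localhost. A sketch of the relevant property, using the hostnames from this thread:

```xml
<!-- core-site.xml on the master: make the namenode listen on the
     cluster hostname so remote datanodes can reach it -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoop-master:9000</value>
</property>
```

hadoop-master must also resolve to the master's LAN IP (not 127.0.0.1) in /etc/hosts on the master itself.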
02-09-2016
06:09 PM
2 Kudos
I installed a 2-node hadoop cluster. The master and slave nodes start separately, but the datanode isn't shown in the namenode web UI. The datanode's log file shows the following error:
2016-02-09 23:30:53,920 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-02-09 23:30:53,920 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2016-02-09 23:30:54,976 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop-master/172.17.25.5:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-02-09 23:30:55,977 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoop-master/172.17.25.5:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-02-09 23:21:15,062 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: hadoop-master/172.17.25.5:9000
The hosts file on the slave is:
172.17.25.5 hadoop-master
127.0.0.1 silp-ProLiant-DL360-Gen9
172.17.25.18 hadoop-slave-1
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
The core-site.xml file is:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://hadoop-master:9000</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
Kindly help with this issue.
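To see which address the namenode is actually listening on (the kind of mismatch that turned out to be the cause here), a quick check on the master may help; a sketch, assuming netstat is available:

```shell
# On hadoop-master: list TCP listeners on port 9000. If this shows
# 127.0.0.1:9000 rather than 172.17.25.5:9000 (or 0.0.0.0:9000),
# remote datanodes cannot connect.
netstat -tln | grep 9000
```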
Labels:
- Apache Hadoop