Member since: 02-21-2016
Posts: 35
Kudos Received: 19
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 3217 | 04-12-2016 04:45 AM |
04-12-2016
04:45 AM
1 Kudo
Thank you all... I have got it resolved. I did two things: 1. For the NameNode, I simply changed the port to some other value. 2. For the DataNode, I found an error message in the logs saying that "/" has permission 777 and the DataNode cannot start. I remembered I had changed it manually for some other problem; reverting it resolved the issue.
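For anyone hitting the same permission issue, a minimal sketch of checking and reverting the root directory's mode (assuming the usual 755 root:root baseline; confirm against a healthy host before changing anything):
# Show the current mode and owner of / (the DataNode pre-start check rejected 777 here)
stat -c '%a %U:%G %n' /
# Revert to the conventional default
chmod 755 /
chown root:root /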
04-12-2016
04:04 AM
None of the above commands show anything suspicious... these are the logs from the last command. Anything suspicious?
2016-04-10 20:09:01,856 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(192)) - NameNode metrics system started
2016-04-10 20:09:01,858 INFO namenode.NameNode (NameNode.java:setClientNamenodeAddress(424)) - fs.defaultFS is hdfs://ip-xxx-xxx-xxx-xxx.us-west-2.compute.internal:8020
2016-04-10 20:09:01,858 INFO namenode.NameNode (NameNode.java:setClientNamenodeAddress(444)) - Clients are to use ip-xxx-xxx-xxx-xxx.us-west-2.compute.internal:8020 to access this namenode/service.
2016-04-10 20:09:02,025 INFO hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1726)) - Starting Web-server for hdfs at: http://ip-xxx-xxx-xxx-xxx.us-west-2.compute.internal:50070
2016-04-10 20:09:02,072 INFO mortbay.log (Slf4jLog.java:info(67)) - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-04-10 20:09:02,081 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(294)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2016-04-10 20:09:02,086 INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.namenode is not defined
2016-04-10 20:09:02,091 INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(710)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-04-10 20:09:02,093 INFO http.HttpServer2 (HttpServer2.java:addFilter(685)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2016-04-10 20:09:02,093 INFO http.HttpServer2 (HttpServer2.java:addFilter(693)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-04-10 20:09:02,093 INFO http.HttpServer2 (HttpServer2.java:addFilter(693)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-04-10 20:09:02,114 INFO http.HttpServer2 (NameNodeHttpServer.java:initWebHdfs(86)) - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2016-04-10 20:09:02,116 INFO http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(609)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2016-04-10 20:09:02,127 INFO http.HttpServer2 (HttpServer2.java:openListeners(915)) - Jetty bound to port 50070
2016-04-10 20:09:02,128 INFO mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26.hwx
2016-04-10 20:09:02,381 INFO mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@ip-xxx-xxx-xxx-xxx.us-west-2.compute.internal:50070
2016-04-10 20:09:02,404 WARN common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2016-04-10 20:09:02,404 WARN common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2016-04-10 20:09:02,404 WARN namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(654)) - Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2016-04-10 20:09:02,405 WARN namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(659)) - Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2016-04-10 20:09:02,410 WARN common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
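The log above stops shortly after Jetty binds port 50070, and the eventual fix was moving the NameNode RPC port, so a hedged check worth running is whether something else already holds the default ports (the port numbers below are taken from the log and may differ on a customised cluster):
# Anything already listening on the NameNode RPC (8020) or HTTP (50070) ports?
netstat -tlnp | grep -E ':(8020|50070)'
# Or, if netstat is not installed:
ss -tlnp | grep -E ':(8020|50070)'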
04-11-2016
12:16 AM
Looks like I have disabled both the firewall and SELinux... still no luck.
[root@ip-xxxxxxxx ec2-user]# systemctl status firewalld.service
● firewalld.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)
[root@ip-xx-xx-xx-xx ec2-user]# sestatus
SELinux status: disabled
Are there any logs or anything else that can help?
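If it helps, a rough sketch of where the HDFS daemon logs normally live on an HDP install (assuming HADOOP_LOG_DIR has not been overridden; the file name pattern includes the hostname):
# Default HDP log location for the HDFS daemons
ls -lrt /var/log/hadoop/hdfs/
# Tail the most recent DataNode log for the actual startup error
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-$(hostname).log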
04-10-2016
11:42 PM
It is not working... it looks like all of my commands are failing; none of them are able to perform their task. Here is the attached output of the commands I ran: errors.txt
04-09-2016
07:10 PM
Hi @Sunile Manjee, it was working fine that day. But today when I started my services, the DataNode is not starting up. Can that be related? See the link below for my problem. https://community.hortonworks.com/questions/26802/data-node-process-not-starting-up.html
04-09-2016
06:42 PM
1 Kudo
Even though I have the right permissions, my DataNode is not starting up. It gives the following error: (Connection failed: [Errno 111] Connection refused to 0.0.0.0:50010)
[root@ip pig]# cd /hadoop/hdfs/data
[root@ip hdfs]# ls -lrt
total 0
drwxr-x---. 3 hdfs hadoop 20 Apr 5 10:54 data
drwxr-xr-x. 4 hdfs hadoop 63 Apr 9 14:08 namenode
drwxr-xr-x. 3 hdfs hadoop 38 Apr 9 14:16 namesecondary
Just want to add: the last time everything was running, I did the following: added the mapred user to the HDFS group and gave it rwx permission on the / directory... can this be a reason for the failure? Also, it is not only the DataNode; everything else is not running now. Is there any way to know what may have gone wrong?
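As a hedged sketch of the usual remedy: the DataNode compares dfs.datanode.data.dir against dfs.datanode.data.dir.perm (750 by default) and refuses to start if the directory is more open than that, so restoring the expected owner and mode is a reasonable first step (paths assume the layout shown above; adjust to your dfs.datanode.data.dir):
chown -R hdfs:hadoop /hadoop/hdfs/data
chmod 750 /hadoop/hdfs/data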
Labels:
- Apache Ambari
- Apache Hadoop
04-09-2016
06:34 PM
Sorry for the late reply... yes, it resolved the issue. Thank you very much.
04-04-2016
10:21 PM
1 Kudo
I tried to enable the history server using the link, but I could only get as far as hdfs dfs -mkdir -p /app-logs, as this was failing and I could not proceed. Now, when I run the Pig script (which creates a MapReduce job), it is failing with the error below. Any idea?
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=READ, inode="/mr-history/tmp/hdfs/job_1459806783854_0001-1459807556718-hdfs-PigLatin%3ADefaultJobName-1459807582179-1-1-SUCCEEDED-default-1459807564263.jhist":hdfs:hdfs:-rwxrwx--- at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
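For reference, a minimal sketch of creating the JobHistory directories with their usual ownership and modes, run as the hdfs superuser (the exact modes in your HDP version's documentation may differ, so treat this as an approximation):
sudo -u hdfs hdfs dfs -mkdir -p /mr-history/tmp /mr-history/done /app-logs
sudo -u hdfs hdfs dfs -chmod -R 1777 /mr-history/tmp /mr-history/done /app-logs
sudo -u hdfs hdfs dfs -chown -R mapred:hadoop /mr-history
sudo -u hdfs hdfs dfs -chown yarn:hadoop /app-logs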
Labels:
- Apache Hadoop
- Apache Pig
04-04-2016
06:29 PM
[hdfs@ bin]$ ls -lrt
total 32
-rwxr-xr-x. 1 root root 1857 Dec 15 22:25 rcc
-rwxr-xr-x. 1 root root 6594 Dec 15 22:25 hadoop.distro
-rwxr-xr-x. 1 root root 775 Dec 15 22:26 yarn
-rwxr-xr-x. 1 root root 782 Dec 15 22:26 mapred
-rwxr-xr-x. 1 root root 775 Dec 15 22:26 hdfs
-rwxr-xr-x. 1 root root 807 Dec 15 22:26 hadoop-fuse-dfs
-rwxr-xr-x. 1 root root 772 Apr 4 14:21 hadoop
[hdfs@ bin]$ cat hadoop
#!/bin/bash
# Autodetect JAVA_HOME if not defined
if [ -e /usr/libexec/bigtop-detect-javahome ]; then
. /usr/libexec/bigtop-detect-javahome
elif [ -e /usr/lib/bigtop-utils/bigtop-detect-javahome ]; then
. /usr/lib/bigtop-utils/bigtop-detect-javahome
fi
export HADOOP_HOME=${HADOOP_HOME:-/usr/hdp/2.3.4.0-3485/hadoop}
export HADOOP_MAPRED_HOME=${HADOOP_MAPRED_HOME:-/usr/hdp/2.3.4.0-3485/hadoop-mapreduce}
export HADOOP_YARN_HOME=${HADOOP_YARN_HOME:-/usr/hdp/2.3.4.0-3485/hadoop-yarn}
export HADOOP_LIBEXEC_DIR=${HADOOP_HOME}/libexec
export HDP_VERSION=${HDP_VERSION:-2.3.4.0-3485}
export HADOOP_OPTS="${HADOOP_OPTS} -Dhdp.version=${HDP_VERSION}"
export YARN_OPTS="${YARN_OPTS} -Dhdp.version=${HDP_VERSION}"
exec /usr/hdp/2.3.4.0-3485//hadoop/bin/hadoop.distro "$@"
[hdfs@ip- bin]$ echo $HADOOP_HOME
[hdfs@ip-bin]$ pwd
/usr/hdp/current/hadoop-client/bin
[hdfs@ip-bin]$
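One note on the empty echo above: the wrapper only exports HADOOP_HOME for the process it exec's, so an interactive shell will still show it unset. A minimal sketch of setting it for the current session (path taken from the wrapper; adjust to your HDP version):
export HADOOP_HOME=/usr/hdp/2.3.4.0-3485/hadoop
echo $HADOOP_HOME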
04-04-2016
06:28 PM
Can you or someone help me with the main question? I am not able to figure out the reason for the job failure. After googling it, I think it may be related to the variable not being set. I am adding more details about how the variables are showing up as null... could you please help?
04-03-2016
06:15 AM
Thanks, this is what I was looking for... although this log does not give much information. Do you know if there is a way to make the log more readable? Or perhaps I'll have to ask another question.
04-03-2016
03:25 AM
Yes, you are right. It is writing logs to the same file, but only when the Pig script itself fails. It is not creating a log where the MapReduce job is failing. I need to analyse the MapReduce logs in order to resolve the issue.
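A hedged sketch of pulling the failed MapReduce attempt's logs directly from YARN (this assumes log aggregation is enabled, i.e. yarn.log-aggregation-enable=true; the application id is a placeholder to replace with your own):
# Find the failed application's id
yarn application -list -appStates FAILED,KILLED
# Fetch all of its aggregated container logs
yarn logs -applicationId <application_id>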
04-03-2016
01:58 AM
2 Kudos
My Pig script failed when I tried to do an ORDER BY. I want to check the logs, but I can't see the location of the logs. I learned from the post that it should be set in the file /etc/hadoop/hadoop-env.sh, but the file has everything commented out, as shown below. Does that mean the logs are not being written? How can I set those values so I can look into the logs? My script is failing again and again, and I want to check the logs to troubleshoot it.
Also, my file (hadoop-env.sh) seems to be at a different path (/etc/hadoop/conf.install); is there anything wrong with that?
# On secure datanodes, user to run the datanode as after dropping privileges.
# This **MUST** be uncommented to enable secure HDFS if using privileged ports
# to provide authentication of data transfer protocol. This **MUST NOT** be
# defined if SASL is configured for authentication of data transfer protocol
# using non-privileged ports.
#export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}
# Where log files are stored. $HADOOP_HOME/logs by default.
#export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER
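For what it's worth, a commented-out export just means the packaged default is used (on HDP the daemons typically still log under /var/log/hadoop/$USER), so logs are most likely being written. A minimal sketch of pointing them somewhere explicit in hadoop-env.sh, assuming the directory exists and is writable by the service users:
export HADOOP_LOG_DIR=/var/log/hadoop/$USER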
Labels:
- Apache Hadoop
- Apache Pig
03-15-2016
02:10 AM
I am trying to access the command line interface from the server where I have Hortonworks installed, but I am getting the error below. Can I not access the command line?
[root@ip-xxx-xx-xx ec2-user]# hive
WARNING: Use "yarn jar" to launch YARN applications.
Logging initialized using configuration in file:/etc/hive/2.3.4.0-3485/0/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user/root":hdfs:hdfs:drwxr-xr-x
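The usual remedy, sketched under the assumption that you want to keep running the CLI as root, is to give root a home directory in HDFS (created by the hdfs superuser); alternatively, run hive as a user that already has one:
sudo -u hdfs hdfs dfs -mkdir -p /user/root
sudo -u hdfs hdfs dfs -chown root:root /user/root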
Labels:
- Apache Hive
02-21-2016
10:53 PM
1 Kudo
Thank you very much for your help... it is working now. Will you be able to answer my question? You mentioned adding the property hadoop.proxyuser.hive.hosts=*, but the Hortonworks documentation says to add hadoop.proxyuser.root.hosts=*. I have added both, but I am not sure why, and which one is actually doing the work.
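A quick, hedged way to see which proxyuser entries actually ended up in the active configuration (path assumes the standard HDP client config directory):
grep -B1 -A2 'hadoop.proxyuser' /etc/hadoop/conf/core-site.xml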
02-21-2016
10:30 PM
1 Kudo
I already have these properties added. Let me give you the brief: this was all running fine until I restarted my AWS server and its DNS changed, and now I see these errors. Right now I am getting the error below:
org.apache.ambari.view.hive.client.HiveClientException: H060 Unable to open Hive session: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
02-21-2016
09:43 PM
1 Kudo
I had to restart my AWS server, which caused its public DNS to change. So to open the Ambari server I had to connect to the new DNS on port 8080, and I was able to connect easily. But when I connect to the Hive view, it gives the error below.
H060 Unable to open Hive session: org.apache.thrift.protocol.TProtocolException: Required field 'serverProtocolVersion' is unset! Struct:TOpenSessionResp(status:TStatus(statusCode:ERROR_STATUS, infoMessages:[*org.apache.
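Before digging into the view itself, a minimal sanity check (assuming HiveServer2 uses the default binary transport on port 10000; the hostname below is a placeholder) is to confirm it is reachable on the new address:
beeline -u "jdbc:hive2://<new-public-dns>:10000" -n hive -e "show databases;"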
Tags:
- Hadoop Core
- Hive
Labels:
- Apache Hive