Member since: 11-22-2013
Posts: 35
Kudos Received: 0
Solutions: 6
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2920 | 07-21-2014 10:36 AM
 | 2020 | 06-30-2014 09:05 AM
 | 1401 | 06-27-2014 11:15 AM
 | 3778 | 05-29-2014 10:17 AM
 | 2389 | 12-19-2013 10:36 AM
06-25-2014
10:25 AM
Hi everyone, does anyone have an idea which version of CDH has the fix for the issue below (HDFS-4301)? I am using CDH 4.1.3 and am facing this issue, so I thought of upgrading to a version that has the fix. https://issues.apache.org/jira/browse/HDFS-4301 Best Regards, Bommuraj
06-24-2014
12:21 PM
Dear all, kindly share your thoughts and advice on the issue below. I am receiving the following error in my NameNode log (please refer to the trace below). There is no problem with the network; I checked thoroughly. If I am right, I am hitting this issue: https://issues.apache.org/jira/browse/HDFS-4301

To fix it, I was trying to set "dfs.image.transfer.timeout", but I did not find where to set this, as my cluster is managed by Cloudera Manager.

2014-06-24 11:57:02,969 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user hive
2014-06-24 11:57:02,973 WARN org.mortbay.log: /getimage: java.io.IOException: GetImage failed. java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:129)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.FilterInputStream.read(FilterInputStream.java:116)
    at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:2676)
    at java.security.DigestInputStream.read(DigestInputStream.java:144)
    at java.io.FilterInputStream.read(FilterInputStream.java:90)
    at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.doGetUrl(TransferFsImage.java:322)
    at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:222)
    at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.downloadImageToStorage(TransferFsImage.java:86)
    at org.apache.hadoop.hdfs.server.namenode.GetImageServlet$1.run(GetImageServlet.java:164)
    at org.apache.hadoop.hdfs.server.namenode.GetImageServlet$1.run(GetImageServlet.java:115)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    at org.apache.hadoop.hdfs.server.namenode.GetImageServlet.doGet(GetImageServlet.java:115)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
    at org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1056)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
    at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:326)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
    at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
    at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

Best Regards, Bommuraj
Labels:
- HDFS
06-24-2014
12:22 AM
Dear all, I am planning to set a larger value for "dfs.image.transfer.timeout", as I am facing some issues transferring the fsimage. But I did not find this parameter in the Cloudera Manager configuration section for the NameNode. Am I missing anything? Kindly share your suggestions. Best Regards, Bommuraj
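For anyone hitting the same wall: properties that Cloudera Manager does not expose directly can usually be added through the HDFS service's advanced configuration snippet ("Safety Valve") for hdfs-site.xml on the NameNode and Secondary NameNode roles; the exact field name varies by CM version. A minimal sketch of the snippet follows; the 600000 ms (10 minute) value is an illustrative assumption, not a recommendation, and the property only takes effect on releases that already carry the HDFS-4301 change:

<property>
  <name>dfs.image.transfer.timeout</name>
  <!-- Illustrative value: 600000 ms = 10 minutes; tune to your fsimage size and network -->
  <value>600000</value>
</property>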
Labels:
- HDFS
06-23-2014
05:37 PM
Dear all, I recently enabled HA on my NameNode, and I have started to see an issue with my checkpoint process: no checkpoint has occurred for the past 5 hours. Here are my observations. Have you seen this case before, or am I hitting a bug? Kindly share your advice on how to crack this issue.

As part of the checkpoint process, when the updated fsimage is downloaded to the NameNode from the standby NameNode, the "fsimage.ckpt_txid" file must be renamed to "fsimage_txid". But that is not happening in my case: I do not see any file named "fsimage_txid" on my NameNode; they all look like "fsimage.ckpt_txid". So I compared "fsimage.ckpt_txid" and "fsimage_txid", and both have the same checksum value.

fsimage.ckpt_txid is from the NAMENODE; fsimage_txid is from the SECONDARYNAMENODE.

namenode:
=========
root@namenode:/mnt/sdb/name/current# cksum fsimage.ckpt_0000000000604392126
3708522794 2148716968 fsimage.ckpt_0000000000604392126

secondary-namenode:
===================
root@secondary-namenode:/mnt/sdd/name/current# cksum fsimage_0000000000604392126
3708522794 2148716968 fsimage_0000000000604392126

NOTE: I did not see any network issue; I am able to download the fsimage using the "wget" command. I am using CDH 4.1.3 & Cloudera Enterprise 4.6.3.

Best Regards, Bommuraj
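For reference, a minimal sketch of the manual download test mentioned above, using the NameNode's getimage servlet on the default web port 50070. The hostname is a placeholder and the txid is the one from the listing above; the "getimage=1&txid=" parameter form matches the transfer URLs that CDH 4 writes to its logs, but verify it against your version:

# Pull the image for a specific txid straight from the NameNode web port,
# then checksum it for comparison with the on-disk copies.
wget -O /tmp/fsimage.check \
  "http://namenode-host:50070/getimage?getimage=1&txid=604392126"
cksum /tmp/fsimage.check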
Labels:
- HDFS
05-29-2014
10:17 AM
Hi, as per Harsh J's suggestion, I added more heap (from 5 GB to 8 GB) to the NameNode & Secondary NameNode. Issue resolved!!! Thank you, Harsh. Best Regards, Bommuraj
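For anyone replicating the fix: on a Cloudera Manager cluster the heap is set per role ("Java Heap Size" under the NameNode and Secondary NameNode configuration). For a non-CM install, a minimal hadoop-env.sh sketch; the 8g value simply mirrors the figure above:

# hadoop-env.sh: raise the NameNode / Secondary NameNode JVM heap to 8 GB
export HADOOP_NAMENODE_OPTS="-Xmx8g ${HADOOP_NAMENODE_OPTS}"
export HADOOP_SECONDARYNAMENODE_OPTS="-Xmx8g ${HADOOP_SECONDARYNAMENODE_OPTS}"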
05-27-2014
10:58 AM
Dear folks, I am seeing a strange issue with my Secondary NameNode: checkpointing is not happening as expected. A checkpoint should occur every hour, but for the past few days it has been delayed by 8-12 hours. When I check the Secondary NameNode log, I find "Exception in doCheckpoint; java.net.SocketTimeoutException: Read timed out". I checked the NameNode's resource utilization and did not see any issue; there are plenty of resources. But on the Secondary NameNode I see one CPU at 100% utilization while the machine is about 80% idle overall (this is enterprise hardware with 8 CPU cores). I suspect this issue is due to massive RPC traffic; do we have any utility to measure RPC on the NameNode? Is there any way to find what is causing this delay in checkpointing?

Also, I am seeing the NameNode & Secondary NameNode go into BAD health very frequently in CM with the message "Cloudera Manager agent is not able to communicate with this role's web server." (We recently configured Flume and I suspect it could be the cause, although I am not seeing any abnormal behavior on the NameNode.)

NOTE: these issues became visible only in the past 5 days; we have been running this cluster for more than a year. CDH 4.1.1 & CM Cloudera Enterprise 4.6.3.

Best Regards, Bommuraj

1:31:51.953 PM INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader replaying edit log: 582444772/110965 transactions completed. (524891%)
11:32:35.978 PM INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader replaying edit log: 582449593/110965 transactions completed. (524895%)
11:32:36.325 PM INFO org.apache.hadoop.hdfs.server.namenode.FSImage Edits file /mnt/sda/dfs/snn/current/edits_0000000000582340622-0000000000582451586 of size 14029328 edits # 110965 loaded in 1059 seconds.
11:33:20.847 PM INFO org.apache.hadoop.hdfs.server.namenode.FSImage Saving image file /mnt/sda/dfs/snn/current/fsimage.ckpt_0000000000582451586 using no compression
5:20:28.285 AM INFO org.apache.hadoop.hdfs.server.namenode.FSImage Image file of size 2034249037 saved in 20827 seconds.
5:20:28.328 AM INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager Going to retain 2 images with txid >= 582122380
5:20:28.328 AM INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager Purging old image FSImageFile(file=/mnt/sda/dfs/snn/current/fsimage_0000000000582003081, cpktTxId=0000000000582003081)
5:20:28.741 AM INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager Purging old edit log EditLogFile(file=/mnt/sda/dfs/snn/current/edits_0000000000580991759-0000000000581050238,first=0000000000580991759,last=0000000000581050238,inProgress=false,hasCorruptHeader=false)
5:20:28.744 AM INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager Purging old edit log EditLogFile(file=/mnt/sda/dfs/snn/current/edits_0000000000581050239-0000000000581116511,first=0000000000581050239,last=0000000000581116511,inProgress=false,hasCorruptHeader=false)
5:20:28.769 AM INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage Opening connection to http://stats-2409.intranet.bit:50070/getimage?putimage=1&txid=582451586&port=50090&storageInfo=-40:907664250:1351227679575:CID-3cce1556-c0f0-4517-bbba-c2b62b256d44
5:21:28.802 AM ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode Exception in doCheckpoint java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:129)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:632)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1195)
    at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:379)
    at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.doGetUrl(TransferFsImage.java:244)
    at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:222)
    at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.uploadImageFromStorage(TransferFsImage.java:137)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:474)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:331)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$2.run(SecondaryNameNode.java:298)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:452)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:294)
    at java.lang.Thread.run(Thread.java:662)
5:23:33.050 AM INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode Image has not changed. Will not download image.
5:23:33.051 AM INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage Opening connection to http://stats-2409.intranet.bit:50070/getimage?getedit=1&startTxId=582451587&endTxId=582519424&storageInfo=-40:907664250:1351227679575:CID-3cce1556-c0f0-4517-bbba-c2b62b256d44
5:23:33.348 AM INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage Transfer took 0.30s at 26855.22 KB/s
5:23:33.348 AM INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage Downloaded file edits_0000000000582451587-0000000000582519424 size 8167790 bytes.
5:23:33.349 AM INFO org.apache.hadoop.hdfs.server.namenode.Checkpointer Checkpointer about to load edits from 1 stream(s).
5:23:33.349 AM INFO org.apache.hadoop.hdfs.server.namenode.FSImage Reading /mnt/sda/dfs/snn/current/edits_0000000000582451587-0000000000582519424 expecting start txid #582451587
5:24:11.819 AM INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader replaying edit log: 582454371/67838 transactions completed. (858596%)
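On the RPC question above: the NameNode publishes its RPC metrics (call counts, queue and processing times) through Hadoop's built-in JMX JSON servlet on the same port as the web UI, which is one lightweight way to watch for RPC pressure. A minimal sketch, assuming the default web port 50070 and RPC port 8020; the hostname is a placeholder, and both ports should be adjusted to your cluster:

# Query the NameNode's RPC activity MBean over the JMX JSON servlet.
# Watch RpcQueueTimeAvgTime / RpcProcessingTimeAvgTime for signs of RPC overload.
curl "http://namenode-host:50070/jmx?qry=Hadoop:service=NameNode,name=RpcActivityForPort8020"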
Labels:
- HDFS
03-17-2014
11:17 AM
Thank you Harsh !!!
03-07-2014
04:00 PM
Team, I just got the below question, which has led me into great confusion. Please clarify it for me. I have the Hive server & metastore running on one machine and the database (MySQL) running on another machine.

The Hive server & metastore config looks like below:
==================================================
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://DBSERVER.intranet.bit/hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>XXXXXXXXXXXX</value>
</property>
<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>false</value>
</property>
<property>
  <name>datanucleus.fixedDatastore</name>
  <value>true</value>
</property>
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive-${user.name}</value>
  <description>Scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.stats.autogather</name>
  <value>false</value>
</property>

QUESTION here: I brought down the metastore service (/etc/init.d/metastore stop) on the Hive server, then ran the "hive" command, and I am still able to see all the tables (kindly refer to the command output below). If I can access tables without the metastore service, why is there a metastore service? Am I missing anything?

/etc/hive/conf# hive
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
Hive history file=/tmp/root/hive_job_log_aa0f2d2c-459c-43b5-8be4-da89a45e8473_558641078.txt
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hive/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
hive> show tables;
OK
impre
impre_map
impre_summary
live_test
sau_metrics
sau_metrics_test
bundle_events
bundle_events_total
cfu_for_crash_frequency
dau_by_geo
installs
crash_frequency
flume_test

Also, I brought down Hive entirely (/etc/init.d/hive-server stop & /etc/init.d/hive-metastore stop) and could still successfully read the Hive tables. Am I missing anything here? Kindly share your view and correct me if my understanding is wrong. How is the "hive" command allowing me to view Hive tables after bringing down the metastore service? If this is true, can I connect to the Hive DB (MySQL) without the metastore service?

Best Regards, Bommuraj
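A plausible explanation for what is described above: because the client's hive-site.xml contains the javax.jdo.* JDBC settings, the Hive CLI starts an embedded metastore inside its own JVM and talks to MySQL directly, so the standalone metastore service is never consulted. Clients only go through the remote service when hive.metastore.uris is set. A minimal sketch of that property; the hostname is a placeholder, and 9083 is the conventional metastore Thrift port:

<property>
  <name>hive.metastore.uris</name>
  <!-- Placeholder host; when set, clients use the remote metastore instead of embedding one -->
  <value>thrift://metastore-host.intranet.bit:9083</value>
</property>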
Labels:
- Apache Hive
- Apache Zookeeper
02-26-2014
10:17 AM
Thank you Harsh !!!
02-18-2014
02:21 PM
Hi, I have a Hive server running with a remote MySQL DB: hive-server-0.10.0+198-1.cdh4.4.0.p0.15~squeeze-cdh4.4. I have a group of users using Hive and running their queries. I want something to control these users' slot usage (i.e. to regulate how many slots they can consume). Is there any way to restrict/define the usage of M/R slots? Kindly advise me on how to deal with this requirement. Best Regards, Bommuraj
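One common approach on MR1 (a hedged sketch, not a Hive-specific feature): cap slots per group with the Fair Scheduler allocation file, and route each user's Hive jobs into a pool, for example via mapred.fairscheduler.poolnameproperty or by user name. A minimal fair-scheduler.xml sketch; the pool name and limits are illustrative assumptions:

<?xml version="1.0"?>
<allocations>
  <!-- Illustrative pool: jobs mapped here hold at most 20 map and 10 reduce slots -->
  <pool name="hive-users">
    <maxMaps>20</maxMaps>
    <maxReduces>10</maxReduces>
  </pool>
</allocations>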
Labels:
- Apache Hive