Member since: 03-12-2014
Posts: 23
Kudos Received: 0
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2168 | 08-12-2014 09:46 PM
 | 3488 | 08-12-2014 09:43 PM
09-01-2014
05:37 AM
Describe command doesn't give the complete rows and columns in an HBase table
09-01-2014
04:42 AM
Hi, I created an HBase table, but when I describe the table it does not show the actual schema (rows and columns). Can anyone help me see the actual schema of an HBase table (rows and columns)? Thank you.
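For context: HBase keeps no fixed schema below the column-family level, so `describe` can only list column families and their settings; the actual rows and column qualifiers live in the data itself. A minimal HBase shell sketch (the table name 'mytable' is hypothetical):

```
hbase shell

# 'describe' lists only column families and their settings; HBase stores
# no schema for individual rows or column qualifiers.
describe 'mytable'

# To see the actual rows and columns, scan the data itself; LIMIT keeps
# the output manageable on a large table.
scan 'mytable', {LIMIT => 10}

# Row count (walks the whole table, so it can be slow):
count 'mytable'
```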
Labels:
- Apache HBase
08-29-2014
02:09 AM
Thanks for the reply. My problem is finally fixed. As you said, the Flume sink must have 777 permission to write the data from the web server to HDFS. Now my flume-ng is working without any issues. Thank you.
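For anyone hitting the same wall, a hedged sketch of the permission change described above (the `/logs/prod` path mirrors the sink paths in the error logs elsewhere in this thread; substitute your own):

```
# Grant the Flume agent write access to its HDFS sink directory.
# 777 is what worked here; chown-ing to the flume user is a tighter fix.
sudo -u hdfs hadoop fs -chmod -R 777 /logs/prod

# Tighter alternative: make the Flume agent's user own the path.
sudo -u hdfs hadoop fs -chown -R flume:flume /logs/prod
```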
08-24-2014
10:24 PM
Below is my cluster configuration:

Cluster Summary
208423 files and directories, 200715 blocks = 409138 total.
Heap Size is 369.31 MB / 2.67 GB (13%)
Configured Capacity : 3.7 TB
DFS Used : 1.93 TB
Non DFS Used : 309.38 GB
DFS Remaining : 1.47 TB
DFS Used% : 52.19 %
DFS Remaining% : 39.64 %
Live Nodes : 3
Dead Nodes : 0
Decommissioning Nodes : 1
Number of Under-Replicated Blocks : 11025

The error happens consistently when I run my Flume agent to pull the logs into HDFS. Thank you.
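With 11,025 under-replicated blocks and a node mid-decommission, it may help to see which files are affected. A minimal sketch using fsck (options per Hadoop 1.x / CDH-era tooling):

```
# Summarize filesystem health, including the under-replicated block count.
hadoop fsck /

# List which files and blocks are under-replicated and where replicas live.
hadoop fsck / -files -blocks -locations | grep -i "under replicated"
```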
08-21-2014
05:35 AM
Does anyone know exactly what the error below means?

2014-08-21 17:37:11,578 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1
2014-08-21 17:37:11,578 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:root cause:java.io.IOException: File /logs/prod/apache/2014/08/19/web07.prod.hs18.lan.1408622705069.tmp could only be replicated to 0 nodes, instead of 1

Presently I have replication factor = 1. Here is the complete error log:

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1637)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:757)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
2014-08-21 17:42:11,106 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1
2014-08-21 17:42:11,106 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:root cause:java.io.IOException: File /logs/prod/apache/2014/08/19/web07.prod.hs18.lan.1408623004579.tmp could only be replicated to 0 nodes, instead of 1
2014-08-21 17:42:11,106 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000, call addBlock(/logs/prod/apache/2014/08/19/web07.prod.hs18.lan.1408623004579.tmp, DFSClient_NONMAPREDUCE_2060617957_26, [Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@26ac92f0) from 172.16.10.25:58118: error: java.io.IOException: File /logs/prod/apache/2014/08/19/web07.prod.hs18.lan.1408623004579.tmp could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /logs/prod/apache/2014/08/19/web07.prod.hs18.lan.1408623004579.tmp could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1637)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:757)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

Thank you.
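"Could only be replicated to 0 nodes, instead of 1" generally means the namenode could not find any datanode willing to accept the block: datanodes full, dead, or out of resources. A hedged checklist (the local data directory path below is a placeholder):

```
# 1. Are all datanodes alive, and do they report DFS space remaining?
hadoop dfsadmin -report

# 2. Is the local disk behind dfs.data.dir actually full on each datanode?
df -h /data/dfs/dn    # placeholder path; use your configured dfs.data.dir

# 3. Are the datanodes hitting their open-file-descriptor limit?
ulimit -n
```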
Labels:
- HDFS
08-12-2014
09:46 PM
I went through the logs and found that this is purely an HDFS issue, not Flume, so I fixed it as follows:

Step 1: Stop all the services.
Step 2: Start the namenode. When I then tried to start the datanodes on the 3 servers, one of the servers threw the error messages:
/var/log/ -- No such file/directory
/var/run -- No such file/directory

But these directories do exist, so I checked the permissions on them, and the permissions on those two differed between the second and third servers. After setting the permissions on those directories to be in sync and starting all the services, Flume works fine. That's it. Thank you.
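A hedged sketch of the repair described above, run on the misbehaving datanode (the 755 modes are illustrative, the point is matching a known-good node, and the service names assume CDH-style packaging):

```
# Compare the directories the datanode complained about against a good node.
ls -ld /var/log /var/run

# Align the permissions with the healthy servers (modes are illustrative).
chmod 755 /var/log /var/run

# Restart in order: namenode first, then the datanodes.
service hadoop-hdfs-namenode start
service hadoop-hdfs-datanode start
```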
08-12-2014
09:43 PM
I fixed the problem. I found that it is not a Flume issue; it is purely an HDFS issue. I then did the following steps:

Step 1: Stop all the services.
Step 2: Start the namenode. When I then tried to start the datanodes on the 3 servers, one of the servers threw the error messages:
/var/log/ -- No such file/directory
/var/run -- No such file/directory

But these directories do exist, so I checked the permissions, and they differed between the second and third servers. After setting the permissions on those directories to be in sync and starting all the services, Flume works fine. That's it. Thank you.
08-11-2014
09:39 PM
Harsh J, please check my dfsadmin report and help me fix the problem. Here is the report:

Configured Capacity: 4066320277504 (3.7 TB)
Present Capacity: 3690038079488 (3.36 TB)
DFS Remaining: 1233865269248 (1.12 TB)
DFS Used: 2456172810240 (2.23 TB)
DFS Used%: 66.56%
Under replicated blocks: 66
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)

Name: 172.16.0.97:50010
Decommission Status : Normal
Configured Capacity: 1159672115200 (1.05 TB)
DFS Used: 807840399360 (752.36 GB)
Non DFS Used: 177503387648 (165.31 GB)
DFS Remaining: 174328328192 (162.36 GB)
DFS Used%: 69.66%
DFS Remaining%: 15.03%
Last contact: Mon Aug 11 12:05:22 IST 2014

Name: 172.16.0.106:50010
Decommission Status : Normal
Configured Capacity: 1749056290816 (1.59 TB)
DFS Used: 833225805824 (776 GB)
Non DFS Used: 82293673984 (76.64 GB)
DFS Remaining: 833536811008 (776.29 GB)
DFS Used%: 47.64%
DFS Remaining%: 47.66%
Last contact: Mon Aug 11 12:05:21 IST 2014

Name: 172.16.0.63:50010
Decommission Status : Normal
Configured Capacity: 1157591871488 (1.05 TB)
DFS Used: 815106605056 (759.13 GB)
Non DFS Used: 116485136384 (108.49 GB)
DFS Remaining: 226000130048 (210.48 GB)
DFS Used%: 70.41%
DFS Remaining%: 19.52%
Last contact: Mon Aug 11 12:05:21 IST 2014
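One observation from the report: 172.16.0.97 is down to 15.03% DFS remaining while 172.16.0.106 has 47.66%, so block placement is skewed toward nearly-full disks. A hedged sketch of rebalancing (the threshold value is illustrative):

```
# Even out block distribution across datanodes; the threshold is the
# allowed deviation (in percent) from the cluster's mean utilization.
hadoop balancer -threshold 10
```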
08-10-2014
11:38 PM
Here is the report:

Configured Capacity: 4066320277504 (3.7 TB)
Present Capacity: 3690038079488 (3.36 TB)
DFS Remaining: 1233865269248 (1.12 TB)
DFS Used: 2456172810240 (2.23 TB)
DFS Used%: 66.56%
Under replicated blocks: 66
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)

Name: 172.16.0.97:50010
Decommission Status : Normal
Configured Capacity: 1159672115200 (1.05 TB)
DFS Used: 807840399360 (752.36 GB)
Non DFS Used: 177503387648 (165.31 GB)
DFS Remaining: 174328328192 (162.36 GB)
DFS Used%: 69.66%
DFS Remaining%: 15.03%
Last contact: Mon Aug 11 12:05:22 IST 2014

Name: 172.16.0.106:50010
Decommission Status : Normal
Configured Capacity: 1749056290816 (1.59 TB)
DFS Used: 833225805824 (776 GB)
Non DFS Used: 82293673984 (76.64 GB)
DFS Remaining: 833536811008 (776.29 GB)
DFS Used%: 47.64%
DFS Remaining%: 47.66%
Last contact: Mon Aug 11 12:05:21 IST 2014

Name: 172.16.0.63:50010
Decommission Status : Normal
Configured Capacity: 1157591871488 (1.05 TB)
DFS Used: 815106605056 (759.13 GB)
Non DFS Used: 116485136384 (108.49 GB)
DFS Remaining: 226000130048 (210.48 GB)
DFS Used%: 70.41%
DFS Remaining%: 19.52%
Last contact: Mon Aug 11 12:05:21 IST 2014
08-10-2014
10:24 PM
Hi, when I start flume-ng I get the error below:

at java.lang.Thread.run(Thread.java:662)
2014-08-04 17:54:50,417 WARN hdfs.HDFSEventSink: HDFS IO error
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /logs/prod/jboss/2014/08/04/web07.prod.hs18.lan.1407154543459.tmp could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1637)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:757)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

I made the changes below per the suggestions in this Google user group thread: https://groups.google.com/a/cloudera.org/forum/#!topic/cdh-user/4saUW5MW53M

Step 1: Tried setting Flume-NG's HDFS sink parameter maxOpenFiles to 10.
Step 2: The thread also offers this suggestion: "Reducing the Block size from 64M to 5M fixed this problem."

Could you please suggest how I can fix this problem? My current block size is 134217728 (128 MB); will it really help to reduce the block size to 5 MB? Thank you.
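For reference, a hedged sketch of where the Step 1 tuning lives in a Flume agent's configuration (the agent and sink names 'agent1'/'sink1' and the file name flume.conf are hypothetical; rolling smaller files via hdfs.rollSize is usually a safer lever than shrinking the HDFS block size itself):

```
# Append HDFS sink tuning to the agent's properties file.
cat >> flume.conf <<'EOF'
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = hdfs://namenode:9000/logs/prod/jboss/%Y/%m/%d
# Step 1 from above: cap the number of files the sink holds open at once.
agent1.sinks.sink1.hdfs.maxOpenFiles = 10
# Roll files at 64 MB instead of lowering the HDFS block size (illustrative).
agent1.sinks.sink1.hdfs.rollSize = 67108864
agent1.sinks.sink1.hdfs.rollCount = 0
EOF
```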
Labels:
- HDFS