05-08-2015
11:49 AM
I have already cleared the /hbase/* HDFS folder, but I cannot clear the HBase ZooKeeper znodes: when I launch the ZooKeeper client it hangs. The output is the same as before:

[root@master ~]# hadoop fs -ls -R /hbase/data/
drwxr-xr-x - hbase hbase 0 2015-05-08 09:30 /hbase/data/hbase
drwxr-xr-x - hbase hbase 0 2015-05-08 09:30 /hbase/data/hbase/meta
drwxr-xr-x - hbase hbase 0 2015-05-08 09:30 /hbase/data/hbase/meta/.tabledesc
-rw-r--r-- 2 hbase hbase 372 2015-05-08 09:30 /hbase/data/hbase/meta/.tabledesc/.tableinfo.0000000001
drwxr-xr-x - hbase hbase 0 2015-05-08 09:30 /hbase/data/hbase/meta/.tmp
drwxr-xr-x - hbase hbase 0 2015-05-08 09:30 /hbase/data/hbase/meta/1588230740
-rw-r--r-- 2 hbase hbase 32 2015-05-08 09:30 /hbase/data/hbase/meta/1588230740/.regioninfo
drwxr-xr-x - hbase hbase 0 2015-05-08 09:30 /hbase/data/hbase/meta/1588230740/info
drwxr-xr-x - hbase hbase 0 2015-05-08 09:30 /hbase/data/hbase/meta/1588230740/recovered.edits
-rw-r--r-- 2 hbase hbase 0 2015-05-08 09:30 /hbase/data/hbase/meta/1588230740/recovered.edits/3.seqid
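If the ZooKeeper CLI hangs on startup, pointing it at one quorum member explicitly sometimes helps isolate the problem. A minimal sketch of clearing the stale /hbase znodes, assuming an unsecured quorum reachable at node1:2181 (host name taken from the log above) and that HBase is stopped first:

```shell
# Connect to a specific quorum member instead of the default;
# a hang often means the client is trying an unreachable host.
zookeeper-client -server node1:2181 <<'EOF'
ls /hbase
rmr /hbase
EOF
```

After the znodes are gone, restarting the HBase service lets the master rebuild /hbase from scratch. (On newer ZooKeeper releases `rmr` is replaced by `deleteall`.)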
05-08-2015
11:23 AM
I've got many corrupt HDFS blocks, and I have re-deployed the HBase service. I cleaned the HBase environment before I re-installed it. Here is the outcome:

[root@master ~]# hadoop fs -ls -R /hbase/data/hbase/
drwxr-xr-x - hbase hbase 0 2015-05-08 09:30 /hbase/data/hbase/meta
drwxr-xr-x - hbase hbase 0 2015-05-08 09:30 /hbase/data/hbase/meta/.tabledesc
-rw-r--r-- 2 hbase hbase 372 2015-05-08 09:30 /hbase/data/hbase/meta/.tabledesc/.tableinfo.0000000001
drwxr-xr-x - hbase hbase 0 2015-05-08 09:30 /hbase/data/hbase/meta/.tmp
drwxr-xr-x - hbase hbase 0 2015-05-08 09:30 /hbase/data/hbase/meta/1588230740
-rw-r--r-- 2 hbase hbase 32 2015-05-08 09:30 /hbase/data/hbase/meta/1588230740/.regioninfo
drwxr-xr-x - hbase hbase 0 2015-05-08 09:30 /hbase/data/hbase/meta/1588230740/info
drwxr-xr-x - hbase hbase 0 2015-05-08 09:30 /hbase/data/hbase/meta/1588230740/recovered.edits
-rw-r--r-- 2 hbase hbase 0 2015-05-08 09:30 /hbase/data/hbase/meta/1588230740/recovered.edits/3.seqid
05-08-2015
06:49 AM
The HBase Master cannot start up. Here is the error trace:

<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>
2015-05-08 09:31:02,675 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/opt/cloudera/parcels/CDH-5.4.0-1.cdh5.4.0.p0.27/lib/hadoop/lib/native
2015-05-08 09:31:02,675 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2015-05-08 09:31:02,675 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2015-05-08 09:31:02,675 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
2015-05-08 09:31:02,675 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
2015-05-08 09:31:02,675 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=2.6.32-431.el6.x86_64
2015-05-08 09:31:02,676 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=hbase
2015-05-08 09:31:02,676 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/var/lib/hbase
2015-05-08 09:31:02,676 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/var/run/cloudera-scm-agent/process/1831-hbase-MASTER
2015-05-08 09:31:02,676 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=node1:2181,master:2181,node2:2181 sessionTimeout=60000 watcher=master:600000x0, quorum=node1:2181,master:2181,node2:2181, baseZNode=/hbase
2015-05-08 09:31:02,687 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server node1/10.15.230.41:2181. Will not attempt to authenticate using SASL (unknown error)
2015-05-08 09:31:02,690 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to node1/10.15.230.41:2181, initiating session
2015-05-08 09:31:02,695 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server node1/10.15.230.41:2181, sessionid = 0x24d3137ed890570, negotiated timeout = 60000
2015-05-08 09:31:02,729 INFO org.apache.hadoop.hbase.ipc.RpcServer: RpcServer.responder: starting
2015-05-08 09:31:02,729 INFO org.apache.hadoop.hbase.ipc.RpcServer: RpcServer.listener,port=60000: starting
2015-05-08 09:31:02,776 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-05-08 09:31:02,780 INFO org.apache.hadoop.hbase.http.HttpRequestLog: Http request log for http.requests.master is not defined
2015-05-08 09:31:02,788 INFO org.apache.hadoop.hbase.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2015-05-08 09:31:02,790 INFO org.apache.hadoop.hbase.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2015-05-08 09:31:02,790 INFO org.apache.hadoop.hbase.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-05-08 09:31:02,791 INFO org.apache.hadoop.hbase.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-05-08 09:31:02,803 INFO org.apache.hadoop.hbase.http.HttpServer: Jetty bound to port 60010
2015-05-08 09:31:02,803 INFO org.mortbay.log: jetty-6.1.26.cloudera.4
2015-05-08 09:31:03,061 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:60010
2015-05-08 09:31:03,063 INFO org.apache.hadoop.hbase.master.HMaster: hbase.rootdir=hdfs://master:8020/hbase, hbase.cluster.distributed=true
2015-05-08 09:31:03,073 INFO org.apache.hadoop.hbase.master.HMaster: Adding backup master ZNode /hbase/backup-masters/master,60000,1431091861873
2015-05-08 09:31:03,142 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/master,60000,1431091861873 from backup master directory
2015-05-08 09:31:03,149 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Registered Active Master=master,60000,1431091861873
2015-05-08 09:31:03,152 INFO org.apache.hadoop.conf.Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-05-08 09:31:03,161 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x348d62d4 connecting to ZooKeeper ensemble=node1:2181,master:2181,node2:2181
2015-05-08 09:31:03,162 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=node1:2181,master:2181,node2:2181 sessionTimeout=60000 watcher=hconnection-0x348d62d40x0, quorum=node1.net:2181,master:2181,node2:2181, baseZNode=/hbase
2015-05-08 09:31:03,162 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server node2/10.15.230.42:2181. Will not attempt to authenticate using SASL (unknown error)
2015-05-08 09:31:03,163 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to node2/10.15.230.42:2181, initiating session
2015-05-08 09:31:03,164 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server node2/10.15.230.42:2181, sessionid = 0x34d3137ea45057a, negotiated timeout = 60000
2015-05-08 09:31:03,184 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: ClusterId : d72a7eb0-8dba-485b-a8bc-2fbf5a182ed7
2015-05-08 09:31:03,365 INFO org.apache.hadoop.hbase.fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2015-05-08 09:31:03,370 INFO org.apache.hadoop.hbase.coordination.SplitLogManagerCoordination: Found 0 orphan tasks and 0 rescan nodes
2015-05-08 09:31:03,383 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7ae2a774 connecting to ZooKeeper ensemble=node1:2181,master:2181,node2:2181
2015-05-08 09:31:03,383 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=node1:2181,master:2181,node2:2181 sessionTimeout=60000 watcher=hconnection-0x7ae2a7740x0, quorum=node1:2181,master:2181,node2:2181, baseZNode=/hbase
2015-05-08 09:31:03,385 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server node1/10.15.230.41:2181. Will not attempt to authenticate using SASL (unknown error)
2015-05-08 09:31:03,385 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to node1/10.15.230.41:2181, initiating session
2015-05-08 09:31:03,386 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server node1/10.15.230.41:2181, sessionid = 0x24d3137ed890571, negotiated timeout = 60000
2015-05-08 09:31:03,398 INFO org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer: loading config
2015-05-08 09:31:03,439 INFO org.apache.hadoop.hbase.master.HMaster: Server active/primary master=master,60000,1431091861873, sessionid=0x24d3137ed890570, setting cluster-up flag (Was=true)
2015-05-08 09:31:03,452 INFO org.apache.hadoop.hbase.procedure.ZKProcedureUtil: Clearing all procedure znodes: /hbase/online-snapshot/acquired /hbase/online-snapshot/reached /hbase/online-snapshot/abort
2015-05-08 09:31:03,458 INFO org.apache.hadoop.hbase.procedure.ZKProcedureUtil: Clearing all procedure znodes: /hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached /hbase/flush-table-proc/abort
2015-05-08 09:31:03,485 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Process identifier=replicationLogCleaner connecting to ZooKeeper ensemble=node1:2181,master:2181,node2:2181
2015-05-08 09:31:03,485 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=node1:2181,master:2181,node2:2181 sessionTimeout=60000 watcher=replicationLogCleaner0x0, quorum=node1:2181,master:2181,node2:2181, baseZNode=/hbase
2015-05-08 09:31:03,486 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server node1/10.15.230.41:2181. Will not attempt to authenticate using SASL (unknown error)
2015-05-08 09:31:03,486 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to node1/10.15.230.41:2181, initiating session
2015-05-08 09:31:03,487 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server node1/10.15.230.41:2181, sessionid = 0x24d3137ed890572, negotiated timeout = 60000
2015-05-08 09:31:03,495 INFO org.apache.hadoop.hbase.master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 0 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval of 1500 ms.
2015-05-08 09:31:04,998 INFO org.apache.hadoop.hbase.master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 1503 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval of 1500 ms.
2015-05-08 09:31:06,300 INFO org.apache.hadoop.hbase.master.ServerManager: Registering server=node2,60020,1431091828858
2015-05-08 09:31:06,300 INFO org.apache.hadoop.hbase.master.ServerManager: Registering server=node3,60020,1431091829281
2015-05-08 09:31:06,300 INFO org.apache.hadoop.hbase.master.ServerManager: Registering server=node5,60020,1431091828696
2015-05-08 09:31:06,301 INFO org.apache.hadoop.hbase.master.ServerManager: Registering server=node4,60020,1431091828684
2015-05-08 09:31:06,301 INFO org.apache.hadoop.hbase.master.ServerManager: Registering server=node1,60020,1431091828790
2015-05-08 09:31:06,301 INFO org.apache.hadoop.hbase.master.ServerManager: Waiting for region servers count to settle; currently checked in 4, slept for 2806 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval of 1500 ms.
2015-05-08 09:31:06,352 INFO org.apache.hadoop.hbase.master.ServerManager: Waiting for region servers count to settle; currently checked in 5, slept for 2857 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval of 1500 ms.
2015-05-08 09:31:07,855 INFO org.apache.hadoop.hbase.master.ServerManager: Waiting for region servers count to settle; currently checked in 5, slept for 4360 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval of 1500 ms.
2015-05-08 09:31:08,006 INFO org.apache.hadoop.hbase.master.ServerManager: Finished waiting for region servers count to settle; checked in 5, slept for 4511 ms, expecting minimum of 1, maximum of 2147483647, master is running
2015-05-08 09:31:08,012 INFO org.apache.hadoop.hbase.master.MasterFileSystem: Log folder hdfs://master:8020/hbase/WALs/node1,60020,1431091828790 belongs to an existing region server
2015-05-08 09:31:08,013 INFO org.apache.hadoop.hbase.master.MasterFileSystem: Log folder hdfs://master:8020/hbase/WALs/node2,60020,1431091828858 belongs to an existing region server
2015-05-08 09:31:08,015 INFO org.apache.hadoop.hbase.master.MasterFileSystem: Log folder hdfs://master:8020/hbase/WALs/node3,60020,1431091829281 belongs to an existing region server
2015-05-08 09:31:08,016 INFO org.apache.hadoop.hbase.master.MasterFileSystem: Log folder hdfs://master:8020/hbase/WALs/node4,60020,1431091828684 belongs to an existing region server
2015-05-08 09:31:08,017 INFO org.apache.hadoop.hbase.master.MasterFileSystem: Log folder hdfs://master:8020/hbase/WALs/node5,60020,1431091828696 belongs to an existing region server
2015-05-08 09:31:08,081 INFO org.apache.hadoop.hbase.master.RegionStates: Transition {1588230740 state=OFFLINE, ts=1431091868026, server=null} to {1588230740 state=OPEN, ts=1431091868081, server=node3,60020,1431091829281}
2015-05-08 09:31:08,083 INFO org.apache.hadoop.hbase.master.ServerManager: AssignmentManager hasn't finished failover cleanup; waiting
2015-05-08 09:31:08,084 INFO org.apache.hadoop.hbase.master.HMaster: hbase:meta assigned=0, rit=false, location=node3,60020,1431091829281
2015-05-08 09:31:08,128 INFO org.apache.hadoop.hbase.MetaMigrationConvertingToPB: hbase:meta doesn't have any entries to update.
2015-05-08 09:31:08,128 INFO org.apache.hadoop.hbase.MetaMigrationConvertingToPB: META already up-to date with PB serialization
2015-05-08 09:31:08,145 INFO org.apache.hadoop.hbase.master.AssignmentManager: Clean cluster startup. Assigning user regions
2015-05-08 09:31:08,150 INFO org.apache.hadoop.hbase.master.AssignmentManager: Joined the cluster in 22ms, failover=false
2015-05-08 09:31:08,161 INFO org.apache.hadoop.hbase.master.TableNamespaceManager: Namespace table not found. Creating...
2015-05-08 09:31:08,189 FATAL org.apache.hadoop.hbase.master.HMaster: Failed to become active master
org.apache.hadoop.hbase.TableExistsException: hbase:namespace
    at org.apache.hadoop.hbase.master.handler.CreateTableHandler.checkAndSetEnablingTable(CreateTableHandler.java:152)
    at org.apache.hadoop.hbase.master.handler.CreateTableHandler.prepare(CreateTableHandler.java:125)
    at org.apache.hadoop.hbase.master.TableNamespaceManager.createNamespaceTable(TableNamespaceManager.java:233)
    at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:86)
    at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:897)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:739)
    at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:169)
    at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1469)
    at java.lang.Thread.run(Thread.java:745)
2015-05-08 09:31:08,195 FATAL org.apache.hadoop.hbase.master.HMaster: Master server abort: loaded coprocessors are: []
2015-05-08 09:31:08,195 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>
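A TableExistsException: hbase:namespace on a freshly wiped HDFS root usually means ZooKeeper still holds table state left over from the previous install, so the master thinks the namespace table already exists. A hedged sketch of clearing just that stale znode (host name assumed from the log above; stop HBase before running this):

```shell
# With HBase stopped, inspect and remove the stale table znode
# left behind by the previous HBase deployment.
zookeeper-client -server node1:2181 <<'EOF'
ls /hbase/table
rmr /hbase/table/hbase:namespace
EOF
# Restart the HBase service; the master should recreate hbase:namespace cleanly.
```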
Labels: Apache HBase
05-07-2015
08:09 AM
I have run the fsck command on my HDFS and I am seeing a high number of under-replicated blocks (over 30%)! My HDFS replication factor is set to 2. What are the best practices / recommended methods to fix this issue?

1) Should I use "hadoop fs -setrep" to change the replication factor of certain files?
2) What is the manual way to force the affected blocks to replicate themselves?
3) Should I permanently remove certain types of files? For instance, in the fsck report I am seeing a lot of files of this type:
/user/hue/.Trash/150507010000/user/hue/.cloudera_manager_hive_metastore_canary/hive0_hms/cm_test_table1430446320640/p1=p1/p2=421 <dir>
/user/hue/.Trash/150507010000/user/hue/.cloudera_manager_hive_metastore_canary/hive0_hms/cm_test_table1430446620772 <dir>
/user/hue/.Trash/150507010000/user/hue/.cloudera_manager_hive_metastore_canary/hive0_hms/cm_test_table1430446620772/p1=p0 <dir>
4) How about the /tmp/logs/ files? Do I reset their setrep setting or periodically remove them?
5) I am also seeing quite a few Accumulo tables reporting under-replicated blocks!
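For the setrep route in question 1, a minimal sketch of re-applying the replication factor so the NameNode schedules the missing copies; this assumes the target factor is 2 and that the trash/canary files can simply be purged rather than re-replicated:

```shell
# Identify under-replicated files first (fsck reports them per path)
hdfs fsck / -files -blocks | grep -i 'Under replicated' | head

# Re-set replication to 2 recursively; -w waits until replication completes,
# which can take a long time on a large tree, so a narrower path may be wiser.
hadoop fs -setrep -R -w 2 /user/hive

# .Trash contents (e.g. the Hive metastore canary leftovers) can be expunged
# instead of repaired.
hadoop fs -expunge
```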
Labels: Apache Accumulo, HDFS
05-06-2015
10:51 AM
Thank you! So, the only way to overcome the issue is to execute the 'alter table...' command? I would have to do that for ALL existing tables! Is there any other way?
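If the per-table ALTER really is the route, it can at least be scripted rather than typed by hand. A hedged sketch, assuming the default database, default warehouse paths, and a placeholder NameNode URI that you would substitute for your own:

```shell
# Hypothetical correct NameNode URI -- replace with your cluster's value.
NAMENODE="hdfs://your-namenode-host:8020"

# Loop over all tables in the default database and point each managed
# table's location at the single-port URI. Tables outside the default
# warehouse path would need their own locations.
for t in $(hive -S -e 'use default; show tables;'); do
  hive -e "ALTER TABLE default.${t} SET LOCATION '${NAMENODE}/user/hive/warehouse/${t}';"
done
```

Note that ALTER TABLE ... SET LOCATION only rewrites metadata for the table itself; partitioned tables would also need ALTER TABLE ... PARTITION ... SET LOCATION per partition.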
05-06-2015
10:40 AM
All the existing Hive tables are showing the 8020 port twice in their HDFS location! What might have caused this during the CDH 5.4 upgrade? Thanks for your assistance!
05-06-2015
10:35 AM
I have just created a brand-new table, and its HDFS location has the port 8020 ONE time. How can I revert the existing tables back to reporting port 8020 once instead of twice?

<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>
hive> select * from users;
OK
100  User1  passwd1
200  User2  passwd2
300  User3  passwd3
400  User4  passwd4
500  User5  passwd5
600  User6  passwd6
Time taken: 0.073 seconds, Fetched: 6 row(s)
hive> describe formatted users;
OK
# col_name    data_type    comment
user_id       int
username      string
passwd        string

# Detailed Table Information
Database:        default
Owner:           dast
CreateTime:      Wed May 06 13:32:05 EDT 2015
LastAccessTime:  UNKNOWN
Protect Mode:    None
Retention:       0
Location:        hdfs://<host-name>:8020/user/hive/warehouse/users
Table Type:      MANAGED_TABLE
Table Parameters:
    COLUMN_STATS_ACCURATE  true
    comment                This is the users table
    numFiles               1
    totalSize              121
    transient_lastDdlTime  1430933556

# Storage Information
SerDe Library:   org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
InputFormat:     org.apache.hadoop.mapred.TextInputFormat
OutputFormat:    org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
Compressed:      No
Num Buckets:     -1
Bucket Columns:  []
Sort Columns:    []
Storage Desc Params:
    field.delim            ,
    serialization.format   ,
Time taken: 0.067 seconds, Fetched: 33 row(s)
<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>
05-06-2015
10:12 AM
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://<FQN-host-name>:8020</value>
  </property>
  .................................

hive> describe formatted employees;
OK
# col_name   data_type   comment
emp_id       int
name         string
salary       double

# Detailed Table Information
Database:        default
Owner:           dast
CreateTime:      Thu Apr 09 14:57:46 EDT 2015
LastAccessTime:  UNKNOWN
Protect Mode:    None
Retention:       0
Location:        hdfs://<host-name>:8020:8020/user/hive/warehouse/employees
Table Type:      MANAGED_TABLE
Table Parameters:
    COLUMN_STATS_ACCURATE  true
    comment                This is the employees table
    numFiles               1
    numRows                0
    rawDataSize            0
    totalSize              142
    transient_lastDdlTime  1428607184

# Storage Information
SerDe Library:   org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
InputFormat:     org.apache.hadoop.mapred.TextInputFormat
OutputFormat:    org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
Compressed:      No
Num Buckets:     -1
Bucket Columns:  []
Sort Columns:    []
Storage Desc Params:
    field.delim            ,
    serialization.format   ,
Time taken: 0.405 seconds, Fetched: 35 row(s)
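For rewriting a bad URI prefix like the doubled :8020:8020 across every table in one pass, the Hive metatool can update location URIs directly in the metastore, which may be less tedious than per-table ALTERs. A hedged sketch (the exact URIs are placeholders; run a dry run first and take a metastore backup):

```shell
# Show the filesystem root URIs currently recorded in the metastore
hive --service metatool -listFSRoot

# Dry run: report which locations would change when replacing the
# doubled-port prefix with the correct one (placeholders shown).
hive --service metatool -updateLocation \
  hdfs://your-namenode-host:8020 \
  hdfs://your-namenode-host:8020:8020 \
  -dryRun

# Re-run the same command without -dryRun to apply the change.
```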
05-06-2015
07:49 AM
/user/hive/warehouse
05-06-2015
07:29 AM
Port 8020 is used by the NameNode. I am not sure why the HDFS path has it twice! Does Hive pick up the "fs.defaultFS" property from the HDFS service?
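To check which fs.defaultFS value the clients actually resolve from the deployed configuration, a quick sanity check:

```shell
# Print the effective fs.defaultFS as seen by the Hadoop client config;
# compare this against the prefix Hive records in table locations.
hdfs getconf -confKey fs.defaultFS
```

If this prints a URI that already includes :8020, then any code that appends the port again would produce the doubled :8020:8020 seen in the table locations.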