03-25-2015 05:36 AM
Dear all, I installed CDH5 with all services on 3 servers: one as the namenode and the other two as datanodes. Recently I wrote a MapReduce program that transfers data from HDFS to an HBase table. When I ran the program on the HBase master node, it worked quite well:
/parcels/CDH/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/.//hadoop-gridmix.jar:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/.//hadoop-streaming-2.5.0-cdh5.3.2.jar
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hadoop/lib/native
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-504.8.1.el6.x86_64
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Client environment:user.name=root
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/maintainer/maprtest/busdatahbase
15/03/25 14:06:02 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x8eae88f, quorum=localhost:2181, baseZNode=/hbase
15/03/25 14:06:02 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/25 14:06:02 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
15/03/25 14:06:02 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x14be4ad837908bd, negotiated timeout = 60000
15/03/25 14:06:03 INFO mapreduce.HFileOutputFormat2: Looking up current regions for table Busdatatest
15/03/25 14:06:03 INFO mapreduce.HFileOutputFormat2: Configuring 1 reduce partitions to match current region count
15/03/25 14:06:03 INFO mapreduce.HFileOutputFormat2: Writing partition information to /tmp/partitions_2f60547f-f4ef-437b-af98-56e12c3cc121
15/03/25 14:06:03 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
15/03/25 14:06:03 INFO compress.CodecPool: Got brand-new compressor [.deflate]
15/03/25 14:06:03 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
15/03/25 14:06:03 INFO mapreduce.HFileOutputFormat2: Incremental table Busdatatest output configured.
15/03/25 14:06:04 INFO client.RMProxy: Connecting to ResourceManager at trafficdata0.sis.uta.fi/153.1.62.179:8032
15/03/25 14:06:05 INFO input.FileInputFormat: Total input paths to process : 6
15/03/25 14:06:06 INFO mapreduce.JobSubmitter: number of splits:6
15/03/25 14:06:06 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1426162558224_0095
15/03/25 14:06:06 INFO impl.YarnClientImpl: Submitted application application_1426162558224_0095
15/03/25 14:06:06 INFO mapreduce.Job: The url to track the job: http://trafficdata0.sis.uta.fi:8088/proxy/application_1426162558224_0095/
15/03/25 14:06:06 INFO mapreduce.Job: Running job: job_1426162558224_0095
15/03/25 14:06:19 INFO mapreduce.Job: Job job_1426162558224_0095 running in uber mode : false
15/03/25 14:06:19 INFO mapreduce.Job: map 0% reduce 0%
15/03/25 14:06:30 INFO mapreduce.Job: map 5% reduce 0%
15/03/25 14:06:31 INFO mapreduce.Job: map 13% reduce 0%
15/03/25 14:06:32 INFO mapreduce.Job: map 21% reduce 0%
^C15/03/25 14:06:33 INFO mapreduce.Job: map 23% reduce 0%

And I can check the result; it's correct. However, when I tried to run the same program on a regionserver node, it cannot connect, with the following log:
15/03/25 10:16:54 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
15/03/25 10:16:54 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
15/03/25 10:16:54 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
15/03/25 10:16:54 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-504.el6.x86_64
15/03/25 10:16:54 INFO zookeeper.ZooKeeper: Client environment:user.name=hdfs
15/03/25 10:16:54 INFO zookeeper.ZooKeeper: Client environment:user.home=/var/lib/hadoop-hdfs
15/03/25 10:16:54 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/maintainer/myjars
15/03/25 10:16:54 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x474fc0cb, quorum=localhost:2181, baseZNode=/hbase
15/03/25 10:16:54 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/25 10:16:54 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
15/03/25 10:16:54 WARN zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=localhost:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
15/03/25 10:16:54 INFO util.RetryCounter: Sleeping 1000ms before retry #0...
15/03/25 10:16:55 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/25 10:16:55 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
15/03/25 10:16:55 WARN zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=localhost:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
15/03/25 10:16:55 INFO util.RetryCounter: Sleeping 2000ms before retry #1...
15/03/25 10:16:56 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/25 10:16:56 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
Actually, I checked the client configuration in '/etc/hbase/conf/hbase-site.xml', and it does set the properties to connect to the master node:
<property>
  <name>zookeeper.znode.rootserver</name>
  <value>root-region-server</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>trafficdata0.sis.uta.fi</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
But when running the program, it always connects to localhost, which seems to be a ZooKeeper problem. Do you know why this happens? Please help. Thanks. Br, Yibin
02-26-2015 01:43 PM
Hi all, I installed CDH5 on three servers in a cluster with one namenode and two datanodes, with all services installed. In the HBase shell, I tried to create a table with a command like this:

create 'test', 'cf'

After a long time it shows this error:

ERROR: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/153.1.62.179:41461 remote=trafficdata0.sis.uta.fi/153.1.62.179:60000]
Here is some help for this command:
Creates a table. Pass a table name, and a set of column family
specifications (at least one), and, optionally, table configuration.
Column specification can be a simple string (name), or a dictionary
(dictionaries are described below in main help output), necessarily
including NAME attribute.
Examples:
Create a table with namespace=ns1 and table qualifier=t1
hbase> create 'ns1:t1', {NAME => 'f1', VERSIONS => 5}
Create a table with namespace=default and table qualifier=t1
hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
hbase> # The above in shorthand would be the following:
hbase> create 't1', 'f1', 'f2', 'f3'
hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, BLOCKCACHE => true}
hbase> create 't1', {NAME => 'f1', CONFIGURATION => {'hbase.hstore.blockingStoreFiles' => '10'}}
Table configuration options can be put at the end.
Examples:
hbase> create 'ns1:t1', 'f1', SPLITS => ['10', '20', '30', '40']
hbase> create 't1', 'f1', SPLITS => ['10', '20', '30', '40']
hbase> create 't1', 'f1', SPLITS_FILE => 'splits.txt', OWNER => 'johndoe'
hbase> create 't1', {NAME => 'f1', VERSIONS => 5}, METADATA => { 'mykey' => 'myvalue' }
hbase> # Optionally pre-split the table into NUMREGIONS, using
hbase> # SPLITALGO ("HexStringSplit", "UniformSplit" or classname)
hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}
hbase> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit', CONFIGURATION => {'hbase.hregion.scan.loadColumnFamiliesOnDemand' => 'true'}}

And the main problem inside the HMaster log file is:

2015-02-26 22:42:08,959 INFO org.apache.hadoop.hbase.catalog.CatalogTracker: Failed verification of hbase:meta,,1 at address=trafficdata1.sis.uta.fi,60020,1424979581468, exception=org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is not online on trafficdata1.sis.uta.fi,60020,1424983335514
at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2761)
at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4256)
at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionInfo(HRegionServer.java:3623)
at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:20158)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
at java.lang.Thread.run(Thread.java:701)

And the main errors inside the RegionServer log file:

2015-02-26 22:42:35,051 ERROR org.apache.hadoop.jmx.JMXJsonServlet: getting attribute DiagnosticOptions of com.sun.management:type=HotSpotDiagnostic threw an exception
javax.management.RuntimeMBeanException: java.lang.NullPointerException
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:876)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:889)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:686)
at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:682)
at org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:346)
at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:324)
at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:217)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1122)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:767)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Caused by: java.lang.NullPointerException
at sun.management.Flag.getVMOption(Flag.java:67)
at sun.management.HotSpotDiagnostic.getDiagnosticOptions(HotSpotDiagnostic.java:58)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:74)
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:277)
at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:181)
at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:114)
at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:51)
at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:226)
at com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
at com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:207)
at javax.management.StandardMBean.getAttribute(StandardMBean.java:372)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:682)
... 29 more

I don't know how to fix this problem; please help. Thanks in advance. Br, Yibin
Labels: Apache Hadoop, Apache HBase, Security
01-07-2015 04:26 AM
Hi Jim, thank you so much. I will follow this to do my job. 🙂 Yibin
01-05-2015 06:46 AM
Hi, I need help. I have already installed Cloudera on two hosts: one works as namenode and datanode at the same time, and the other just as a datanode. Cloudera has all services installed, like HDFS, Hue, HBase, and so on. Now I have a new server, and I want to move the namenode to the new server. Is it OK for me to do that, and if yes, how can I do it? The attachment contains my Cloudera information and the two hosts' information; please check and help me. Thanks in advance. Br, Yibin
Labels: Apache HBase, Cloudera Hue, HDFS