Member since
05-04-2016
32
Posts
3
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1927 | 04-29-2019 07:40 PM |
07-31-2019
08:16 PM
@Wendell Bu, thanks for your article. At the end you mentioned that you would discuss the detailed configuration in other articles. Could you please share those articles?
04-29-2019
07:40 PM
Hi Team, I got the answer: we have to set SPARK_HOME and then run spark-shell.
export SPARK_HOME=/usr/hdp/2.6.0.23-2/spark2
export SPARK_MAJOR_VERSION=2
spark-shell
It then connects to the 2.6.0.23 build.
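For anyone landing here later, the fix above can be sketched as a shell session. The /usr/hdp paths come from the post itself; the `ls` check and the exact version directory are illustrative assumptions for your own install:

```shell
# List the Spark builds installed under the HDP stack directory.
ls -d /usr/hdp/*/spark*

# Point the shell at a specific build instead of the cluster default.
export SPARK_HOME=/usr/hdp/2.6.0.23-2/spark2
export SPARK_MAJOR_VERSION=2

# spark-shell now starts against the build SPARK_HOME points at.
spark-shell --version
```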
04-29-2019
07:18 PM
Hi Team, we upgraded our Spark to 2.3.0.2.6.5.0-292. When I export SPARK_MAJOR_VERSION=2 and type spark-shell, it connects to 2.3.0.2.6.5.0-292. I want to use the Spark 2.1.0.2.6.0.23-2 version instead; how can I do that? Please help us with this.
Labels:
- Apache Spark
05-07-2018
08:14 PM
Hi Team, one of my developers is executing a count query with the Tez and MR engines. With MR it returns results, but with Tez it fails with a fixed-length error. I suspect this is because of the file format: we are using our own SerDe library and a different InputFormat. Is the file format the cause? Please let me know if you need any other information.
SerDe Library:
com.XXXX.hadoop.hive.serde3.cobol.CobolSerDe
InputFormat:
org.apache.hadoop.mapred.FixedLengthInputFormat
OutputFormat:
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
With the Tez engine:
set hive.execution.engine=tez;
select count(*) from cfm001_stg;
TaskAttempt 1 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: java.io.IOException: Fixed record length 0 is invalid. It should be set to a value greater than zero
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:173)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:139)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:347)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:194)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:185)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:185)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:181)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: java.io.IOException: java.io.IOException: Fixed record length 0 is invalid. It should be set to a value greater than zero
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:196)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.<init>(TezGroupedSplitsInputFormat.java:135)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:101)
at org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:149)
at org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:80)
at org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:674)
at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:633)
at org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:145)
at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:109)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:405)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:124)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:149)
... 14 more
Caused by: java.io.IOException: java.io.IOException: Fixed record length 0 is invalid. It should be set to a value greater than zero
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:253)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:193)
... 25 more
Caused by: java.io.IOException: Fixed record length 0 is invalid. It should be set to a value greater than zero
at org.apache.hadoop.mapred.FixedLengthInputFormat.getRecordReader(FixedLengthInputFormat.java:84)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:251)
... 26 more
============================================================================================================
set hive.execution.engine=mr;
select count(*) from cfm001_stg;
INFO : Ended Job = job_1525591994238_0456
+------+--+
| _c0 |
+------+--+
| 20 |
+------+--+
1 row selected (26.254 seconds)
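A possible explanation (an assumption, not confirmed in this thread): org.apache.hadoop.mapred.FixedLengthInputFormat takes its record width from the configuration key fixedlengthinputformat.record.length, which defaults to 0, and 0 triggers exactly this "Fixed record length 0 is invalid" error. The custom SerDe may be setting the key on the MR job but not on the Tez one. A sketch of setting it explicitly in the session (the value 100 is a placeholder for the real fixed width of your COBOL records):

```shell
hive -e "
set hive.execution.engine=tez;
set fixedlengthinputformat.record.length=100;  -- placeholder width
select count(*) from cfm001_stg;
"
```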
Labels:
- Apache Hadoop
- Apache Hive
- Apache Tez
03-05-2018
05:35 PM
@rtrivedi Thanks for your reply; I figured it out. There was a partition issue. I dropped and re-added the partition, and now it is working fine. Thanks for the help.
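The drop-and-re-add fix can be sketched in HiveQL. The table name is from this thread, but the partition column `dt` and its value are hypothetical placeholders; substitute your actual partition spec:

```shell
hive -e "
-- Inspect the partition's storage metadata; a broken partition typically
-- shows missing SerDe information here.
DESCRIBE FORMATTED udasv_XXX.udas_dtv_XXXX_XXX PARTITION (dt='2018-03-01');

-- Drop and re-create the bad partition so it inherits the table's SerDe.
ALTER TABLE udasv_XXX.udas_dtv_XXXX_XXX DROP IF EXISTS PARTITION (dt='2018-03-01');
ALTER TABLE udasv_XXX.udas_dtv_XXXX_XXX ADD PARTITION (dt='2018-03-01');
"
```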
03-02-2018
07:33 PM
Team, when I execute the query below it fails. When I execute the same query in other environments (clusters) it displays results. I checked the table description using "describe extended table", and it is the same in the other environments too. Please help me with this.
hive> SELECT * FROM udasv_XXX.udas_dtv_XXXX_XXX limit 10;
FAILED: RuntimeException org.apache.hadoop.hive.ql.metadata.HiveException: Failed with exception Property serialization.lib cannot be nulljava.lang.IllegalStateException: Property serialization.lib cannot be null
at org.apache.hadoop.hive.ql.plan.PartitionDesc.getDeserializerClassName(PartitionDesc.java:130)
at org.apache.hadoop.hive.ql.exec.FetchOperator.needConversion(FetchOperator.java:637)
at org.apache.hadoop.hive.ql.exec.FetchOperator.setupOutputObjectInspector(FetchOperator.java:595)
at org.apache.hadoop.hive.ql.exec.FetchOperator.initialize(FetchOperator.java:181)
Labels:
- Apache Hive
11-03-2017
12:29 AM
In the configuration below, /usr/bin/python is running Python 2.6.
ps -ef | grep python
root 11947 1 0 Oct31 ? 00:00:00 /usr/bin/python /usr/lib/python2.6/site-packages/ambari_agent/AmbariAgent.py start
root 11955 11947 11 Oct31 ? 06:07:25 /usr/bin/python /usr/lib/python2.6/site-packages/ambari_agent/main.py start
How is /usr/bin/python invoking the /usr/lib/python2.6/site-packages/ambari_agent/AmbariAgent.py start script? Is this because of a symlink, or do we need to make changes in a configuration file? If a configuration change is needed, which file should we modify? @Abdelkrim Hadjidj
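The symlink question can be checked directly with standard Linux commands; the ambari-agent wrapper path below is an assumption and may differ by Ambari version:

```shell
# Show whether /usr/bin/python is a symlink and where it finally resolves.
ls -l /usr/bin/python
readlink -f /usr/bin/python

# The agent is started by a wrapper script that chooses the interpreter;
# look for a PYTHON variable in it (path is an assumption).
grep -n PYTHON /usr/sbin/ambari-agent
```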
02-16-2017
06:24 PM
dr-elephanterror.jpg dr-elephant.pdf
We usually run thousands of jobs daily, but I am unable to see the jobs on the Dr. Elephant UI. We restarted the YARN server on Jan 28 and also restarted Dr. Elephant, but since then I have been unable to see the jobs on the UI. I am getting the connection error shown below. I am attaching the Dr. Elephant log file and a screenshot of the Dr. Elephant UI.
02-14-2017 10:24:08 INFO com.linkedin.drelephant.analysis.AnalyticJobGeneratorHadoop2 : The list of RM IDs are rm1,rm2
02-14-2017 10:24:08 INFO com.linkedin.drelephant.analysis.AnalyticJobGeneratorHadoop2 : Checking RM URL: http://ylpd269.kmdc.att.com:8088/ws/v1/cluster/info
02-14-2017 10:24:08 INFO com.linkedin.drelephant.analysis.AnalyticJobGeneratorHadoop2 : ylpd269.kmdc.att.com:8088 is ACTIVE
02-14-2017 10:24:08 INFO com.linkedin.drelephant.ElephantRunner : Fetching analytic job list...
02-14-2017 10:24:08 INFO com.linkedin.drelephant.analysis.AnalyticJobGeneratorHadoop2 : Fetching recent finished application runs between last time: 1487085728997, and current time: 1487085788998
02-14-2017 10:24:08 INFO com.linkedin.drelephant.analysis.AnalyticJobGeneratorHadoop2 : The succeeded apps URL is http://ylpd269.kmdc.att.com:8088/ws/v1/cluster/apps?finalStatus=SUCCEEDED&finishedTimeBegin=1487085728997&finishedTimeEnd=1487085788998
02-14-2017 10:24:09 INFO com.linkedin.drelephant.ElephantRunner : Executor thread 2 analyzing MAPREDUCE application_1486843207585_79341
02-14-2017 10:24:09 INFO com.linkedin.drelephant.analysis.AnalyticJobGeneratorHadoop2 : The failed apps URL is http://ylpd269.kmdc.att.com:8088/ws/v1/cluster/apps?finalStatus=FAILED&finishedTimeBegin=1487085728997&finishedTimeEnd=1487085788998
02-14-2017 10:24:09 INFO com.linkedin.drelephant.ElephantRunner : Job queue size is 4432
02-14-2017 10:24:09 INFO com.linkedin.drelephant.ElephantRunner : Executor thread 3 analyzing MAPREDUCE application_1486843207585_79340
02-14-2017 10:24:11 INFO com.linkedin.drelephant.ElephantRunner : Executor thread 2 analyzing MAPREDUCE application_1486843207585_79343
02-14-2017 10:24:12 INFO com.linkedin.drelephant.ElephantRunner : Executor thread 2 analyzing MAPREDUCE application_1486843207585_79344
02-14-2017 10:24:13 INFO com.linkedin.drelephant.ElephantRunner : Executor thread 2 analyzing MAPREDUCE application_1486843207585_79384
02-14-2017 10:24:14 INFO com.linkedin.drelephant.ElephantRunner : Executor thread 2 analyzing SPARK application_1486843207585_79387
02-14-2017 10:24:14 ERROR com.linkedin.drelephant.ElephantRunner :
02-14-2017 10:24:14 ERROR com.linkedin.drelephant.ElephantRunner : java.security.PrivilegedActionException: java.net.ConnectException: Connection refused
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:356)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1689)
at com.linkedin.drelephant.security.HadoopSecurity.doAs(HadoopSecurity.java:99)
at org.apache.spark.deploy.history.SparkFSFetcher.fetchData(SparkFSFetcher.scala:99)
at org.apache.spark.deploy.history.SparkFSFetcher.fetchData(SparkFSFetcher.scala:48)
at com.linkedin.drelephant.analysis.AnalyticJob.getAnalysis(AnalyticJob.java:232)
at com.linkedin.drelephant.ElephantRunner$ExecutorThread.run(ElephantRunner.java:151)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:998)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:934)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:852)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:686)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:638)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:711)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:559)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:588)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:584)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:1436)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:312)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getAuthParameters(WebHdfsFileSystem.java:524)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toUrl(WebHdfsFileSystem.java:545)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractFsPathRunner.getUrl(WebHdfsFileSystem.java:801)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:709)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:559)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:588)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:584)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:948)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:963)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
at org.apache.spark.deploy.history.SparkFSFetcher.org$apache$spark$deploy$history$SparkFSFetcher$$isLegacyLogDirectory(SparkFSFetcher.scala:186)
at org.apache.spark.deploy.history.SparkFSFetcher$$anon$1.run(SparkFSFetcher.scala:143)
at org.apache.spark.deploy.history.SparkFSFetcher$$anon$1.run(SparkFSFetcher.scala:99)
... 13 more
02-14-2017 10:24:14 ERROR com.linkedin.drelephant.ElephantRunner : Add analytic job id [application_1486843207585_79387] into the retry list.
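Since the stack trace bottoms out in WebHdfsFileSystem with "Connection refused", a first check is whether the Dr. Elephant host can reach the NameNode's WebHDFS endpoint at all. The host is a placeholder and 50070 is only the default non-HA WebHDFS port; both are assumptions to adjust for your cluster:

```shell
# A healthy WebHDFS endpoint answers with a JSON FileStatus document.
curl -s "http://<namenode-host>:50070/webhdfs/v1/?op=GETFILESTATUS"
```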
Labels:
- Apache Hadoop
- Apache YARN
02-15-2017
09:30 PM
@Ajay Thanks for your reply. Actually, I am installing OpenTSDB on one of our clusters. This cluster is secured, and its HBase configuration is secure; please find the property that is present in hbase-site.xml. I saw an article on a Google forum where they said this is a bug and came up with a solution: https://groups.google.com/forum/#!searchin/opentsdb/java.lang.IndexOutOfBoundsException|sort:relevance/opentsdb/sDWQmsNPakM/JJmm9YaQBgAJ
[sg865w@blpd214 ~]$ cd /opt/app/opentsdb/opentsdb-2.3.0/build
[sg865w@blpd214 build]$ sudo -su zkeeper
[zkeeper@blpd214 build]$ ./tsdb tsd --config=/opt/app/opentsdb/opentsdb-2.3.0/src/opentsdb.conf --zkbasedir=/hbase-secure
2017-02-15 15:15:04,189 INFO [main] TSDMain: Starting
2017-02-15 15:15:04,192 INFO [main] TSDMain: net.opentsdb.tools 2.3.0 built at revision cac608a (MINT)
2017-02-15 15:15:04,192 INFO [main] TSDMain: Built on 2016/12/29 13:57:15 +0000 by root@centos.localhost:/home/hobbes/opentsdb_OFFICIAL/build
2017-02-15 15:15:04,196 INFO [main] Config: Successfully loaded configuration file: /opt/app/opentsdb/opentsdb-2.3.0/src/opentsdb.conf
2017-02-15 15:15:04,250 INFO [main] Config: Successfully loaded configuration file: /opt/app/opentsdb/opentsdb-2.3.0/src/opentsdb.conf
2017-02-15 15:15:04,344 WARN [main] PluginLoader: Unable to locate any plugins of the type: net.opentsdb.query.filter.TagVFilter
2017-02-15 15:15:04,360 INFO [main] ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2017-02-15 15:15:04,360 INFO [main] ZooKeeper: Client environment:host.name=blpd214.bhdc.att.com
2017-02-15 15:15:04,360 INFO [main] ZooKeeper: Client environment:java.version=1.7.0_111
2017-02-15 15:15:04,360 INFO [main] ZooKeeper: Client environment:java.vendor=Oracle Corporation
2017-02-15 15:15:04,360 INFO [main] ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.111.x86_64/jre
2017-02-15 15:15:04,360 INFO [main] ZooKeeper: Client environment:java.class.path=/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/jexl/commons-logging-1.1.1.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/guava/guava-18.0.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/slf4j/log4j-over-slf4j-1.7.7.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/logback/logback-classic-1.0.13.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/logback/logback-core-1.0.13.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/jackson/jackson-annotations-2.4.3.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/jackson/jackson-core-2.4.3.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/jackson/jackson-databind-2.4.3.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/javacc/javacc-6.1.2.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/jexl/commons-jexl-2.1.1.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/jgrapht/jgrapht-core-0.9.1.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/netty/netty-3.9.4.Final.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/slf4j/slf4j-api-1.7.7.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/suasync/async-1.4.0.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/apache/commons-math3-3.4.1.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/hbase/asynchbase-1.7.2.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/protobuf/protobuf-java-2.5.0.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/zookeeper/zookeeper-3.4.6.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/tsdb-2.3.0.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/../src
2017-02-15 15:15:04,360 INFO [main] ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-02-15 15:15:04,361 INFO [main] ZooKeeper: Client environment:java.io.tmpdir=/tmp
2017-02-15 15:15:04,361 INFO [main] ZooKeeper: Client environment:java.compiler=<NA>
2017-02-15 15:15:04,361 INFO [main] ZooKeeper: Client environment:os.name=Linux
2017-02-15 15:15:04,361 INFO [main] ZooKeeper: Client environment:os.arch=amd64
2017-02-15 15:15:04,361 INFO [main] ZooKeeper: Client environment:os.version=2.6.32-642.6.2.el6.x86_64
2017-02-15 15:15:04,361 INFO [main] ZooKeeper: Client environment:user.name=zkeeper
2017-02-15 15:15:04,361 INFO [main] ZooKeeper: Client environment:user.home=/home/zkeeper
2017-02-15 15:15:04,361 INFO [main] ZooKeeper: Client environment:user.dir=/opt/app/opentsdb/opentsdb-2.3.0/build
2017-02-15 15:15:04,361 INFO [main] ZooKeeper: Initiating client connection, connectString=blpd214.bhdc.att.com:2181 sessionTimeout=5000 watcher=org.hbase.async.HBaseClient$ZKClient@53cd7dc7
2017-02-15 15:15:04,372 INFO [main] HBaseClient: Need to find the -ROOT- region
2017-02-15 15:15:04,394 INFO [main-SendThread(blpd214.bhdc.att.com:2181)] Login: successfully logged in.
2017-02-15 15:15:04,395 INFO [Thread-1] Login: TGT refresh thread started.
2017-02-15 15:15:04,399 INFO [main-SendThread(blpd214.bhdc.att.com:2181)] ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2017-02-15 15:15:04,405 INFO [Thread-1] Login: TGT valid starting at: Wed Feb 15 13:24:23 CST 2017
2017-02-15 15:15:04,405 INFO [Thread-1] Login: TGT expires: Thu Feb 16 13:24:23 CST 2017
2017-02-15 15:15:04,406 INFO [Thread-1] Login: TGT refresh sleeping until: Thu Feb 16 08:55:02 CST 2017
2017-02-15 15:15:04,635 ERROR [AsyncHBase I/O Worker #1] RegionClient: Unexpected exception from downstream on [id: 0xa673b30d, /130.5.106.4:52229 => /130.5.106.11:16020]
java.lang.IndexOutOfBoundsException: Not enough readable bytes - Need 132, maximum is 120
at org.jboss.netty.buffer.AbstractChannelBuffer.checkReadableBytes(AbstractChannelBuffer.java:668) ~[netty-3.9.4.Final.jar:na]
at org.jboss.netty.buffer.AbstractChannelBuffer.readBytes(AbstractChannelBuffer.java:338) ~[netty-
I can't submit the entire error; the rest of it is shown in the comment above. Please help me with this. Thanks & Regards, Shyam Gurram
02-15-2017
08:30 PM
I checked that in hbase-site.xml we have the property:
<property>
<name>hbase.rpc.protection</name>
<value>privacy</value>
</property>
Now I am getting a new error:
2017-02-15 14:05:20,900 ERROR [AsyncHBase I/O Worker #1] RegionClient: Unexpected exception from downstream on [id: 0xa34c6fdf, /130.5.106.4:45943 => /130.5.106.11:16020]
java.lang.IndexOutOfBoundsException: Not enough readable bytes - Need 132, maximum is 120
at org.jboss.netty.buffer.AbstractChannelBuffer.checkReadableBytes(AbstractChannelBuffer.java:668) ~[netty-3.9.4.Final.jar:na]
at org.jboss.netty.buffer.AbstractChannelBuffer.readBytes(AbstractChannelBuffer.java:338) ~[netty-3.9.4.Final.jar:na]
at org.jboss.netty.buffer.AbstractChannelBuffer.readBytes(AbstractChannelBuffer.java:344) ~[netty-3.9.4.Final.jar:na]
at org.hbase.async.SecureRpcHelper.wrap(SecureRpcHelper.java:235) ~[asynchbase-1.7.2.jar:na]
at org.hbase.async.RegionClient.encode(RegionClient.java:1385) ~[asynchbase-1.7.2.jar:na]
at org.hbase.async.RegionClient.sendRpc(RegionClient.java:998) ~[asynchbase-1.7.2.jar:na]
at org.hbase.async.RegionClient.sendQueuedRpcs(RegionClient.java:1141) ~[asynchbase-1.7.2.jar:na]
at org.hbase.async.RegionClient.becomeReady(RegionClient.java:664) ~[asynchbase-1.7.2.jar:na]
at org.hbase.async.SecureRpcHelper96.sendRPCHeader(SecureRpcHelper96.java:190) ~[asynchbase-1.7.2.jar:na]
at org.hbase.async.SecureRpcHelper96.handleResponse(SecureRpcHelper96.java:148) ~[asynchbase-1.7.2.jar:na]
at org.hbase.async.RegionClient.decode(RegionClient.java:1416) ~[asynchbase-1.7.2.jar:na]
at org.hbase.async.RegionClient.decode(RegionClient.java:88) ~[asynchbase-1.7.2.jar:na]
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500) ~[netty-3.9.4.Final.jar:na]
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435) ~[netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [netty-3.9.4.Final.jar:na]
at org.hbase.async.RegionClient.handleUpstream(RegionClient.java:1223) ~[asynchbase-1.7.2.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler.handleUpstream(IdleStateAwareChannelHandler.java:36) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.handler.timeout.IdleStateHandler.messageReceived(IdleStateHandler.java:294) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) [netty-3.9.4.Final.jar:na]
at org.hbase.async.HBaseClient$RegionClientPipeline.sendUpstream(HBaseClient.java:3121) [asynchbase-1.7.2.jar:na]
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [netty-3.9.4.Final.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_111]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_111]
Please help me with this. Thanks & Regards, Shyam Gurram
Labels:
- Apache HBase
02-15-2017
08:07 PM
@Josh Elser Thanks for your reply. Yes, I checked; in hbase-site.xml we have the property:
<property>
<name>hbase.rpc.protection</name>
<value>privacy</value>
</property>
Now I am getting a new error:
2017-02-15 14:05:20,900 ERROR [AsyncHBase I/O Worker #1] RegionClient: Unexpected exception from downstream on [id: 0xa34c6fdf, /130.5.106.4:45943 => /130.5.106.11:16020]
java.lang.IndexOutOfBoundsException: Not enough readable bytes - Need 132, maximum is 120
at org.jboss.netty.buffer.AbstractChannelBuffer.checkReadableBytes(AbstractChannelBuffer.java:668) ~[netty-3.9.4.Final.jar:na]
at org.jboss.netty.buffer.AbstractChannelBuffer.readBytes(AbstractChannelBuffer.java:338) ~[netty-3.9.4.Final.jar:na]
at org.jboss.netty.buffer.AbstractChannelBuffer.readBytes(AbstractChannelBuffer.java:344) ~[netty-3.9.4.Final.jar:na]
at org.hbase.async.SecureRpcHelper.wrap(SecureRpcHelper.java:235) ~[asynchbase-1.7.2.jar:na]
at org.hbase.async.RegionClient.encode(RegionClient.java:1385) ~[asynchbase-1.7.2.jar:na]
at org.hbase.async.RegionClient.sendRpc(RegionClient.java:998) ~[asynchbase-1.7.2.jar:na]
at org.hbase.async.RegionClient.sendQueuedRpcs(RegionClient.java:1141) ~[asynchbase-1.7.2.jar:na]
at org.hbase.async.RegionClient.becomeReady(RegionClient.java:664) ~[asynchbase-1.7.2.jar:na]
at org.hbase.async.SecureRpcHelper96.sendRPCHeader(SecureRpcHelper96.java:190) ~[asynchbase-1.7.2.jar:na]
at org.hbase.async.SecureRpcHelper96.handleResponse(SecureRpcHelper96.java:148) ~[asynchbase-1.7.2.jar:na]
at org.hbase.async.RegionClient.decode(RegionClient.java:1416) ~[asynchbase-1.7.2.jar:na]
at org.hbase.async.RegionClient.decode(RegionClient.java:88) ~[asynchbase-1.7.2.jar:na]
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500) ~[netty-3.9.4.Final.jar:na]
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435) ~[netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [netty-3.9.4.Final.jar:na]
at org.hbase.async.RegionClient.handleUpstream(RegionClient.java:1223) ~[asynchbase-1.7.2.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler.handleUpstream(IdleStateAwareChannelHandler.java:36) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.handler.timeout.IdleStateHandler.messageReceived(IdleStateHandler.java:294) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) [netty-3.9.4.Final.jar:na]
at org.hbase.async.HBaseClient$RegionClientPipeline.sendUpstream(HBaseClient.java:3121) [asynchbase-1.7.2.jar:na]
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [netty-3.9.4.Final.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_111]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_111]
Please help me with this. Thanks & Regards, Shyam Gurram
02-15-2017
07:52 PM
Is this a bug in the tool? I have this configuration in core-site.xml:
<property>
<name>hadoop.rpc.protection</name>
<value>authentication</value>
</property>
SecureRpcHelper: Failed Sasl challenge
javax.security.sasl.SaslException: No common protection layer between client and server
at com.sun.security.sasl.gsskerb.GssKrb5Client.doFinalHandshake(GssKrb5Client.java:251) ~[na:1.7.0_111]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:186) ~[na:1.7.0_111]
at org.hbase.async.SecureRpcHelper$1PrivilegedAction.run(SecureRpcHelper.java:286) [asynchbase-1.7.2.jar:na]
at org.hbase.async.SecureRpcHelper$1PrivilegedAction.run(SecureRpcHelper.java:282) [asynchbase-1.7.2.jar:na]
at java.security.AccessController.doPrivileged(Native Method) [na:1.7.0_111]
at javax.security.auth.Subject.doAs(Subject.java:415) [na:1.7.0_111]
at org.hbase.async.SecureRpcHelper.processChallenge(SecureRpcHelper.java:298) [asynchbase-1.7.2.jar:na]
at org.hbase.async.SecureRpcHelper96.handleResponse(SecureRpcHelper96.java:130) [asynchbase-1.7.2.jar:na]
at org.hbase.async.RegionClient.decode(RegionClient.java:1416) [asynchbase-1.7.2.jar:na]
at org.hbase.async.RegionClient.decode(RegionClient.java:88) [asynchbase-1.7.2.jar:na]
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [netty-3.9.4.Final.jar:na]
at org.hbase.async.RegionClient.handleUpstream(RegionClient.java:1223) [asynchbase-1.7.2.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler.handleUpstream(IdleStateAwareChannelHandler.java:36) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.handler.timeout.IdleStateHandler.messageReceived(IdleStateHandler.java:294) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) [netty-3.9.4.Final.jar:na]
at org.hbase.async.HBaseClient$RegionClientPipeline.sendUpstream(HBaseClient.java:3121) [asynchbase-1.7.2.jar:na]
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.9.4.Final.jar:na]
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [netty-3.9.4.Final.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_111]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_111]
Please help me on this.
Thanks & Regards
Shyam gurram
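For context: this SaslException from GssKrb5Client means the client and server could not agree on a SASL quality-of-protection (QOP). hadoop.rpc.protection in core-site.xml governs Hadoop RPC; for HBase RPC the analogous server-side setting is usually hbase.rpc.protection in hbase-site.xml, and the asynchbase client must request a level that overlaps with it. A sketch of the server side, assuming the standard property name (verify against your cluster's actual configuration):

```xml
<!-- hbase-site.xml on the HBase Master/RegionServers: the QOP the server
     requires. Valid levels are authentication, integrity, privacy; the
     client must request a level that overlaps with this one. -->
<property>
  <name>hbase.rpc.protection</name>
  <value>authentication</value>
</property>
```

If the server side is actually set to privacy or integrity, the OpenTSDB/asynchbase side needs the matching value in its own configuration as well; the exact client-side property key is an assumption to verify against the asynchbase 1.7.x documentation.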
Labels:
- Apache HBase
02-14-2017
06:46 PM
@Josh Elser I am still getting the same error. :-( On GitHub they also said it is not working.
[zkeeper@blpd214 opentsdb-2.3.0]$ ./build/tsdb tsd --port=4242 --staticroot=build/staticroot --cachedir=/tmp/opentsdb --zkquorum=blpd213.bhdc.att.com:2181,blpd214.bhdc.att.com:2181,blpd215.bhdc.att.com:2181 --zkbasedir=/hbase-unsecure
2017-02-14 12:43:21,624 INFO [main] TSDMain: Starting.
2017-02-14 12:43:21,627 INFO [main] TSDMain: net.opentsdb.tools 2.3.0 built at revision cac608a (MINT)
2017-02-14 12:43:21,627 INFO [main] TSDMain: Built on 2016/12/29 13:57:15 +0000 by root@centos.localhost:/home/hobbes/opentsdb_OFFICIAL/build
2017-02-14 12:43:21,631 INFO [main] Config: No configuration found, will use defaults
2017-02-14 12:43:21,774 WARN [main] PluginLoader: Unable to locate any plugins of the type: net.opentsdb.query.filter.TagVFilter
2017-02-14 12:43:21,790 INFO [main] ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2017-02-14 12:43:21,790 INFO [main] ZooKeeper: Client environment:host.name=blpd214.bhdc.att.com
2017-02-14 12:43:21,790 INFO [main] ZooKeeper: Client environment:java.version=1.7.0_111
2017-02-14 12:43:21,790 INFO [main] ZooKeeper: Client environment:java.vendor=Oracle Corporation
2017-02-14 12:43:21,790 INFO [main] ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.111.x86_64/jre
2017-02-14 12:43:21,790 INFO [main] ZooKeeper: Client environment:java.class.path=/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/jexl/commons-logging-1.1.1.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/guava/guava-18.0.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/slf4j/log4j-over-slf4j-1.7.7.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/logback/logback-classic-1.0.13.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/logback/logback-core-1.0.13.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/jackson/jackson-annotations-2.4.3.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/jackson/jackson-core-2.4.3.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/jackson/jackson-databind-2.4.3.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/javacc/javacc-6.1.2.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/jexl/commons-jexl-2.1.1.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/jgrapht/jgrapht-core-0.9.1.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/netty/netty-3.9.4.Final.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/slf4j/slf4j-api-1.7.7.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/suasync/async-1.4.0.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/apache/commons-math3-3.4.1.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/hbase/asynchbase-1.7.2.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/protobuf/protobuf-java-2.5.0.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/zookeeper/zookeeper-3.4.6.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/tsdb-2.3.0.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/../src
2017-02-14 12:43:21,790 INFO [main] ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-02-14 12:43:21,790 INFO [main] ZooKeeper: Client environment:java.io.tmpdir=/tmp
2017-02-14 12:43:21,790 INFO [main] ZooKeeper: Client environment:java.compiler=<NA>
2017-02-14 12:43:21,790 INFO [main] ZooKeeper: Client environment:os.name=Linux
2017-02-14 12:43:21,790 INFO [main] ZooKeeper: Client environment:os.arch=amd64
2017-02-14 12:43:21,790 INFO [main] ZooKeeper: Client environment:os.version=2.6.32-642.6.2.el6.x86_64
2017-02-14 12:43:21,790 INFO [main] ZooKeeper: Client environment:user.name=zkeeper
2017-02-14 12:43:21,790 INFO [main] ZooKeeper: Client environment:user.home=/home/zkeeper
2017-02-14 12:43:21,790 INFO [main] ZooKeeper: Client environment:user.dir=/opt/app/opentsdb/opentsdb-2.3.0
2017-02-14 12:43:21,791 INFO [main] ZooKeeper: Initiating client connection, connectString=blpd213.bhdc.att.com:2181,blpd214.bhdc.att.com:2181,blpd215.bhdc.att.com:2181 sessionTimeout=5000 watcher=org.hbase.async.HBaseClient$ZKClient@5987b427
2017-02-14 12:43:21,825 INFO [main] HBaseClient: Need to find the -ROOT- region
2017-02-14 12:43:21,839 INFO [main-SendThread(blpd213.bhdc.att.com:2181)] ClientCnxn: Opening socket connection to server blpd213.bhdc.att.com/130.5.106.3:2181. Will not attempt to authenticate using SASL (unknown error)
2017-02-14 12:43:21,843 INFO [main-SendThread(blpd213.bhdc.att.com:2181)] ClientCnxn: Socket connection established to blpd213.bhdc.att.com/130.5.106.3:2181, initiating session
2017-02-14 12:43:21,849 INFO [main-SendThread(blpd213.bhdc.att.com:2181)] ClientCnxn: Session establishment complete on server blpd213.bhdc.att.com/130.5.106.3:2181, sessionid = 0x15a396fea2300aa, negotiated timeout = 5000
2017-02-14 12:43:21,856 ERROR [main-EventThread] HBaseClient: The znode for the -ROOT- region doesn't exist!
2017-02-14 12:43:22,878 ERROR [main-EventThread] HBaseClient: The znode for the -ROOT- region doesn't exist!
2017-02-14 12:43:23,897 ERROR [main-EventThread] HBaseClient: The znode for the -ROOT- region doesn't exist!
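asynchbase resolves the region-location znodes beneath the base znode given by --zkbasedir, so one sanity check (a diagnostic sketch, not a confirmed fix) is to confirm that the value passed really matches zookeeper.znode.parent in the cluster's hbase-site.xml. The sample file below stands in for the real /etc/hbase/conf/hbase-site.xml:

```shell
# Diagnostic sketch: confirm that the znode parent HBase actually uses
# matches what is passed to tsdb via --zkbasedir.
# A sample hbase-site.xml is created here for illustration; point the
# grep at the real /etc/hbase/conf/hbase-site.xml on the cluster.
cat > /tmp/hbase-site-sample.xml <<'EOF'
<configuration>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase-unsecure</value>
  </property>
</configuration>
EOF

# Extract the <value> that follows the zookeeper.znode.parent <name>.
znode_parent=$(grep -A1 'zookeeper.znode.parent' /tmp/hbase-site-sample.xml \
  | grep '<value>' \
  | sed -e 's/.*<value>//' -e 's,</value>.*,,')
echo "$znode_parent"   # prints /hbase-unsecure for this sample
```

If the extracted value differs from what --zkbasedir is given, the TSD looks under the wrong ZooKeeper subtree and reports exactly this kind of missing-znode error.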
02-13-2017
07:12 PM
Hi Team, I followed all the installation steps given in http://opentsdb.net/docs/build/html/installation.html#id1 , but I am getting an error when executing the following command. I have also checked the core-site.xml file; the same hosts are present there. Please help me with this.
Command I have Executed:
./build/tsdb tsd --port=4242 --staticroot=build/staticroot --cachedir=/tmp/opentsdb --zkquorum=blpd213.bhdc.att.com:2181,blpd214.bhdc.att.com:2181,blpd215.bhdc.att.com:2181
Value of the quorum in core-site.xml:
<property>
  <name>ha.zookeeper.quorum</name>
  <value>blpd213.bhdc.att.com:2181,blpd214.bhdc.att.com:2181,blpd215.bhdc.att.com:2181</value>
</property>
Error I am facing:
./build/tsdb tsd --port=4242 --staticroot=build/staticroot --cachedir=/tmp/opentsdb --zkquorum=blpd213.bhdc.att.com:2181,blpd214.bhdc.att.com:2181,blpd215.bhdc.att.com:2181
2017-02-13 12:50:55,088 INFO [main] TSDMain: Starting.
2017-02-13 12:50:55,091 INFO [main] TSDMain: net.opentsdb.tools 2.3.0 built at revision cac608a (MINT)
2017-02-13 12:50:55,091 INFO [main] TSDMain: Built on 2016/12/29 13:57:15 +0000 by root@centos.localhost:/home/hobbes/opentsdb_OFFICIAL/build
2017-02-13 12:50:55,095 INFO [main] Config: No configuration found, will use defaults
2017-02-13 12:50:55,241 WARN [main] PluginLoader: Unable to locate any plugins of the type: net.opentsdb.query.filter.TagVFilter
2017-02-13 12:50:55,257 INFO [main] ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2017-02-13 12:50:55,257 INFO [main] ZooKeeper: Client environment:host.name=blpd215.bhdc.att.com
2017-02-13 12:50:55,258 INFO [main] ZooKeeper: Client environment:java.version=1.7.0_111
2017-02-13 12:50:55,258 INFO [main] ZooKeeper: Client environment:java.vendor=Oracle Corporation
2017-02-13 12:50:55,258 INFO [main] ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.111.x86_64/jre
2017-02-13 12:50:55,258 INFO [main] ZooKeeper: Client environment:java.class.path=/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/jexl/commons-logging-1.1.1.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/guava/guava-18.0.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/slf4j/log4j-over-slf4j-1.7.7.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/logback/logback-classic-1.0.13.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/logback/logback-core-1.0.13.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/jackson/jackson-annotations-2.4.3.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/jackson/jackson-core-2.4.3.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/jackson/jackson-databind-2.4.3.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/javacc/javacc-6.1.2.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/jexl/commons-jexl-2.1.1.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/jgrapht/jgrapht-core-0.9.1.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/netty/netty-3.9.4.Final.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/slf4j/slf4j-api-1.7.7.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/suasync/async-1.4.0.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/apache/commons-math3-3.4.1.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/hbase/asynchbase-1.7.2.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/protobuf/protobuf-java-2.5.0.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/third_party/zookeeper/zookeeper-3.4.6.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/tsdb-2.3.0.jar:/opt/app/opentsdb/opentsdb-2.3.0/build/../src
2017-02-13 12:50:55,258 INFO [main] ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-02-13 12:50:55,258 INFO [main] ZooKeeper: Client environment:java.io.tmpdir=/tmp
2017-02-13 12:50:55,258 INFO [main] ZooKeeper: Client environment:java.compiler=<NA>
2017-02-13 12:50:55,258 INFO [main] ZooKeeper: Client environment:os.name=Linux
2017-02-13 12:50:55,258 INFO [main] ZooKeeper: Client environment:os.arch=amd64
2017-02-13 12:50:55,258 INFO [main] ZooKeeper: Client environment:os.version=2.6.32-642.6.2.el6.x86_64
2017-02-13 12:50:55,258 INFO [main] ZooKeeper: Client environment:user.name=root
2017-02-13 12:50:55,258 INFO [main] ZooKeeper: Client environment:user.home=/root
2017-02-13 12:50:55,258 INFO [main] ZooKeeper: Client environment:user.dir=/opt/app/opentsdb/opentsdb-2.3.0
2017-02-13 12:50:55,259 INFO [main] ZooKeeper: Initiating client connection, connectString=blpd213.bhdc.att.com:2181,blpd214.bhdc.att.com:2181,blpd215.bhdc.att.com:2181 sessionTimeout=5000 watcher=org.hbase.async.HBaseClient$ZKClient@6a4cae18
2017-02-13 12:50:55,270 INFO [main] HBaseClient: Need to find the -ROOT- region
2017-02-13 12:50:55,274 INFO [main-SendThread(blpd213.bhdc.att.com:2181)] ClientCnxn: Opening socket connection to server blpd213.bhdc.att.com/130.5.106.3:2181. Will not attempt to authenticate using SASL (unknown error)
2017-02-13 12:50:55,278 INFO [main-SendThread(blpd213.bhdc.att.com:2181)] ClientCnxn: Socket connection established to blpd213.bhdc.att.com/130.5.106.3:2181, initiating session
2017-02-13 12:50:55,284 INFO [main-SendThread(blpd213.bhdc.att.com:2181)] ClientCnxn: Session establishment complete on server blpd213.bhdc.att.com/130.5.106.3:2181, sessionid = 0x15a293572a00199, negotiated timeout = 5000
2017-02-13 12:50:55,291 ERROR [main-EventThread] HBaseClient: The znode for the -ROOT- region doesn't exist!
2017-02-13 12:50:56,314 ERROR [main-EventThread] HBaseClient: The znode for the -ROOT- region doesn't exist!
Labels:
- Apache HBase
02-11-2017
06:38 AM
@Jay SenSharma
Thanks for the solution; it is working fine now. I am facing one more error. The relevant installation step from the document and the error are below. Please help me with this; awaiting your reply.
Error:
[sg865w@blpd215 opentsdb-2.3.0]$ ./build/tsdb
tsdb: error: unknown command ''
usage: tsdb <command> [args]
Valid commands: fsck, import, mkmetric, query, tsd, scan, search, uid, version
Installation step: "You can now execute the command-line tool by invoking ./build/tsdb"
02-10-2017
08:43 PM
@apappu Thanks for the reply. I am sorry, I didn't get that: where do I need to add the path, and what path do I need to add? When I am installing OpenTSDB the current directory is /opt/app/opentsdb/opentsdb-2.3.0/build. This build directory is created by the ./build.sh script. Please help me with this.
Thanks & Regards
Shyam
02-10-2017
08:23 PM
Hi Team, when I run the ./build.sh script I get a "javacc not found" error. javac is installed on my node. Please find the error details below.
I downloaded opentsdb-2.3.0.tar.gz, extracted it, and then ran the ./build.sh script.
[sg865w@blpd215 64bit]$ cd bin
[sg865w@blpd215 bin]$ ls -lrt | grep java
-rwxr-xr-x 1 bin bin 128695 May 16 2016 javaws
-rwxr-xr-x 1 bin bin 1809 May 16 2016 java-rmi.cgi
-rwxr-xr-x 1 bin bin 2293 May 16 2016 javapackager
-rwxr-xr-x 1 bin bin 7941 May 16 2016 javap
-rwxr-xr-x 1 bin bin 7941 May 16 2016 javah
-rwxr-xr-x 1 bin bin 2293 May 16 2016 javafxpackager
-rwxr-xr-x 1 bin bin 7941 May 16 2016 javadoc
-rwxr-xr-x 1 bin bin 7941 May 16 2016 javac
-rwxr-xr-x 1 bin bin 7734 May 16 2016 java
[sg865w@blpd215 bin]$ pwd
//opt/app/java/jdk/jdk180_77/64bit/bin
Error:
test -f configure
test -d build
cd build
test -f Makefile
MAKE=make
++ uname -s
'[' Linux = FreeBSD ']'
exec make
make all-am
make[1]: Entering directory `/opt/app/opentsdb/opentsdb-2.3.0/build'
/usr/bin/java -cp third_party/javacc/javacc-6.1.2.jar javacc -OUTPUT_DIRECTORY:./src/net/opentsdb/query/expression/parser ../src/parser.jj; echo PWD: `pwd` ;
Error: Could not find or load main class javacc
PWD: /opt/app/opentsdb/opentsdb-2.3.0/build
/usr/bin/javac -Xlint -source 6 -encoding utf-8 -d . -cp ../third_party/jexl/commons-logging-1.1.1.jar:../third_party/guava/guava-18.0.jar:../third_party/slf4j/log4j-over-slf4j-1.7.7.jar:../third_party/logback/logback-classic-1.0.13.jar:../third_party/logback/logback-core-1.0.13.jar:../third_party/jackson/jackson-annotations-2.4.3.jar:../third_party/jackson/jackson-core-2.4.3.jar:../third_party/jackson/jackson-databind-2.4.3.jar:../third_party/javacc/javacc-6.1.2.jar:../third_party/jexl/commons-jexl-2.1.1.jar:../third_party/jgrapht/jgrapht-core-0.9.1.jar:../third_party/netty/netty-3.9.4.Final.jar:../third_party/slf4j/slf4j-api-1.7.7.jar:../third_party/suasync/async-1.4.0.jar:../third_party/apache/commons-math3-3.4.1.jar:../third_party/hbase/asynchbase-1.7.2.jar:../third_party/protobuf/protobuf-java-2.5.0.jar:../third_party/zookeeper/zookeeper-3.4.6.jar: ../src/core/AggregationIterator.java ../src/core/Aggregator.java ../src/core/Aggregators.java ../src/core/AppendDataPoints.java ../src/core/BatchedDataPoints.java ../src/core/ByteBufferList.java ../src/core/ColumnDatapointIterator.java ../src/core/CompactionQueue.java ../src/core/Const.java ../src/core/DataPoint.java ../src/core/DataPoints.java ../src/core/DataPointsIterator.java ../src/core/Downsampler.java ../src/core/DownsamplingSpecification.java ../src/core/FillingDownsampler.java ../src/core/FillPolicy.java ../src/core/IncomingDataPoint.java ../src/core/IncomingDataPoints.java ../src/core/IllegalDataException.java ../src/core/Internal.java ../src/core/MutableDataPoint.java ../src/core/Query.java ../src/core/QueryException.java ../src/core/RateOptions.java ../src/core/RateSpan.java 
../src/core/RowKey.java ../src/core/RowSeq.java ../src/core/SaltScanner.java ../src/core/SeekableView.java ../src/core/Span.java ../src/core/SpanGroup.java ../src/core/TSDB.java ../src/core/Tags.java ../src/core/TsdbQuery.java ../src/core/TSQuery.java ../src/core/TSSubQuery.java ../src/core/WritableDataPoints.java ../src/core/WriteableDataPointFilterPlugin.java ../src/graph/Plot.java ../src/meta/Annotation.java ../src/meta/MetaDataCache.java ../src/meta/TSMeta.java ../src/meta/TSUIDQuery.java ../src/meta/UIDMeta.java ../src/query/QueryUtil.java ../src/query/expression/Absolute.java ../src/query/expression/Alias.java ../src/query/expression/DiffSeries.java ../src/query/expression/DivideSeries.java ../src/query/expression/EDPtoDPS.java ../src/query/expression/Expression.java ../src/query/expression/ExpressionDataPoint.java ../src/query/expression/ExpressionFactory.java ../src/query/expression/ExpressionIterator.java ../src/query/expression/ExpressionReader.java ../src/query/expression/Expressions.java ../src/query/expression/ExpressionTree.java ../src/query/expression/HighestCurrent.java ../src/query/expression/HighestMax.java ../src/query/expression/IntersectionIterator.java ../src/query/expression/ITimeSyncedIterator.java ../src/query/expression/NumericFillPolicy.java ../src/query/expression/MovingAverage.java ../src/query/expression/MultiplySeries.java ../src/query/expression/PostAggregatedDataPoints.java ../src/query/expression/Scale.java ../src/query/expression/SumSeries.java ../src/query/expression/TimeShift.java ../src/query/expression/TimeSyncedIterator.java ../src/query/expression/UnionIterator.java ../src/query/expression/VariableIterator.java ../src/query/filter/TagVFilter.java ../src/query/filter/TagVLiteralOrFilter.java ../src/query/filter/TagVNotKeyFilter.java ../src/query/filter/TagVNotLiteralOrFilter.java ../src/query/filter/TagVRegexFilter.java ../src/query/filter/TagVWildcardFilter.java ../src/query/pojo/Downsampler.java 
../src/query/pojo/Expression.java ../src/query/pojo/Filter.java ../src/query/pojo/Join.java ../src/query/pojo/Metric.java ../src/query/pojo/Output.java ../src/query/pojo/Query.java ../src/query/pojo/Timespan.java ../src/query/pojo/Validatable.java ../src/search/SearchPlugin.java ../src/search/SearchQuery.java ../src/search/TimeSeriesLookup.java ../src/stats/Histogram.java ../src/stats/StatsCollector.java ../src/stats/QueryStats.java ../src/tools/ArgP.java ../src/tools/CliOptions.java ../src/tools/CliQuery.java ../src/tools/CliUtils.java ../src/tools/DumpSeries.java ../src/tools/Fsck.java ../src/tools/FsckOptions.java ../src/tools/MetaPurge.java ../src/tools/MetaSync.java ../src/tools/Search.java ../src/tools/StartupPlugin.java ../src/tools/TSDMain.java ../src/tools/TextImporter.java ../src/tools/TreeSync.java ../src/tools/UidManager.java ../src/tree/Branch.java ../src/tree/Leaf.java ../src/tree/Tree.java ../src/tree/TreeBuilder.java ../src/tree/TreeRule.java ../src/tsd/AbstractHttpQuery.java ../src/tsd/AnnotationRpc.java ../src/tsd/BadRequestException.java ../src/tsd/ConnectionManager.java ../src/tsd/DropCachesRpc.java ../src/tsd/GnuplotException.java ../src/tsd/GraphHandler.java ../src/tsd/HttpJsonSerializer.java ../src/tsd/HttpSerializer.java ../src/tsd/HttpQuery.java ../src/tsd/HttpRpc.java ../src/tsd/HttpRpcPlugin.java ../src/tsd/HttpRpcPluginQuery.java ../src/tsd/LineBasedFrameDecoder.java ../src/tsd/LogsRpc.java ../src/tsd/PipelineFactory.java ../src/tsd/PutDataPointRpc.java ../src/tsd/QueryExecutor.java ../src/tsd/QueryRpc.java ../src/tsd/RpcHandler.java ../src/tsd/RpcPlugin.java ../src/tsd/RpcManager.java ../src/tsd/RpcUtil.java ../src/tsd/RTPublisher.java ../src/tsd/SearchRpc.java ../src/tsd/StaticFileRpc.java ../src/tsd/StatsRpc.java ../src/tsd/StorageExceptionHandler.java ../src/tsd/SuggestRpc.java ../src/tsd/TelnetRpc.java ../src/tsd/TreeRpc.java ../src/tsd/UniqueIdRpc.java ../src/tsd/WordSplitter.java ../src/uid/FailedToAssignUniqueIdException.java 
../src/uid/NoSuchUniqueId.java ../src/uid/NoSuchUniqueName.java ../src/uid/RandomUniqueId.java ../src/uid/UniqueId.java ../src/uid/UniqueIdFilterPlugin.java ../src/uid/UniqueIdInterface.java ../src/utils/ByteArrayPair.java ../src/utils/ByteSet.java ../src/utils/Config.java ../src/utils/DateTime.java ../src/utils/Exceptions.java ../src/utils/FileSystem.java ../src/utils/JSON.java ../src/utils/JSONException.java ../src/utils/Pair.java ../src/utils/PluginLoader.java ../src/utils/Threads.java ../src/tools/BuildData.java ./src/net/opentsdb/query/expression/parser/*.java
javac: file not found: ./src/net/opentsdb/query/expression/parser/*.java
Usage: javac <options> <source files>
use -help for a list of possible options
make[1]: *** [.javac-stamp] Error 2
make[1]: Leaving directory `/opt/app/opentsdb/opentsdb-2.3.0/build'
make: *** [all] Error 2
[sg865w@blpd215 opentsdb-2.3.0]$ cat build.sh
#!/usr/bin/env bash
set -xe
test -f configure || ./bootstrap
test -d build || mkdir build
cd build
test -f Makefile || ../configure "$@"
MAKE=make
[ `uname -s` = "FreeBSD" ] && MAKE=gmake
exec ${MAKE} "$@"
Thanks & Regards
Shyam Gurram
Labels:
- Apache HBase
11-30-2016
08:43 PM
Thanks @Kuldeep Kulkarni. If I am moving the Oozie server from one node to another, do I need to move Falcon to the same node as well?
Regards
Shyam Gurram
11-30-2016
06:40 PM
1) When I am deleting the Oozie service from one node on HDP 2.4, it throws an error saying that prior to deleting Oozie you must delete the following dependent services (the service is Falcon).
06-06-2016
03:03 PM
Hi Alex, thanks for the post. In a text file I have two properties. The script works for one of the properties, but by the same logic it should work for the other property as well.
env:KERBEROS_KADMINST_PASSWORD=t.w48oJj-
env:KERBEROS_MASTERDB_PASSWORD=xtW2+OGi-
The above are the two properties whose passwords I need to mask.
{
  "name": "Hive_Pass_Phrase",
  "path": "hive_config.txt",
  "pattern": ".*PASSWORD.*$",
  "operation": "REPLACE",
  "value": "PASSWORD Hidden"
},
Per the above rule it should work for both properties, but it works only for env:KERBEROS_KADMINST_PASSWORD and not the other. Please help me with this.
Regards
Shyam
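One way to narrow this down is to test the pattern itself outside the anonymizer. The sketch below uses grep -E as a stand-in; SmartSense applies Java regex, so edge cases (anchors, multiline handling) can differ, but a basic containment match like this should behave the same:

```shell
# Sketch: verify the masking pattern matches BOTH property lines.
# Sample file recreated from the two properties in question.
cat > /tmp/hive_config_sample.txt <<'EOF'
env:KERBEROS_KADMINST_PASSWORD=t.w48oJj-
env:KERBEROS_MASTERDB_PASSWORD=xtW2+OGi-
EOF

# Count the lines the pattern matches; both lines contain PASSWORD.
matches=$(grep -cE '.*PASSWORD.*$' /tmp/hive_config_sample.txt)
echo "$matches"   # prints 2
```

Since the regex matches both lines, the pattern itself is unlikely to be the problem; the difference may lie in how the anonymizer applies a rule repeatedly within one file, which is worth checking in the SmartSense logs.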
05-31-2016
02:58 PM
Hi Kuldeep Kulkarni, awaiting your comments.
05-30-2016
11:04 PM
1 Kudo
Hello Team, I am using Ambari HDP 2.3. On one of the hosts I installed SmartSense, and I need to mask some of the password fields. I have done the masking for the fields present in XML files, but I have passwords in a text file as well, and I am not aware how to mask those. Please help me with this.
1) For the XML files, masking is done using the setup below in the anonymization rules:
{
  "name": "Delete_Yarn_Hbase_Hive_MR_Trust_Store_Password",
  "path": "ssl-server.xml",
  "property": "ssl.server.truststore.password",
  "operation": "REPLACE",
  "value": "Hidden"
}
I am not aware how to mask a password in a text file. Awaiting your reply.
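For plain-text files, the anonymization rules also support pattern-based matching rather than property-based matching. A sketch of such a rule, assuming a `pattern` key is honored for text files the same way `property` is for XML files (the rule name, file path, and regex here are illustrative, not confirmed values):

```json
{
  "name": "Mask_Text_File_Password",
  "path": "hive_config.txt",
  "pattern": ".*PASSWORD.*$",
  "operation": "REPLACE",
  "value": "PASSWORD Hidden"
}
```

Any line of the named text file matching the regex would presumably be replaced wholesale, so a `KEY=value` password line is masked along with its key; a tighter pattern could preserve the key portion.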
Labels:
- Hortonworks SmartSense