Member since
02-24-2016
87
Posts
18
Kudos Received
3
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2622 | 12-18-2017 11:47 AM
 | 7825 | 11-08-2017 01:54 PM
 | 47927 | 05-06-2016 11:48 AM
01-23-2018
02:24 PM
I am running a Spark job on a Hortonworks cluster. The source is an Oracle DB; we fetch records from Oracle and analyse them. I am getting the error message below. I have referred to many solutions and, accordingly, I am using ojdbc7.jar. Please guide me on this.
18/01/22 07:04:09 ERROR Executor: Exception in task 0.0 in stage 15.0 (TID 1069)
java.sql.SQLException: Protocol violation: [ 6, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, 21, 7, ]
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:536)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
at oracle.jdbc.driver.T4CPreparedStatement.fetch(T4CPreparedStatement.java:1066)
at oracle.jdbc.driver.OracleStatement.fetchMoreRows(OracleStatement.java:3716)
at oracle.jdbc.driver.InsensitiveScrollableResultSet.fetchMoreRows(InsensitiveScrollableResultSet.java:1015)
at oracle.jdbc.driver.InsensitiveScrollableResultSet.absoluteInternal(InsensitiveScrollableResultSet.java:979)
at oracle.jdbc.driver.InsensitiveScrollableResultSet.next(InsensitiveScrollableResultSet.java:579)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.getNext(JDBCRDD.scala:369)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.hasNext(JDBCRDD.scala:498)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at org.apache.spark.sql.execution.joins.BroadcastNestedLoopJoin$$anonfun$2.apply(BroadcastNestedLoopJoin.scala:105)
at org.apache.spark.sql.execution.joins.BroadcastNestedLoopJoin$$anonfun$2.apply(BroadcastNestedLoopJoin.scala:96)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$22.apply(RDD.scala:717)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$22.apply(RDD.scala:717)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
End of LogType:stderr. This log file belongs to a running container (container_e13_1503300866021_880451_01_000002) and so may not be complete.
LogType:launch_container.sh
Log Upload Time:Tue Jan 23 02:09:02 -0500 2018
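For reference, a minimal sketch of how this kind of Oracle read is typically wired up in Spark; the URL, table name and credentials below are placeholders, and ojdbc7.jar must be on the driver and executor classpath (e.g. via --jars):
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object OracleReadSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("oracle-read-sketch"))
    val sqlContext = new SQLContext(sc)

    // Placeholder connection details
    val oracleDF = sqlContext.read
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/service")
      .option("driver", "oracle.jdbc.OracleDriver")
      .option("dbtable", "MYSCHEMA.MYTABLE")
      .option("user", "user")
      .option("password", "password")
      .load()

    oracleDF.show(10)
  }
}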
Labels:
- Apache Spark
12-18-2017
11:47 AM
The issue got resolved. Follow this checklist:
1. Check that Zookeeper is running.
2. Check that a Kafka producer and consumer run fine from the console: create a topic and list it, just to confirm that Kafka itself is healthy.
3. Use the connector version in sbt that matches your Kafka version. For Kafka 0.9:
"org.apache.flink" %% "flink-connector-kafka-0.9" % flinkVersion % "provided"
and import in the Scala program:
import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaProducer09, FlinkKafkaConsumer09}
For Kafka 0.10:
"org.apache.flink" %% "flink-connector-kafka-0.10" % flinkVersion % "provided"
and import in the Scala program:
import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaProducer010, FlinkKafkaConsumer010}
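For reference, a minimal build.sbt sketch of this setup (assumed versions: Flink 1.4.0, Scala 2.11, a Kafka 0.10 broker; adjust to your environment):
// build.sbt - sketch only; the version numbers here are assumptions
scalaVersion := "2.11.11"

val flinkVersion = "1.4.0"

libraryDependencies ++= Seq(
  "org.apache.flink" %% "flink-scala"           % flinkVersion % "provided",
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion % "provided",
  // the connector artifact must match the broker version (0.9 vs 0.10)
  "org.apache.flink" %% "flink-connector-kafka-0.10" % flinkVersion
)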
12-18-2017
08:46 AM
@Fabian Hueske Please guide here
12-18-2017
08:44 AM
I am writing a Flink-Kafka integration program as below, but I am getting a timeout error from Kafka:

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaConsumer010, FlinkKafkaProducer010}
import org.apache.flink.streaming.util.serialization.SimpleStringSchema
import java.util.Properties

object StreamKafkaProducer {
  def main(args: Array[String]) {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val properties = new Properties()
    properties.setProperty("bootstrap.servers", "localhost:9092")
    properties.setProperty("zookeeper.connect", "localhost:2181")
    properties.setProperty("serializer.class", "kafka.serializer.StringEncoder")

    val stream: DataStream[String] = env.fromElements(
      "Adam",
      "Sarah")

    val kafkaProducer = new FlinkKafkaProducer010[String](
      "localhost:9092",
      "output",
      new SimpleStringSchema
    )

    // write data into Kafka
    stream.addSink(kafkaProducer)

    env.execute("Flink kafka integration")
  }
}
From the terminal I can see that Kafka and Zookeeper are running, but when I run the above program from IntelliJ it shows this error:

C:\Users\amdass\workspace\flink-project-master>sbt run
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m;
support was removed in 8.0
[info] Loading project definition from C:\Users\amdass\workspace\flink-
project-master\project
[info] Set current project to Flink Project (in build
file:/C:/Users/amdass/workspace/flink-project-master/)
[info] Compiling 1 Scala source to C:\Users\amdass\workspace\flink-project-
master\target\scala-2.11\classes...
[info] Running org.example.StreamKafkaProducer
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further
details.
Connected to JobManager at Actor[akka://flink/user/jobmanager_1#-563113020]
with leader session id 5a637740-5c73-4f69-a19e-c8ef7141efa1.
12/15/2017 14:41:49 Job execution switched to status RUNNING.
12/15/2017 14:41:49 Source: Collection Source(1/1) switched to SCHEDULED
12/15/2017 14:41:49 Sink: Unnamed(1/4) switched to SCHEDULED
12/15/2017 14:41:49 Sink: Unnamed(2/4) switched to SCHEDULED
12/15/2017 14:41:49 Sink: Unnamed(3/4) switched to SCHEDULED
12/15/2017 14:41:49 Sink: Unnamed(4/4) switched to SCHEDULED
12/15/2017 14:41:49 Source: Collection Source(1/1) switched to DEPLOYING
12/15/2017 14:41:49 Sink: Unnamed(1/4) switched to DEPLOYING
12/15/2017 14:41:49 Sink: Unnamed(2/4) switched to DEPLOYING
12/15/2017 14:41:49 Sink: Unnamed(3/4) switched to DEPLOYING
12/15/2017 14:41:49 Sink: Unnamed(4/4) switched to DEPLOYING
12/15/2017 14:41:50 Source: Collection Source(1/1) switched to RUNNING
12/15/2017 14:41:50 Sink: Unnamed(2/4) switched to RUNNING
12/15/2017 14:41:50 Sink: Unnamed(4/4) switched to RUNNING
12/15/2017 14:41:50 Sink: Unnamed(3/4) switched to RUNNING
12/15/2017 14:41:50 Sink: Unnamed(1/4) switched to RUNNING
12/15/2017 14:41:50 Source: Collection Source(1/1) switched to FINISHED
12/15/2017 14:41:50 Sink: Unnamed(3/4) switched to FINISHED
12/15/2017 14:41:50 Sink: Unnamed(4/4) switched to FINISHED
12/15/2017 14:42:50 Sink: Unnamed(1/4) switched to FAILED
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata
after 60000 ms.
12/15/2017 14:42:50 Sink: Unnamed(2/4) switched to FAILED
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata
after 60000 ms.
12/15/2017 14:42:50 Job execution switched to status FAILING.
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
12/15/2017 14:42:50 Job execution switched to status FAILED.
[error] (run-main-0) org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply$mcV$sp(JobManager.scala:933)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:876)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:876)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[trace] Stack trace suppressed: run last *:run for the full output.
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 75 s, completed Dec 15, 2017 2:42:51 PM
Labels:
- Apache Kafka
12-15-2017
01:09 PM
@Fabian Hueske I need your help with the below: https://stackoverflow.com/questions/47831484/flink-kafka-program-in-scala-giving-timeout-error-org-apache-kafka-common-errors
12-14-2017
04:21 PM
Adding this detail: in its current version (1.4.0, Dec. 2017), Flink does not provide a built-in TableSource to ingest data from a relational database.
12-14-2017
10:29 AM
Hi, I am referring to https://flink.apache.org/news/2017/04/04/dynamic-tables.html, and in that post the database configuration is not shown:

val sensorTable = ??? // can be a CSV file, Kafka topic, database, or ...
// register the table source
tEnv.registerTableSource("sensors", sensorTable)

I have to connect to a relational source. Does Flink have an API for databases, or do we need to use a JDBC approach? (Is there anything similar to Apache Spark's sqlContext or SparkSession object?)
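For context, one option I am aware of for reading from a relational database is the JDBCInputFormat in Flink's flink-jdbc module (batch DataSet API, not a streaming TableSource). The following is only a minimal sketch under that assumption; the driver, URL, credentials and query are placeholders:
import org.apache.flink.api.common.typeinfo.BasicTypeInfo
import org.apache.flink.api.java.ExecutionEnvironment
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
import org.apache.flink.api.java.typeutils.RowTypeInfo

object JdbcReadSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // the RowTypeInfo must match the columns selected in the query below
    val rowType = new RowTypeInfo(BasicTypeInfo.INT_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO)

    val sensors = env.createInput(
      JDBCInputFormat.buildJDBCInputFormat()
        .setDrivername("oracle.jdbc.OracleDriver")              // placeholder JDBC driver
        .setDBUrl("jdbc:oracle:thin:@//dbhost:1521/service")    // placeholder URL
        .setUsername("user")
        .setPassword("password")
        .setQuery("SELECT sensor_id, sensor_name FROM sensors") // placeholder query
        .setRowTypeInfo(rowType)
        .finish())

    sensors.print()
  }
}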
Labels:
- Apache Flink
- Apache Kafka
- Apache Spark
11-13-2017
03:35 PM
Do we know the default number of partitions and the replication factor for the internal topic __consumer_offsets? And if we were to lose this topic, is there anything else we could use to recover? From what you explained above, my understanding is that there are no longer separate current-offset and commit-offset concepts; please correct me and add your knowledge if that is not the case.
11-08-2017
01:54 PM
1 Kudo
I got the answer to this question: the number of partitions for a topic can only be increased, never decreased. The reason is that decreasing the partition count would cause data loss. To achieve it anyway, we can delete the current topic and recreate a new one with the required number of partitions.
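For completeness, partitions can be added to an existing topic (never removed), either with the kafka-topics.sh --alter command or, on newer clients, through the admin API. A minimal sketch, assuming a Kafka 1.0+ client on the classpath and placeholder broker/topic names:
import java.util.{Collections, Properties}
import org.apache.kafka.clients.admin.{AdminClient, NewPartitions}

object IncreasePartitionsSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:6667") // placeholder broker

    val admin = AdminClient.create(props)
    // The partition count can only grow; requesting fewer partitions than the current count fails.
    admin.createPartitions(Collections.singletonMap("XX", NewPartitions.increaseTo(7))).all().get()
    admin.close()
  }
}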
11-08-2017
01:36 PM
Suppose at topic-creation time we created a topic XX with 5 partitions, and later we realized that we don't need 2 of them. Can we reduce the number of partitions for a topic?
Labels:
- Apache Kafka
11-07-2017
03:15 PM
Do we know the purpose of the files that are created after starting the Kafka server, all of which contain __consumer_offsets information in the logs? The files are: state-change.log, kafka-request.log, kafka-authorizer.log, controller.log, server.log, log-cleaner.log. I have read that "__consumer_offsets" is a topic; what type of details does it hold?
Labels:
- Apache Ambari
- Apache Kafka
11-07-2017
03:09 PM
Please refer to this for the solution: https://community.hortonworks.com/questions/147131/kafka-hdp-doubt-current-offset-and-commit-offset.html?childToView=147156#comment-147156
11-07-2017
10:37 AM
I went through the HDP Kafka component guide (https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.3/bk_kafka-component-guide/bk_kafka-component-guide.pdf). Can someone expand on this line: "Kafka consumers keep track of which messages have already been consumed by storing the current offset."? Please also correct my understanding of the current offset and the commit offset: the current offset is taken care of by the Kafka broker, and the commit offset is taken care of by the consumer.
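For my own understanding, here is a minimal consumer sketch (standard Kafka Java client; the broker address and topic name are placeholders): polling advances the consumer's current position, while the committed offset is only stored on the broker, in __consumer_offsets, when the consumer commits.
import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer

object OffsetSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:6667") // placeholder broker
    props.put("group.id", "offset-demo")
    props.put("enable.auto.commit", "false")         // commit manually below
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

    val consumer = new KafkaConsumer[String, String](props)
    consumer.subscribe(Collections.singletonList("test-topic")) // placeholder topic

    val records = consumer.poll(1000) // advances the current position for this consumer
    println(s"fetched ${records.count()} records")
    consumer.commitSync()             // stores the committed offset in __consumer_offsets
    consumer.close()
  }
}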
Labels:
- Apache Ambari
- Apache Kafka
10-25-2017
02:00 PM
Can someone explain the following lines about num-executors and executor-memory? The statement below is from the Hortonworks docs (https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_spark-guide/content/ch_tuning-spark.html#spark-job-status): "There are tradeoffs between num-executors and executor-memory. Large executor memory does not imply better performance, due to JVM garbage collection. Sometimes it is better to configure a larger number of small JVMs than a small number of large JVMs."
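To make the tradeoff concrete, a small sketch of the two shapes (the counts and sizes are illustrative placeholders, not recommendations):
import org.apache.spark.SparkConf

// Roughly the same total executor memory, configured two different ways.

// A few large JVMs: long GC pauses become more likely.
val fewLargeExecutors = new SparkConf()
  .set("spark.executor.instances", "2")  // equivalent to --num-executors 2
  .set("spark.executor.memory", "20g")   // equivalent to --executor-memory 20g

// Many small JVMs: usually friendlier to the garbage collector.
val manySmallExecutors = new SparkConf()
  .set("spark.executor.instances", "10")
  .set("spark.executor.memory", "4g")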
Labels:
- Apache Spark
03-03-2017
05:32 AM
Provided solution for "java.sql.SQLSyntaxErrorException: ORA-00933: SQL command not properly ended"
03-03-2017
05:31 AM
Issue in Sqoop: Oracle database to HDFS
17/03/02 19:44:28 WARN sqoop.ConnFactory: Parameter --driver is set to an explicit driver however appropriate connection manager is not being set (via --connection-manager). Sqoop is going to fall back to org.apache.sqoop.manager.GenericJdbcManager. Please specify explicitly which connection manager should be used next time.
17/03/02 19:44:28 INFO manager.SqlManager: Using default fetchSize of 1000
17/03/02 19:44:28 INFO tool.CodeGenTool: Beginning code generation
17/03/02 19:44:40 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM ABCSCHEMA.ABCTABLE AS t WHERE 1=0
17/03/02 19:44:40 ERROR manager.SqlManager: Error executing statement: java.sql.SQLSyntaxErrorException: ORA-00933: SQL command not properly ended
java.sql.SQLSyntaxErrorException: ORA-00933: SQL command not properly ended
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:445)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:879)
Solution:
1. The table name should be in capitals:
SqoopOptions.setTableName("ABCSCHEMA.ABCTABLE");
String[] cols={"E_DATE","E_CATEGORY","E_COMPUTER","U_NAME","IPADDRESS","D_NAME","R_INSERT_DATE"};
SqoopOptions.setColumns(cols);
SqoopOptions.setWhereClause("R_INSERT_DATE >=to_date('31-12-1900','DD-MM-YYYY')");
2. Don't specify the driver name in your program (refer to https://issues.apache.org/jira/browse/SQOOP-457), so the lines below are not required in your program:
// SqoopOptions.setDriverClassName(DriverDomain);
// SqoopOptions.setConnManagerClassName("org.apache.sqoop.manager.GenericJdbcManager");
02-09-2017
05:38 AM
When we run from the terminal it runs fine. We get this issue while submitting the program from a local machine (using Eclipse); Phoenix is on these servers (10.40.17.183, 10.40.17.155, 10.40.17.129). Can you please validate hbase-site.xml? (Do I need to place this file in hbase/conf and phoenix/bin on all three servers/nodes?)
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://mycluster:8020/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.config.read.zookeeper.config</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>namenode1,namenode2,datanode2</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/DBHADOOP/installations/zookeeper-3.4.5/zookeeper/zk1</value>
</property>
<property>
<name>hbase.client.keyvalue.maxsize</name>
<value>0</value>
</property>
<property>
<name>hbase.client.scanner.timeout.period</name>
<value>600000</value>
</property>
<property>
<name>hbase.master</name>
<value>mycluster:60000</value>
<description>The host and port the HBase master runs at.
</description>
</property>
<property>
<name>hbase.regionserver.port</name>
<value>60020</value>
<description>The host and port the HBase master runs at.
</description>
</property>
<property>
<name>zookeeper.znode.parent</name>
<value>/hbase</value>
</property>
<property>
<name>hbase.zookeeper.peerport</name>
<value>2888</value>
</property>
<property>
<name>hbase.zookeeper.leaderport</name>
<value>3888</value>
</property>
<property>
<name>zookeeper.znode.clusterId</name>
<value>12345</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
<description>Property from ZooKeeper's config zoo.cfg.
The port at which the clients will connect.
</description>
</property>
<property>
<name>hbase.rpc.timeout</name>
<value>120000</value>
<source>hbase-site.xml</source>
</property>
</configuration>
02-08-2017
01:56 PM
Do we also need to open ports 2888 and 3888, which are used for the ZooKeeper peer and leader election?
02-08-2017
01:31 PM
As per jps, HMaster is running. Please advise.
Server 1:
[hadoop@CDCUDHDPDB3 ~]$ sudo jps
5992 RunJar
23297 QuorumPeerMain
23781 Jps
13428 RunJar
13086 JobHistoryServer
2137 ResourceManager
22604 HMaster
7266 Master
Server 2:
[hadoop@CDCUDHDPDB2 ~]$ sudo jps
19481 Worker
26187 HRegionServer
22091 NodeManager
30755 Jps
21979 DataNode
21797 NameNode
30017 QuorumPeerMain
30680 -- process information unavailable
21896 DFSZKFailoverController
26312 HMaster
Server 3:
[hadoop@CDCUDHDPDB1 ~]$ sudo jps
11820 Jps
8936 HRegionServer
18514 QuorumPeerMain
8365 NodeManager
6455 Worker
11695 -- process information unavailable
8254 DataNode
02-08-2017
09:09 AM
Josh Elser Please guide us
02-08-2017
07:48 AM
We are submitting this program from a local Eclipse instance, and Phoenix is on a remote server.
---------------------------------------------------------------------------------------
Java code:
public static String driverName = "org.apache.phoenix.jdbc.PhoenixDriver";

try {
    Class.forName(driverName);
    System.out.println("In try");
    conn = DriverManager.getConnection("jdbc:phoenix:10.40.17.183,10.40.17.155,10.40.17.129:2181:/hbase", "", "");
    System.out.println("Connected");
    conn.createStatement().execute("create table IF NOT EXISTS TEST0702(mykey integer not null primary key, asset_name varchar(50),item_name varchar(50),country_org_name varchar(100),branch_org_name varchar(100),dc_name varchar(100),building_name varchar(100),floor_desc varchar(500),cubicle_id varchar(500),ws_no varchar(500),iou_name varchar(500),monthly_count integer,month_year varchar(200),project_no varchar(200),project_name varchar(500),project_id varchar(200),project_status varchar(200),completion_date varchar(200),start_date varchar(200))");
    conn.commit();
    System.out.println("Table Created!!");
}
Console Message: In try
log4j:WARN No appenders could be found for logger (org.apache.hadoop.conf.Configuration.deprecation).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.MasterNotRunningException: Can't get connection to ZooKeeper: KeeperErrorCode = OperationTimeout
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:890)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1224)
at org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:113)
at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1937)
at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:751)
at org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:186)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:314)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:306)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:304)
at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1363)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1911)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1880)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1880)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:180)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:132)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:151)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at PhoenixConnection.getConnection(PhoenixConnection.java:38)
at PhoenixConnection.main(PhoenixConnection.java:17)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.MasterNotRunningException: Can't get connection to ZooKeeper: KeeperErrorCode = OperationTimeout
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1698)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(HConnectionManager.java:1724)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getKeepAliveMasterService(HConnectionManager.java:1931)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHTableDescriptor(HConnectionManager.java:2732)
at org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:426)
at org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:431)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:824)
... 21 more
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: Can't get connection to ZooKeeper: KeeperErrorCode = OperationTimeout
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.checkIfBaseNodeAvailable(HConnectionManager.java:934)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.access$600(HConnectionManager.java:597)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(HConnectionManager.java:1624)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$StubMaker.makeStub(HConnectionManager.java:1670)
... 27 more
Caused by: org.apache.zookeeper.KeeperException$OperationTimeoutException: KeeperErrorCode = OperationTimeout
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk(RecoverableZooKeeper.java:145)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:222)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:481)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.checkIfBaseNodeAvailable(HConnectionManager.java:923)
... 30 more
Labels:
- Apache Phoenix
11-21-2016
06:44 AM
This issue is solved now. Thanks.
11-15-2016
11:15 AM
@Josh Elser Please guide here
11-15-2016
11:14 AM
Hi @Raja Ray, we are also getting the same issue. When I try to fix the HBase issue using -fix, we get the error "ERROR: java.io.IOException: Table Namespace Manager not ready yet, try again later" while creating an HBase table.
11-11-2016
01:50 PM
I am looking for an easy explanation of the MapReduce phases, from InputSplit to Reducer:
- the role of InputSplit and RecordReader in the Map phase
- when the Shuffle/Sort phase runs
- the Partition phase
- how the data reaches the Reducer
Labels:
- Apache Hadoop
10-20-2016
07:24 PM
As both data sets reside on HDFS, which is better in terms of storage, memory, etc.?
Labels:
- Apache HBase
- Apache Hive
10-20-2016
07:21 PM
1 Kudo
Can anyone help me write a Sqoop script with the Java API?
Labels:
- Apache Sqoop
10-20-2016
07:14 PM
2 Kudos
Can anyone clarify this: is Apache Cassandra a key-value store or a column-oriented (tabular) database management system?