Member since
04-08-2019
115
Posts
97
Kudos Received
9
Solutions
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 4174 | 04-16-2016 03:39 AM |
 | 2238 | 04-14-2016 11:13 AM |
 | 3886 | 04-13-2016 12:31 PM |
 | 4891 | 04-08-2016 03:47 AM |
 | 3882 | 04-07-2016 05:05 PM |
10-07-2015
08:27 PM
Getting the following error while connecting to HBase:

```java
HConnection connection = HConnectionManager.createConnection(conf);
```

Error in the log file:

```
2015-10-07 20:19:33 o.a.z.ClientCnxn [INFO] Session establishment complete on server localhost.localdomain/127.0.0.1:2181, sessionid = 0x15043a5de090013, negotiated timeout = 4000
2015-10-07 20:19:33 STDIO [ERROR] java.io.IOException: java.lang.reflect.InvocationTargetException
2015-10-07 20:19:33 STDIO [ERROR] at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:426)
2015-10-07 20:19:33 STDIO [ERROR] at org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:319)
2015-10-07 20:19:33 STDIO [ERROR] at org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:292)
2015-10-07 20:19:33 STDIO [ERROR] at com.opensoc.enrichment.adapters.write.dhcp.DynamicLookupWriterBolt.initializeAdapter(DynamicLookupWriterBolt.java:64)
2015-10-07 20:19:33 STDIO [ERROR] at com.opensoc.enrichment.adapters.write.dhcp.DynamicLookupWriterBolt.execute(DynamicLookupWriterBolt.java:115)
2015-10-07 20:19:33 STDIO [ERROR] at backtype.storm.topology.BasicBoltExecutor.execute(BasicBoltExecutor.java:50)
2015-10-07 20:19:33 STDIO [ERROR] at backtype.storm.daemon.executor$fn__5697$tuple_action_fn__5699.invoke(executor.clj:659)
2015-10-07 20:19:33 STDIO [ERROR] at backtype.storm.daemon.executor$mk_task_receiver$fn__5620.invoke(executor.clj:415)
2015-10-07 20:19:33 STDIO [ERROR] at backtype.storm.disruptor$clojure_handler$reify__1741.onEvent(disruptor.clj:58)
2015-10-07 20:19:33 STDIO [ERROR] at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:125)
2015-10-07 20:19:33 STDIO [ERROR] at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99)
2015-10-07 20:19:33 STDIO [ERROR] at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80)
2015-10-07 20:19:33 STDIO [ERROR] at backtype.storm.daemon.executor$fn__5697$fn__5710$fn__5761.invoke(executor.clj:794)
2015-10-07 20:19:33 STDIO [ERROR] at backtype.storm.util$async_loop$fn__452.invoke(util.clj:465)
2015-10-07 20:19:33 STDIO [ERROR] at clojure.lang.AFn.run(AFn.java:24)
2015-10-07 20:19:33 STDIO [ERROR] at java.lang.Thread.run(Thread.java:745)
2015-10-07 20:19:33 STDIO [ERROR] Caused by: java.lang.reflect.InvocationTargetException
2015-10-07 20:19:33 STDIO [ERROR] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
2015-10-07 20:19:33 STDIO [ERROR] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
2015-10-07 20:19:33 STDIO [ERROR] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
2015-10-07 20:19:33 STDIO [ERROR] at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
2015-10-07 20:19:33 STDIO [ERROR] at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:424)
2015-10-07 20:19:33 STDIO [ERROR] ... 15 more
2015-10-07 20:19:33 STDIO [ERROR] Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.protobuf.ProtobufUtil
2015-10-07 20:19:33 STDIO [ERROR] at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
2015-10-07 20:19:33 STDIO [ERROR] at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
2015-10-07 20:19:33 STDIO [ERROR] at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:106)
2015-10-07 20:19:33 STDIO [ERROR] at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:858)
2015-10-07 20:19:33 STDIO [ERROR] at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:662)
2015-10-07 20:19:33 STDIO [ERROR] ... 20 more
```
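The root cause here is `NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.protobuf.ProtobufUtil`, which in a Storm worker usually points to a missing or conflicting protobuf/HBase jar on the worker classpath rather than a connection problem. As a hedged diagnostic sketch (the `WhichJar` helper is hypothetical, not from the original post), you can ask the JVM where it resolves a class from:

```java
// Hypothetical helper: report which jar (if any) a class resolves from,
// to spot missing or duplicated dependencies on the classpath.
public class WhichJar {
    public static String locate(String className) {
        try {
            Class<?> c = Class.forName(className);
            java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
            // Bootstrap-loaded classes (java.*) report no code source.
            return src == null ? "bootstrap classloader" : src.getLocation().toString();
        } catch (ClassNotFoundException | LinkageError e) {
            return "NOT FOUND: " + e;
        }
    }

    public static void main(String[] args) {
        // Run inside the Storm worker's classpath: a "NOT FOUND" result, or two
        // different jars for the HBase and protobuf classes, signals the clash.
        System.out.println(locate("org.apache.hadoop.hbase.protobuf.ProtobufUtil"));
        System.out.println(locate("com.google.protobuf.Message"));
    }
}
```

If the class resolves from an unexpected jar, aligning the topology's bundled HBase/protobuf versions with the cluster's is the usual fix.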
Labels:
- Apache HBase
10-07-2015
03:00 PM
Are you talking about the HDFS NFS Gateway?
10-07-2015
01:12 PM
I want to see how many times Kafka died in the last 10 days. Is there any way to view this from Ambari?
Labels:
- Apache Ambari
10-07-2015
10:11 AM
1 Kudo
@bsaini@hortonworks.com What I usually do in this case is open the data file (CSV), copy the problematic row into an Excel sheet alongside a row that inserts without any issues, and compare the two rows column by column against the datatypes. Most likely either a value doesn't match its column's datatype, or you are inserting a null/no value into a primary key column.
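The column-by-column comparison above can also be scripted. A minimal sketch (the `RowCompare` class, column types, and date format are illustrative assumptions, not part of the original answer):

```java
// Hypothetical sketch: flag columns of a failing CSV row whose values
// don't parse as the declared datatype (the usual insert-failure causes).
public class RowCompare {
    public static String check(String[] types, String[] row) {
        StringBuilder report = new StringBuilder();
        for (int i = 0; i < types.length; i++) {
            String v = i < row.length ? row[i].trim() : "";
            boolean ok;
            switch (types[i]) {
                case "int":
                    ok = v.matches("-?\\d+");
                    break;
                case "date": // assuming yyyy-MM-dd; adjust to your schema
                    ok = v.matches("\\d{4}-\\d{2}-\\d{2}");
                    break;
                default: // free-form string; empty is still suspicious for key columns
                    ok = !v.isEmpty();
            }
            if (!ok) report.append("col ").append(i).append(" bad value: '").append(v).append("' ");
        }
        return report.length() == 0 ? "OK" : report.toString().trim();
    }

    public static void main(String[] args) {
        String[] types = {"int", "string", "date"};
        System.out.println(check(types, "42,alice,2015-10-07".split(",")));   // OK
        System.out.println(check(types, ",bob,07/10/2015".split(",")));       // empty key + wrong date format
    }
}
```

Running the good row and the bad row through the same check surfaces the mismatched column immediately instead of eyeballing it in Excel.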
10-06-2015
03:39 PM
Sqoop2 can move data to Kafka, but we are not there yet in supporting it.
10-06-2015
03:38 PM
@orenault@hortonworks.com Yes, but it doesn't appear to provide any open-source options.
10-06-2015
03:37 PM
Thanks @agrande@hortonworks.com. Yes, the source data is just event logs. The customer does not want to use any paid software (e.g. GG).
10-06-2015
09:54 AM
1 Kudo
I want to ingest data from an Oracle database to Kafka using Logstash. Wondering if NiFi can perform the ingestion from a relational database to Kafka?
10-06-2015
09:47 AM
The customer wants to ingest data from an Oracle database to Kafka. It appears that Sqoop2 supports ingesting data to Kafka, but since we don't have Sqoop2 support yet, the customer is looking at using Logstash to ingest the data. Are there any better options available?
Labels:
- Apache Kafka
- Apache Sqoop
10-01-2015
10:17 PM
1 Kudo
@schintalapani@hortonworks.com The documentation says "The leader handles all read and write requests for the partition while the followers passively replicate the leader," whereas we are talking about "the Kafka client does a round-robin pick of each node, writes to that topic partition, and moves on to the next one." If only the leader of a partition can handle reads and writes, how can the Kafka client perform round robin over all the partitions? Aren't these mutually exclusive?
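The two statements are compatible because they operate at different levels: round robin chooses the *partition number*, and the leader constraint then decides *which broker* serves that partition's write. A minimal illustrative sketch (not Kafka's actual client code; class and broker names are assumptions):

```java
import java.util.Map;

// Illustrative sketch: the producer round-robins over partition indices,
// and each chosen partition's write is routed to that partition's leader.
public class RoundRobinRouter {
    private final Map<Integer, String> leaderByPartition; // partition -> leader broker
    private int next = 0;

    public RoundRobinRouter(Map<Integer, String> leaderByPartition) {
        this.leaderByPartition = leaderByPartition;
    }

    public String route() {
        int partition = next % leaderByPartition.size(); // round-robin partition pick
        next++;
        // The leader constraint applies here: only this partition's leader takes the write.
        return "partition " + partition + " -> leader " + leaderByPartition.get(partition);
    }

    public static void main(String[] args) {
        Map<Integer, String> leaders = Map.of(0, "broker-1", 1, "broker-2", 2, "broker-1");
        RoundRobinRouter r = new RoundRobinRouter(leaders);
        for (int i = 0; i < 4; i++) System.out.println(r.route());
    }
}
```

So the client cycles across all partitions, yet every individual write still lands on that partition's leader; the followers only replicate.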