Member since: 04-04-2016
Posts: 41
Kudos Received: 6
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 303 | 04-28-2016 01:28 AM
 | 378 | 04-05-2016 05:06 PM
10-03-2017
01:06 AM
I am trying to write a Hive UDF that connects to an HBase table, but it fails with the security exception below:

javax.security.auth.login.LoginException: Unable to obtain password from user

Here is the code I am trying:

config = HBaseConfiguration.create();
config.set("hadoop.security.authentication", "Kerberos");
config.set("hbase.security.authentication", "kerberos");
config.addResource("src/main/resources/hbase-site.xml");
// Point to the krb5.conf file.
System.setProperty("java.security.krb5.conf", "src/main/resources/krb5.conf");
System.setProperty("sun.security.krb5.debug", "true");
UserGroupInformation.setConfiguration(config);
UserGroupInformation.loginUserFromKeytab("pricipal", "keytab");
connection = ConnectionFactory.createConnection(config);
Can someone please share your thoughts and help me fix this issue?
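For reference, a minimal sketch of the same login using absolute, cluster-local paths. "Unable to obtain password from user" usually means the keytab is unreadable at the given path or the principal has no matching key in it, and paths like src/main/resources/... exist only on the build machine, not where the UDF actually runs. The principal and keytab path below are hypothetical placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.security.UserGroupInformation;

Configuration config = HBaseConfiguration.create();
config.set("hadoop.security.authentication", "kerberos");
config.set("hbase.security.authentication", "kerberos");

UserGroupInformation.setConfiguration(config);
// Hypothetical principal/keytab; both must exist on every node running the UDF.
UserGroupInformation.loginUserFromKeytab(
        "appuser@EXAMPLE.COM", "/etc/security/keytabs/appuser.keytab");
System.out.println("Logged in as: " + UserGroupInformation.getLoginUser());

Connection connection = ConnectionFactory.createConnection(config);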
Labels:
- Apache Hadoop
- Apache HBase
- Apache Hive
09-29-2016
09:50 AM
Hi @Predrag Minovic, thanks a lot for the response. This works! However, it only works when the input has two columns; it fails when there are more columns.
09-17-2016
06:40 AM
I have built a Storm topology that consumes data from Kafka and writes it to HDFS. With the Storm and Kafka dependencies below:

<storm.version>0.10.0.2.3.4.0-3485</storm.version>
<kafka.version>0.8.2.1</kafka.version>

the topology fails with this exception:

java.lang.NoSuchMethodError: kafka.javaapi.consumer.SimpleConsumer.<init>(Ljava/lang/String;IIILjava/lang/String;Ljava/lang/String;)V
    at storm.kafka.DynamicPartitionConnections.register(DynamicPartitionConnections.java:60)
    at storm.kafka.PartitionManager.<init>(PartitionManager.java:66)
    at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98)
    at storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69)
    at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:138)
    at backtype.storm.daemon.executor$fn__7098$fn__7113$fn__7142.invoke(executor.clj:596)
    at backtype.storm.util$async_loop$fn__543.invoke(util.clj:475)
    at clojure.lang.AFn.run(AFn.java:22)
    at java.lang.Thread.run(Thread.java:745)

But with the dependencies below, the topology runs fine, consuming messages and writing them to HDFS without any issues:

<storm.version>0.9.3.2.2.4.0-2633</storm.version>
<kafka.version>0.8.2.1</kafka.version>

Can someone please help me understand what is causing the issue and how to fix it?
Labels:
- Apache Hadoop
- Apache Storm
08-01-2016
09:05 PM
Thanks for your response and time @mqureshi. I tried ^^ but it is still not working. Also, sorry, my earlier comment had a typo. Here is what I am trying:

create external table <table_name> (
  col1 string,
  col2 string
)
ROW FORMAT SERDE "org.apache.hadoop.hive.contrib.serde2.RegexSerDe"
WITH SERDEPROPERTIES (
  "input.regex" = "^^"
)
STORED AS TEXTFILE
LOCATION "<hdfs_path>";
08-01-2016
06:52 PM
Thanks for the response @mqureshi. I tried the two patterns below, but neither worked:

"input.regex" = "\\^\^\\^\\^\\^\\^\\^\\^\\^\\^"
"input.regex" = "(\\^\^\\^\\^\\^\\^\\^\\^\\^\\^)"

Any other thoughts I can try?
08-01-2016
05:35 PM
Can someone please help me create a regex pattern for a Hive table using the RegexSerDe? I want the table to use the pattern ^^^^^^^^^^ (10 caret characters) as its delimiter, but I am not sure what the input.regex for this should be. Please help. Thanks a lot in advance.
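A minimal sketch of the regex logic, assuming two columns: RegexSerDe needs one capturing group per table column, with the pattern matching the whole line, so ten literal carets between two groups should work. The sample input line and class name below are hypothetical; in the Hive DDL the same pattern would be written with the backslashes doubled, e.g. "input.regex" = "([^\\^]*)\\^{10}([^\\^]*)".

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CaretDelimDemo {
    public static void main(String[] args) {
        // One capturing group per column; \^{10} matches exactly ten '^' chars.
        Pattern p = Pattern.compile("([^\\^]*)\\^{10}([^\\^]*)");
        Matcher m = p.matcher("value1^^^^^^^^^^value2"); // hypothetical input line
        if (m.matches()) {
            System.out.println("col1=" + m.group(1) + ", col2=" + m.group(2));
        }
    }
}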
Labels:
- Apache Hadoop
- Apache Hive
05-16-2016
08:16 PM
@pshah okay, I will check by logging in. Anyway, I decompiled the jars in my editor and verified the code, so I am good. Do we have any sample application that edits the stored offsets in ZooKeeper?
05-16-2016
05:54 PM
I have a GitHub account, but I am not really active on it. Please let me know what information you are looking for. @pshah
05-16-2016
05:52 PM
With the help of the classes you mentioned, I am able to check the committed Kafka offsets. I am now trying to write an application to edit the stored offsets. Thanks a lot for your help! @pshah
05-12-2016
06:39 PM
Thanks very much @pshah, I will check those classes!
Unfortunately, the links you shared take me to "page not found", but I can still check the classes by decompiling my Storm version's jars.
05-12-2016
04:00 PM
Thanks for the response @pshah. I am using Storm version 0.9.3.2.2.4.0-2633; can you please tell me which classes in this version handle fetching offsets from, and writing them to, ZooKeeper? Thanks very much. My whole intention is to see the committed offsets and edit them as necessary. Also, in which version of Storm can I tell Storm not to use ZooKeeper for offsets?
05-11-2016
10:01 PM
Hi, when consuming a Kafka topic through the storm-kafka integration, is it possible to see the offsets committed to ZooKeeper? And is it possible to edit the committed offsets so that the topic's messages are re-consumed?
Also, can someone help me identify which class of storm-kafka actually writes/commits the offsets to ZooKeeper, and which class fetches the last committed offset when a topology is re-run with the same client id?
Thanks a lot.
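A minimal sketch of inspecting (and rewinding) a committed offset directly in ZooKeeper, assuming storm-kafka's usual layout of one JSON znode per partition under <zkRoot>/<spout id>. The host, root path, and spout id below are hypothetical placeholders; check your SpoutConfig for the actual values.

import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class OffsetPeek {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zkhost:2181", 30000, event -> { });
        // Hypothetical path: <zkRoot>/<spout id>/partition_<N>
        String node = "/storm-kafka/my-spout-id/partition_0";
        Stat stat = new Stat();
        byte[] data = zk.getData(node, false, stat);
        System.out.println(new String(data, "UTF-8")); // JSON containing "offset"
        // To rewind, write the JSON back with an edited "offset" value:
        // zk.setData(node, editedJson.getBytes("UTF-8"), stat.getVersion());
        zk.close();
    }
}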
Labels:
- Apache Kafka
- Apache Storm
04-28-2016
01:28 AM
Following the link https://issues.apache.org/jira/browse/STORM-1521, I was able to fix the authentication problem I observed.
04-28-2016
01:26 AM
Thanks a lot, @Josh Elser. That solved my issue.
04-27-2016
10:28 PM
Can you please share the git link for the changes that were made to fix that issue? I couldn't locate the exact changes from the link you shared. @Josh Elser. Thanks a lot.
04-27-2016
10:09 PM
Hi, I have written a custom bolt extending BaseBasicBolt (not a rich bolt), with custom code that connects to HBase and writes the input stream into an HBase table. To do this, I create the HBase connections in my prepare method, as below, and close them in cleanup().

public void prepare(Map stormConf, TopologyContext context) {
    try {
        fmt = DateTimeFormat.forPattern("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
        config = HBaseConfiguration.create();
        config.set("hadoop.security.authentication", "Kerberos");
        config.set("hbase.security.authentication", "kerberos");
        config.addResource("hdfs-site.xml");
        config.addResource("core-site.xml");
        config.addResource("hbase-site.xml");
        UserGroupInformation.setConfiguration(config);
        UserGroupInformation.loginUserFromKeytab(KEYTAB_PRINC, KEYTAB_PATH);
        LOG.info("HBase Cnxn Done.");
        connection = ConnectionFactory.createConnection(config);
        table = connection.getTable(TableName.valueOf(HBASE_TABLE_NAME));
    } catch (Exception e) {
        System.out.println("Exception occurred during HBase connection preparation: " + e.getMessage());
        e.printStackTrace();
    }
}

The issue: when I run my Storm topology with 1 executor and 1 task, it runs fine without any issues. But when I increase the parallelism, say to 15 executors and 60 tasks, the bolt that makes the HBase connections fails with authentication errors (GSS exceptions). Can someone please share your knowledge/insight into what is going wrong with my code? Thanks a lot.
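A minimal sketch of one possible fix, assuming the GSS failures come from many executor threads in the same worker JVM racing on UserGroupInformation's shared static login state: hold a per-bolt UGI from loginUserFromKeytabAndReturnUGI() and wrap the HBase calls in doAs(). KEYTAB_PRINC, KEYTAB_PATH, connection, table, and LOG are the same names used above; ugi is a new field on the bolt.

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.security.UserGroupInformation;

private transient UserGroupInformation ugi; // new bolt field

public void prepare(Map stormConf, TopologyContext context) {
    try {
        final Configuration config = HBaseConfiguration.create();
        UserGroupInformation.setConfiguration(config);
        // Keep an isolated UGI instead of mutating the JVM-wide login user.
        ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
                KEYTAB_PRINC, KEYTAB_PATH);
        connection = ugi.doAs(
                (PrivilegedExceptionAction<Connection>) () ->
                        ConnectionFactory.createConnection(config));
        table = connection.getTable(TableName.valueOf(HBASE_TABLE_NAME));
    } catch (Exception e) {
        LOG.error("HBase connection setup failed", e);
    }
}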
Labels:
- Apache Hadoop
- Apache HBase
- Apache Storm
04-05-2016
05:06 PM
Found the reason for the wrong summary in the UI. The setting below in my code caused the extra executors/tasks shown in the topology summary:

configured.setNumAckers(30);

Once I commented out that line, the topology summary showed the correct count.
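A minimal sketch of why the numbers differ, assuming default acking: acker bolts are implicit executors, so setNumAckers(30) adds 30 executors/tasks that show up only in the summary, not in the per-spout/bolt tables. The topology name and builder below are hypothetical.

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;

TopologyBuilder builder = new TopologyBuilder();
// ... register spouts and bolts on builder ...
Config conf = new Config();
conf.setNumAckers(1); // each acker is one extra executor+task in the summary
StormSubmitter.submitTopology("my-topology", conf, builder.createTopology());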
04-04-2016
10:08 PM
1 Kudo
The topology summary for my topology shows more executors and more tasks than it is actually using.
Please refer to the attached screenshot and help me fix the issue.
I am not sure what went wrong: why does the topology summary show 33 executors and 33 tasks when I configured the code to use only 1 executor and 1 task? Also, only the summary part of the topology page shows the extra executors/tasks; the per-spout/bolt reporting correctly shows 1 executor/task for each spout and bolt.
Can someone share your thoughts on why I am seeing wrong information in the Storm UI? Thanks!
Labels:
- Apache Hadoop
- Apache Storm
04-04-2016
08:54 PM
I am unable to build my Maven application; I am getting the error below:
Could not resolve dependencies for project : The following artifacts could not be resolved: org.apache.storm:storm-core:jar:0.9.3.2.2.4.0-2633, org.apache.storm:storm-hdfs:jar:0.9.3.2.2.4.0-2633, org.apache.storm:storm-kafka:jar:0.9.3.2.2.4.0-2633, org.apache.storm:storm-hbase:jar:0.9.3.2.2.4.0-2633, org.apache.zookeeper:zookeeper:jar:3.4.6.2.2.4.0-2633, org.apache.kafka:kafka_2.10:jar:0.8.1.2.2.4.0-2633: Could not find artifact org.apache.storm:storm-core:jar:0.9.3.2.2.4.0-2633 in HDPReleases (http://repo.hortonworks.com/content/repositories/releases/) -> [Help 1]
Labels:
- Apache Hadoop
- Apache Storm