Member since
04-25-2016
579
Posts
609
Kudos Received
111
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2939 | 02-12-2020 03:17 PM |
| | 2140 | 08-10-2017 09:42 AM |
| | 12496 | 07-28-2017 03:57 AM |
| | 3444 | 07-19-2017 02:43 AM |
| | 2535 | 07-13-2017 11:42 AM |
06-24-2016
10:14 AM
2 Kudos
Do you have a default queue set up in the YARN scheduler? Try submitting to another available queue using `set mapred.job.queue.name=<queue_name>` and see if the issue persists.
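For example, from a Hive session you can redirect the job to a different queue before running the query (the queue name `etl` below is just a placeholder; use a queue that actually exists in your scheduler configuration):

```sql
-- Route subsequent MapReduce jobs from this session to the "etl" queue
SET mapred.job.queue.name=etl;
-- On newer Hadoop releases the equivalent (non-deprecated) property is:
SET mapreduce.job.queuename=etl;
```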
06-24-2016
10:06 AM
You are running the topology in local-cluster mode. It is better to run it using `StormSubmitter`, which creates the topology and submits it to the remote cluster: `StormSubmitter.submitTopology("topology_name", config, builder.createTopology());` To understand how LocalCluster and StormSubmitter work, refer to the main method here: https://github.com/rajkrrsingh/StormSampleApp/blob/master/src/main/java/com/rajkrrsingh/storm/SampleTopology.java
06-24-2016
07:52 AM
1 Kudo
Can you brief me about your use case here? In Spark, `awaitTermination` blocks the main thread until the streaming context is stopped, while allowing the executors to complete their processing; in a Spark application you call `awaitTermination` so that the main thread does not exit before the executors are done. In Storm, the equivalent is killing the topology, which you can do from the UI as well as the CLI.
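The CLI form looks like this (`my_topology` is a placeholder for whatever name you passed to `StormSubmitter.submitTopology`):

```shell
# Deactivate the topology, wait 30 seconds so in-flight tuples can drain,
# then kill it. Without -w, Storm waits topology.message.timeout.secs.
storm kill my_topology -w 30
```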
06-21-2016
06:42 AM
`java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago` — it seems you have a connectivity issue from the node to the MySQL server. Can you check whether you are able to run a simple JDBC program that connects to your MySQL server?
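A minimal connectivity check could look like the sketch below. The host, port, database name, and credentials are placeholders; substitute your own, and make sure the MySQL Connector/J jar is on the classpath when you run it from the affected node.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class MySqlConnectivityCheck {
    public static void main(String[] args) {
        // Placeholder URL and credentials -- replace with your own values.
        String url = "jdbc:mysql://mysql-host:3306/testdb";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Connection OK: "
                    + conn.getMetaData().getDatabaseProductVersion());
        } catch (SQLException e) {
            // A CommunicationsException here points at a network/firewall
            // problem between this node and MySQL, not at your job itself.
            System.out.println("Connection failed: " + e.getMessage());
        }
    }
}
```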
06-19-2016
03:02 PM
Normally mappers don't fail with OOM, and 8192M is a generous heap. I suspect you have some very large records in the CSV, or are you doing some memory-intensive operation inside the mapper? Could you please share the task log for this attempt: attempt_1466342436828_0001_m_000008_2
06-19-2016
02:22 PM
1 Kudo
It looks like your `mapred.child.java.opts` value is insufficient to run the job; try running the job again after increasing it.
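One way to raise the heap for a single run is to pass the property on the command line (the 2048m value, `my-job.jar`, and `MyDriver` are placeholders; this assumes your driver parses generic options via ToolRunner):

```shell
# Give each map/reduce child JVM a 2 GB heap for this job only
hadoop jar my-job.jar MyDriver -Dmapred.child.java.opts=-Xmx2048m input output
```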
06-18-2016
03:15 PM
@Jan Horton I am not sure about other ready-made solutions, but mostly people rely on Hue and WebHDFS to achieve this. I want to share my views on your problem statement, where you want a REST web service to query HBase tables managed by Hive: for the sake of simplicity, you can write a Hive JDBC client and expose it as a REST service. Here is a sample program that queries Hive over JDBC: https://github.com/rajkrrsingh/HiveServer2JDBCSample/blob/master/src/main/java/HiveJdbcClient.java To return the result set in JSON format, your REST service can do something like this:

```java
public static List<JSONObject> getFormattedResult(ResultSet res)
        throws SQLException, JSONException {
    List<JSONObject> resList = new ArrayList<JSONObject>();
    // get column names
    ResultSetMetaData rsMeta = res.getMetaData();
    int columnCnt = rsMeta.getColumnCount();
    List<String> columnNames = new ArrayList<String>();
    for (int i = 1; i <= columnCnt; i++) {
        columnNames.add(rsMeta.getColumnName(i).toUpperCase());
    }
    // convert each row into a human-readable JSON object
    while (res.next()) {
        JSONObject obj = new JSONObject();
        for (int i = 1; i <= columnCnt; i++) {
            obj.put(columnNames.get(i - 1), res.getString(i));
        }
        resList.add(obj);
    }
    return resList;
}
```

Hope it is much clearer and simpler now.
06-18-2016
02:03 PM
@Jan Horton The Ambari view and Hue do similar things, but they differ in implementation: Hue uses the WebHCat and WebHDFS REST APIs to query Hive, while the Ambari view does it through a Thrift client. If you follow the Ambari view code base and the sample program given here, you can adapt them to expose your own REST service. On your last comment ("I have a HBase table where I want to read data from different clients through http and it seems to be not possible?!"): it is possible through the HBase REST API; you can read further here: https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/rest/package-summary.html#operation_query_schema
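As a quick illustration, once the HBase REST server is running, any HTTP client can fetch a row (hostname, port, table, and row key below are placeholders; 8080 is a commonly used port for the REST server):

```shell
# Fetch row "row1" from table "mytable" as JSON over the HBase REST API
curl -H "Accept: application/json" http://hbase-rest-host:8080/mytable/row1
```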
06-18-2016
11:41 AM
@Jan Horton HUE takes advantage of WebHCat and WebHDFS to achieve the desired outcome. The Ambari Hive view takes advantage of the Thrift service to achieve the same; you can also use the Thrift service to implement your web service: https://github.com/apache/ambari/tree/trunk/contrib/views/hive/src/main/java/org/apache/ambari/view/hive One more approach I can suggest is to write a simple Hive Thrift client that sends a request (TFetchResultsReq) to HiveServer2 and gets the query result back (TFetchResultsResp); please find a sample program here: https://gist.github.com/rajkrrsingh/4ab7153ca90969dcad21
06-17-2016
06:41 PM
I don't think it is related to metadata caching; you are querying ORC files, which live on HDFS, not in the metastore. Can you try querying only the specific bucket whose file contains this record? The second thing to verify is whether the delta files have been merged into the base file in that partition. If not, try running a manual compaction and see if it helps.
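A manual compaction can be requested from Hive like this (table and partition names are placeholders for your own):

```sql
-- See queued, running, and completed compactions
SHOW COMPACTIONS;
-- Request a major compaction, which merges all delta files into a new base file
ALTER TABLE my_acid_table PARTITION (ds='2016-06-17') COMPACT 'major';
```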