Member since: 07-25-2018
Posts: 174
Kudos Received: 29
Solutions: 5
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 5414 | 03-19-2020 03:18 AM |
 | 3457 | 01-31-2020 01:08 AM |
 | 1338 | 01-30-2020 05:45 AM |
 | 2595 | 06-01-2016 12:56 PM |
 | 3075 | 05-23-2016 08:46 AM |
09-20-2022 07:30 PM
1 Kudo
@Manus Use the Ranger REST API below to fetch the audit logs:

curl -v --insecure --anyauth --user username:password -H "Accept: application/json" -H "Content-Type: application/json" -X GET https://RANGER_HOST:6182/service/assets/accessAudit
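If you'd rather make the same call from code, here is a minimal Java 11 sketch of that request. It is a sketch only: RANGER_HOST, username, and password are placeholders, and unlike curl's --insecure it assumes the Ranger TLS certificate is already trusted by the JVM.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class RangerAuditFetch {
    public static void main(String[] args) throws Exception {
        String user = "username";   // placeholder credentials
        String pass = "password";
        String auth = Base64.getEncoder().encodeToString((user + ":" + pass).getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://RANGER_HOST:6182/service/assets/accessAudit"))
                .header("Accept", "application/json")
                .header("Authorization", "Basic " + auth)   // basic auth, as in the curl example
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());   // raw JSON audit records
    }
}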
04-20-2021 05:06 AM
Here is a write-up by David W. Streever: https://www.streever.com/post/2019/filter-hive-compactions/ These are not official steps. Any 'extras' you build against the metastore DB may break with the next release; querying the metastore directly is not a supported way of accessing metadata.
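For comparison, Hive's own SHOW COMPACTIONS statement is the supported way to inspect compaction state without touching the metastore DB. A minimal JDBC sketch, assuming a reachable HiveServer2 and the hive-jdbc driver on the classpath (host, port, and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class ShowCompactions {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://HIVESERVER2_HOST:10000/default", "username", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW COMPACTIONS")) {
            ResultSetMetaData meta = rs.getMetaData();
            while (rs.next()) {
                // Print every column of each compaction row (id, db, table, type, state, ...)
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= meta.getColumnCount(); i++) {
                    row.append(rs.getString(i)).append('\t');
                }
                System.out.println(row);
            }
        }
    }
}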
02-24-2021 06:50 AM
"If your data has a range of 0 to 100000 then RMSE value of 3000 is small, but if the range goes from 0 to 1." Range going from 0 to 1 means?
01-03-2021 07:36 AM
Here's a sample problem and a custom Accumulator solution in Java; you can use it as a template for your own use case.

Input: HashMap<String, String>
Output: HashMap<String, Integer> containing the count of each key across the input HashMaps.

Example input HashMaps:
1. {"key1", "Value1"}, {"key2", "Value2"}
2. {"key1", "Value2"}

Output: {"key1", 2}, {"key2", 1} // key1 appears twice

Code:

import org.apache.spark.util.AccumulatorV2;
import java.util.HashMap;

public class CustomAccumulator extends AccumulatorV2<HashMap<String, String>, HashMap<String, Integer>> {

    private HashMap<String, Integer> outputHashMap;

    public CustomAccumulator() {
        this.outputHashMap = new HashMap<>();
    }

    // True when no keys have been counted yet
    @Override
    public boolean isZero() {
        return outputHashMap.isEmpty();
    }

    // Spark copies the accumulator for each task; seed the copy with the current counts
    @Override
    public AccumulatorV2<HashMap<String, String>, HashMap<String, Integer>> copy() {
        CustomAccumulator customAccumulatorCopy = new CustomAccumulator();
        customAccumulatorCopy.merge(this);
        return customAccumulatorCopy;
    }

    @Override
    public void reset() {
        this.outputHashMap = new HashMap<>();
    }

    // Increment the count for every key in the incoming map; the values are ignored
    @Override
    public void add(HashMap<String, String> v) {
        v.forEach((key, value) -> this.outputHashMap.merge(key, 1, Integer::sum));
    }

    // Combine per-task counts into this accumulator
    @Override
    public void merge(AccumulatorV2<HashMap<String, String>, HashMap<String, Integer>> other) {
        other.value().forEach((key, count) -> this.outputHashMap.merge(key, count, Integer::sum));
    }

    @Override
    public HashMap<String, Integer> value() {
        return this.outputHashMap;
    }
}
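And a minimal driver sketch showing one way to wire the accumulator up (a sketch only: it assumes a local SparkSession, and the class and app names are illustrative):

import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;
import java.util.Arrays;
import java.util.HashMap;

public class CustomAccumulatorDemo {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("custom-accumulator-demo").master("local[*]").getOrCreate();
        JavaSparkContext jsc = new JavaSparkContext(spark.sparkContext());

        CustomAccumulator keyCounts = new CustomAccumulator();
        // Register with the SparkContext so per-task copies are merged back automatically
        jsc.sc().register(keyCounts, "keyCounts");

        HashMap<String, String> m1 = new HashMap<>();
        m1.put("key1", "Value1");
        m1.put("key2", "Value2");
        HashMap<String, String> m2 = new HashMap<>();
        m2.put("key1", "Value2");

        jsc.parallelize(Arrays.asList(m1, m2)).foreach(keyCounts::add);
        System.out.println(keyCounts.value());   // {key1=2, key2=1}

        spark.stop();
    }
}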
03-19-2020 03:18 AM
Initially, my query was not on a single line, which caused a syntax error in Beeline itself. I converted the complete query into one line and it is working now.
02-13-2020 03:06 AM
Accidentally, I marked this answer as resolved. @rajkumar_singh I am getting the output below after executing the "hdfs groups <username>" command: <username>@<kerberos principal> : domain users dev_sudo As I am not very familiar with the cluster configuration, could you please help me understand the output of this command?
02-04-2020 09:18 AM
Hi all, the above solution fails in one scenario. Scenario: if multiple flow files are processed at the same time and land in the NiFi queue that follows the update-query processor (i.e., the PutHiveQL that increments processed_file_cnt by one for every flow file), then the next flow may be triggered multiple times, which is wrong, because we first select processed_file_cnt and only then compare it with input_file_cnt.
01-31-2020 08:49 PM
Hi, my assumption was wrong; the PutSQL processor does execute the update query once per flow file.
01-30-2020 05:45 AM
Thanks for reaching out to me. It was my mistake; I was connecting the wrong output port to pg2_in. It's resolved now.
01-07-2017 03:22 PM
Thank you, Rguruvannagari. This solution really worked for me.