Member since
01-09-2019
401
Posts
163
Kudos Received
80
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1989 | 06-21-2017 03:53 PM |
| | 3032 | 03-14-2017 01:24 PM |
| | 1937 | 01-25-2017 03:36 PM |
| | 3101 | 12-20-2016 06:19 PM |
| | 1510 | 12-14-2016 05:24 PM |
11-08-2022
10:26 AM
What about CDP?
05-09-2022
06:21 AM
Similar to this, I have a use case to compare our Ansible code with the Ambari configs. The reason we are doing this is that we found several inconsistencies between the Ansible code and the Ambari configs. But comparing the two is a big task: there are many playbooks containing Hadoop configuration, so checking the whole code base is a headache. Is there any other option for doing the comparison?
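One lightweight approach is to export both sides to flat key/value property maps (Ambari's configs can be dumped via its REST API) and diff the maps programmatically instead of reading playbooks by hand. A minimal sketch in Java; the property names and values below are hypothetical examples, not taken from any real cluster:

```java
import java.util.Map;
import java.util.Objects;
import java.util.TreeMap;
import java.util.TreeSet;
import java.util.Set;

public class ConfigDiff {
    // Compare two flat key->value config maps and report every key whose
    // value differs (or that exists on only one side).
    static Map<String, String[]> diff(Map<String, String> ansible,
                                      Map<String, String> ambari) {
        Map<String, String[]> drift = new TreeMap<>();
        Set<String> keys = new TreeSet<>();
        keys.addAll(ansible.keySet());
        keys.addAll(ambari.keySet());
        for (String k : keys) {
            String a = ansible.get(k);
            String b = ambari.get(k);
            if (!Objects.equals(a, b)) {
                drift.put(k, new String[] { a, b });
            }
        }
        return drift;
    }

    public static void main(String[] args) {
        Map<String, String> ansible = Map.of(
            "dfs.replication", "3",
            "dfs.blocksize", "134217728");
        Map<String, String> ambari = Map.of(
            "dfs.replication", "2",
            "dfs.blocksize", "134217728");
        for (Map.Entry<String, String[]> e : diff(ansible, ambari).entrySet()) {
            System.out.println(e.getKey()
                + ": ansible=" + e.getValue()[0]
                + " ambari=" + e.getValue()[1]);
        }
    }
}
```

Once both exports are in this shape, the diff runs in seconds regardless of how many playbooks the values originally came from.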
06-27-2021
05:49 AM
1 Kudo
A high-performance ASN.1 decoder for NiFi: https://github.com/BonyanSystem/ASN1Processor
08-10-2020
06:12 AM
Hi all, is there a specific method to follow for installing Ambari on Python 3? Has anyone installed it on a Python 3 base?
07-07-2020
06:42 AM
I know this is a bit late to post, but I have a web app that scans the table and gets results based on the row key provided in the call, so it needs to support multithreading. Here's a snippet of the scan:

    try (ResultScanner scanner = myTable.getScanner(scan)) {
        for (Result result : scanner) {
            // logic using result.getValue() and result.getRow()
        }
    }

I just saw that https://hbase.apache.org/1.2/devapidocs/org/apache/hadoop/hbase/client/Result.html is one of the classes that is not thread-safe, among others mentioned in that article. Is there an example of a fully thread-safe HBase app that scans results based on a provided row key, or anything similar? I'm looking for an efficient, well-written example I can use for reference. I am now concerned that this piece of code might not yield proper results when I get simultaneous requests.
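On the thread-safety point: in the HBase client API, `Connection` is heavyweight and thread-safe, while `Table`, `ResultScanner`, and `Result` are not. The usual pattern is one shared `Connection` with each request thread obtaining its own short-lived `Table`, so no non-thread-safe object is ever shared. A minimal sketch, assuming an hbase-client dependency and a reachable cluster; the table name is hypothetical:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class SafeScan {
    // Connection is thread-safe and expensive to create:
    // build it once and share it across all request threads.
    private final Connection connection;

    public SafeScan(Configuration conf) throws IOException {
        this.connection = ConnectionFactory.createConnection(conf);
    }

    // Each call creates its own Table and ResultScanner, so
    // concurrent requests never touch the same non-thread-safe object.
    public void scanByRowKeyPrefix(byte[] prefix) throws IOException {
        try (Table table = connection.getTable(TableName.valueOf("my_table"))) {
            Scan scan = new Scan().setRowPrefixFilter(prefix);
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result result : scanner) {
                    // per-row logic using result.getRow() / result.getValue(...)
                    System.out.println(Bytes.toString(result.getRow()));
                }
            }
        }
    }
}
```

With this layout the only shared state is the `Connection`, which the HBase javadoc explicitly documents as safe for concurrent use; everything mutable is confined to the calling thread's stack.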
01-16-2020
12:21 AM
Could you please explain why we need to add an extra -- in the command before --schema?
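For context: in Sqoop, any arguments that appear after a lone `--` are not parsed by Sqoop's generic argument parser but are passed through to the specific connector or tool, and `--schema` is such a connector-specific option (for the SQL Server and PostgreSQL connectors), which is why it needs the extra `--` in front of it. A hedged illustration; the host, credentials, and schema name below are placeholders:

```shell
sqoop import \
  --connect "jdbc:sqlserver://dbhost;databaseName=mydb" \
  --username sqoop --password sqoop \
  --table KNA1 \
  -- --schema dbo
```

Everything before the bare `--` is handled by Sqoop itself; everything after it goes to the SQL Server connector.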
06-21-2017
04:03 PM
1 Kudo
Hi @Sami Ahmad. Normally, master services can be spread across the master nodes to ensure proper resource allocation, depending on the cluster. If you have two datanodes/worker nodes that you do not want to run master services on, then no problem: just allocate the host you want and move on to the next step. In Ambari, you can click the Hosts tab to see which services are installed on which host, but you may need to go through them host by host.
03-23-2017
03:00 PM
If you want to do a one-time import, use the following command. It will use HCatalog to create the table and import the data in ORC format:

    sqoop import --connect jdbc:sqlserver://11.11.111.11;databaseName=dswFICO --driver com.microsoft.sqlserver.jdbc.SQLServerDriver --username sqoop --password sqoop --table KNA1 --hcatalog-database default --hcatalog-table KNA1 --create-hcatalog-table --hcatalog-storage-stanza "stored as orcfile"
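If the load needs to be repeated as new rows arrive rather than done once, Sqoop also has an incremental mode. A hedged sketch, assuming KNA1 has a monotonically increasing key column (hypothetically named ID here); note that Sqoop documents restrictions on combining incremental options with some HCatalog options, so verify against your version before relying on it:

```shell
sqoop import \
  --connect "jdbc:sqlserver://11.11.111.11;databaseName=dswFICO" \
  --driver com.microsoft.sqlserver.jdbc.SQLServerDriver \
  --username sqoop --password sqoop \
  --table KNA1 \
  --incremental append --check-column ID --last-value 0
```

On each subsequent run, `--last-value` is advanced to the highest ID already imported so only new rows are fetched.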
12-27-2016
10:56 AM
2 Kudos
One workaround that I just tested: run beeline with the following queue parameter:

    beeline -u "jdbc:hive2://local:10001/default;transportMode=http;httpPath=cliservice;principal=hive/_HOST@local.COM" -e "SELECT count(*) FROM log;" --hiveconf tez.queue.name=prd_am

This will request that the query be executed in the prd_am queue. If the user is allowed access to that queue in Ranger, it will work fine. I am still looking for a solution that uses the default mapping defined in the YARN Capacity Scheduler configuration, such as:

    yarn.scheduler.capacity.queue-mappings=u:user1:dev_devs,g:devs:dev_devs
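For the queue-mapping side, the mapping lives in capacity-scheduler.xml (managed through Ambari's YARN configs). A sketch of the relevant properties, reusing the user and queue names from the mapping above; whether the mapping should override a client-requested queue is controlled by the second property:

```xml
<property>
  <name>yarn.scheduler.capacity.queue-mappings</name>
  <value>u:user1:dev_devs,g:devs:dev_devs</value>
</property>
<property>
  <!-- When true, the mapping takes precedence over any queue
       the client requests (e.g. via tez.queue.name) -->
  <name>yarn.scheduler.capacity.queue-mappings-override.enable</name>
  <value>true</value>
</property>
```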