Member since: 02-16-2016
Posts: 89
Kudos Received: 24
Solutions: 10
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 10059 | 05-14-2018 01:54 PM
 | 1573 | 05-08-2018 05:07 PM
 | 1122 | 05-08-2018 04:46 PM
 | 2951 | 02-13-2018 08:53 PM
 | 3539 | 11-09-2017 04:24 PM
05-14-2018
02:05 PM
Check firewall on all servers.
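If you are on RHEL/CentOS, a quick way to see whether the firewall is in the way is to run these generic checks on each node (not specific to any one service):
systemctl status firewalld
firewall-cmd --list-all
iptables -L -n
Then either open the ports the services need or, for a short test on a sandbox, stop firewalld temporarily.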
05-14-2018
01:54 PM
1 Kudo
Please follow the steps in the link below: https://hortonworks.com/tutorial/sandbox-deployment-and-install-guide/section/3/
05-10-2018
04:23 PM
What user are you seeing logged in? Is the logged-in user the same as the one defined in authorizers.xml under "Initial Admin Identity"?
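If you want to double-check what is configured, you can grep the file on the NiFi node (the path below assumes a default HDF layout; adjust for your install):
grep -B 2 -A 2 "Initial Admin Identity" /usr/hdf/current/nifi/conf/authorizers.xml
The value there has to match the identity NiFi derives from your certificate or login exactly, including case and spacing.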
05-10-2018
04:05 PM
You can use RouteOnAttribute if you have some way of identifying which file goes to which destination, e.g. from the filename, path, etc. Check the attributes of the flow file to see if any can be used, then use those attributes to create 3 paths, something like: https://community.hortonworks.com/questions/54811/redirecting-flow-based-on-certain-condition-nifi.html
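As a rough sketch, assuming the filename prefix identifies the destination (the property names and prefixes here are just placeholders), you could add three routing properties on RouteOnAttribute:
dest_a = ${filename:startsWith('a_')}
dest_b = ${filename:startsWith('b_')}
dest_c = ${filename:startsWith('c_')}
Each property becomes its own relationship on the processor, which you then connect to the matching downstream destination.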
05-08-2018
05:07 PM
A "multi node HDP cluster in Azure VM" can be created using normal Ambari managed HDP installation guide, this is not much different from setting a cluster on on-premise hardware or VMs on your desktop: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.1.5/bk_ambari-installation/content/ch_Getting_Ready.html Prerequisite will be to set up Azure VMs with storage and networking. You only need to pay Hortonworks for support, if needed.
05-08-2018
04:46 PM
1 Kudo
Enable LDAP for Ambari: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_security/content/_synchronizing_ldap_users_and_groups.html Use Ambari View > Files to access HDFS. If you need access from the command line, you will need to enable LDAP on the operating system through LDAP/SSSD, etc.
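On the Ambari side the usual commands are (run on the Ambari server; the exact prompts depend on your LDAP layout):
ambari-server setup-ldap
ambari-server restart
ambari-server sync-ldap --all
sync-ldap also accepts --users and --groups with CSV files if you only want to pull in specific accounts.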
05-08-2018
02:17 PM
As mentioned before, make sure your Hive service is pointed at the new location of MySQL; removing MySQL without doing that first will result in losing all your metadata. You can use the Ambari REST API to remove services manually: curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://AMBARI_SERVER_HOST:8080/api/v1/clusters/c1/services/SERVICENAME https://cwiki.apache.org/confluence/display/AMBARI/Using+APIs+to+delete+a+service+or+all+host+components+on+a+host
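Note that the service normally has to be stopped (state INSTALLED) before the DELETE will succeed; a stop request looks roughly like this (cluster name c1 and SERVICENAME are placeholders for your own values):
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"Stop Service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://AMBARI_SERVER_HOST:8080/api/v1/clusters/c1/services/SERVICENAME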
04-05-2018
12:27 AM
1 Kudo
Pierre's solution is correct. If you installed Atlas after Ranger UserSync was configured to use LDAP, new local users like atlas will not get synced into Ranger. This user is needed to set up the HBase tables. To fix it: revert UserSync to UNIX, restart only Ranger UserSync, then switch back to the UserSync LDAP config. In Ranger, add the user atlas to the HBase "all" policy. Restart Atlas.
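If your Ranger version exposes the xusers REST API, one way to confirm the user made it in before editing the policy is (host, port and credentials are the defaults; adjust for your install):
curl -u admin:admin http://RANGER_HOST:6080/service/xusers/users/userName/atlas
If that returns a user record, the UNIX sync picked up atlas and you can add it to the HBase "all" policy.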
02-13-2018
09:11 PM
Usually, by default, the ticket expires after 24 hours and the renewable lifetime is 7 days; it depends on your directory service policies. Within those 7 days you can do kinit -R for users. klist will show the ticket expiry and renew-until times. Or you can use keytabs to automate ticket renewal. You don't ever have to kinit for the Hadoop services themselves; their ticket renewal is managed automatically.
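A typical check and renewal looks like this (the principal and keytab path are placeholders for your environment):
klist
kinit -R
kinit -kt /etc/security/keytabs/myuser.keytab myuser@EXAMPLE.COM
The last form is what you would put in a cron job to refresh the ticket from a keytab instead of renewing interactively.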
02-13-2018
08:53 PM
1 Kudo
We usually use the NiFi Content and Provenance repositories to troubleshoot failed flow files; both are set to 7 days of retention, and you can replay content to debug. If you want reporting without calling any external API, you can use row counts as attributes, collecting total rows, successful rows, failed rows, etc. Depending on the format of your source file this can be as simple as executing wc -l. Later, convert these attributes to JSON and use MergeContent, the schema registry, and QueryRecord to create an email report.
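As a rough sketch (these are standard NiFi processors, but the attribute names are just examples): run ExecuteStreamCommand with command wc and argument -l against the delimited file, capture the output into an attribute such as total.rows (ExecuteStreamCommand can write its output to an attribute, or you can follow it with ExtractText), then chain AttributesToJSON -> MergeContent -> QueryRecord -> PutEmail to assemble and send the report.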