Member since: 02-16-2016
Posts: 176
Kudos Received: 197
Solutions: 17

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4286 | 11-18-2016 08:48 PM |
| | 7753 | 08-23-2016 04:13 PM |
| | 2028 | 03-26-2016 12:01 PM |
| | 1889 | 03-15-2016 12:12 AM |
| | 17795 | 03-14-2016 10:54 PM |
02-24-2016 04:43 PM
@J. David Can you confirm? The value of the JobTracker property is the value of yarn.resourcemanager.cluster-id in the YARN settings in Ambari.
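For reference, the YARN property in question lives in yarn-site.xml; a minimal sketch, where the cluster-id value is purely illustrative:

```
<!-- yarn-site.xml -->
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>yarn-cluster</value> <!-- substitute your cluster's actual id -->
</property>
```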
02-24-2016 01:25 PM
@Sunile Manjee @Neeraj Sabharwal @Predrag Minovic I think Sunile's question is about role-based access for users with admin roles. Currently, any user with the admin role has access to all policy repos, and there is no way to restrict which policies such users can access. Role-based access to policy repos should be high on the enhancement list for Ranger.
02-24-2016 12:37 PM
6 Kudos
DBVisualizer is a popular free tool that developers use to organize their RDBMS development work. Apache Phoenix provides SQL-like capability on top of HBase, so we can use DBVisualizer to connect to the Phoenix layer for HBase. Verified with the following versions:

- DBVisualizer 9.2.12
- hbase-client-1.1.2.2.3.2.0-2950.jar
- phoenix-4.4.0.2.3.2.0-2950-client.jar

First, add the Phoenix driver to DBVisualizer. Go to Tools -> Driver Manager and add a new driver, including both the hbase-client and phoenix-client jars. This adds a new Phoenix driver.

1. Connecting to a non-Kerberos cluster

Use the following connection string, where <hbase_z_node> is /hbase by default:

```
jdbc:phoenix:<zookeeper host>:<zookeeper port>:<hbase_z_node>
```

2. Connecting to a Kerberos cluster using a cached ticket

a. Add the following files to the DBVisualizer resources directory: hdfs-site.xml, hbase-site.xml, core-site.xml.

b. Copy the krb5.conf file to the local workstation.

c. Create a jaas file with the following entry:

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true
  renewTicket=true
  serviceName="zookeeper";
};
```

Modify the dbvisgui.bat file to add the following parameters for launching DBVisualizer:

```
-Djava.security.auth.login.config="<path-to-jaas-file>"
-Djava.security.krb5.conf="<path-to-krb5-file>"
```

d. The connection string for a cached ticket will be:

```
jdbc:phoenix:<zookeeper host>:<zookeeper port>:/hbase-secure:<path-to-jaas-file>
```

3. Connecting to a Kerberos cluster using a keytab

a. Add the following files to the DBVisualizer resources directory: hdfs-site.xml, hbase-site.xml, core-site.xml.

b. Copy the krb5.conf file to the local workstation.

c. Copy the keytab file used for connecting to HBase.

d. Create a jaas file with the following entry (note the module option is spelled useKeyTab, and the ticket cache is disabled here):

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=false
  useKeyTab=true
  serviceName="zookeeper";
};
```

The connection string for this case will be:

```
jdbc:phoenix:<zookeeper host>:<zookeeper port>:/hbase-secure:<Principal>:<path-to-keytab>
```

Sample connection string:

```
jdbc:phoenix:host0001:2181:/hbase-secure:<principal>:\users\z_hadoop_test.keytab
```

Test your connection!
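As a quick sanity check outside DBVisualizer, the same connection string can be exercised from plain JDBC. A minimal sketch, assuming the phoenix-client jar above is on the classpath; the host, port, and znode are the placeholder values from the sample string, and SYSTEM.CATALOG is queried only because it ships with every Phoenix installation:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixConnectionTest {
    public static void main(String[] args) throws Exception {
        // Same format as the non-Kerberos connection string above;
        // append the principal/keytab parts for the secure cases.
        String url = "jdbc:phoenix:host0001:2181:/hbase";

        // The phoenix-client jar registers the driver automatically under
        // JDBC 4; the explicit load is just belt-and-braces.
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // SYSTEM.CATALOG exists on every Phoenix install, so this
             // query confirms end-to-end connectivity without a user table.
             ResultSet rs = stmt.executeQuery(
                     "SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 5")) {
            while (rs.next()) {
                System.out.println(rs.getString("TABLE_NAME"));
            }
        }
    }
}
```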
02-23-2016 05:12 PM
3 Kudos
A combination of the ListFile and FetchFile processors can resolve this problem: ListFile accepts any network drive path, and pairing it with FetchFile provides functionality equivalent to GetFile, as sketched below. Thank you @Neeraj Sabharwal for pointing me in the right direction.
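For reference, a minimal sketch of the flow, assuming NiFi 0.4.x processor property names; the directory path is illustrative:

```
ListFile -> FetchFile -> PutHDFS

ListFile.Input Directory = //server/shared/path/to/dir
FetchFile.File to Fetch  = ${absolute.path}/${filename}   (the default value)
```

ListFile emits zero-byte flow files carrying path attributes (absolute.path, filename), and FetchFile then reads each file's content using those attributes.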
02-23-2016 05:07 PM
Thank you @Neeraj Sabharwal. I was able to use a combination of the ListFile and FetchFile processors to work around this issue.
02-23-2016 04:28 PM
2 Kudos
I have a NiFi 0.4.1 instance running on my local Windows workstation. I want to copy a file from a different network drive to HDFS. I have been able to copy files from my C: drive to HDFS using the GetFile and PutHDFS processors, but GetFile doesn't accept my network drive path. I am specifying the network file path as //server/shared/path/to/file.
Labels:
- Apache NiFi
02-22-2016 05:18 PM
2 Kudos
Thank you @Artem Ervits for pointing me in the right direction. While looking at the policy JSON file, I noticed that it had a null in the path for my new policy. It looks like a null was somehow being added to the policy file by some keystroke combination. Once I deleted this policy, policy sync started working correctly. In the policycache directory, the hdfs_<policy>.json file had the following entry for my new policy:

```
"resources": {
  "path": {
    "values": [ null ]
  }
}
```
02-22-2016 03:19 PM
2 Kudos
If I add a new policy in Ranger for HDFS, sometimes it doesn't sync; other times it syncs properly. If I delete the new policy, sync starts working again. I am checking the timestamps on the policies in the policy sync directory.
Labels:
- Apache Ambari
- Apache Ranger
02-22-2016 02:39 AM
1 Kudo
Thanks. Looks like that is my only choice.
02-21-2016 01:13 AM
1 Kudo
Thank you, Neeraj. I am running benchmarks on our cluster and just wanted to understand the upper limit I can target. Thank you again for the quick response and so much help.