Member since
03-07-2019
158
Posts
53
Kudos Received
33
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 6396 | 03-08-2019 08:46 AM
 | 4354 | 10-17-2018 10:25 AM
 | 2782 | 10-16-2018 07:46 AM
 | 2126 | 10-16-2018 06:57 AM
 | 1773 | 10-12-2018 09:55 AM
08-16-2018
08:02 AM
I think SmartSense Activity Analyzer would be helpful to you. Have a look through this: https://docs.hortonworks.com/HDPDocuments/SS1/SmartSense-1.4.5/bk_installation/content/activity_analysis.html

You could also see reports of the following:

• Top N Longest Running Jobs
• Top N Resource Intensive Jobs
• Top N Resource Wasting Jobs
• Job Distribution By Type
• Top N Data IO Users
• CPU Usage By Queue
• Job Submission Trend By Day.Hour

https://docs.hortonworks.com/HDPDocuments/SS1/SmartSense-1.4.5/bk_user-guide/content/mr_notebook.html

You can also look at the Tez View in Ambari to analyze the Hive queries: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.2/bk_ambari-views/content/analyze_details_hive_queries.html
08-16-2018
06:29 AM
Hi @Junfeng Chen

Yes, in both cases the namenodes still need to be able to look up the user and find the groups; after that, Ranger can take care of the authorization. You could use Ranger group sync to pull this sort of information into Ranger automatically: https://hortonworks.com/apache/ranger/#section_2

PS. If your question has been answered, please take a moment to mark the answer as accepted. This will make it easy for folks landing on this page to find the solution quickly!
08-15-2018
03:04 PM
@prashanth ramesh Did you add fs.s3a.access.key and fs.s3a.secret.key to custom hive-site as well? I noticed a similar post here: https://stackoverflow.com/questions/50710582/write-to-s3-from-hive-fails
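For reference, a minimal sketch of what those two entries look like as custom hive-site properties. The values here are placeholders, not real credentials; in production you would normally keep these in a credential provider (jceks) instead of plaintext:

```xml
<!-- Hypothetical example values; substitute your own AWS credentials -->
<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_ACCESS_KEY_ID</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR_SECRET_ACCESS_KEY</value>
</property>
```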
08-15-2018
02:42 PM
Is the fs.s3a.impl property set up as well? I noticed it wasn't mentioned in the doc you linked. Also, can we verify that there are no typos inside the jceks file?
08-15-2018
10:44 AM
Hi @Junfeng Chen

The namenode still has to be able to map the user to the group, either locally (/etc/passwd) or from AD/LDAP. Keep in mind Ranger is for authorization, not authentication. Did you verify that the OS of your namenode is able to resolve the user and the groups of the user?
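To illustrate what the namenode does here: with Hadoop's default shell-based group mapping, group lookup ultimately comes from the local OS (which may itself be backed by SSSD/LDAP via NSS). Below is a rough Python sketch of that lookup; `resolve_groups` is a hypothetical helper for illustration, not actual Hadoop code:

```python
import grp
import pwd

def resolve_groups(username):
    """Resolve a user's groups from the local OS, roughly the way
    Hadoop's shell-based group mapping does ('id -gn' + 'id -Gn')."""
    # Raises KeyError if the OS cannot see the user at all --
    # the same situation that makes Ranger group policies fail.
    user = pwd.getpwnam(username)
    primary = grp.getgrgid(user.pw_gid).gr_name
    secondary = [g.gr_name for g in grp.getgrall() if username in g.gr_mem]
    return [primary] + [g for g in secondary if g != primary]
```

If this lookup fails (or returns no groups) on the namenode host, a Ranger policy that grants access only to a group can never match, no matter what the Ranger host itself resolves.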
08-15-2018
09:24 AM
If you log in to the CB web UI, there should be a button to reset the password on the login page. A screenshot of that is visible in this guide, just after step "8. Log in with the administrator email address and password created in the first step of the Azure wizard.": https://community.hortonworks.com/articles/122086/get-started-with-cloudbreak-on-the-azure-marketpla.html
08-15-2018
09:06 AM
Hi @Junfeng Chen

Is your namenode able to resolve the same group/user mapping? Is your test_group01 an internal or external group in Ranger? Did you create the group on the OS end and add the user there?

Below is an example of the behaviour with an external group, which exists only on my Ranger host:

```
[root@RANGERHOST]# groupadd test_group01
[root@RANGERHOST]# useradd test01
[root@RANGERHOST]# usermod -a -G test_group01 test01
[hdfs@RANGERHOST]$ hdfs dfs -mkdir /testpath
[hdfs@RANGERHOST]$ hdfs dfs -chmod 000 /testpath
```

I then add a new policy in the Ranger web UI using ONLY test_group01. You might expect this to work, but instead I see:

```
[test01@RANGERHOST]$ hdfs dfs -ls /testpath
ls: Permission denied: user=test01, access=READ_EXECUTE, inode="/testpath":hdfs:hdfs:d---------
```

After adding the local group and member on the NameNode:

```
[root@NN]# groupadd test_group01
[root@NN]# useradd test01
[root@NN]# usermod -a -G test_group01 test01
```

it works:

```
[test01@RANGERHOST]$ hdfs dfs -put testfile /testpath
[test01@RANGERHOST]$ hdfs dfs -ls /testpath
Found 1 items
-rw-r--r--   3 test01 hdfs   0 2018-08-15 09:04 /testpath/testfile
```
08-14-2018
08:39 AM
There are a variety of options for memory. I just tried the same on my cluster, and using for example `fields=metrics/memory/Use._avg[1534235091,1534235391,15]` worked well. Further options, quoted from the source code:

```
/clusters/{clusterName}/?fields=metrics/memory/Buffer._avg[{fromSeconds},{toSeconds},{stepSeconds}],
metrics/memory/Cache._avg[{fromSeconds},{toSeconds},{stepSeconds}],
metrics/memory/Share._avg[{fromSeconds},{toSeconds},{stepSeconds}],
metrics/memory/Swap._avg[{fromSeconds},{toSeconds},{stepSeconds}],
metrics/memory/Total._avg[{fromSeconds},{toSeconds},{stepSeconds}],
metrics/memory/Use._avg[{fromSeconds},{toSeconds},{stepSeconds}]
```
08-14-2018
07:31 AM
1 Kudo
Hi @naveen r

If you have Ambari Metrics installed, you can do this with the REST API. For example:

```
curl -u admin:admin -H 'X-Requested-By:ambari' -X GET "https://localhost:8080/api/v1/clusters/<your cluster name>/?fields=metrics/load/CPUs._avg[1534230000,1534231500,15]"
```

Note that in `CPUs._avg[X,Y,15]`: X = start time, Y = end time, and 15 = the step value used for zero padding or null padding.

More examples and official documentation: https://cwiki.apache.org/confluence/display/AMBARI/Ambari+Metrics+API+specification
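The shape of that temporal query can be captured in a small helper. This is only an illustrative sketch: `metric_field` and `build_metrics_url` are hypothetical function names, and the host/port/cluster values are whatever your Ambari server actually uses:

```python
def metric_field(metric, from_s, to_s, step_s):
    # Ambari temporal query syntax: <metric>._avg[<fromSeconds>,<toSeconds>,<stepSeconds>]
    return "%s._avg[%d,%d,%d]" % (metric, from_s, to_s, step_s)

def build_metrics_url(host, cluster, metric, from_s, to_s, step_s):
    # Hypothetical helper assembling the REST endpoint shown above.
    return "https://%s:8080/api/v1/clusters/%s/?fields=%s" % (
        host, cluster, metric_field(metric, from_s, to_s, step_s))

print(build_metrics_url("localhost", "mycluster",
                        "metrics/load/CPUs", 1534230000, 1534231500, 15))
```

Swapping `metrics/load/CPUs` for `metrics/memory/Use` (or the other memory fields listed earlier) gives the equivalent memory query.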
08-13-2018
08:57 AM
If your Hive table is in ORC format, you can give this a try: https://orc.apache.org/docs/java-tools.html