Member since
07-23-2019
5
Posts
3
Kudos Received
1
Solution
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 1662 | 07-25-2019 01:09 AM |
07-26-2019
02:11 AM
1 Kudo
Hello! DISCLAIMER: I don't have first-hand knowledge of this, but I want to add my two cents.

First, I don't think it is really necessary. When you delete a service via the UI, the UI automatically refreshes and the service disappears; deleting it via the REST API should trigger the same mechanism (if it doesn't, I feel that is bad design).

Second: Ambari Server is independent from HDP. What I mean is: Ambari is used for monitoring/installing/configuring/etc., but it does not affect your cluster. You can power off your Ambari, and your cluster will keep working without problems. Therefore, I think restarting or not is transparent (besides triggering alarms and loading the system a tiny bit on restart).

My question here is why you need to use the REST API to delete a service at all. A service is normally a "once in a lifetime" thing, besides updates. Why not use the GUI and simplify your life?
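For completeness, a minimal sketch of the REST call in question. The host, cluster, and service names are hypothetical placeholders, and the curl calls are left commented out so you can review them before running anything against a real Ambari:

```shell
# Hypothetical placeholders -- adjust to your environment.
AMBARI_HOST="ambari.example.com"
CLUSTER="mycluster"
SERVICE="ZEPPELIN"
BASE="http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services/$SERVICE"

# 1) Stop the service first (Ambari will not delete a running service):
# curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
#   -d '{"RequestInfo":{"context":"Stop service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
#   "$BASE"

# 2) Delete the (now stopped) service:
# curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE "$BASE"

echo "$BASE"
```

As noted above, after the DELETE you would expect the Ambari UI to refresh on its own; if the service lingers, a browser refresh is cheaper than an Ambari Server restart.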
07-25-2019
01:09 AM
1 Kudo
Hello! It seems the disk balancer utility was introduced after HDP 3.0.0-alpha1, see here. Someone mentioned that porting it back to previous HDP versions was technically possible, but there seems to be no progress on that: "As far as I know, we have not backported this change to HDP 2.1 or 2.4.2. There is nothing technically preventing us from doing so; Disk balancer does not depend on any of the newer 3.0 features." In another discussion, they suggest decommissioning the node and commissioning it again. Yes, it is an arduous task, but better than nothing. Apache documentation for Disk rebalancing
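For reference, on a version that does ship the tool (Apache Hadoop 3.0+ / HDP 3.x), the intra-DataNode workflow looks roughly like this. The hostname is a hypothetical placeholder and the commands are commented out, since they only make sense against a live cluster:

```shell
DATANODE="hdp-worker-001"   # hypothetical DataNode hostname

# 1) Generate a plan describing block moves between that node's disks:
# hdfs diskbalancer -plan "$DATANODE"

# 2) Execute the plan file the previous step prints (exact path varies):
# hdfs diskbalancer -execute "<path-printed-by-plan>/$DATANODE.plan.json"

# 3) Check progress on the DataNode:
# hdfs diskbalancer -query "$DATANODE"

echo "$DATANODE"
```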
07-25-2019
12:59 AM
1 Kudo
I would strongly suggest reinstalling the OS. The extra time you will spend (a few hours max) is nothing compared to the thousands of hours you will spend debugging in the future. You will never be able to isolate the problem or assess whether the previous installation is affecting you. Unless your case is very particular (e.g. you really DO NOT have access to the machine, etc.), reinstall the OS and give yourself peace of mind. Nice guide though @Geoffrey Shelton Okot
07-24-2019
01:26 PM
Hello all! This is my first question here!

I am trying to set up a secure cluster using Kerberos. I have already installed my own Kerberos server, and it works like a charm on the console. The problem comes when I try to access the Hadoop components' UIs (HDFS, Hive, etc.). I know I need to configure my browsers, and that is where the problem is. I have downloaded the MIT Kerberos ticket system for Windows 10, installed it, and configured the krb5.ini file. It generates the Kerberos ticket perfectly (visually, I can see it was generated). I followed the instructions for configuring browsers to access a Kerberized cluster, link, and also external sources, like this one (this last one made me realize I need to write down the KDC address, but I had actually included them all), or this.

Firefox:
network.negotiate-auth.delegation-uris = http://192.168.0.30, http://192.168.0.50, http://192.168.0.81, http://192.168.0.101, http://192.168.0.102, 192.168.0.30, 192.168.0.81, 192.168.0.101, 192.168.0.102, 192.168.0.50
network.negotiate-auth.trusted-uris = http://192.168.0.30, http://192.168.0.81, http://192.168.0.101, http://192.168.0.102, 192.168.0.30, 192.168.0.81, 192.168.0.101, 192.168.0.102
network.auth.use-sspi = false

IE: I have added the IPs in Internet Options -> Security -> Trusted Zones -> Add IP, and set Local Intranet zone -> Automatic Logon only in Local Intranet.

Chrome: Same idea... google-chrome --auth-server-whitelist="admin/admin" or google-chrome --auth-server-whitelist="192.168.0.81"

Other observations: if I run kinit from the command line, it shows zero tickets, even though the MIT Kerberos app shows one.

Browser answers:
java.lang.IllegalArgumentException: Malformed gss token
Many others like: Authentication failure.

I am out of ideas. I really trust that there is no security without Kerberos, and the next step will be to add Apache Knox, but that is for the future. Can someone please point me to anything? I have used all the google/bing links about these problems. I know this is probably related to the browsers, but I cannot rule anything out.

Note: Yes, in the Ubuntu 16.04 console I am able to connect to beeline, HDFS, ..., everything is managed perfectly by Ranger (awesome!). I am documenting this whole process, so I am happy to write a guide for the community in the future as a giveback.

--------------------------------------------------------------------------------------------------------------------------------------------------------------
Cluster info:
HDP 3.1
Kerberos: 5
Accessing machine: Windows 10 or Mac OS X
Browsers: any (IE, Chrome, Firefox)
OS: Ubuntu 16.04

IP addresses:
kerberos server: 192.168.0.30
ambari server: 192.168.0.50
hdp-master-001: 192.168.0.81
hdp-worker-001: 192.168.0.101
hdp-worker-002: 192.168.0.102

krb5.ini:
[libdefaults]
default_realm = CLUSTER001

[realms]
EXAMPLE.COM = {
  admin_server = 192.168.0.30
  kdc = 192.168.0.30
}
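A client-side sanity check worth running before touching any browser settings. The principal name is a hypothetical placeholder, and the MIT Kerberos commands are commented out since they need a reachable KDC; the point is that kinit and klist must be run in the same shell, because on Windows the MIT Kerberos tray app, the command line, and the browser/JRE can each use a different credential cache, which matches the "kinit shows zero tickets" symptom:

```shell
# Hypothetical principal -- substitute one that exists in your realm.
PRINCIPAL="someuser@EXAMPLE.COM"

# Obtain a TGT and immediately list the cache from the same shell:
# kinit "$PRINCIPAL"
# klist   # should show a krbtgt/<REALM>@<REALM> entry

echo "$PRINCIPAL"
```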