Member since
12-20-2015
39
Posts
0
Kudos Received
0
Solutions
01-06-2020
06:55 AM
Is there a way to set up a Kafka container without Ambari and then add the Kafka broker to Ambari?
... View more
12-02-2019
12:03 AM
@naveensangam @jepe_desu In reference to the "Invalid KDC administrator credentials" issue raised by @naveensangam, I wrote a walkthrough of the solution that resolved the same problem for other users such as @jepe_desu. @naveensangam, can you update the thread to confirm whether my solution resolved your issue, or if not, share the errors you are seeing? Once you accept an answer, other members can reference it for similar issues rather than starting a new thread. Happy hadooping!
... View more
08-18-2019
09:42 PM
Hi @rushi_ns, yours might be a completely different issue. Please create a new question thread describing your issue.
... View more
10-24-2018
06:10 AM
Why is it using LDAP? LDAP is not set up on my cluster; I am using a KDC.
@JayKumarSharma
Also, I have done the configuration in the admin topology, so I am now using admin instead of default in my URL.
[hdfs@<knox1> ~]$ curl -k -i -vvvv -u guest:guest-password "https://<knox>:8443/gateway/default/webhdfs/v1/user?=op=LISTSTATUS"
* About to connect() to <knox> port 8443 (#0)
* Trying <knoxIP>... connected
* Connected to <knox> (<knoxIP>) port 8443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=<knox>,OU=Test,O=Hadoop,L=Test,ST=Test,C=US
* start date: Oct 22 16:16:52 2018 GMT
* expire date: Oct 22 16:16:52 2019 GMT
* common name: <knox>
* issuer: CN=<knox>,OU=Test,O=Hadoop,L=Test,ST=Test,C=US
* Server auth using Basic with user 'guest'
> GET /gateway/default/webhdfs/v1/user?=op=LISTSTATUS HTTP/1.1
> Authorization: Basic Z3Vlc3Q6Z3Vlc3QtcGFzc3dvcmQ=
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: <knox>:8443
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
HTTP/1.1 401 Unauthorized
< Date: Wed, 24 Oct 2018 06:04:23 GMT
Date: Wed, 24 Oct 2018 06:04:23 GMT
< Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; Expires=Tue, 23-Oct-2018 06:04:23 GMT
Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; Expires=Tue, 23-Oct-2018 06:04:23 GMT
* Authentication problem. Ignoring this.
< WWW-Authenticate: BASIC realm="application"
WWW-Authenticate: BASIC realm="application"
< Content-Length: 0
Content-Length: 0
< Server: Jetty(9.2.15.v20160210)
Server: Jetty(9.2.15.v20160210)
<
* Connection #0 to host <knox> left intact
* Closing connection #0
[hdfs@dev-p76-app-01 ~]$
... View more
06-26-2018
04:29 PM
@Mudit Kumar To connect to HDFS when it requires a Kerberos ticket for authentication, you need to obtain a valid Kerberos ticket from the relevant KDC and use a client that can send that ticket when requested - all on the client host.
First, you need a Kerberos infrastructure on your laptop. If you are running macOS, one should already be installed. If you are running Windows, you will probably need to install something; there are several ways to do this, so I suggest searching the Internet for possible solutions. For example: http://web.mit.edu/kerberos/kfw-4.1/kfw-4.1.html
Once you have a Kerberos infrastructure installed, you need to set up a krb5.conf file so that kinit knows where the KDC is, allowing you to authenticate and request service tickets.
To get a Kerberos ticket, authenticate using kinit:
HW14041:~ rlevas$ kinit rlevas@EXAMPLE.COM
rlevas@EXAMPLE.COM's password:
Upon success, you should have a Kerberos ticket:
HW14041:~ rlevas$ klist
Credentials cache: API:47BBBB94-9891-4D2A-B8F0-9E796DC30BD1
Principal: rlevas@EXAMPLE.COM
Issued Expires Principal
Jun 26 12:17:06 2018 Jun 27 12:17:05 2018 krbtgt/EXAMPLE.COM@EXAMPLE.COM
Now you can use a client that knows how to authenticate using Kerberos, like curl:
curl -i --negotiate -u : "http://c6401.ambari.apache.org:50070/webhdfs/v1/tmp?op=LISTSTATUS"
Note: --negotiate tells curl to use Kerberos (SPNEGO) for authentication, and -u : tells curl that authentication data should be sent to the server even though it is empty. Both are important for this call. I hope this helps.
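For illustration, a minimal client-side krb5.conf might look like the sketch below; the realm EXAMPLE.COM and the host kdc.example.com are placeholders, so substitute the realm and KDC host of your own cluster:
[libdefaults]
  default_realm = EXAMPLE.COM
  dns_lookup_realm = false
  dns_lookup_kdc = false
[realms]
  EXAMPLE.COM = {
    kdc = kdc.example.com
    admin_server = kdc.example.com
  }
[domain_realm]
  .example.com = EXAMPLE.COM
  example.com = EXAMPLE.COM
With this in place, kinit knows which KDC to contact for EXAMPLE.COM principals.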
... View more
05-10-2018
09:20 PM
1 Kudo
@Mudit Kumar You have deployed and secured your multi-node cluster with an MIT KDC running on a Linux box (dedicated or not); the same steps also apply to a single-node cluster. Below is a step-by-step procedure.
Assumptions:
- The KDC is running and has been created
- The KDC admin user and master password are available
- REALM: DEV.COM
- Users: user1, user2, user3, user4, user5
- Edge node: dedicated to users; all client software is installed here and not on the data or name nodes
- The Kerberos admin user is root or a sudoer
A good solution security-wise is to copy the generated keytabs to the users' home directories. If these are local Unix users, NOT Active Directory users, create the keytabs in e.g. /tmp, later copy them to their respective home directories, and make sure to set the correct permissions on the keytabs.
Change directory to /tmp:
# cd /tmp
With root access there is no need for sudo; specify the password for user1:
# sudo kadmin.local
Authenticating as principal root/admin@DEV.COM with password.
kadmin.local: addprinc user1@DEV.COM
WARNING: no policy specified for user1@DEV.COM; defaulting to no policy
Enter password for principal "user1@DEV.COM":
Re-enter password for principal "user1@DEV.COM":
Principal "user1@DEV.COM" created.
Do the above step for all the other users too:
addprinc user2@DEV.COM
addprinc user3@DEV.COM
addprinc user4@DEV.COM
addprinc user5@DEV.COM
Generate the keytab for user1; the keytab will be generated in the current directory:
# sudo ktutil
ktutil: addent -password -p user1@DEV.COM -k 1 -e RC4-HMAC
Password for user1@DEV.COM:
ktutil: wkt user1.keytab
ktutil: q
You MUST repeat the above for all 5 users (a scripted version is sketched after this post).
Copy the newly created keytab to the user's home directory; in this example I have copied the keytab to /etc/security/keytabs:
# cp user1.keytab /etc/security/keytabs
Change ownership and permissions; here user1 belongs to the hadmin group:
# chown user1:hadmin user1.keytab
Again, do the above for all the other users. A good technical and security best practice is to copy the keytabs from the KDC to the respective home directories on the edge node and change the ownership of the keytabs.
Validate the principals; in this example the keytabs are in /etc/security/keytabs:
# klist -kt /etc/security/keytabs/user1.keytab
Keytab name: FILE:/etc/security/keytabs/user1.keytab
KVNO Timestamp Principal
---- ------------------- -------------
1 05/10/2018 10:46:27 user1@DEV.COM
To ensure successful ticket attribution, the user should validate the principal (see the example below) and use it to grab a ticket; the principal is combined with the keytab when running kinit:
# klist -kt /etc/security/keytabs/user1.keytab
Keytab name: FILE:/etc/security/keytabs/user1.keytab
KVNO Timestamp Principal
---- ----------------- -------------
1 05/10/18 01:00:50 user1@DEV.COM
.... ................. .............
1 05/10/18 01:00:50 user1@DEV.COM
To test, the new user1 should try grabbing a Kerberos ticket (keytab + principal):
# kinit -kt /etc/security/keytabs/user1.keytab user1@DEV.COM
The command below should show the validity of the Kerberos ticket:
# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: user1@DEV.COM
Valid starting Expires Service principal
05/10/2018 10:53:48 05/11/2018 10:53:48 krbtgt/DEV.COM@DEV.COM
You should now be able to access the cluster and successfully run jobs.
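For convenience, here is a rough scripted version of the repeat-for-every-user steps above. It is only a sketch under the same assumptions (realm DEV.COM, users user1 to user5, group hadmin, keytabs staged in /tmp and installed in /etc/security/keytabs); review the password handling before using anything like it in your environment:
#!/bin/bash
# Sketch only: create principals and password-matching keytabs for user1..user5
# in realm DEV.COM. Run as root on the KDC host; adjust realm, users, group,
# and paths to your environment.
REALM=DEV.COM
KEYTAB_DIR=/etc/security/keytabs
GROUP=hadmin

cd /tmp
for u in user1 user2 user3 user4 user5; do
  # Ask once for the user's password so the principal and keytab entry match.
  read -r -s -p "Password for ${u}@${REALM}: " PASS; echo

  # Create the principal with that password.
  kadmin.local -q "addprinc -pw ${PASS} ${u}@${REALM}"

  # Build the keytab; ktutil reads the password from the line after addent.
  ktutil <<EOF
addent -password -p ${u}@${REALM} -k 1 -e RC4-HMAC
${PASS}
wkt /tmp/${u}.keytab
q
EOF

  # Install the keytab and set ownership/permissions for the user.
  cp /tmp/${u}.keytab ${KEYTAB_DIR}/${u}.keytab
  chown ${u}:${GROUP} ${KEYTAB_DIR}/${u}.keytab
  chmod 400 ${KEYTAB_DIR}/${u}.keytab
done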
... View more
06-23-2018
03:31 PM
@Geoffrey Shelton Okot: Now I need to access my HDP cluster from my laptop using curl/REST API, but I am not able to do so. My laptop is in a different AD domain. I tried enabling SPNEGO/HTTP as well, but no luck. The curl call works inside the cluster but not from outside. Is there any documentation that could help with this?
... View more
11-12-2018
06:40 AM
@Jay Kumar SenSharma
I am also facing the same issue. However, in my case I can see that all packages are installed and yum.log is clean, meaning there are no errors.
ambari=> select * from host_version;
id | repo_version_id | host_id | state
----+-----------------+---------+----------------
8 | 2 | 1 | CURRENT
9 | 2 | 5 | CURRENT
13 | 2 | 3 | CURRENT
12 | 2 | 2 | CURRENT
14 | 2 | 4 | CURRENT
11 | 2 | 7 | CURRENT
10 | 2 | 6 | CURRENT
62 | 52 | 2 | INSTALL_FAILED
63 | 52 | 3 | INSTALL_FAILED
58 | 52 | 1 | INSTALL_FAILED
64 | 52 | 4 | INSTALL_FAILED
59 | 52 | 5 | INSTALL_FAILED
61 | 52 | 7 | INSTALL_FAILED
60 | 52 | 6 | INSTALL_FAILED
(14 rows)
The new target version is showing as failed, even though the packages are installed on all nodes, and I cannot get to the upgrade prompt.
... View more
10-30-2018
12:40 AM
@Geoffrey Shelton Okot
Only the user who owns the thread, or a user with 1000+ points, can mark other users' answers as accepted. I have marked your previous answer (posted on "Aug 09, 2017") as "Accepted", as that answer looks more informative from this HCC thread's perspective.
... View more
12-22-2015
02:31 PM
As for multiple networks, you can multi-home the nodes so you have a public network and a cluster-traffic network. Hardware vendor designs such as the Cisco reference architecture expect multi-homing to be configured. See https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html
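For illustration only (the HDFS multihoming guide linked above covers the full set of options), the NameNode can be told to listen on all interfaces by adding bind-host properties to hdfs-site.xml; the 0.0.0.0 values below are the wildcard addresses described in that guide:
<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>dfs.namenode.servicerpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>dfs.namenode.https-bind-host</name>
  <value>0.0.0.0</value>
</property>
DataNodes and clients on each network then reach the NameNode through whichever hostname resolves to the interface visible to them.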
... View more