Member since: 11-07-2016
Posts: 637
Kudos Received: 253
Solutions: 144
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2193 | 12-06-2018 12:25 PM |
| | 2222 | 11-27-2018 06:00 PM |
| | 1727 | 11-22-2018 03:42 PM |
| | 2776 | 11-20-2018 02:00 PM |
| | 5006 | 11-19-2018 03:24 PM |
12-06-2018 12:25 PM · 2 Kudos
@nur majid, You can use the API below to validate the config:

```
curl -u username:password -X POST -d '{post-body-json-mentioned-below}' -H "Content-Type: application/json" http://{ranger-host}:{port}/service/plugins/services/validateConfig
```

A sample post body looks like this:

```json
{
"id": 1,
"guid": "fd82acf1-d3e9-4707-9a75-a839a9802cd9",
"isEnabled": true,
"createdBy": "hdfs",
"updatedBy": "hdfs",
"createTime": 1544045853000,
"updateTime": 1544045853000,
"version": 1,
"type": "hdfs",
"name": "cl1_hadoop",
"description": "hdfs repo",
"configs": {
"username": "hadoop",
"password": "*****",
"fs.default.name": "hdfs://mycluster",
"hadoop.security.authorization": true,
"hadoop.security.authentication": "kerberos",
"hadoop.security.auth_to_local": "RULE:[1:$1@$0](ambari-qa@EXAMPLE.COM)s/.*/ambari-qa/RULE:[1:$1@$0](hbase@EXAMPLE.COM)s/.*/hbase/RULE:[1:$1@$0](hdfs@EXAMPLE.COM)s/.*/hdfs/RULE:[1:$1@$0](yarn-ats@EXAMPLE.COM)s/.*/yarn-ats/RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*//RULE:[2:$1@$0](amshbase@EXAMPLE.COM)s/.*/ams/RULE:[2:$1@$0](amsmon@EXAMPLE.COM)s/.*/ams/RULE:[2:$1@$0](amszk@EXAMPLE.COM)s/.*/ams/RULE:[2:$1@$0](atlas@EXAMPLE.COM)s/.*/atlas/RULE:[2:$1@$0](dn@EXAMPLE.COM)s/.*/hdfs/RULE:[2:$1@$0](hbase@EXAMPLE.COM)s/.*/hbase/RULE:[2:$1@$0](hive@EXAMPLE.COM)s/.*/hive/RULE:[2:$1@$0](jhs@EXAMPLE.COM)s/.*/mapred/RULE:[2:$1@$0](jn@EXAMPLE.COM)s/.*/hdfs/RULE:[2:$1@$0](knox@EXAMPLE.COM)s/.*/knox/RULE:[2:$1@$0](nfs@EXAMPLE.COM)s/.*/hdfs/RULE:[2:$1@$0](nm@EXAMPLE.COM)s/.*/yarn/RULE:[2:$1@$0](nn@EXAMPLE.COM)s/.*/hdfs/RULE:[2:$1@$0](rangeradmin@EXAMPLE.COM)s/.*/ranger/RULE:[2:$1@$0](rangerkms@EXAMPLE.COM)s/.*/keyadmin/RULE:[2:$1@$0](rangertagsync@EXAMPLE.COM)s/.*/rangertagsync/RULE:[2:$1@$0](rangerusersync@EXAMPLE.COM)s/.*/rangerusersync/RULE:[2:$1@$0](rm@EXAMPLE.COM)s/.*/yarn/RULE:[2:$1@$0](yarn@EXAMPLE.COM)s/.*/yarn/RULE:[2:$1@$0](yarn-ats-hbase@EXAMPLE.COM)s/.*/yarn-ats/DEFAULT",
"dfs.datanode.kerberos.principal": "dn/test-node-4.openstacklocal@EXAMPLE.COM",
"dfs.namenode.kerberos.principal": "nn/test-node-4.openstacklocal@EXAMPLE.COM",
"dfs.secondary.namenode.kerberos.principal": "nn/test-node-4.openstacklocal@EXAMPLE.COM",
"hadoop.rpc.protection": "privacy",
"commonNameForCertificate": "-",
"tag.download.auth.users": "hdfs",
"policy.download.auth.users": "hdfs"
},
"policyVersion": 3,
"policyUpdateTime": 1544045856000,
"tagVersion": 1,
"tagUpdateTime": 1544045853000,
"tagService": ""
}
```

You can get the exact JSON for your cluster from the browser's developer tools: Right Click -> Inspect -> Network -> click on the request -> Request Payload. If this helped you, please take a moment to log in and "Accept" the answer 🙂
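For reference, a concrete invocation might look like the sketch below, with the sample body saved to a file. The hostname, credentials, and file name are illustrative; 6080 is the default Ranger Admin port, so adjust if yours differs.

```
# Validate a Ranger service config; service.json holds the post body above
curl -u admin:admin-password -X POST \
  -H "Content-Type: application/json" \
  -d @service.json \
  'http://ranger-host.example.com:6080/service/plugins/services/validateConfig'
```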
11-27-2018 06:00 PM · 1 Kudo
@Sami Ahmad, For Knox, you have to make two curl calls. The second call must be made to the URL in the "Location" header returned by the first call's response.

1st curl call (response headers follow):

```
curl -i -k -u admin:admin-password -X GET 'https://localhost:8443/gateway/default/webhdfs/v1/tmp/uname.txt?op=OPEN'
HTTP/1.1 307 Temporary Redirect
Date: Tue, 27 Nov 2018 16:21:44 GMT
Set-Cookie: JSESSIONID=1219u2f8zreb11eu9fuxlggxhq;Path=/gateway/default;Secure;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; Expires=Mon, 26-Nov-2018 16:21:44 GMT
Cache-Control: no-cache
Expires: Tue, 27 Nov 2018 16:21:44 GMT
Date: Tue, 27 Nov 2018 16:21:44 GMT
Pragma: no-cache
Expires: Tue, 27 Nov 2018 16:21:44 GMT
Date: Tue, 27 Nov 2018 16:21:44 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Location: https://hadoop1:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/tmp/uname.txt?_=AAAACAAAABAAAACgLvtILkFAljr5PIP7MVSOAump8j0kSwFCPdGCP2R_b1tCZ0V2KGOQuiRiI4_IU7GDG6NqRtK2Vu7DOZeOhbuQUaP1FYtD_-IV3P-VXMbOFbPfbwpNseAuN-RyQduRm5S1mrk0GVbYKQg4NscgsoF0GGsvqKDyPtECwhwkX96E37Jc5_yCnlkw3LVKUY41Hg6LOt96W8-3rTmnrbo7o26dOcpPv1_uv4Q1F18b4yk5N5BNf6HTZdVZ6Q
Content-Type: application/octet-stream
Server: Jetty(6.1.26.hwx)
Content-Length: 0
```

2nd curl call (the URL is taken from the Location header returned by the 1st call):

```
curl -i -k -u admin:admin-password -X GET 'https://hadoop1:8443/gateway/default/webhdfs/data/v1/webhdfs/v1/tmp/uname.txt?_=AAAACAAAABAAAACgLvtILkFAljr5PIP7MVSOAump8j0kSwFCPdGCP2R_b1tCZ0V2KGOQuiRiI4_IU7GDG6NqRtK2Vu7DOZeOhbuQUaP1FYtD_-IV3P-VXMbOFbPfbwpNseAuN-RyQduRm5S1mrk0GVbYKQg4NscgsoF0GGsvqKDyPtECwhwkX96E37Jc5_yCnlkw3LVKUY41Hg6LOt96W8-3rTmnrbo7o26dOcpPv1_uv4Q1F18b4yk5N5BNf6HTZdVZ6Q'
```
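If you prefer a single command, curl can follow the 307 redirect itself. This is a sketch under the assumption that the gateway host and credentials match the example above; `--location-trusted` is needed because curl will not re-send credentials to a different host after a redirect by default.

```
# One call: follow the redirect and re-send credentials to the redirected host
curl -i -k -u admin:admin-password --location-trusted \
  'https://localhost:8443/gateway/default/webhdfs/v1/tmp/uname.txt?op=OPEN'
```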
11-22-2018 03:42 PM · 1 Kudo
@chris herssens, It looks like there is a resource crunch. Try adding additional NodeManagers if possible, and check whether any applications are already running in YARN. You can kill any app that isn't being used and see if your Spark job moves from the ACCEPTED to the RUNNING state; see the sketch below.
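A quick way to do that from a cluster node with the YARN CLI (the application ID shown is hypothetical; take the real one from the list output):

```
# List applications currently submitted/running in YARN
yarn application -list

# Kill an unused application by its ID
yarn application -kill application_1542000000000_0001
```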
11-22-2018 03:33 PM
@Guillaume Roger, I guess ATSv2 is running as a YARN service and not in embedded mode. Filter for "is_hbase_system_service" in the YARN configs and check its value: if it is set to true, ATSv2 runs as a YARN application; otherwise it runs in embedded mode. If it runs as a YARN application, it can be started on any node that has a NodeManager with sufficient resources. Check in the YARN application logs whether the HBase master and region servers are able to come up properly.
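One way to pull those logs, assuming log aggregation is enabled (the application ID is illustrative; find the real ATSv2 application with `yarn application -list`):

```
# Fetch the aggregated logs for the ATSv2 HBase application
yarn logs -applicationId application_1542000000000_0002
```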
11-21-2018 03:46 AM
@Andreas Kühnert, Glad that the issue is resolved. Since this is a different issue, I suggest opening a new thread for it so that the main thread doesn't get sidetracked. I'm not sure about that issue; maybe other experts can help 🙂
11-20-2018 02:26 PM
The /etc/hosts file just translates a hostname to an IP address. Even without the mapping you can open the URLs by passing the IP directly; if you want to access them by hostname, then you need the entries in /etc/hosts. It doesn't do any access control.
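For example, an entry looks like this (the IP and hostnames are illustrative):

```
# /etc/hosts: <ip> <fully-qualified hostname> <short alias>
172.16.1.10   ambari-node1.example.com   ambari-node1
```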
11-20-2018 02:00 PM
@kanna k, Did you add the /etc/hosts entries on the laptop/desktop from which you are accessing the Ambari cluster? If not, please add the entries on your desktop as well.
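To confirm the entry is being picked up, you can resolve the hostname from your desktop (the hostname is illustrative; on Windows use `ping -n 1` instead of `-c 1`):

```
ping -c 1 ambari-node1.example.com
```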
11-19-2018 03:24 PM
@Andreas Kühnert, I guess the directory somehow got deleted from HDFS. You can create it and then try starting the RM:

```
# su hdfs
# kinit -kt /etc/security/keytabs/hdfs.headless.keytab {principal}   # run this only if your environment is Kerberized
# hdfs dfs -mkdir -p /ats/done
# hdfs dfs -chown -R yarn:hadoop /ats/done
```

After running the commands, try to restart the RM. If it then fails with "/ats/active directory not found", repeat the same steps with that directory name (spelled out below). If this works, please take a moment to log in and "Accept" the answer.
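For completeness, the /ats/active variant under the same assumptions:

```
# hdfs dfs -mkdir -p /ats/active
# hdfs dfs -chown -R yarn:hadoop /ats/active
```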
11-15-2018 11:58 AM
You can work with PySpark if you know Python; all the features are available in PySpark as well, so you need not learn Scala or Java.
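As a minimal illustration of the same API being available from Python (the input path is hypothetical):

```python
# Minimal PySpark sketch: the DataFrame API from Python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example").getOrCreate()
df = spark.read.csv("/tmp/people.csv", header=True, inferSchema=True)
df.filter(df.age > 21).show()  # assumes the file has an 'age' column
spark.stop()
```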
11-15-2018 06:32 AM
You need not learn both Java and Scala to start with Spark; you only need to be familiar with one of Java, Scala, Python, or R. You can start with these tutorials to understand the basics: https://hortonworks.com/tutorial/hands-on-tour-of-apache-spark-in-5-minutes/ https://www.tutorialspoint.com/pyspark/index.htm