Member since: 04-03-2019
Posts: 89
Kudos Received: 5
Solutions: 5
My Accepted Solutions
Title | Views | Posted
---|---|---
| 1874 | 01-21-2022 04:31 PM
| 4347 | 02-25-2020 10:02 AM
| 2214 | 02-19-2020 01:29 PM
| 1952 | 09-17-2019 06:33 AM
| 4438 | 08-26-2019 01:35 PM
01-21-2022
04:43 PM
@Scharan By the way, under the Zeppelin Shiro Urls Block, the original value is `/api/interpreter/** = authc, roles[{{zeppelin_admin_group}}]`. Could you tell me what the notation `{{zeppelin_admin_group}}` is for? I see this kind of double-curly-brace notation frequently. Is it a token to be replaced? If so, what kind of replacement does it expect? Thanks.
01-21-2022
04:31 PM
@Scharan I figured it out. CDP Cloudera Manager does expose shiro.ini like Ambari, but via a different layout, which I should have realized earlier. Under "zeppelin.shiro.user.block", I added `admin = admin, admin`, and it worked. Thanks.
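For reference, a minimal sketch of what that user-block line means in Shiro's INI format (the password and role values are just the ones from this thread; the wildcard permission is an assumption, not something stated here):

```ini
[users]
# username = password, role1, role2, ...
# so "admin = admin, admin" is user "admin", password "admin", in role "admin"
admin = admin, admin

[roles]
# role = permissions; "*" would grant the admin role everything
admin = *

[urls]
# the URL rule from earlier in the thread, restricting
# interpreter settings to authenticated users with the admin role
/api/interpreter/** = authc, roles[admin]
```

The user-to-role link the earlier posts were looking for is exactly the trailing role list on the `[users]` line.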
01-21-2022
03:01 PM
On the Zeppelin node, under the directory /etc/zeppelin/conf, I found the following files: `configuration.xsl`, `interpreter-list`, `log4j.properties`, `log4j_yarn_cluster.properties`, `shiro.ini.template`, `zeppelin-env.cmd.template`, `zeppelin-env.sh.template`, `zeppelin-site.xml.template`. Should I create a shiro.ini file here?
01-21-2022
02:32 PM
@Scharan Thanks for the reply. I followed your recommendation and got the same permission error. I think the disconnect is this: I successfully added a user called admin, but the configuration `/api/interpreter/** = authc, roles[admin]` refers to a role called admin. The link between a user and a role seems to live inside shiro.ini, which I have no idea how to access. I used Zeppelin in HDP, and HDP exposes its shiro.ini via the Zeppelin configuration inside Ambari. Now in CDP I cannot find a similar configuration inside Cloudera Manager.
01-20-2022
07:02 PM
I am using CDP 7.1.7 and the cluster has not enabled Kerberos yet. Ranger is not enabled either. I followed the steps in this post https://community.cloudera.com/t5/Support-Questions/CDP-7-1-3-Zepplin-not-able-to-login-with-default-username/td-p/303717 to be able to log in as admin. But this "admin" account has no permission to access the configuration or interpreter pages. According to the CDP documentation, https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/configuring-zeppelin/topics/enabling_access_control_for_interpreter__configuration__and_credential_settings.html, to configure shiro.ini for Zeppelin security I have to go through the Zeppelin web UI. What should I do? Regards,
Labels:
- Apache Zeppelin
11-18-2021
01:30 PM
rbiswas1, I tried your code, but pssh returned a timeout error: it was waiting for the password, but I never got a prompt to enter one. Could you elaborate on your method? Thanks.
09-15-2021
10:32 PM
@RangaReddy The link is exactly what I need. Thanks for your help.
09-09-2021
01:18 AM
I am trying to parse a nested JSON document using an RDD rather than a DataFrame. The reason I cannot use a DataFrame (the typical code being spark.read.json) is that the document structure is very complicated: the schema detected by the reader is useless, because child nodes at the same level have different schemas. So I tried the script below.

import json
s = '{"key1": {"myid": "123", "myname": "test"}}'
rdd = sc.parallelize(s).map(json.loads)

My next step will be using a map transformation to parse the JSON string, but I do not know where to start. I tried the script below, but it failed.

rdd2 = rdd.map(lambda j: (j[x]) for x in j)

I would appreciate any resource on using RDD transformations to parse JSON.
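A minimal sketch of the two fixes I would expect here, in plain Python so it runs without Spark (the Spark calls are noted in comments; everything beyond the sample string from the post is an assumption): `sc.parallelize(s)` on a bare string splits it into individual characters, so the record must be wrapped in a list, and the failing lambda needs to iterate its own argument rather than an undefined `j`.

```python
import json

# The sample record from the post.
s = '{"key1": {"myid": "123", "myname": "test"}}'

# With Spark, wrap the record in a list -- sc.parallelize(s) would
# distribute the string one character at a time:
#   rdd = sc.parallelize([s]).map(json.loads)

doc = json.loads(s)

# Equivalent of rdd.map(lambda d: [(k, d[k]) for k in d]):
# turn each parsed record into (outer_key, inner_dict) pairs.
pairs = [(k, doc[k]) for k in doc]
print(pairs)  # [('key1', {'myid': '123', 'myname': 'test'})]
```

From there, nested levels can be walked recursively inside the same map function, since each parsed record is just ordinary Python dicts and lists.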
Labels:
- Apache Spark
09-03-2021
05:08 PM
Vidya, thanks for your reply. Could you help me clarify the issue further? Does Spark (or another MapReduce tool) create the container using the local host as its template (to some degree)?
08-26-2021
02:58 PM
I will use Spark2 in CDP and need to install Python3. Do I need to install Python3 on every node in the CDP cluster, or only on one particular node? Spark2 jobs are executed in JVM containers that could be created on any worker node. I wonder whether the container is created from a template? If yes, how is the template created and where is it? Thanks.
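Assuming Python 3 ends up installed at the same path on every worker (the usual approach, since YARN containers use the node's local filesystem rather than a template image), a sketch of pointing Spark 2 at that interpreter; the `/usr/bin/python3` path is a placeholder to adjust for the actual install location:

```
# Environment variables read by PySpark:
export PYSPARK_PYTHON=/usr/bin/python3
export PYSPARK_DRIVER_PYTHON=/usr/bin/python3

# Or per-job via spark-submit properties (Spark 2.1+):
spark-submit \
  --conf spark.pyspark.python=/usr/bin/python3 \
  --conf spark.pyspark.driver.python=/usr/bin/python3 \
  my_job.py
```

Either way, the executor-side setting only works if that path exists on every node where a container can be scheduled.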
Labels:
- Apache Spark