Member since: 07-09-2019
Posts: 361
Kudos Received: 97
Solutions: 56

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 826 | 08-26-2024 08:17 AM
 | 1293 | 08-20-2024 08:17 PM
 | 500 | 07-08-2024 04:45 AM
 | 614 | 07-01-2024 05:27 AM
 | 526 | 06-05-2024 06:25 AM
11-07-2019
09:44 PM
@Manoj690 Can you check which authorizer Hive delegates to (Ranger, SQLStdAuth, or none)? If the Ranger plugin for Hive is enabled, authorization is delegated to Ranger as the central authority, and you will need to grant permissions for all Hive databases through Ranger policies. You can confirm the setting under Hive > Configs > Settings > Security: what is it set to there?
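To check this from the command line, a minimal sketch using Ambari's bundled configs.sh helper (the admin credentials, Ambari host, and cluster name below are placeholders):

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
    get ambari.example.com mycluster hiveserver2-site | grep authorization
# With the Ranger plugin enabled, hive.security.authorization.manager should point
# at org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory.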
11-07-2019
06:33 AM
Hi @Scharan

Spark conf:

tee -a ~/spark/conf/spark-defaults.conf > /dev/null <<EOF
spark.sql.catalogImplementation hive
spark.master yarn
spark.driver.memory 4g
spark.shuffle.service.enabled true
spark.yarn.jars hdfs:///user/zeppelin/lib/spark/jars/*
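# Note: spark.yarn.jars above assumes the Spark jars were staged to HDFS first,
# for example with these hypothetical one-time staging commands (run as the zeppelin user):
#   hdfs dfs -mkdir -p /user/zeppelin/lib/spark/jars
#   hdfs dfs -put ~/spark/jars/* /user/zeppelin/lib/spark/jars/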
EOF

Livy conf:

tee -a ~/livy/conf/livy-env.sh > /dev/null <<EOF
JAVA_HOME=/usr/lib/jvm/java-8-oracle
HADOOP_HOME=/usr/lib/hadoop
HADOOP_CONF_DIR=/etc/hadoop/conf
SPARK_HOME=~/spark
LD_LIBRARY_PATH=/usr/lib/hadoop/lib/native
EOF

tee -a ~/livy/conf/livy.conf > /dev/null <<EOF
livy.repl.enable-hive-context = true
livy.spark.master = yarn
livy.spark.deploy-mode = cluster
livy.impersonation.enabled = true
EOF

Zeppelin conf:

tee ~/zeppelin/conf/shiro.ini > /dev/null <<EOF
[main]
### FreeIPA over LDAP
ldapRealm = org.apache.zeppelin.realm.LdapRealm
ldapRealm.contextFactory.environment[ldap.searchBase] = dc=my,dc=corp,dc=de
ldapRealm.userDnTemplate = uid={0},cn=users,cn=accounts,dc=my,dc=corp,dc=de
ldapRealm.userSearchScope = subtree
ldapRealm.groupSearchScope = subtree
ldapRealm.searchBase = cn=accounts,dc=my,dc=corp,dc=de
ldapRealm.userSearchBase = cn=users,cn=accounts,dc=my,dc=corp,dc=de
ldapRealm.groupSearchBase = cn=groups,cn=accounts,dc=my,dc=corp,dc=de
ldapRealm.userObjectClass = person
ldapRealm.groupObjectClass = groupofnames
ldapRealm.groupSearchEnableMatchingRuleInChain = true
ldapRealm.userSearchAttributeName = uid
ldapRealm.userSearchFilter=(&(objectclass=person)(uid={0}))
ldapRealm.memberAttribute = member
ldapRealm.memberAttributeValueTemplate = uid={0},cn=users,cn=accounts,dc=my,dc=corp,dc=de
ldapRealm.contextFactory.authenticationMechanism = simple
ldapRealm.contextFactory.systemUsername = zeppelin
ldapRealm.contextFactory.systemPassword = password
ldapRealm.contextFactory.url = ldap://freeipa.my.corp.de:389
securityManager.realms = \$ldapRealm
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
### Enables 'HttpOnly' flag in Zeppelin cookies
cookie = org.apache.shiro.web.servlet.SimpleCookie
cookie.name = JSESSIONID
cookie.httpOnly = true
### Uncomment the below line only when Zeppelin is running over HTTPS
#cookie.secure = true
sessionManager.sessionIdCookie = \$cookie
securityManager.sessionManager = \$sessionManager
# 86,400,000 milliseconds = 24 hour
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
[roles]
pharos = *
admin = *
[urls]
/api/version = anon
# Allow all authenticated users to restart interpreters on a notebook page.
# Comment out the following line if you would like to authorize only admin users to restart interpreters.
/api/interpreter/setting/restart/** = authc
/api/interpreter/** = authc
/api/configurations/** = authc
/api/credential/** = authc
/** = authc
EOF

Interpreter settings ("livy"):

"livy": {
"id": "livy",
"name": "livy",
"group": "livy",
"properties": {
"livy.spark.executor.instances": {
"name": "livy.spark.executor.instances",
"value": "",
"type": "number"
},
"livy.spark.dynamicAllocation.cachedExecutorIdleTimeout": {
"name": "livy.spark.dynamicAllocation.cachedExecutorIdleTimeout",
"value": "",
"type": "string"
},
"zeppelin.livy.concurrentSQL": {
"name": "zeppelin.livy.concurrentSQL",
"value": false,
"type": "checkbox"
},
"zeppelin.livy.url": {
"name": "zeppelin.livy.url",
"value": "http://localhost:8998",
"type": "url"
},
"zeppelin.livy.pull_status.interval.millis": {
"name": "zeppelin.livy.pull_status.interval.millis",
"value": "1000",
"type": "number"
},
"livy.spark.executor.memory": {
"name": "livy.spark.executor.memory",
"value": "",
"type": "string"
},
"zeppelin.livy.restart_dead_session": {
"name": "zeppelin.livy.restart_dead_session",
"value": false,
"type": "checkbox"
},
"livy.spark.dynamicAllocation.enabled": {
"name": "livy.spark.dynamicAllocation.enabled",
"value": false,
"type": "checkbox"
},
"zeppelin.livy.maxLogLines": {
"name": "zeppelin.livy.maxLogLines",
"value": "1000",
"type": "number"
},
"livy.spark.dynamicAllocation.minExecutors": {
"name": "livy.spark.dynamicAllocation.minExecutors",
"value": "",
"type": "number"
},
"livy.spark.executor.cores": {
"name": "livy.spark.executor.cores",
"value": "",
"type": "number"
},
"zeppelin.livy.session.create_timeout": {
"name": "zeppelin.livy.session.create_timeout",
"value": "120",
"type": "number"
},
"zeppelin.livy.spark.sql.maxResult": {
"name": "zeppelin.livy.spark.sql.maxResult",
"value": "1000",
"type": "number"
},
"livy.spark.driver.cores": {
"name": "livy.spark.driver.cores",
"value": "4",
"type": "number"
},
"livy.spark.jars.packages": {
"name": "livy.spark.jars.packages",
"value": "",
"type": "textarea"
},
"zeppelin.livy.spark.sql.field.truncate": {
"name": "zeppelin.livy.spark.sql.field.truncate",
"value": true,
"type": "checkbox"
},
"livy.spark.driver.memory": {
"name": "livy.spark.driver.memory",
"value": "8G",
"type": "string"
},
"zeppelin.livy.displayAppInfo": {
"name": "zeppelin.livy.displayAppInfo",
"value": true,
"type": "checkbox"
},
"zeppelin.livy.principal": {
"name": "zeppelin.livy.principal",
"value": "",
"type": "string"
},
"zeppelin.livy.keytab": {
"name": "zeppelin.livy.keytab",
"value": "",
"type": "textarea"
},
"livy.spark.dynamicAllocation.maxExecutors": {
"name": "livy.spark.dynamicAllocation.maxExecutors",
"value": "",
"type": "number"
},
"livy.spark.dynamicAllocation.initialExecutors": {
"name": "livy.spark.dynamicAllocation.initialExecutors",
"value": "",
"type": "number"
}
},
"status": "READY",
"interpreterGroup": [
{
"name": "spark",
"class": "org.apache.zeppelin.livy.LivySparkInterpreter",
"defaultInterpreter": true,
"editor": {
"language": "scala",
"editOnDblClick": false,
"completionKey": "TAB",
"completionSupport": true
}
},
{
"name": "sql",
"class": "org.apache.zeppelin.livy.LivySparkSQLInterpreter",
"defaultInterpreter": false,
"editor": {
"language": "sql",
"editOnDblClick": false,
"completionKey": "TAB",
"completionSupport": true
}
},
{
"name": "pyspark",
"class": "org.apache.zeppelin.livy.LivyPySparkInterpreter",
"defaultInterpreter": false,
"editor": {
"language": "python",
"editOnDblClick": false,
"completionKey": "TAB",
"completionSupport": true
}
},
{
"name": "pyspark3",
"class": "org.apache.zeppelin.livy.LivyPySpark3Interpreter",
"defaultInterpreter": false,
"editor": {
"language": "python",
"editOnDblClick": false,
"completionKey": "TAB",
"completionSupport": true
}
},
{
"name": "sparkr",
"class": "org.apache.zeppelin.livy.LivySparkRInterpreter",
"defaultInterpreter": false,
"editor": {
"language": "r",
"editOnDblClick": false,
"completionKey": "TAB",
"completionSupport": true
}
},
{
"name": "shared",
"class": "org.apache.zeppelin.livy.LivySharedInterpreter",
"defaultInterpreter": false
}
],
"dependencies": [],
"option": {
"remote": true,
"port": -1,
"perNote": "shared",
"perUser": "scoped",
"isExistingProcess": false,
"setPermission": false,
"owners": [],
"isUserImpersonate": false
}
},

Thanks!
Nicola
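For reference, two hypothetical post-restart smoke tests (the Zeppelin host/port and the login below are placeholders; the Livy URL matches zeppelin.livy.url above):

# 1. Shiro/LDAP login through Zeppelin's REST API:
curl -s -i -d 'userName=nicola&password=secret' http://localhost:8080/api/login
# 2. Livy session creation; poll the returned session id until its state is "idle":
curl -s -X POST -H 'Content-Type: application/json' -d '{"kind": "spark"}' http://localhost:8998/sessions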
11-07-2019
06:06 AM
1 Kudo
@Harish-hadoop Add the queue name to the hive.url property in the JDBC interpreter settings:

hive.url=jdbc:hive2://<zookeeper quorum>/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2?tez.queue.name=<Queue Name>

EXAMPLE:

hive.url=jdbc:hive2://test1:2181,test2:2181,test3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2?tez.queue.name=sai
11-06-2019
06:20 AM
1 Kudo
@wdailey Use "ldapRealm=org.apache.zeppelin.realm.LdapRealm". Below is the template for your reference:

ldapRealm=org.apache.zeppelin.realm.LdapRealm
ldapRealm.contextFactory.systemUsername =cn=manager,dc=charan,dc=com
ldapRealm.contextFactory.systemPassword =admin
ldapRealm.contextFactory.authenticationMechanism=simple
ldapRealm.contextFactory.url=ldap://test1:389
ldapRealm.authorizationEnabled=true
#ldapRealm.pagingSize = 20000
ldapRealm.searchBase=
ldapRealm.userSearchBase=
ldapRealm.groupSearchBase=
ldapRealm.userObjectClass=
ldapRealm.userSearchAttributeName = uid
ldapRealm.userSearchScope = subtree
ldapRealm.groupSearchScope = subtree
ldapRealm.userSearchFilter=
ldapRealm.memberAttribute = member
ldapRealm.memberAttributeValueTemplate=(name={0})
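For illustration only, a hypothetical fill-in of the blank values for an OpenLDAP tree rooted at dc=charan,dc=com (matching the bind DN above; every value here is a placeholder to adapt to your own directory layout):

ldapRealm.searchBase=dc=charan,dc=com
ldapRealm.userSearchBase=ou=people,dc=charan,dc=com
ldapRealm.groupSearchBase=ou=groups,dc=charan,dc=com
ldapRealm.userObjectClass=posixAccount
ldapRealm.userSearchFilter=(&(objectclass=posixAccount)(uid={0}))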
10-24-2019
03:41 AM
1 Kudo
@IrenaKon Refer to the HDP 2.6.5 documentation linked below: https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.5/index.html
10-22-2019
08:16 AM
1 Kudo
@Peruvian81 You need to update Ranger. You can follow the steps below to reset the password in Postgres:

1. Log in to Postgres
2. postgres=# \connect ranger
3. ranger=# update x_portal_user set password = 'ceb4f32325eda6142bd65215f4c0f371' where login_id = 'admin';
   The above resets the password to 'admin'.
4. Log in to the Ranger UI using the above password
5. Go to User Profile and change the password
6. Open Ambari UI > Ranger > Configs
7. Update 'admin_password' in Advanced ranger-env with the newly set password
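Steps 1-3 as a single shell command on the Ranger database host (a sketch, assuming the database is named 'ranger' and psql runs as the local postgres superuser):

su - postgres -c "psql -d ranger -c \"update x_portal_user set password = 'ceb4f32325eda6142bd65215f4c0f371' where login_id = 'admin';\""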
09-04-2019
10:45 AM
2 Kudos
To encrypt the LDAP bind password in the Zeppelin Shiro configuration, follow the steps below:

1. Remove the systemPassword line from the config (this line contained the password for the system user that performs the LDAP lookups)
2. Add the following line in shiro.ini:
   ldapRealm.hadoopSecurityCredentialPath = jceks://file/home/zeppelin/conf/zeppelin.jceks
3. Run the following command as the zeppelin user on the Zeppelin host:
   # hadoop credential create ldapRealm.systemPassword -provider jceks://file/home/zeppelin/conf/zeppelin.jceks
   Enter the password when prompted (e.g. <test>, where <test> is your password)
4. Confirm that the file has 700 permissions
5. Restart Zeppelin
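Optional verification (standard hadoop credential and chmod commands; the path matches the example above):

hadoop credential list -provider jceks://file/home/zeppelin/conf/zeppelin.jceks   # should list ldapRealm.systemPassword
chmod 700 /home/zeppelin/conf/zeppelin.jceks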
09-04-2019
10:28 AM
1 Kudo
The JDBC interpreter will submit jobs to the default queue.
To run a job in a specific queue, add the queue name to the hive.url property in the JDBC interpreter settings as shown below:
hive.url=jdbc:hive2://<zookeeper quorum>/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2?tez.queue.name=<Queue Name>
EXAMPLE:
hive.url=jdbc:hive2://c186-node2.-labs.com:2181,c186-node3.-labs.com:2181,c186-node4.-labs.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2?tez.queue.name=sai
Where sai is my queue name.
To check and confirm, execute the queries below in Zeppelin using the JDBC interpreter, then verify in the YARN ResourceManager UI:
%jdbc(hive)
show databases;
use <database name>;
show tables;
select count(*) from <table name>;
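Alternatively, a quick CLI check ('yarn application -list' prints a Queue column; 'sai' is the example queue above):

yarn application -list | grep sai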
12-28-2018
12:59 PM
@Sedat Kestepe No, it is not related to the mainboard change.
12-17-2018
05:53 AM
thanks bro