Member since: 07-09-2019
Posts: 357
Kudos Received: 97
Solutions: 56
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 361 | 08-26-2024 08:17 AM |
| | 564 | 08-20-2024 08:17 PM |
| | 352 | 07-08-2024 04:45 AM |
| | 399 | 07-01-2024 05:27 AM |
| | 377 | 06-05-2024 06:25 AM |
09-22-2020 07:03 PM
It works, thank you.
09-17-2020 07:00 AM
1 Kudo
@9een Yes, it is supported. You can refer to the following links:
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/authentication-with-kerberos/content/kerberos_optional_use_an_existing_ipa.html
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/ambari-authentication-ldap-ad/content/amb_freeIPA_ladap_setup_example.html
09-17-2020 05:27 AM
2 Kudos
@Nivas12 You can define an alert dispatcher that Ambari will invoke when alerts fire. Refer to https://cwiki.apache.org/confluence/display/AMBARI/Creating+a+Script-based+Alert+Dispatcher
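As a rough sketch (the ambari.properties key is taken from that wiki page; the script path and the argument order are assumptions), you register a script in ambari.properties and Ambari invokes it on every alert:

# /etc/ambari-server/conf/ambari.properties -- register the dispatcher script
# notification.dispatch.alert.script=/var/lib/ambari-server/resources/scripts/alert_notify.sh

#!/bin/bash
# alert_notify.sh -- Ambari passes the alert fields as positional arguments
# (order assumed here: definition name, label, service, state, text).
DEFINITION_NAME="$1"
DEFINITION_LABEL="$2"
SERVICE_NAME="$3"
ALERT_STATE="$4"
ALERT_TEXT="$5"
echo "$(date '+%F %T') [$ALERT_STATE] $SERVICE_NAME/$DEFINITION_NAME: $ALERT_TEXT" >> /var/log/ambari-server/script-alerts.log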
09-04-2020 10:14 PM
@saivenkatg55 Use the commands below to remove the stuck background-operation entries from the Ambari database.

# select task_id, role, role_command from host_role_command where status='IN_PROGRESS';

The above lists all tasks in IN_PROGRESS status. You can also check for QUEUED or PENDING tasks by replacing 'IN_PROGRESS' with 'QUEUED' or 'PENDING'.

# update host_role_command set status='ABORTED' where status='QUEUED';

The above changes the state of those tasks to ABORTED.
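For example, assuming the default embedded PostgreSQL backend (database and user both named 'ambari'; adjust for MySQL/MariaDB), the statements can be run with psql:

psql -U ambari -d ambari -c "select task_id, role, role_command from host_role_command where status='IN_PROGRESS';"
psql -U ambari -d ambari -c "update host_role_command set status='ABORTED' where status='IN_PROGRESS';"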
08-05-2020 03:56 AM
@shrikant_bm A similar issue for me was resolved after removing the 'renew_lifetime' line from /etc/krb5.conf. The following link also provides additional information about this issue: https://community.cloudera.com/t5/Community-Articles/How-to-solve-the-Message-stream-modified-41-error-on/ta-p/292986
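A minimal sketch of that change (back up the file first, then drop the line from the [libdefaults] section):

cp /etc/krb5.conf /etc/krb5.conf.bak
sed -i '/renew_lifetime/d' /etc/krb5.conf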
08-03-2020 02:21 AM
@GangWar In which Oozie service configuration item in Cloudera Manager should this be defined?
01-12-2020 05:44 AM
2 Kudos
@murali2425 You could build a NiFi flow that "copies" files to GitHub. This would need to be done by creating a custom processor, or perhaps just by using ExecuteScript to run a custom Python script. Whichever route you take, make sure that all NiFi nodes are set up with permissions to write to the GitHub repo. Then, inside your custom processor or script, execute the required git commands to commit ("copy") the file(s). A sketch of such a script follows below.
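A minimal sketch (repo path, branch, and invocation are assumptions; the repo must already be cloned and credentialed on every NiFi node):

#!/bin/bash
# Hypothetical helper a NiFi ExecuteStreamCommand/ExecuteScript step might call,
# with the flow file's path passed as the first argument.
REPO_DIR=/opt/nifi/github-sync   # local clone of the GitHub repo (assumption)
FILE="$1"
cp "$FILE" "$REPO_DIR/"
cd "$REPO_DIR" || exit 1
git add "$(basename "$FILE")"
git commit -m "NiFi: add $(basename "$FILE")"
git push origin main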
12-19-2019 01:00 PM
This resolved my problem: "As per the source code, it pulls the group's cn based on these values. Also, comment out the settings below if there are no nested groups (groups inside groups)." Instead of putting the full DN, simply put the cn. Thanks. This should be the accepted answer for Zeppelin 0.8. Zeppelin version: 0.8; HDP version: 3.1.4.
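For example, in shiro.ini the group-to-role mapping takes the group's cn rather than its full DN (group and role names below are illustrative):

tee -a ~/zeppelin/conf/shiro.ini > /dev/null <<EOF
# map the LDAP group cn (not the full DN) to a Zeppelin role
ldapRealm.rolesByGroup = hadoop-admins: admin
EOF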
11-07-2019 09:44 PM
@Manoj690 Can you check whether authorization has been delegated to Ranger/Kerberos/SQLStdAuth? If the Ranger plugin for Hive is enabled, then authorization has been delegated to Ranger as the central authority, and you will need to grant the permissions through Ranger for all Hive databases. What is Hive > Configs > Settings > Security set to?
11-07-2019 06:33 AM
Hi @Scharan
Spark conf:
tee -a ~/spark/conf/spark-defaults.conf > /dev/null <<EOF
spark.sql.catalogImplementation hive
spark.master yarn
spark.driver.memory 4g
spark.shuffle.service.enabled true
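# jars pre-staged on HDFS so each application does not re-upload the Spark libraries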
spark.yarn.jars hdfs:///user/zeppelin/lib/spark/jars/*
EOF
Livy conf:
tee -a ~/livy/conf/livy-env.sh > /dev/null <<EOF
JAVA_HOME=/usr/lib/jvm/java-8-oracle
HADOOP_HOME=/usr/lib/hadoop
HADOOP_CONF_DIR=/etc/hadoop/conf
SPARK_HOME=~/spark
LD_LIBRARY_PATH=/usr/lib/hadoop/lib/native
EOF
tee -a ~/livy/conf/livy.conf > /dev/null <<EOF
livy.repl.enable-hive-context = true
livy.spark.master = yarn
livy.spark.deploy-mode = cluster
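# run session code as the requesting user instead of the livy service user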
livy.impersonation.enabled = true
EOF
Zeppelin conf:
tee ~/zeppelin/conf/shiro.ini > /dev/null <<EOF
[main]
### FreeIPA over LDAP
ldapRealm = org.apache.zeppelin.realm.LdapRealm
ldapRealm.contextFactory.environment[ldap.searchBase] = dc=my,dc=corp,dc=de
ldapRealm.userDnTemplate = uid={0},cn=users,cn=accounts,dc=my,dc=corp,dc=de
ldapRealm.userSearchScope = subtree
ldapRealm.groupSearchScope = subtree
ldapRealm.searchBase = cn=accounts,dc=my,dc=corp,dc=de
ldapRealm.userSearchBase = cn=users,cn=accounts,dc=my,dc=corp,dc=de
ldapRealm.groupSearchBase = cn=groups,cn=accounts,dc=my,dc=corp,dc=de
ldapRealm.userObjectClass = person
ldapRealm.groupObjectClass = groupofnames
ldapRealm.groupSearchEnableMatchingRuleInChain = true
ldapRealm.userSearchAttributeName = uid
ldapRealm.userSearchFilter=(&(objectclass=person)(uid={0}))
ldapRealm.memberAttribute = member
ldapRealm.memberAttributeValueTemplate = uid={0},cn=users,cn=accounts,dc=my,dc=corp,dc=de
ldapRealm.contextFactory.authenticationMechanism = simple
ldapRealm.contextFactory.systemUsername = zeppelin
ldapRealm.contextFactory.systemPassword = password
ldapRealm.contextFactory.url = ldap://freeipa.my.corp.de:389
securityManager.realms = \$ldapRealm
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
### Enables 'HttpOnly' flag in Zeppelin cookies
cookie = org.apache.shiro.web.servlet.SimpleCookie
cookie.name = JSESSIONID
cookie.httpOnly = true
### Uncomment the below line only when Zeppelin is running over HTTPS
#cookie.secure = true
sessionManager.sessionIdCookie = \$cookie
securityManager.sessionManager = \$sessionManager
# 86,400,000 milliseconds = 24 hours
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
[roles]
pharos = *
admin = *
[urls]
/api/version = anon
# Allow all authenticated users to restart interpreters on a notebook page.
# Comment out the following line if you would like to authorize only admin users to restart interpreters.
/api/interpreter/setting/restart/** = authc
/api/interpreter/** = authc
/api/configurations/** = authc
/api/credential/** = authc
/** = authc
EOF
Interpreter:
"livy": {
"id": "livy",
"name": "livy",
"group": "livy",
"properties": {
"livy.spark.executor.instances": {
"name": "livy.spark.executor.instances",
"value": "",
"type": "number"
},
"livy.spark.dynamicAllocation.cachedExecutorIdleTimeout": {
"name": "livy.spark.dynamicAllocation.cachedExecutorIdleTimeout",
"value": "",
"type": "string"
},
"zeppelin.livy.concurrentSQL": {
"name": "zeppelin.livy.concurrentSQL",
"value": false,
"type": "checkbox"
},
"zeppelin.livy.url": {
"name": "zeppelin.livy.url",
"value": "http://localhost:8998",
"type": "url"
},
"zeppelin.livy.pull_status.interval.millis": {
"name": "zeppelin.livy.pull_status.interval.millis",
"value": "1000",
"type": "number"
},
"livy.spark.executor.memory": {
"name": "livy.spark.executor.memory",
"value": "",
"type": "string"
},
"zeppelin.livy.restart_dead_session": {
"name": "zeppelin.livy.restart_dead_session",
"value": false,
"type": "checkbox"
},
"livy.spark.dynamicAllocation.enabled": {
"name": "livy.spark.dynamicAllocation.enabled",
"value": false,
"type": "checkbox"
},
"zeppelin.livy.maxLogLines": {
"name": "zeppelin.livy.maxLogLines",
"value": "1000",
"type": "number"
},
"livy.spark.dynamicAllocation.minExecutors": {
"name": "livy.spark.dynamicAllocation.minExecutors",
"value": "",
"type": "number"
},
"livy.spark.executor.cores": {
"name": "livy.spark.executor.cores",
"value": "",
"type": "number"
},
"zeppelin.livy.session.create_timeout": {
"name": "zeppelin.livy.session.create_timeout",
"value": "120",
"type": "number"
},
"zeppelin.livy.spark.sql.maxResult": {
"name": "zeppelin.livy.spark.sql.maxResult",
"value": "1000",
"type": "number"
},
"livy.spark.driver.cores": {
"name": "livy.spark.driver.cores",
"value": "4",
"type": "number"
},
"livy.spark.jars.packages": {
"name": "livy.spark.jars.packages",
"value": "",
"type": "textarea"
},
"zeppelin.livy.spark.sql.field.truncate": {
"name": "zeppelin.livy.spark.sql.field.truncate",
"value": true,
"type": "checkbox"
},
"livy.spark.driver.memory": {
"name": "livy.spark.driver.memory",
"value": "8G",
"type": "string"
},
"zeppelin.livy.displayAppInfo": {
"name": "zeppelin.livy.displayAppInfo",
"value": true,
"type": "checkbox"
},
"zeppelin.livy.principal": {
"name": "zeppelin.livy.principal",
"value": "",
"type": "string"
},
"zeppelin.livy.keytab": {
"name": "zeppelin.livy.keytab",
"value": "",
"type": "textarea"
},
"livy.spark.dynamicAllocation.maxExecutors": {
"name": "livy.spark.dynamicAllocation.maxExecutors",
"value": "",
"type": "number"
},
"livy.spark.dynamicAllocation.initialExecutors": {
"name": "livy.spark.dynamicAllocation.initialExecutors",
"value": "",
"type": "number"
}
},
"status": "READY",
"interpreterGroup": [
{
"name": "spark",
"class": "org.apache.zeppelin.livy.LivySparkInterpreter",
"defaultInterpreter": true,
"editor": {
"language": "scala",
"editOnDblClick": false,
"completionKey": "TAB",
"completionSupport": true
}
},
{
"name": "sql",
"class": "org.apache.zeppelin.livy.LivySparkSQLInterpreter",
"defaultInterpreter": false,
"editor": {
"language": "sql",
"editOnDblClick": false,
"completionKey": "TAB",
"completionSupport": true
}
},
{
"name": "pyspark",
"class": "org.apache.zeppelin.livy.LivyPySparkInterpreter",
"defaultInterpreter": false,
"editor": {
"language": "python",
"editOnDblClick": false,
"completionKey": "TAB",
"completionSupport": true
}
},
{
"name": "pyspark3",
"class": "org.apache.zeppelin.livy.LivyPySpark3Interpreter",
"defaultInterpreter": false,
"editor": {
"language": "python",
"editOnDblClick": false,
"completionKey": "TAB",
"completionSupport": true
}
},
{
"name": "sparkr",
"class": "org.apache.zeppelin.livy.LivySparkRInterpreter",
"defaultInterpreter": false,
"editor": {
"language": "r",
"editOnDblClick": false,
"completionKey": "TAB",
"completionSupport": true
}
},
{
"name": "shared",
"class": "org.apache.zeppelin.livy.LivySharedInterpreter",
"defaultInterpreter": false
}
],
"dependencies": [],
"option": {
"remote": true,
"port": -1,
"perNote": "shared",
"perUser": "scoped",
"isExistingProcess": false,
"setPermission": false,
"owners": [],
"isUserImpersonate": false
}
},
Thanks!
Nicola
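To sanity-check the LDAP login path configured above, Zeppelin's login endpoint (shiro.loginUrl = /api/login) can be exercised directly; the host, port, and credentials below are placeholders:

curl -sS -i -X POST -d 'userName=myuser' -d 'password=mypassword' http://localhost:8080/api/login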