Member since 07-09-2019

Posts: 422
Kudos Received: 97
Solutions: 58

        My Accepted Solutions
| Title | Views | Posted | 
|---|---|---|
|  | 489 | 07-06-2025 05:24 AM |
|  | 528 | 05-28-2025 10:35 AM |
|  | 2243 | 08-26-2024 08:17 AM |
|  | 2848 | 08-20-2024 08:17 PM |
|  | 1177 | 07-08-2024 04:45 AM |

08-05-2020 03:56 AM

@shrikant_bm A similar issue for me was resolved by removing the 'renew_lifetime' line from /etc/krb5.conf. The following link also provides additional information about this issue: https://community.cloudera.com/t5/Community-Articles/How-to-solve-the-Message-stream-modified-41-error-on/ta-p/292986

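A minimal sketch of that change, assuming a stock /etc/krb5.conf with renew_lifetime under [libdefaults]; the principal below is a placeholder:

# Comment out the renew_lifetime line in /etc/krb5.conf
sudo sed -i 's/^\(\s*renew_lifetime\)/#\1/' /etc/krb5.conf

# Discard any cached ticket and obtain a fresh one
kdestroy
kinit user@EXAMPLE.COM   # hypothetical principal/realm
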
08-03-2020 02:21 AM

@GangWar In which of Oozie's service configuration items in Cloudera Manager should this be defined?

01-12-2020 05:44 AM
2 Kudos

@murali2425 You could build a NiFi flow that "copies" files to GitHub. This would need to be done by creating a custom processor, or perhaps just by using ExecuteScript to run a custom Python script. Either route you take, you would need to make sure that all NiFi nodes are set up with permission to write to the GitHub repo. Then, inside your custom processor or script, you would execute the required git commands to commit ("copy") the file(s), as in the sketch below.

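A minimal sketch of the git step such a script would run, assuming a hypothetical local clone at /data/export and that each NiFi node already has push credentials configured:

cd /data/export                    # hypothetical local clone of the GitHub repo
cp /tmp/exported-file.json .       # file written out by the NiFi flow (placeholder path)
git add exported-file.json
git commit -m "NiFi export $(date -u +%FT%TZ)"
git push origin main               # assumes a 'main' branch and stored credentials
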
12-19-2019 01:00 PM

This resolved my problem:

"As per the source code, it pulls the group's cn based on these values. Also, comment out the lines below if there are no groups inside groups."

Instead of putting the full DN, simply put the cn. Thanks. This should be the accepted answer for Zeppelin 0.8.

Zeppelin version: 0.8; HDP version: HDP 3.1.4

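A rough sketch of the difference in shiro.ini, using a hypothetical group named zeppelin_admins; the exact keys involved depend on your LdapRealm setup:

[roles]
# Reference the group by its cn:
zeppelin_admins = *
# rather than by its full DN, e.g.:
# cn=zeppelin_admins,ou=groups,dc=example,dc=com = *
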
11-07-2019 09:44 PM

@Manoj690 Can you check whether authorization has been delegated to Ranger, Kerberos, or SQLStdAuth? If you have the Ranger plugin for Hive enabled, then authorization has been delegated to Ranger as the central authority, and you will need to grant the permissions through Ranger for all Hive databases. What is the security setting under Hive > Configs > Settings?

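One quick way to check, a sketch assuming a HiveServer2 at a placeholder host; a Ranger class name in the output means authorization is delegated to Ranger:

beeline -u "jdbc:hive2://hs2-host:10000/default" \
        -e "set hive.security.authorization.manager;"
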
11-07-2019 06:33 AM

Hi @Scharan

Spark conf:

tee -a ~/spark/conf/spark-defaults.conf > /dev/null <<EOF
spark.sql.catalogImplementation hive
spark.master yarn
spark.driver.memory 4g
spark.shuffle.service.enabled true
spark.yarn.jars hdfs:///user/zeppelin/lib/spark/jars/*
EOF

Livy conf:

tee -a ~/livy/conf/livy-env.sh > /dev/null <<EOF
JAVA_HOME=/usr/lib/jvm/java-8-oracle
HADOOP_HOME=/usr/lib/hadoop
HADOOP_CONF_DIR=/etc/hadoop/conf
SPARK_HOME=~/spark
LD_LIBRARY_PATH=/usr/lib/hadoop/lib/native
EOF

tee -a ~/livy/conf/livy.conf > /dev/null <<EOF
livy.repl.enable-hive-context = true
livy.spark.master = yarn
livy.spark.deploy-mode = cluster 
livy.impersonation.enabled = true
EOF

Zeppelin conf:

tee ~/zeppelin/conf/shiro.ini > /dev/null <<EOF
[main]
### FreeIPA over LDAP
ldapRealm = org.apache.zeppelin.realm.LdapRealm
ldapRealm.contextFactory.environment[ldap.searchBase] = dc=my,dc=corp,dc=de
ldapRealm.userDnTemplate = uid={0},cn=users,cn=accounts,dc=my,dc=corp,dc=de
ldapRealm.userSearchScope = subtree
ldapRealm.groupSearchScope = subtree
ldapRealm.searchBase = cn=accounts,dc=my,dc=corp,dc=de
ldapRealm.userSearchBase = cn=users,cn=accounts,dc=my,dc=corp,dc=de
ldapRealm.groupSearchBase = cn=groups,cn=accounts,dc=my,dc=corp,dc=de
ldapRealm.userObjectClass = person
ldapRealm.groupObjectClass = groupofnames
ldapRealm.groupSearchEnableMatchingRuleInChain = true
ldapRealm.userSearchAttributeName = uid
ldapRealm.userSearchFilter=(&(objectclass=person)(uid={0}))
ldapRealm.memberAttribute = member
ldapRealm.memberAttributeValueTemplate = uid={0},cn=users,cn=accounts,dc=my,dc=corp,dc=de
ldapRealm.contextFactory.authenticationMechanism = simple
ldapRealm.contextFactory.systemUsername = zeppelin
ldapRealm.contextFactory.systemPassword = password
ldapRealm.contextFactory.url = ldap://freeipa.my.corp.de:389
securityManager.realms = \$ldapRealm
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
### Enables 'HttpOnly' flag in Zeppelin cookies
cookie = org.apache.shiro.web.servlet.SimpleCookie
cookie.name = JSESSIONID
cookie.httpOnly = true
### Uncomment the below line only when Zeppelin is running over HTTPS
#cookie.secure = true
sessionManager.sessionIdCookie = \$cookie
securityManager.sessionManager = \$sessionManager
# 86,400,000 milliseconds = 24 hours
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
[roles]
pharos = *
admin = *
[urls]
/api/version = anon
# Allow all authenticated users to restart interpreters on a notebook page.
# Comment out the following line if you would like to authorize only admin users to restart interpreters.
/api/interpreter/setting/restart/** = authc
/api/interpreter/** = authc
/api/configurations/** = authc
/api/credential/** = authc
/** = authc
EOF

Interpreter:

    "livy": {
      "id": "livy",
      "name": "livy",
      "group": "livy",
      "properties": {
        "livy.spark.executor.instances": {
          "name": "livy.spark.executor.instances",
          "value": "",
          "type": "number"
        },
        "livy.spark.dynamicAllocation.cachedExecutorIdleTimeout": {
          "name": "livy.spark.dynamicAllocation.cachedExecutorIdleTimeout",
          "value": "",
          "type": "string"
        },
        "zeppelin.livy.concurrentSQL": {
          "name": "zeppelin.livy.concurrentSQL",
          "value": false,
          "type": "checkbox"
        },
        "zeppelin.livy.url": {
          "name": "zeppelin.livy.url",
          "value": "http://localhost:8998",
          "type": "url"
        },
        "zeppelin.livy.pull_status.interval.millis": {
          "name": "zeppelin.livy.pull_status.interval.millis",
          "value": "1000",
          "type": "number"
        },
        "livy.spark.executor.memory": {
          "name": "livy.spark.executor.memory",
          "value": "",
          "type": "string"
        },
        "zeppelin.livy.restart_dead_session": {
          "name": "zeppelin.livy.restart_dead_session",
          "value": false,
          "type": "checkbox"
        },
        "livy.spark.dynamicAllocation.enabled": {
          "name": "livy.spark.dynamicAllocation.enabled",
          "value": false,
          "type": "checkbox"
        },
        "zeppelin.livy.maxLogLines": {
          "name": "zeppelin.livy.maxLogLines",
          "value": "1000",
          "type": "number"
        },
        "livy.spark.dynamicAllocation.minExecutors": {
          "name": "livy.spark.dynamicAllocation.minExecutors",
          "value": "",
          "type": "number"
        },
        "livy.spark.executor.cores": {
          "name": "livy.spark.executor.cores",
          "value": "",
          "type": "number"
        },
        "zeppelin.livy.session.create_timeout": {
          "name": "zeppelin.livy.session.create_timeout",
          "value": "120",
          "type": "number"
        },
        "zeppelin.livy.spark.sql.maxResult": {
          "name": "zeppelin.livy.spark.sql.maxResult",
          "value": "1000",
          "type": "number"
        },
        "livy.spark.driver.cores": {
          "name": "livy.spark.driver.cores",
          "value": "4",
          "type": "number"
        },
        "livy.spark.jars.packages": {
          "name": "livy.spark.jars.packages",
          "value": "",
          "type": "textarea"
        },
        "zeppelin.livy.spark.sql.field.truncate": {
          "name": "zeppelin.livy.spark.sql.field.truncate",
          "value": true,
          "type": "checkbox"
        },
        "livy.spark.driver.memory": {
          "name": "livy.spark.driver.memory",
          "value": "8G",
          "type": "string"
        },
        "zeppelin.livy.displayAppInfo": {
          "name": "zeppelin.livy.displayAppInfo",
          "value": true,
          "type": "checkbox"
        },
        "zeppelin.livy.principal": {
          "name": "zeppelin.livy.principal",
          "value": "",
          "type": "string"
        },
        "zeppelin.livy.keytab": {
          "name": "zeppelin.livy.keytab",
          "value": "",
          "type": "textarea"
        },
        "livy.spark.dynamicAllocation.maxExecutors": {
          "name": "livy.spark.dynamicAllocation.maxExecutors",
          "value": "",
          "type": "number"
        },
        "livy.spark.dynamicAllocation.initialExecutors": {
          "name": "livy.spark.dynamicAllocation.initialExecutors",
          "value": "",
          "type": "number"
        }
      },
      "status": "READY",
      "interpreterGroup": [
        {
          "name": "spark",
          "class": "org.apache.zeppelin.livy.LivySparkInterpreter",
          "defaultInterpreter": true,
          "editor": {
            "language": "scala",
            "editOnDblClick": false,
            "completionKey": "TAB",
            "completionSupport": true
          }
        },
        {
          "name": "sql",
          "class": "org.apache.zeppelin.livy.LivySparkSQLInterpreter",
          "defaultInterpreter": false,
          "editor": {
            "language": "sql",
            "editOnDblClick": false,
            "completionKey": "TAB",
            "completionSupport": true
          }
        },
        {
          "name": "pyspark",
          "class": "org.apache.zeppelin.livy.LivyPySparkInterpreter",
          "defaultInterpreter": false,
          "editor": {
            "language": "python",
            "editOnDblClick": false,
            "completionKey": "TAB",
            "completionSupport": true
          }
        },
        {
          "name": "pyspark3",
          "class": "org.apache.zeppelin.livy.LivyPySpark3Interpreter",
          "defaultInterpreter": false,
          "editor": {
            "language": "python",
            "editOnDblClick": false,
            "completionKey": "TAB",
            "completionSupport": true
          }
        },
        {
          "name": "sparkr",
          "class": "org.apache.zeppelin.livy.LivySparkRInterpreter",
          "defaultInterpreter": false,
          "editor": {
            "language": "r",
            "editOnDblClick": false,
            "completionKey": "TAB",
            "completionSupport": true
          }
        },
        {
          "name": "shared",
          "class": "org.apache.zeppelin.livy.LivySharedInterpreter",
          "defaultInterpreter": false
        }
      ],
      "dependencies": [],
      "option": {
        "remote": true,
        "port": -1,
        "perNote": "shared",
        "perUser": "scoped",
        "isExistingProcess": false,
        "setPermission": false,
        "owners": [],
        "isUserImpersonate": false
      }
    },

Thanks!
Nicola

11-07-2019 06:06 AM
1 Kudo

@Harish-hadoop Add the queue name to the hive.url property in the JDBC interpreter settings:

hive.url=jdbc:hive2://<zookeeper quorum>/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2?tez.queue.name=<Queue Name>

Example:

hive.url=jdbc:hive2://test1:2181,test2:2181,test3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2?tez.queue.name=sai

11-06-2019 06:20 AM
1 Kudo

@wdailey Use "ldapRealm=org.apache.zeppelin.realm.LdapRealm". Below is a template for your reference:

ldapRealm=org.apache.zeppelin.realm.LdapRealm
ldapRealm.contextFactory.systemUsername=cn=manager,dc=charan,dc=com
ldapRealm.contextFactory.systemPassword=admin
ldapRealm.contextFactory.authenticationMechanism=simple
ldapRealm.contextFactory.url=ldap://test1:389
ldapRealm.authorizationEnabled=true
#ldapRealm.pagingSize=20000
ldapRealm.searchBase=
ldapRealm.userSearchBase=
ldapRealm.groupSearchBase=
ldapRealm.userObjectClass=
ldapRealm.userSearchAttributeName=uid
ldapRealm.userSearchScope=subtree
ldapRealm.groupSearchScope=subtree
ldapRealm.userSearchFilter=
ldapRealm.memberAttribute=member
ldapRealm.memberAttributeValueTemplate=(name={0})

10-24-2019 03:41 AM
1 Kudo

@IrenaKon Refer to the link below:

https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.5/index.html

10-22-2019 08:16 AM
1 Kudo

@Peruvian81 You need to update Ranger. You can follow the steps below to reset the password in Postgres:

1. Log in to postgres.
2. postgres=# \connect ranger
3. ranger=# update x_portal_user set password = 'ceb4f32325eda6142bd65215f4c0f371' where login_id = 'admin';
   The above resets the password to 'admin'.
4. Log in to the Ranger UI using that password.
5. Go to User Profile and change the password.
6. Open Ambari UI > Ranger > Configs.
7. Update 'admin_password' in Advanced ranger-env with the newly set password.

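Steps 1-3 as a single non-interactive command, a sketch assuming psql runs as the postgres OS user and the Ranger database is named 'ranger':

sudo -u postgres psql -d ranger \
  -c "update x_portal_user set password = 'ceb4f32325eda6142bd65215f4c0f371' where login_id = 'admin';"
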