Member since 05-17-2016
      
| Posts | Kudos Received | Solutions |
|---|---|---|
| 41 | 5 | 3 |
My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
| | 8641 | 10-11-2017 05:31 AM |
| | 1833 | 08-01-2017 01:16 PM |
| | 2882 | 05-17-2016 01:22 PM |
			
    
	
		
		
Posted 12-01-2020 12:55 PM
The following map rule is wrong:

RULE:[2:$1@$0](rm@MY_REALM)s/.*/rm/

The user for the ResourceManager is not "rm" but "yarn", so "yarn" should be the replacement value. This is the same as the hadoop.security.auth_to_local property in the Hadoop/HDFS configuration.
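In other words, keeping the same rule syntax and realm placeholder, the corrected mapping would be:

```
RULE:[2:$1@$0](rm@MY_REALM)s/.*/yarn/
```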
						
					
    
	
		
		
Posted 04-18-2018 11:04 AM · 1 Kudo
It's my understanding that authentication for Jupyter is pluggable. Their documentation explains their security model, though it's a little light on details. This seems to be the integration point you are looking for: https://github.com/jupyterhub/ldapauthenticator
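For later readers, a minimal sketch of what wiring that plugin into jupyterhub_config.py might look like; the server address and bind template below are placeholder assumptions, not values from this thread:

```python
# jupyterhub_config.py -- hypothetical LDAP setup via jupyterhub/ldapauthenticator
c.JupyterHub.authenticator_class = 'ldapauthenticator.LDAPAuthenticator'

# placeholders: point these at your own LDAP server and directory layout
c.LDAPAuthenticator.server_address = 'ldap.example.com'
c.LDAPAuthenticator.bind_dn_template = 'uid={username},ou=people,dc=example,dc=org'
```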
						
					
    
	
		
		
Posted 04-10-2018 01:24 PM
@Mustafa Kemal MAYUK I guess you ran the Kerberos wizard through Ambari; if so, the corresponding keytabs have already been generated, so no further action is needed. The Zeppelin daemon needs a Kerberos account and keytab to run in a Kerberized cluster. Have a look at the %spark interpreter: properties like spark.yarn.keytab and spark.yarn.principal should already be filled in.

All the configuration is in shiro.ini; you can even map local users there and restart Zeppelin, and those users will be able to log in to the Zeppelin UI. These are the default users:

[users]
# List of users with their password allowed to access Zeppelin.
# To use a different strategy (LDAP / Database / ...)
# check the shiro doc at http://shiro.apache.org/configuration.html#Configuration-INISections
admin = admin, admin
user1 = user1, role1, role2
user2 = user2, role3
user3 = user3, role2
# Added user John/John
John = John, role1, role2

But your Spark queries won't necessarily run after logging in as one of these: for Spark queries to run, the user needs to be a local user on the Linux box. These are just default logins, which you can change yourself. For simple configs, you can add more username/password entries in plain text to the [users] section; in the example above I added John = John, role1, role2 and could then log in to the Zeppelin UI as John/John.
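For reference, once the Ambari Kerberos wizard has run, the two %spark interpreter properties mentioned above should look something like the following; the keytab path and realm here are assumptions, so check your own interpreter settings:

```
# assumed HDP-style defaults -- substitute your own keytab path and realm
spark.yarn.keytab     /etc/security/keytabs/zeppelin.server.kerberos.keytab
spark.yarn.principal  zeppelin@MY_REALM
```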
						
					
			
    
	
		
		
Posted 08-01-2017 01:16 PM · 1 Kudo
I found the solution. It happens because "atlas_titan" is a zombie HBase table: it can't be created (HBase says the table exists) and it can't be dropped (HBase says the table does not exist). This happens when the table doesn't exist in HBase but still exists in ZooKeeper, so it has to be deleted from ZooKeeper:

$ hbase zkcli
[zk: ...] ls /hbase-unsecure/table
[zk: ...] rmr /hbase-unsecure/table/ATLAS_ENTITY_AUDIT_EVENTS
[zk: ...] rmr /hbase-unsecure/table/atlas_titan
[zk: ...] quit

Then restart Atlas: it should recreate the HBase tables, and the application should be up a few seconds later.
						
					
			
    
	
		
		
Posted 08-11-2018 08:20 PM
I fixed the issue on RHEL 6.9 by installing libtirpc and libtirpc-devel 0.15 and uninstalling libtirpc 0.13.
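For anyone hitting the same thing, a rough sketch of that package swap with yum; the exact package versions available in your repos may differ:

```
# drop the older libtirpc build (0.13 here), then pull in the 0.15 build and its devel package
yum remove libtirpc
yum install libtirpc-0.15* libtirpc-devel-0.15*
```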
						
					
			
    
	
		
		
Posted 01-30-2017 05:58 AM
Hello, thanks for the answers. Can you clarify two points? First, I couldn't get the metadata calculation for the NameNode: the document is about calculating the Java heap size, but what I roughly need is the storage requirement of the NameNodes. Second, can we say "more serialization means more CPU" in Spark?
						
					
			
    
	
		
		
Posted 07-27-2016 01:17 PM
@Saurabh Kumar Sorry for the late reply, I was on vacation. I agree with what Ravi says. If solrconfig.xml (I have not dug into it yet) is properly configured, then you may try the following: copy all lines of solrconfig.xml into an empty notepad and replace every straight quote (") with a curly quote (”), or vice versa.
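If the underlying problem is the usual copy/paste one (curly quotes sneaking into the XML), a hypothetical one-liner to normalize them back to straight quotes would be:

```
# rewrite curly double quotes as straight quotes, keeping a backup of the original
sed -i.bak 's/[“”]/"/g' solrconfig.xml
```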
						
					
			
    
	
		
		
Posted 12-04-2017 07:16 AM
Currently, Solr is available as a service via Ambari Infra.
						
					
			
    
	
		
		
Posted 09-19-2017 07:19 AM · 4 Kudos
From HDP 2.6 onwards, the Hortonworks Data Platform is supported on IBM Power Systems. You can refer to the documentation below for installing/upgrading HDP on IBM Power:

https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-installation-ppc/content/ch_Getting_Ready.html
https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-upgrade-ppc/content/ambari_upgrade_guide-ppc.html
						
					