Member since 07-30-2019
| Posts | Kudos Received | Solutions |
|---|---|---|
| 181 | 205 | 51 |
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 6116 | 10-19-2017 09:11 PM |
| | 2020 | 12-27-2016 06:46 PM |
| | 1580 | 09-01-2016 08:08 PM |
| | 1488 | 08-29-2016 04:40 PM |
| | 3991 | 08-24-2016 02:26 PM |
09-06-2016 06:03 PM
@Ryan Hanson The obvious issue is the circular symlink references. Have you created symlinks prior to running the installer?
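If it helps to confirm, broken or looping symlinks can be located with find. A minimal sketch, assuming the default HDP install prefix /usr/hdp (adjust the path to your environment):

```sh
# With -L, find dereferences symlinks; any entry still reported as
# type "l" could not be resolved, i.e. it is broken or part of a loop.
# find also prints "Too many levels of symbolic links" for cycles.
find -L /usr/hdp -type l -print
```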
						
					
09-01-2016 08:08 PM · 2 Kudos
@Siva Nagisetty The Data Governance documentation contains references to setting up governance with Apache Atlas for various components, including Kafka.
						
					
08-30-2016 07:01 PM · 1 Kudo
@mkataria To run superuser commands (such as entering safe mode or balancing the cluster), you must run the command as the user that started the NameNode process. If the NameNode runs as the hdfs user, issue these commands as hdfs:

sudo -u hdfs hdfs balancer -threshold 5
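For reference, the same pattern applies to the other superuser operations mentioned; a short sketch, assuming the NameNode runs as the hdfs user:

```sh
# Run each administrative command as the user that started the NameNode.
sudo -u hdfs hdfs dfsadmin -safemode enter    # enter safe mode
sudo -u hdfs hdfs dfsadmin -safemode leave    # leave safe mode
sudo -u hdfs hdfs balancer -threshold 5       # rebalance DataNodes to within 5%
```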
						
					
08-29-2016 04:40 PM
@Eyad Garelnabi According to the Hadoop documentation, permission checks for the superuser always succeed, even if you try to restrict them. The user (and group) that starts the NameNode becomes the superuser and can always do everything within HDFS.
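A quick way to see this in practice; the file path below is hypothetical:

```sh
# Lock a file down so only its owner can read it...
hdfs dfs -chmod 600 /user/alice/secret.txt          # hypothetical file
# ...the superuser (here, hdfs) can still read it; permission checks
# always succeed for the user that started the NameNode.
sudo -u hdfs hdfs dfs -cat /user/alice/secret.txt
```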
						
					
08-24-2016 04:30 PM · 1 Kudo
@Sami Ahmad The following line seems to indicate the issue:

Caused by: java.io.IOException: Check-sum mismatch between hdfs://hadoop1.tolls.dot.state.fl.us:8020/user/sami/error1.log and hdfs://hadoop1.tolls.dot.state.fl.us:8020/user/zhang/.distcp.tmp.attempt_1472051594557_0001_m_000001_0. Source and target differ in block-size. Use -pb to preserve block-sizes during copy. Alternatively, skip checksum-checks altogether, using -skipCrc. (NOTE: By skipping checksums, one runs the risk of masking data-corruption during file-transfer.)

Is the block size set differently between the source and target clusters?
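If so, re-running the copy with -pb should resolve it. A sketch, reusing the paths from the error message above:

```sh
# Compare the configured default block size on each cluster first:
hdfs getconf -confKey dfs.blocksize

# Then re-run distcp preserving the source block sizes:
hadoop distcp -pb \
  hdfs://hadoop1.tolls.dot.state.fl.us:8020/user/sami/error1.log \
  hdfs://hadoop1.tolls.dot.state.fl.us:8020/user/zhang/
```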
						
					
08-24-2016 02:26 PM · 1 Kudo
@da li The answers here are close, but not quite. The proxy user settings take the form hadoop.proxyuser.<username>.[groups|hosts] and belong in core-site.xml (not hdfs-site.xml). In the Custom core-site section of Ambari, add the following two parameters:

hadoop.proxyuser.root.hosts=*
hadoop.proxyuser.root.groups=*

This will correct the impersonation error.
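After saving, Ambari will prompt to restart the affected services. Alternatively, once the values are in core-site.xml on the NameNode host, they can be reloaded on a running NameNode:

```sh
# Reload the hadoop.proxyuser.* settings without restarting the NameNode.
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
```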
						
					
08-23-2016 07:10 PM · 2 Kudos
@mqadri FreeIPA does not currently support multi-tenancy. An article was written about what would be required in V3 to support it, but it has not been implemented as of 2015. The Request for Enhancement has been open for roughly four years, but development has been in the direction of IPA-to-IPA trusts (at least as of February 2015). The version of IPA included with RHEL/CentOS 6 is 3.0.0:

[root@sandbox resources]# yum info ipa-server
Loaded plugins: fastestmirror, priorities
Loading mirror speeds from cached hostfile
 * base: mirror.team-cymru.org
 * epel: mirrors.mit.edu
 * extras: ftp.usf.edu
 * updates: dallas.tx.mirror.xygenhosting.com
Available Packages
Name        : ipa-server
Arch        : x86_64
Version     : 3.0.0
Release     : 50.el6.centos.1
Size        : 1.1 M
Repo        : base
Summary     : The IPA authentication server
URL         : http://www.freeipa.org/
License     : GPLv3+
Description : IPA is an integrated solution to provide centrally managed Identity (machine,
            : user, virtual machines, groups, authentication credentials), Policy
            : (configuration settings, access control information) and Audit (events,
            : logs, analysis thereof). If you are installing an IPA server you need
            : to install this package (in other words, most people should NOT install
            : this package).
The version included with RHEL/CentOS 7 is 4.2, but it still does not seem to support multi-tenancy, per the links above.
						
					
08-23-2016 04:56 PM · 3 Kudos
@Vincent Romeo hive.metastore.heapsize is not a parameter that lives in a file like hive-site.xml. Ambari uses the value for substitution into the hive-env template. You can see this section in the hive-env text box in Ambari:

if [ "$SERVICE" = "metastore" ]; then
 export HADOOP_HEAPSIZE={{hive_metastore_heapsize}} # Setting for HiveMetastore
else
 export HADOOP_HEAPSIZE={{hive_heapsize}} # Setting for HiveServer2 and Client
fi
  The {{hive_metastore_heapsize}} is where the substitution is made. 
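One way to sanity-check the substituted value is to inspect the running metastore JVM; a rough sketch (the grep pattern is illustrative and may need adjusting):

```sh
# The metastore process should carry an -Xmx derived from hive.metastore.heapsize.
ps -ef | grep -i 'HiveMetaStore' | grep -o '\-Xmx[^ ]*'
```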
						
					
08-18-2016 09:01 PM · 2 Kudos
@ripunjay godhani No, it is not possible to modify the install locations. These locations are specified when the RPMs are built and cannot be changed. Third-party software depends on HDP being installed in this location, and Ambari distributes all of the config files to /etc on all of the nodes. Log file directories can be changed, but not the binary installation and config file directories.
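For what it's worth, this is visible in the package metadata itself; a sketch with a placeholder package file name:

```sh
# RPMs record whether their install prefix can be changed; HDP packages
# report "(not relocatable)" in the Relocations field.
rpm -qpi <hdp-package>.rpm | grep -i relocations
```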
						
					
08-18-2016 08:49 PM · 3 Kudos
@Kumar Veerappan You should be able to read the /etc/ambari-agent/conf/ambari-agent.ini file on any node in the cluster. It has a [server] section that tells you where the Ambari server is:

[server]
hostname = ambari-server.example.com
url_port = 8440
secured_url_port = 8441
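A one-liner to pull just the hostname out of that file, assuming the key = value layout shown above:

```sh
# Print the Ambari server hostname from the agent config.
awk -F' = ' '/^hostname/ {print $2}' /etc/ambari-agent/conf/ambari-agent.ini
```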
 
						
					