Member since 06-09-2016
529 Posts | 129 Kudos Received | 104 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1731 | 09-11-2019 10:19 AM |
|  | 9321 | 11-26-2018 07:04 PM |
|  | 2480 | 11-14-2018 12:10 PM |
|  | 5310 | 11-14-2018 12:09 PM |
|  | 3140 | 11-12-2018 01:19 PM |
07-12-2016 09:40 PM
@Beverley Andalora Not possible from the Ambari File View, but you can use the Linux command line to change it:

    hdfs dfs -chown [-R] [OWNER][:[GROUP]] PATH...

Example:

    hdfs dfs -chown falcon /path/filename
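For instance, a recursive sketch (the owner:group pair and path below are illustrative placeholders, not values from this thread):

```bash
# Recursively change owner and group for an entire directory tree; -R recurses.
# "falcon:hadoop" and "/apps/falcon" are placeholder values.
hdfs dfs -chown -R falcon:hadoop /apps/falcon
```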
						
					
07-11-2016 11:06 PM
One more thing I want to add: you need to run this as the Ambari user, in my case root; when I used the hdfs user I got the same problem. I'm sure this has to do with file permissions.
						
					
07-11-2016 11:02 PM
@Xiaoyu Yao Correct, I was missing the config, thanks. I realize I could have also checked Operations Running if I had clicked through for more details. Thanks!
						
					
07-11-2016 10:23 PM

1 Kudo
I'm starting the DataNode using:

    /usr/hdp/2.4.2.0-258/hadoop/sbin/hadoop-daemon.sh start datanode

I think a related environment variable may be involved, so I also tried to source:

    . /etc/hadoop/conf/hadoop-env.sh

Here is the complete error I'm getting:

    2016-07-11 18:21:06,436 ERROR datanode.DataNode (DataNode.java:secureMain(2545)) - Exception in secureMain
    java.lang.RuntimeException: Cannot start secure DataNode without configuring either privileged resources or SASL RPC data transfer protection and SSL for HTTP. Using privileged resources in combination with SASL RPC data transfer protection is not supported.
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkSecureConfig(DataNode.java:1217)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1103)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:432)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2423)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2310)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2357)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2538)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2562)
    2016-07-11 18:21:06,438 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
    2016-07-11 18:21:06,445 INFO  datanode.DataNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
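For readers hitting the same message: it means a secure DataNode must use exactly one of two setups, and mixing them is rejected. A sketch of how to inspect the relevant settings (the property values in the comments are illustrative assumptions, not the fix confirmed later in this thread):

```bash
# Check what is currently configured for the data transfer protocol and web policy.
hdfs getconf -confKey dfs.data.transfer.protection   # SASL route: e.g. "authentication"
hdfs getconf -confKey dfs.http.policy                # SASL route also needs HTTPS_ONLY

# Route 1 (SASL): set dfs.data.transfer.protection plus dfs.http.policy=HTTPS_ONLY
# in hdfs-site.xml and start the DataNode as a regular user.
# Route 2 (privileged resources): keep dfs.datanode.address / dfs.datanode.http.address
# on ports below 1024, leave dfs.data.transfer.protection unset, and start as root
# with HADOOP_SECURE_DN_USER exported in hadoop-env.sh.
# Combining the two routes is exactly what the RuntimeException above rejects.
```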
						
					
Labels:
- Apache Hadoop
07-06-2016 07:21 PM

2 Kudos
@Timothy Spann Try setting hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory and hive.security.authorization.enabled=false in hive-site. You can search for those configurations under Hive in Ambari and test after changing them.
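A minimal sketch of the two values in key=value form (set them via Ambari and restart Hive; the grep at the end is just one hypothetical way to verify the rendered config on a client node):

```bash
# The two hive-site settings suggested above, in key=value form:
#   hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory
#   hive.security.authorization.enabled=false
# After saving in Ambari and restarting Hive, spot-check the rendered hive-site:
grep -A1 'hive.security.authorization' /etc/hive/conf/hive-site.xml
```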
						
					
07-06-2016 01:11 PM

1 Kudo
Thank you @Pierre Villard, I will try using Maven instead of trying to resolve the dependencies myself.
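For anyone landing here later, a sketch of what the Maven route can look like for a Hadoop 2.x client; the artifact is the standard org.apache.hadoop:hadoop-client, but the version shown is an assumption and should be aligned with your cluster:

```xml
<!-- Sketch: pull the Hadoop 2.x client libraries via Maven instead of
     hand-collecting jars. 2.7.1 is an assumed version, not from this thread. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.7.1</version>
</dependency>
```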
						
					
07-05-2016 05:37 PM

1 Kudo
I found that hadoop-core-1.0.3.jar is for v1 only, and I haven't found which jars to use with v2. I have the Sandbox, but could not find the correct client jar to use. Could you please point it out?
						
					
Labels:
- Apache Hadoop
07-04-2016 06:10 PM
@Luis Picazo When there is a large number of components, the alert checks do not space out and can fail more often. Did you restart the ambari-agent?
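If not, a quick sketch using the standard ambari-agent service commands on the affected node:

```bash
# Restart the agent so alert scheduling starts fresh (run as root on the node).
ambari-agent restart
ambari-agent status   # confirm the agent came back up cleanly
```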
						
					
07-04-2016 04:53 PM
@Luis Picazo First identify the Ambari Agent nodes that have this problem. Then, on Ambari 2.2, try increasing alert_grace_period from the default of 5 seconds to 10; it can be modified in /etc/ambari-agent/conf/ambari-agent.ini. For versions before 2.2, see https://community.hortonworks.com/questions/9762/how-to-get-rid-of-stale-alerts-in-ambari.html
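A sketch of the edit on one affected node, assuming the file already carries an alert_grace_period line and using the 10-second value suggested above:

```bash
# Bump the alert grace period from 5s to 10s, then restart the agent.
# The sed pattern assumes an existing "alert_grace_period=..." line in the ini.
sed -i 's/^alert_grace_period.*/alert_grace_period=10/' /etc/ambari-agent/conf/ambari-agent.ini
ambari-agent restart
```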
						
					
07-01-2016 08:59 PM
@ed day An FQDN is required if you are not using DNS/reverse DNS. See: http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.2.0/bk_Installing_HDP_AMB/content/_check_dns.html

Since it looks like you are not using DNS, make sure to run the command:

    hostname -f

This should return the <fully.qualified.domain.name> you just set on each node of the cluster, and it should match the /etc/hosts file. Also make sure the network file contains the correct information:

    vi /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=<fully.qualified.domain.name>

I recommend you follow the prepare-the-environment part of the documentation step by step: http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.2.0/bk_Installing_HDP_AMB/content/_prepare_the_environment.html
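A hedged verification sketch (the IP and hostnames in the comment are placeholders, not values from this thread):

```bash
# /etc/hosts should carry a line like the following on every node (placeholders):
#   192.168.1.10   node1.example.com   node1
hostname -f                       # should print the FQDN, e.g. node1.example.com
getent hosts "$(hostname -f)"     # should resolve to the right IP via /etc/hosts
```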
						
					