Member since 03-14-2016

4721 Posts
1111 Kudos Received
874 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2493 | 04-27-2020 03:48 AM |
| | 4963 | 04-26-2020 06:18 PM |
| | 4045 | 04-26-2020 06:05 PM |
| | 3279 | 04-13-2020 08:53 PM |
| | 4997 | 03-31-2020 02:10 AM |

06-17-2019 05:59 PM
Limit the ls to a few entries, for example:

hdfs dfs -ls /tmp/hive/hive/14*

The directory underneath is zero bytes:

drwx------   - hive hdfs          0 2017-09-04 17:10 /tmp/hive/hive/149e8d6a-ad2a-433e-87be-6cb5b27e2b7b/_tmp_space.db

Find the older ones and start purging them manually until you get a breather. After that, get permission to implement an automated approach.
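A minimal sketch of the manual purge, assuming the scratch directories sit directly under /tmp/hive/hive and that anything older than your chosen cutoff is safe to remove (verify with the application team before deleting; the 30-day retention is an assumption):

```bash
# Remove Hive scratch directories last modified before the cutoff date.
CUTOFF=$(date -d "30 days ago" +%Y-%m-%d)

# Column 6 of "hdfs dfs -ls" is the modification date; column 8 is the path.
hdfs dfs -ls /tmp/hive/hive/ | awk -v cutoff="$CUTOFF" 'NF == 8 && $6 < cutoff {print $8}' |
while read -r dir; do
  echo "Purging $dir"
  hdfs dfs -rm -r -skipTrash "$dir"
done
```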
						
					
06-11-2019 10:52 AM
@Adil BAKKOURI  We see the following message, which appears to be causing the DataNode startup failure:

2019-06-11 12:30:52,832 WARN  common.Storage (DataStorage.java:loadDataStorage(418)) - Failed to add storage directory [DISK]file:/hadoop/hdfs/data
java.io.IOException: Incompatible clusterIDs in /hadoop/hdfs/data: namenode clusterID = CID-bd1a4e24-9ff2-4ab8-928a-f04000e375cc; datanode clusterID = CID-9a605cbd-1b0e-41d3-885e-f0efcbe54851

It looks like the VERSION files on the NameNode and DataNode contain different cluster IDs, which needs to be corrected. Please copy the clusterID from the NameNode's "<dfs.namenode.name.dir>/current/VERSION" into the DataNode's VERSION file at "<dfs.datanode.data.dir>/current/VERSION" and then try again.

Also please check the following link:
https://community.hortonworks.com/questions/79432/datanode-goes-dows-after-few-secs-of-starting-1.html
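A quick sketch of that fix, assuming /hadoop/hdfs/namenode is the NameNode's dfs.namenode.name.dir (substitute your actual directories; /hadoop/hdfs/data comes from the log above):

```bash
# On the NameNode host: read the authoritative clusterID.
grep clusterID /hadoop/hdfs/namenode/current/VERSION
# clusterID=CID-bd1a4e24-9ff2-4ab8-928a-f04000e375cc

# On the failing DataNode host: overwrite the stale clusterID, then restart the DataNode.
sed -i 's/^clusterID=.*/clusterID=CID-bd1a4e24-9ff2-4ab8-928a-f04000e375cc/' /hadoop/hdfs/data/current/VERSION
```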
						
					
06-27-2019 08:47 AM
@Rahul Borkar  Use the Ambari UI / APIs, e.g. "Advanced hadoop-env" in Ambari. Add the following lines at the end of the file:

# Add java-agent to get JMX metrics for Prometheus
agent_namenode=`echo $HADOOP_NAMENODE_OPTS | grep javaagent | wc -l`
if [ "$agent_namenode" == 0 ]; then
  export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=10010 -javaagent:/usr/hdp/2.3.4.0-3485/hadoop/lib/jmx_prometheus_javaagent-0.11.0.jar=9998:/usr/hdp/2.3.4.0-3485/hadoop/jmx_exporter/namenode.yaml $HADOOP_NAMENODE_OPTS"
fi

agent_datanode=`echo $HADOOP_DATANODE_OPTS | grep javaagent | wc -l`
if [ "$agent_datanode" == 0 ]; then
  export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=10011 -javaagent:/usr/hdp/2.3.4.0-3485/hadoop/lib/jmx_prometheus_javaagent-0.11.0.jar=9999:/usr/hdp/2.3.4.0-3485/hadoop/jmx_exporter/datanode.yaml $HADOOP_DATANODE_OPTS"
fi

Then change the permissions of "jmx_prometheus_javaagent-0.11.0.jar", "namenode.yaml", and "datanode.yaml" to "777" and put them in the right place. Restart the NameNode and DataNode, and you will get what you want.
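Assuming the agent ports configured above (9998 for the NameNode, 9999 for the DataNode), one way to check that the exporter is serving metrics after the restart; the hostnames are placeholders:

```bash
# Scrape the Prometheus exporter endpoints directly and show the first few metrics.
curl -s http://<namenode-host>:9998/metrics | head
curl -s http://<datanode-host>:9999/metrics | head
```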
						
					
06-04-2019 03:43 AM
The above question and the entire reply thread below were originally posted in the Community Help Track. On Tue Jun 4 03:37 UTC 2019, a member of the HCC moderation staff moved it to the Cloud & Operations track. The Community Help Track is intended for questions about using the HCC site itself.
						
					
06-11-2019 11:37 AM
@Jay Kumar SenSharma,  Thanks for the support! Yes, there was an inconsistency in the Ambari Server DB that was preventing alerts from functioning in the Ambari UI. The Ambari Server DB had grown to 294 MB. Purging the last 6 months from the DB and restarting Ambari brought the alerts back in the Ambari UI. I would like to know in detail what measures an admin should take if this happens in a PROD environment.
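For reference, a sketch of the kind of purge described above, assuming an Ambari release that ships the db-purge-history command (the cluster name and cutoff date are placeholders):

```bash
# Ambari Server must be stopped before purging operational history older than the cutoff.
ambari-server stop
ambari-server db-purge-history --cluster-name MyCluster --from-date 2019-01-01
ambari-server start
```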
						
					
06-04-2019 01:21 AM
@Jay Kumar SenSharma  Thank you very much. That solved the issue. To answer your questions for completeness and for other members of the community: Ambari Version = 2.6.2.2. Yes, we did install the components using the Hortonworks public repo for Ubuntu. Because of firewall requirements we had to create our own internal repo, and hence changed the repo in Ambari Server to point to the internal repo server. After that we started seeing the error mentioned.
						
					
05-13-2019 09:46 PM
Hi, can you put a DataNode into maintenance mode through a bash command or a direct Python command? I have a ginormous cluster and I want to quickly stop and start services. I am using hadoop-daemon.sh start to start a DataNode. I know maintenance mode is not part of the Hadoop API; it is built as part of Ambari.
						
					
04-06-2019 08:08 PM
1 Kudo
@Michael Bronson  Find the hostnames where the "SPARK2_THRIFTSERVER" server is running:

# curl -H "X-Requested-By: ambari" -u admin:admin -X GET "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/hosts?(host_components/HostRoles/component_name=SPARK2_THRIFTSERVER)&minimal_response=true" | grep host_name | awk -F":" '{print $2}' | awk -F"\"" '{print $2}'

Example output:

newhwx3.example.com
newhwx5.example.com

Once we know the hosts where the "SPARK2_THRIFTSERVER" is running, we can run the following commands (replacing the hosts newhwx3 and newhwx5 as needed) to turn ON maintenance mode for it:

# curl -H "X-Requested-By: ambari" -u admin:admin -X PUT -d '{"RequestInfo":{"context":"Turn ON Maintenance Mode for Spark2 Thrift Server"},"Body":{"HostRoles":{"maintenance_state":"ON"}}}' "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/hosts/newhwx3.example.com/host_components/SPARK2_THRIFTSERVER"
# curl -H "X-Requested-By: ambari" -u admin:admin -X PUT -d '{"RequestInfo":{"context":"Turn ON Maintenance Mode for Spark2 Thrift Server"},"Body":{"HostRoles":{"maintenance_state":"ON"}}}' "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/hosts/newhwx5.example.com/host_components/SPARK2_THRIFTSERVER"

Turn OFF maintenance mode for the Spark2 Thrift Server on newhwx3 and newhwx5:

# curl -H "X-Requested-By: ambari" -u admin:admin -X PUT -d '{"RequestInfo":{"context":"Turn OFF Maintenance Mode for Spark2 Thrift Server"},"Body":{"HostRoles":{"maintenance_state":"OFF"}}}' "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/hosts/newhwx3.example.com/host_components/SPARK2_THRIFTSERVER"
# curl -H "X-Requested-By: ambari" -u admin:admin -X PUT -d '{"RequestInfo":{"context":"Turn OFF Maintenance Mode for Spark2 Thrift Server"},"Body":{"HostRoles":{"maintenance_state":"OFF"}}}' "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/hosts/newhwx5.example.com/host_components/SPARK2_THRIFTSERVER"
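As a sanity check, the same endpoint can be read back to confirm the state change; a sketch against the example hosts above:

```bash
# Query the current maintenance_state of the component on one host.
curl -H "X-Requested-By: ambari" -u admin:admin -X GET \
  "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/hosts/newhwx3.example.com/host_components/SPARK2_THRIFTSERVER?fields=HostRoles/maintenance_state"
```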
						
					
04-02-2019 08:49 AM
Thank you, indeed it was an internet issue; changing the DNS resolver in "/etc/resolv.conf" worked. The default was 127.0.0.11, but this kept spitting errors; changing it to 8.8.8.8 worked. Not sure why, though.
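For anyone hitting the same thing: 127.0.0.11 is typically Docker's embedded DNS stub, so this looks like a container resolver issue. The change amounts to pointing /etc/resolv.conf at a public resolver, for example:

```bash
# Replace the resolver with Google's public DNS (8.8.8.8).
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf
```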
						
					