Member since: 01-25-2017

396 Posts | 28 Kudos Received | 11 Solutions

        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1385 | 10-19-2023 04:36 PM |
| | 5132 | 12-08-2018 06:56 PM |
| | 6754 | 10-05-2018 06:28 AM |
| | 23313 | 04-19-2018 02:27 AM |
| | 23335 | 04-18-2018 09:40 AM |

11-22-2018 08:20 AM

@anrama You can use the filter=status!=Running.

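As a sketch of how that filter plugs into the Cloudera Manager API call pattern used elsewhere in this history (host, port, credentials, and API version are placeholders, not confirmed values):

    # List YARN applications whose status is anything but Running.
    curl -s -u admin:admin \
      'http://cloudera_manager_host:7180/api/v11/clusters/cluster/services/yarn/yarnApplications?filter=status!=Running&limit=1000'
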
10-21-2018 07:28 AM

@epowell My bad, I mixed this up with my Impala ODBC issue. On this one I'm still unable to get it working. Yes, I'm passing the conf that Lars shared, but when I change AuthMech to 2 my queries get stuck.

    jdbc:impala://node1.example.com:21050;AuthMech=2;UID=fawzea

Log:

    ERROR [2018-10-21 10:16:20,388] ({pool-2-thread-2} JDBCInterpreter.java[open]:197) - zeppelin will be ignored. driver.zeppelin and zeppelin.url is mandatory.

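For what it's worth, that JDBCInterpreter error usually points at a missing driver/URL pair for the interpreter's connection prefix rather than at AuthMech itself. A minimal sketch of Zeppelin JDBC interpreter properties, assuming an impala prefix and the Cloudera Impala JDBC41 driver class (both the prefix and the driver class name are assumptions, not taken from this thread):

    # Zeppelin JDBC interpreter settings (sketch; follows the
    # <prefix>.driver / <prefix>.url / <prefix>.user convention)
    impala.driver=com.cloudera.impala.jdbc41.Driver
    impala.url=jdbc:impala://node1.example.com:21050;AuthMech=2
    impala.user=fawzea
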
10-05-2018 12:20 PM

@bgooley Do you know if this issue exists with:

    krb5-workstation-1.10.3-65.el6.x86_64
    krb5-auth-dialog-0.13-6.el6.x86_64
    krb5-libs-1.10.3-65.el6.x86_64

I experienced the same issue with these packages, but with the following error:

    2017-10-23 06:56:03,908 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
    java.lang.RuntimeException: Cannot start secure DataNode without configuring either privileged resources or SASL RPC data transfer protection and SSL for HTTP. Using privileged resources in combination with SASL RPC data transfer protection is not supported.
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkSecureConfig(DataNode.java:1371)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1271)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:464)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2583)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2470)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2517)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2699)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2723)
    2017-10-23 06:56:03,919 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
    2017-10-23 06:56:03,921 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

I can authenticate against the AD and can confirm that the ports used for HDFS are at or below 1023.

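For context, the exception names two mutually exclusive ways to satisfy the secure-DataNode check: privileged ports, or SASL data transfer protection with HTTPS. A sketch of the privileged-port route in hdfs-site.xml, with illustrative values (the exact ports are an assumption, and this route requires starting the DataNode as root via jsvc):

    <!-- Privileged-port route: both ports below 1024 and no
         dfs.data.transfer.protection set; values are illustrative. -->
    <property>
      <name>dfs.datanode.address</name>
      <value>0.0.0.0:1004</value>
    </property>
    <property>
      <name>dfs.datanode.http.address</name>
      <value>0.0.0.0:1006</value>
    </property>

The alternative route keeps both ports above 1024 and sets dfs.data.transfer.protection together with dfs.http.policy=HTTPS_ONLY; per the message above, mixing the two approaches is exactly what the check rejects.
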
10-05-2018 06:28 AM (3 Kudos)

@fil Here you go:

    #!/bin/bash
    # Report per-user YARN vcore-seconds and memory-seconds for apps that
    # finished in the last day, failing over to the second ResourceManager
    # if the first replies that it is on standby.
    STARTDATE=`date -d " -1 day " +%s%N | cut -b1-13`
    ENDDATE=`date +%s%N | cut -b1-13`
    # Quote the URL so the & does not background the command.
    result=`curl -s "http://resource_manager1:8088/ws/v1/cluster/apps?finishedTimeBegin=$STARTDATE&finishedTimeEnd=$ENDDATE"`
    if [[ $result =~ "standby RM" ]]; then
      result=`curl -s "http://resource_manager2:8088/ws/v1/cluster/apps?finishedTimeBegin=$STARTDATE&finishedTimeEnd=$ENDDATE"`
    fi
    #echo $result
    echo $result | python -m json.tool | sed 's/["|,]//g' | grep -E "user|coreSeconds" | awk '/user/ { user = $2 } /vcoreSeconds/ { arr[user] += $2 } END { for (x in arr) print "yarn." x ".cpums=" arr[x] }'
    echo $result | python -m json.tool | sed 's/["|,]//g' | grep -E "user|memorySeconds" | awk '/user/ { user = $2 } /memorySeconds/ { arr1[user] += $2 } END { for (y in arr1) print "yarn." y ".memorySeconds=" arr1[y] }'

10-04-2018 11:14 PM

@Tomas79 Does your third-party tool support running two commands or SQL statements in the same file/document?

10-04-2018 11:10 PM

@VeljkoC Are you using Cloudera Manager? If yes, can you check the HDFS configuration for the UNIX Domain Socket path and see what its value is? If it is empty, try adding: /hadoop/sockets/hdfs-sockets/dn

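In raw hdfs-site.xml terms, that setting would look roughly like this (dfs.domain.socket.path is the standard HDFS property behind Cloudera Manager's UNIX Domain Socket path field; the value mirrors the suggestion above):

    <property>
      <name>dfs.domain.socket.path</name>
      <value>/hadoop/sockets/hdfs-sockets/dn</value>
    </property>
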
10-04-2018 11:05 PM (1 Kudo)

@fil I would suggest creating a shell script that pulls this data from the YARN ResourceManager.

I wrote one for myself that pulls the data on a daily basis and aggregates the memory and CPU time for each pool; you can of course do the same per user, or even per job if needed. See below.

Note: I stripped out some code that was specific to my data center, and I may have deleted something that will make the script fail on your side, so play around with it. Let me know if you need any more help; swapping the queue for the user here is straightforward.

    #!/bin/bash
    # Aggregate per-queue YARN vcore-seconds and memory-seconds for apps that
    # finished in the last day, failing over to the second ResourceManager
    # if the first replies that it is on standby.
    STARTDATE=`date -d " -1 day " +%s%N | cut -b1-13`
    ENDDATE=`date +%s%N | cut -b1-13`
    result=`curl -s "http://yarn_resource_manager:8088/ws/v1/cluster/apps?finishedTimeBegin=$STARTDATE&finishedTimeEnd=$ENDDATE"`
    if [[ $result =~ "standby RM" ]]; then
      result=`curl -s "http://yarn_resource_manager2:8088/ws/v1/cluster/apps?finishedTimeBegin=$STARTDATE&finishedTimeEnd=$ENDDATE"`
    fi
    echo $result | python -m json.tool | sed 's/["|,]//g' | grep -E "queue|coreSeconds" | awk '/queue/ { queue = $2 } /vcoreSeconds/ { arr[queue] += $2 } END { for (x in arr) print ".yarn." x ".cpums=" arr[x] }'
    echo $result | python -m json.tool | sed 's/["|,]//g' | grep -E "queue|memorySeconds" | awk '/queue/ { queue = $2 } /memorySeconds/ { arr1[queue] += $2 } END { for (y in arr1) print ".yarn." y ".memorySeconds=" arr1[y] }'

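Since the script is meant to run daily, a crontab entry along these lines would schedule it (the script name and path are placeholders):

    # Run the YARN usage aggregation shortly after midnight every day.
    5 0 * * * /path/to/yarn_usage_per_queue.sh
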
09-25-2018 12:38 PM

@epowell Apologies, it's my fault: I solved it using auth mechanism 6 and didn't update the thread.

09-24-2018 04:59 AM (1 Kudo)

Hi @mdjedaini

I'm not aware of such functionality in Cloudera Manager, but you can do it from a shell using something like the following.

If you have a node that can SSH to all these nodes, you can issue a loop like this:

    for host in `cat /tmp/file_name`; do ssh $host bash -c 'hostname; the_shell_command_you_want_to_run'; done

If you don't have such a node, you need to set up SSH keys to be able to run the command above.

Choose one of the nodes and generate a public/private key pair by running ssh-keygen. You will be asked for a file in which to save the key and for a passphrase (twice); at each prompt just press Enter without typing anything.

Then, from this node, run ssh-copy-id node2. It will ask for your password, and once you enter it you can ssh to node2 directly from the node you chose. You need to do the same (ssh-copy-id) for all the nodes in your cluster.

On the chosen node, create a file (e.g. vim /tmp/file_name) listing the nodes you want to run the shell command on, then run the command you want using:

    for host in `cat /tmp/file_name`; do ssh $host bash -c 'hostname; the_shell_command_you_want_to_run'; done

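Taken together, the key-distribution step itself can be looped from the chosen node (a sketch; /tmp/file_name comes from the post above, and hostname stands in for whatever command you want to run fleet-wide):

    # One-time key distribution: prompts once for each node's password.
    for host in `cat /tmp/file_name`; do ssh-copy-id "$host"; done

    # Afterwards, commands run on every node without password prompts.
    for host in `cat /tmp/file_name`; do ssh "$host" 'hostname'; done
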
09-23-2018 09:00 PM (1 Kudo)

@anrama Try using filter=user=xxxx:

    STARTDATE=`date -d " -5 minute" "+%FT%T"`
    result=`curl -s -u admin:admin "http://cloudera_manager_host:port/api/v11/clusters/cluster/services/yarn/yarnApplications?from=$STARTDATE&limit=1000&filter=user=xxx"`
    echo $result