Member since 05-07-2018

331 Posts | 45 Kudos Received | 35 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 9632 | 09-12-2018 10:09 PM |
|  | 3751 | 09-10-2018 02:07 PM |
|  | 11545 | 09-08-2018 05:47 AM |
|  | 4095 | 09-08-2018 12:05 AM |
|  | 4942 | 08-15-2018 10:44 PM |
			
    
	
		
		
08-17-2018 03:33 AM
Hi @Muthukumar S! What happens if you run the following command? (Replace <dfs.nameservices> with the value configured for your cluster.)

hdfs dfs -ls hdfs://<dfs.nameservices>/user

You can also try to run the NameNode recovery tool:

hdfs namenode -recover

https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#namenode

Hope this helps!
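If you're not sure what your dfs.nameservices value is, a quick way to look it up is sketched below (assuming the HDFS client configuration is present on the node you run it from):

# Print the configured nameservice ID
hdfs getconf -confKey dfs.nameservices

# Then list /user through the nameservice URI
hdfs dfs -ls hdfs://$(hdfs getconf -confKey dfs.nameservices)/user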
						
					
			
    
	
		
		
08-16-2018 06:48 PM
Hmm, @Serg Serg, let's see which protocols your Python SSL library supports.

python2 --version

# Create a file get_ssl_protocols.py with the following content:
#!/usr/bin/env python
import ssl
for i in dir(ssl):
    if i.startswith("PROTOCOL"):
        print(i)

# Then make the script executable
chmod 777 get_ssl_protocols.py

# And send us the output of the command below:
python2 ./get_ssl_protocols.py

It should print output like the following:

[root@node1 ~]# python2 ./get_ssl_protocols.py
PROTOCOL_SSLv23
PROTOCOL_SSLv3
PROTOCOL_TLSv1
PROTOCOL_TLSv1_1
PROTOCOL_TLSv1_2

Hope this helps!
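If you just want a quick check without creating a file, the same information can be printed with a one-liner (a sketch, assuming python2 is on the PATH):

python2 -c 'import ssl; print([p for p in dir(ssl) if p.startswith("PROTOCOL")])'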
						
					
			
    
	
		
		
08-16-2018 02:42 PM
Hi @Michele Proverbio! Did you have a chance to look at my last answer?
						
					
			
    
	
		
		
08-16-2018 04:21 AM
Hello @Sneha Abraham! I once had a similar issue (random errors coming from the Oracle JDBC driver) and solved it by switching from ojdbc7 to ojdbc6. Also, if you're using more than one mapper together with OraOop, take a look at this link (note: I haven't had the chance to test it): https://sqoop.apache.org/docs/1.4.7/SqoopUserGuide.html#_consistent_read_all_mappers_read_from_the_same_point_in_time Hope this helps!
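For reference, a rough sketch of the consistent-read option described in that link (the connection string, credentials, table, and target directory below are placeholders, and I haven't validated this against your environment):

# Sqoop import with the Oracle direct connector and consistent read enabled
sqoop import \
  -Doraoop.import.consistent.read=true \
  --direct \
  --connect jdbc:oracle:thin:@//<oracle-host>:1521/<service> \
  --username <user> -P \
  --table <schema>.<table> \
  --num-mappers 4 \
  --target-dir /user/<user>/<table>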
						
					
			
    
	
		
		
08-16-2018 03:21 AM
1 Kudo
Hello @Daniel Zafar! It seems that your Phoenix has the ACID (transactions) feature enabled: https://phoenix.apache.org/transactions.html That's why Phoenix is complaining about Tephra (http://tephra.incubator.apache.org/). Take a look at the link below; it has a good explanation of how to enable Tephra: https://community.hortonworks.com/questions/113991/enabling-transaction-in-phoenix-issue.html Hope this helps!
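For context, a minimal sketch of what enabling Phoenix transactions involves, based on the Phoenix transactions page above (treat the property and the DDL below as something to verify for your HDP version, not a tested recipe):

#Ambari > HBase > Custom hbase-site (client side)
phoenix.transactions.enabled=true

-- Tables that need ACID semantics are then created as transactional
CREATE TABLE my_table (k VARCHAR PRIMARY KEY, v VARCHAR) TRANSACTIONAL=true;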
						
					
			
    
	
		
		
08-16-2018 12:56 AM
Hello @Muthukumar S! Hm, I got curious about your case 🙂 Could you check the following?

Is there any client still pointing to the old Standby NameNode? E.g. hdfs dfs -put hdfs://SN...

Are your dfs.nameservices in hdfs-site.xml and fs.defaultFS in core-site.xml okay?

Also, I've noticed that after the line

2018-08-15 15:04:14,335 INFO ha.EditLogTailer (EditLogTailer.java:doTailEdits(238)) - Loaded 104 edits starting from txid 211034588

you started to see warning messages, so we may need to check whether both the Active and Standby NameNodes have the same edits/fsimage. Try running ls -R under the NameNode directory in your Linux filesystem and check whether a file is missing or the sizes are quite different.

Please also let me know which version you are running, and if possible, enable DEBUG logging for the Standby NameNode.

Hope this helps!
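A minimal sketch of that comparison, assuming the metadata directory is the one reported by hdfs getconf (run this on both the Active and Standby hosts and compare the output):

# Find where this NameNode keeps its fsimage/edits files
hdfs getconf -confKey dfs.namenode.name.dir

# List the contents recursively with sizes (replace the path with the directory returned above)
ls -lR /hadoop/hdfs/namenode/current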
						
					
			
    
	
		
		
08-16-2018 12:11 AM
Hello @Serg Serg! Could you check your ambari-server logs to see if there are more details? Also, if you have more than one JDK installed, check whether the one in use is the same JDK version that Ambari expects. BTW, take a look at this link: https://community.hortonworks.com/articles/188269/javapython-updates-and-ambari-agent-tls-settings.html And just to confirm, did you restart your ambari-agent after the changes? Hope this helps!
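A quick way to check both points is sketched below (the paths are the usual defaults on an Ambari-managed node; adjust them if your installation differs):

# Look for recent errors in the Ambari Server log
tail -n 100 /var/log/ambari-server/ambari-server.log

# Which JDK Ambari Server was configured with
grep java.home /etc/ambari-server/conf/ambari.properties

# Restart the agent after any TLS/JDK changes
ambari-agent restart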
						
					
			
    
	
		
		
08-15-2018 11:29 PM
Hello @Kumar Veerappan! Looks like you can't reach the REALM. Check your /etc/krb5.conf; here's my example:

MYMAC:etc vmurakami$ cat /etc/krb5.conf
[libdefaults]
  renew_lifetime = 7d
  forwardable = true
  default_realm = EXAMPLE.COM
  ticket_lifetime = 24h
  dns_lookup_realm = false
  dns_lookup_kdc = false
  default_ccache_name = /tmp/krb5cc_%{uid}
  #default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
  #default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
[domain_realm]
  .example.com = EXAMPLE.COM
  example.com = EXAMPLE.COM
[logging]
  default = FILE:/var/log/krb5kdc.log
  admin_server = FILE:/var/log/kadmind.log
  kdc = FILE:/var/log/krb5kdc.log
[realms]
  EXAMPLE.COM = {
    admin_server = vmurakami-1
    kdc = vmurakami-1
  }

Also, once you have the keytab (if you don't have one and it's possible, copy the valid keytab used on the HS2 hosts to your Mac), check that your ticket is valid with the following command:

[root@vmurakami-1 ~]# klist -ef
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: zookeeper/vmurakami-1@EXAMPLE.COM
Valid starting       Expires              Service principal
08/15/2018 23:23:31  08/16/2018 23:23:31  krbtgt/EXAMPLE.COM@EXAMPLE.COM
  Flags: FI, Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96

If you're still having issues, please share the whole error with us.

Hope this helps!
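For completeness, a sketch of how you'd obtain the ticket from the keytab before running klist (the keytab path and principal below are placeholders; use whatever the HS2 hosts actually use):

# Obtain a Kerberos ticket from a keytab
kinit -kt /etc/security/keytabs/hive.service.keytab hive/<hs2-host>@EXAMPLE.COM

# Then verify the ticket and encryption types
klist -ef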
						
					
			
    
	
		
		
08-15-2018 10:44 PM
2 Kudos
Hello @Maksym Shved! Right, looking at your screenshot, I'm assuming you're running these tests in a non-production environment. If so, change the following parameters:

#Ambari > Spark2 > Advanced livy2-conf
livy.spark.master=yarn
livy.impersonation.enabled=false

#Ambari > Spark2 > Advanced livy2-env
export SPARK_HOME=/usr/hdp/current/spark2-client
export SPARK_CONF_DIR=/etc/spark2/conf
export JAVA_HOME={{java_home}}
export HADOOP_CONF_DIR=/etc/hadoop/conf
export LIVY_LOG_DIR={{livy2_log_dir}}
export LIVY_PID_DIR={{livy2_pid_dir}}
export LIVY_SERVER_JAVA_OPTS="-Xmx2g"

Restart the SPARK2 service and run the test again. If you're still facing issues, go back to Ambari and set the following:

#Ambari > HDFS > Custom core-site
hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*

Hope this helps.
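As a quick end-to-end check after the restart, you could hit the Livy REST API directly (a sketch; the host is a placeholder and 8999 is the usual Livy2 port on HDP, so adjust if yours differs):

# Create a PySpark session through Livy, then list existing sessions
curl -s -X POST -H "Content-Type: application/json" \
     -d '{"kind": "pyspark"}' http://<livy2-host>:8999/sessions

curl -s http://<livy2-host>:8999/sessions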
						
					
			
    
	
		
		
08-15-2018 04:48 PM
Hello @Ek Im! Have you tried adding an ORDER BY?

select * from dummy_schema.dummy1 order by <SOME_COLUMN> limit 1,2;

Hope this helps!
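For reference, the same query run from the command line with beeline (the HiveServer2 host and column are placeholders; without an ORDER BY the rows returned by LIMIT are not deterministic):

beeline -u "jdbc:hive2://<hs2-host>:10000/dummy_schema" \
  -e "select * from dummy1 order by <SOME_COLUMN> limit 1,2;"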
						
					