Member since 06-24-2016
      
111 Posts | 8 Kudos Received | 0 Solutions
			
    
	
		
		
05-18-2017 01:04 AM (2 Kudos)
You should add hive-hbase-handler.jar in the Hive shell or in hive-conf. Connect to the Hive shell and execute these commands:

ADD JAR /usr/hdp/2.5.3.0-37/hive/lib/hive-hbase-handler.jar;

CREATE TABLE hbase_table_1(tags map<string,int>, row_key string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
  "hbase.columns.mapping" = "cf:tag_.*,:key",
  "hbase.columns.mapping.prefix.hide" = "true"
);

Then you'll find the created table in the HBase shell.
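A quick sanity check (a sketch, not part of the original reply; it assumes the default behavior where the HBase table gets the same name as the Hive table):

# In the hbase shell: confirm the table exists, then write a test cell.
list
put 'hbase_table_1', 'row1', 'cf:tag_foo', '1'

-- Back in the Hive shell: the cell should show up in the tags map under
-- the key 'foo', since prefix.hide strips the 'tag_' prefix.
SELECT * FROM hbase_table_1;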
						
					
			
    
	
		
		
05-17-2017 11:12 PM
							 Yes I did. 
						
					
			
    
	
		
		
05-15-2017 07:01 AM
Hm, I just fixed this issue with the multiple-jars option.
						
					
			
    
	
		
		
05-15-2017 06:41 AM
Using spark-shell with the --packages option (like Databricks packages), Spark of course downloads the package library from a Maven repository on the internet. But in offline mode that is not useful. How can I change or add the Spark package repository?
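One possible direction, sketched here as an assumption rather than a confirmed answer: spark-shell accepts a --repositories flag for additional remote repositories, so an internal Maven mirror could stand in for the internet repository. The mirror URL below is a hypothetical placeholder.

# Resolve --packages against an internal mirror instead of Maven Central.
spark-shell \
  --packages com.databricks:spark-csv_2.10:1.5.0 \
  --repositories http://repo.example.internal/maven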
						
					
		
			
				
						
Labels: Apache Spark
    
	
		
		
05-14-2017 02:10 AM
Execute these commands on the node where ambari-server is installed:

vim /etc/ambari-server/conf/ambari.properties

server.jdbc.database=mysql
server.jdbc.database_name=ambari

If you are using a Postgres database, then the server.jdbc.database value is postgres. How did you delete the older version?
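To confirm the current values before editing (a small sketch, not from the original reply):

# Show the JDBC settings ambari-server is currently using.
grep '^server.jdbc' /etc/ambari-server/conf/ambari.properties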
						
					
    
	
		
		
05-12-2017 03:34 PM
Check list:

1. Check the Ambari Server DB:

# ambari-server check-database

It must return the message "No errors were found." If you installed HDP and Ambari normally, then Ambari Server connects to the ambari database in your Postgres. But I think you may have upgraded twice, HDP 2.4.0 -> HDP 2.5.3.0 and HDP 2.5.3.0 -> HDP 2.6.0.0, by rolling or manual upgrade. So I am asking: did you upgrade cleanly from the previous HDP version? Right now, your ambari database in Postgres has conflicts in the cluster and service tables and in an FK constraint (a referenced alert_definition row doesn't exist).

2. Check the Ambari Postgres DDL:

# cd /var/lib/ambari-server/resources

If embedded Postgres:

# vim Ambari-DDL-Postgres-EMBEDDED-CREATE.sql

Else, if custom Postgres:

# vim Ambari-DDL-Postgres-CREATE.sql

ambari-server still can't find your Hadoop cluster 'cresta'.

3. Check the ambari database in Postgres. Connect to Postgres, then:

\connect ambari
SELECT * FROM ambari.alert_definition;
SELECT * FROM ambari.alert_history;
SELECT * FROM alert_definition WHERE definition_id=74;
SELECT * FROM alert_history WHERE alert_definition_id=74;

definition_id in the alert_definition table is referenced as alert_definition_id in the alert_history table, like an FK. But alert_definition_id 74 does not exist in the alert_definition table. It seems something went wrong and the upgrade was not clean.
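As a follow-up sketch (not in the original reply), the dangling-reference check from step 3 can be done in one query instead of comparing two result sets by hand:

-- List alert_history rows whose alert_definition_id has no matching
-- alert_definition row (orphaned references such as id 74).
SELECT h.alert_definition_id, COUNT(*) AS orphan_rows
FROM ambari.alert_history h
LEFT JOIN ambari.alert_definition d ON d.definition_id = h.alert_definition_id
WHERE d.definition_id IS NULL
GROUP BY h.alert_definition_id;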
						
					
    
	
		
		
05-11-2017 11:44 PM
1. Run this command as the hdfs user:

hdfs fsck /

2. If it reports CORRUPT files, then try the commands below:

hdfs fsck / -list-corruptfileblocks
hdfs fsck $hdfsfilepath -locations -blocks -files
hdfs fsck $corrupt_files_path -delete

3. After completing the procedure above, re-run the command from step 1:

hdfs fsck /
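If deleting outright is too aggressive, a hedged alternative not mentioned in the original reply: fsck's -move option relocates corrupted files into /lost+found instead of removing them.

# Move corrupt files to /lost+found for later inspection.
hdfs fsck $corrupt_files_path -move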
						
					
    
	
		
		
04-24-2017 01:21 PM
You should check the ownership of your admin home directory in HDFS (/user/admin/hive).

Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=admin, access=WRITE, inode="/user/admin/hive/jobs/hive-job-82-2017-04-21_10-02/query.hql":root:hdfs:drwxr-xr-x

According to the message above, you cannot write to your home directory or to the query.hql file. Check the list hierarchically:

/user/admin
/user/admin/hive
/user/admin/hive/jobs
/user/admin/hive/jobs/hive-job-82-2017-04-21_10-02
/user/admin/hive/jobs/hive-job-82-2017-04-21_10-02/query.hql

If only the file query.hql has the ownership root:hdfs, then you have to change its ownership to admin:admin or admin:hdfs.
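The same checks as commands (a sketch; the recursive chown is an assumption for convenience, since the reply only requires fixing whichever entries are owned root:hdfs):

# Inspect ownership down the hierarchy (run as the HDFS superuser).
hdfs dfs -ls /user/admin
hdfs dfs -ls -R /user/admin/hive
# Reassign ownership so the admin user can write.
hdfs dfs -chown -R admin:hdfs /user/admin/hive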
						
					
    
	
		
		
04-18-2017 05:56 PM
If this is the case:

Rack-A: Master 0 (Ambari), Master 1 (NameNode Server 1, NameNode HA)
Rack-B: Master 2 (NameNode Server 2, NameNode HA), Master 3 (YARN, etc.)
Rack-C: Slaves 1-10 (DataNodes 1-10)
Rack-D: Slaves 11-20 (DataNodes 11-20)

should I set rack awareness only for the DataNodes' racks using Ambari? Like this:

/default: Master 0, Master 1, Master 2, Master 3
/Rack-C: Slaves 1-10
/Rack-D: Slaves 11-20
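A hedged verification step, beyond what the question asks: after applying a mapping in Ambari, HDFS can print the topology it actually resolved.

# Show which rack each DataNode was assigned to.
hdfs dfsadmin -printTopology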
						
					
		
			
				
						
Labels: Apache Hadoop