Member since 06-23-2016

136 Posts
8 Kudos Received
8 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3288 | 11-24-2017 08:17 PM |
| | 4044 | 07-28-2017 06:40 AM |
| | 1701 | 07-05-2017 04:32 PM |
| | 1971 | 05-11-2017 03:07 PM |
| | 6261 | 02-08-2017 02:49 PM |
			
    
	
		
		
05-07-2017 08:19 AM
I was trying to upgrade HDP from 2.4 to 2.5. After many problems I gave up on this and decided to wipe out HDP and Ambari and start again. I deleted everything (or so I thought!), but when I got to the point of logging into Ambari to add hosts and create a cluster, it showed my old cluster exactly as before: current version 2.4, with the 2.5 upgrade available. I guess I missed something when wiping out my old setup. Where is this information stored, so I can wipe it out too? TIA! PS: I saw this, but I do not have /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py on my system. The steps I went through are in this document.
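The stale cluster the question describes lives in Ambari's backing relational database rather than on the agent hosts, which is why removing packages alone did not clear it. A minimal recovery sketch, assuming a default ambari-server install (the sequence is printed as a checklist rather than executed, since the real commands need a live Ambari host):

```shell
# Sketch: 'ambari-server reset' drops and re-creates the Ambari database,
# which is where cluster, host, and stack-version definitions are stored.
# Printed here as a checklist; run the real commands on the Ambari server host.
steps='ambari-server stop
ambari-server reset    # wipes cluster/host/version records from the Ambari DB
ambari-server setup    # re-initialize the server
ambari-server start'
printf '%s\n' "$steps"
```

After a reset, the Ambari UI should come up with no cluster defined, ready for a fresh run of the install wizard.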
						
					
    
	
		
		
05-07-2017 07:48 AM
							 It was, as you say, borked and it was /usr/lib/python2.6/site-packages that was the problem. Many thanks! 
						
					
    
	
		
		
05-03-2017 03:26 PM
Hi. I am doing a fresh install of HDP 2.5. I am trying to run:

    sudo ambari-server setup

but I got this error:

    File "/usr/lib/python2.6/site-packages/ambari_server/serverClassPath.py", line 28, in <module>
        from resource_management.core.shell import quote_bash_args
    ImportError: No module named resource_management.core.shell

I fixed this by copying in the quote_bash_args routine from here and removing the import. Now I get another error, again because the core and/or shell module is missing:

    File "/usr/sbin/ambari-server.py", line 39, in <module>
        from ambari_server.serverSetup import reset, setup, setup_jce_policy
      File "/usr/lib/python2.6/site-packages/ambari_server/serverSetup.py", line 31, in <module>
        from ambari_commons.firewall import Firewall
      File "/usr/lib/python2.6/site-packages/ambari_commons/firewall.py", line 25, in <module>
        from resource_management.core import shell
    ImportError: No module named resource_management.core

Where can I get the latest core and shell modules from?
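As a quick way to tell whether this is a missing package rather than a path problem, the import the traceback names can be probed directly from the shell. A diagnostic sketch (using whatever `python3` is on PATH here; the cluster in the post was on the system python2.6):

```shell
# Check whether a Python module is importable; a missing top-level
# 'resource_management' usually means the ambari packages that ship it were
# removed or only half-installed, and reinstalling them restores it.
check_module() {
  if python3 -c "import importlib.util, sys; sys.exit(0 if importlib.util.find_spec('$1') else 1)" 2>/dev/null; then
    echo "$1: importable"
  else
    echo "$1: MISSING"
  fi
}
result_os=$(check_module os)
result_rm=$(check_module resource_management)
echo "$result_os"
echo "$result_rm"
```

If the module turns out to be missing, reinstalling the ambari-server/ambari-agent packages (e.g. via the distribution's package manager) is the usual way to restore /usr/lib/python2.6/site-packages; the exact package names are an assumption about the post's environment.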
						
					
    
	
		
		
04-22-2017 05:32 PM
					
I do not know what happened, but I re-ran ambari-server upgrade and it worked. Thanks!
						
					
    
	
		
		
04-17-2017 08:37 PM
					
Hi. I tried to upgrade my cluster from Ambari 2.2 to 2.4. I went through the upgrade process and got it to run, but the web interface was still showing version 2.2. I think the problem is this step from the upgrade guide: confirm there is only one ambari-server*.jar file in /usr/lib/ambari-server; if there is more than one JAR named ambari-server*.jar, move all JARs except ambari-server-2.4.*.jar to /tmp before proceeding with the upgrade. The jar there is ambari-server-2.2.1.0.161.jar. Can I simply replace that jar with the 2.4 jar, and if so, where can I get the jar from? TIA! EDIT: is this it? It appears to be a blank page.
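The jar-moving step the guide describes can be scripted. A sketch of the idea, demonstrated against a scratch directory with stand-in jar names rather than the live /usr/lib/ambari-server (the 2.4 build number below is made up):

```shell
# Keep only the ambari-server-2.4.* jar; park any other ambari-server jars
# in a side directory, as the upgrade guide describes.
dir=$(mktemp -d)                             # stand-in for /usr/lib/ambari-server
mkdir -p "$dir/parked"                       # stand-in for /tmp
touch "$dir/ambari-server-2.2.1.0.161.jar"   # the jar named in the post
touch "$dir/ambari-server-2.4.2.0.136.jar"   # hypothetical 2.4 jar name
for jar in "$dir"/ambari-server-*.jar; do
  case "$jar" in
    *ambari-server-2.4.*) ;;                 # keep the 2.4 jar
    *) mv "$jar" "$dir/parked/" ;;           # move everything else aside
  esac
done
ls "$dir"
```

On a real host the 2.4 jar normally arrives with the upgraded ambari-server package itself, so upgrading the package (rather than hand-copying a jar) is the expected way to get it.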
						
					
Labels: Apache Ambari
			
    
	
		
		
02-10-2017 09:31 AM
					
Those were what I tried. However, I think you are right about the user/permissions thing. If I have time I'll see if it helps. Thanks!
						
					
    
	
		
		
02-08-2017 03:03 PM
					
I have a Hive table tweets stored as text that I am trying to write to another table tweetsORC that is ORC. I am trying to insert from tweets to tweetsORC using:

    INSERT OVERWRITE TABLE tweetsORC SELECT <fields> FROM tweets;

When Hive was started I used:

    HADOOP_USER_NAME=hdfs hive ... -hiveconf hive.aux.jars.path=/home/ed/Downloads/serde/json-serde-1.3.7-jar-with-dependencies.jar

This errors with:

    File does not exist: /home/ed/Downloads/serde/json-serde-1.3.7-jar-with-dependencies.jar

I have copied the file to that location on all nodes. I have also copied it to /usr/hdp/current/hive-server2/auxlib on the HiveServer2 machine, and I still get the error. I have also tried:

    HADOOP_USER_NAME=hdfs hive ... -hiveconf hive.aux.jars.path=hdfs:///master.royble.co.uk/jars/json-serde-1.3.7-jar-with-dependencies.jar

which gives:

    RuntimeException: java.lang.IllegalArgumentException: Wrong FS:

I have tried changing hive.metastore.warehouse.dir to hdfs:///master.royble.co.uk/user/hive/warehouse and still get the error. I'm tearing my hair out! TIA!!
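One detail worth noting in the attempts above: in hdfs:///master.royble.co.uk/... the three slashes mean an empty authority, so the hostname becomes part of the *path* rather than the host component, which is a common cause of Wrong FS errors when Hadoop compares the URI against fs.defaultFS. A small sketch of the difference (the port 8020 in the corrected form is an assumption, the usual HDFS default):

```shell
# Extract the authority (host[:port]) from a URI, to show why hdfs:/// puts
# the hostname into the path instead of the authority.
authority() { printf '%s\n' "$1" | sed -n 's|^[a-z]*://\([^/]*\)/.*|\1|p'; }

bad="hdfs:///master.royble.co.uk/jars/json-serde-1.3.7-jar-with-dependencies.jar"
good="hdfs://master.royble.co.uk:8020/jars/json-serde-1.3.7-jar-with-dependencies.jar"

echo "bad  -> authority: '$(authority "$bad")'"    # empty: host fell into the path
echo "good -> authority: '$(authority "$good")'"
```

With the host in the authority position (and a port matching fs.defaultFS), the URI refers to the intended filesystem instead of a path on the default one.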
						
					
Labels: Apache Hadoop
			
    
	
		
		
02-08-2017 02:49 PM
					
If I remove the * and instead do the following, I get a different error:

    INSERT OVERWRITE TABLE tweetsORC SELECT racist, contributors, coordinates, created_at, entities, favorite_count, favorited, filter_level, geo, id, id_str, in_reply_to_screen_name, in_reply_to_status_id, in_reply_to_status_id_str, in_reply_to_user_id, in_reply_to_user_id_str, is_quote_status, lang, place, possibly_sensitive, retweet_count, retweeted, source, text, timestamp_ms, truncated, user FROM tweets;
						
					
    
	
		
		
02-02-2017 10:47 AM
					
I have a Hive table tweets stored as text that I am trying to write to another table tweetsORC that is ORC. Both have the same structure:

col_name	data_type	comment
racist              	boolean             	from deserializer   
contributors        	string              	from deserializer   
coordinates         	string              	from deserializer   
created_at          	string              	from deserializer   
entities            	struct<hashtags:array<string>,symbols:array<string>,urls:array<struct<display_url:string,expanded_url:string,indices:array<tinyint>,url:string>>,user_mentions:array<string>>	from deserializer   
favorite_count      	tinyint             	from deserializer   
favorited           	boolean             	from deserializer   
filter_level        	string              	from deserializer   
geo                 	string              	from deserializer   
id                  	bigint              	from deserializer   
id_str              	string              	from deserializer   
in_reply_to_screen_name	string              	from deserializer   
in_reply_to_status_id	string              	from deserializer   
in_reply_to_status_id_str	string              	from deserializer   
in_reply_to_user_id 	string              	from deserializer   
in_reply_to_user_id_str	string              	from deserializer   
is_quote_status     	boolean             	from deserializer   
lang                	string              	from deserializer   
place               	string              	from deserializer   
possibly_sensitive  	boolean             	from deserializer   
retweet_count       	tinyint             	from deserializer   
retweeted           	boolean             	from deserializer   
source              	string              	from deserializer   
text                	string              	from deserializer   
timestamp_ms        	string              	from deserializer   
truncated           	boolean             	from deserializer   
user                	struct<contributors_enabled:boolean,created_at:string,default_profile:boolean,default_profile_image:boolean,description:string,favourites_count:tinyint,follow_request_sent:string,followers_count:tinyint,following:string,friends_count:tinyint,geo_enabled:boolean,id:bigint,id_str:string,is_translator:boolean,lang:string,listed_count:tinyint,location:string,name:string,notifications:string,profile_background_color:string,profile_background_image_url:string,profile_background_image_url_https:string,profile_background_tile:boolean,profile_image_url:string,profile_image_url_https:string,profile_link_color:string,profile_sidebar_border_color:string,profile_sidebar_fill_color:string,profile_text_color:string,profile_use_background_image:boolean,protected:boolean,screen_name:string,statuses_count:smallint,time_zone:string,url:string,utc_offset:string,verified:boolean>	from deserializer 
When I try to insert from tweets to tweetsORC I get:

    INSERT OVERWRITE TABLE tweetsORC SELECT * FROM tweets;
    FAILED: NoMatchingMethodException No matching method for class org.apache.hadoop.hive.ql.udf.UDFToString with (struct<hashtags:array<string>,symbols:array<string>,urls:array<struct<display_url:string,expanded_url:string,indices:array<tinyint>,url:string>>,user_mentions:array<string>>). Possible choices: _FUNC_(bigint)  _FUNC_(binary)  _FUNC_(boolean)  _FUNC_(date)  _FUNC_(decimal(38,18))  _FUNC_(double)  _FUNC_(float)  _FUNC_(int)  _FUNC_(smallint)  _FUNC_(string)  _FUNC_(timestamp)  _FUNC_(tinyint)  _FUNC_(void)

The only help I have found on this kind of problem says to make a UDF use primitive types, but I am not using a UDF! Any help is much appreciated! FYI, Hive version:

    Hive 1.2.1000.2.4.2.0-258
    Subversion git://u12-slave-5708dfcd-10/grid/0/jenkins/workspace/HDP-build-ubuntu12/bigtop/output/hive/hive-1.2.1000.2.4.2.0 -r 240760457150036e13035cbb82bcda0c65362f3a
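The NoMatchingMethodException comes from Hive feeding the struct columns (entities, user) through UDFToString, which has no struct overload; that suggests the corresponding tweetsORC columns were actually declared as string. One hedged workaround is to let Hive derive matching column types by creating the ORC table from the query itself. A sketch of the DDL (table names from the post; printed rather than run, since it needs a live Hive):

```shell
# Print a CTAS statement that creates the ORC table with column types copied
# from the source table, so no struct-to-string cast is attempted.
ddl='DROP TABLE IF EXISTS tweetsORC;
CREATE TABLE tweetsORC STORED AS ORC AS SELECT * FROM tweets;'
printf '%s\n' "$ddl"
```

If tweetsORC must keep a hand-written schema, the alternative is to declare entities and user with the same struct<...> types shown in the source table's description above.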
 
						
					
Labels: Hortonworks Data Platform (HDP)