Member since 03-04-2019
67 Posts
2 Kudos Received
3 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5315 | 03-18-2020 01:42 AM |
| | 2666 | 03-11-2020 01:09 AM |
| | 3257 | 12-16-2019 04:17 AM |
03-18-2020 02:55 AM
Hi @prakashpunj. Is the below error coming in the Hue GUI? If yes, then you don't have the required HDFS access in Hue:

Cannot access: //. The HDFS REST service is not available. Note: you are a Hue admin but not a HDFS superuser, "hdfs" or part of HDFS supergroup, "supergroup".

You need to contact your Hadoop admin team to get the Hue access sorted out.

Thanks,
HadoopHelp
03-18-2020 02:39 AM
Hi @Logica. I think you need to put the hive-site.xml file into Spark's configuration. Please follow the steps below for running Hive queries or accessing Hive tables through PySpark:

https://acadgild.com/blog/how-to-access-hive-tables-to-spark-sql

Thanks,
HadoopHelp
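As a rough illustration of that setup (a minimal sketch, assuming Spark 2.x with hive-site.xml already copied into $SPARK_HOME/conf; the database and table names below are placeholders), reading a Hive table from PySpark could look like this:

from pyspark.sql import SparkSession

# Build a session with Hive support; this picks up hive-site.xml
# from $SPARK_HOME/conf (assumed to already be copied there)
spark = SparkSession.builder \
    .appName("hive-access-example") \
    .enableHiveSupport() \
    .getOrCreate()

# Select the database, then query a table (placeholder names)
spark.sql("use default")
df = spark.sql("SELECT * FROM tempaz LIMIT 100")
df.show(5)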
03-18-2020 01:42 AM
Hi @Logica. Please check whether a database has been selected before running the query. Below is code for reading a Hive table from PySpark:

from pyspark.context import SparkContext
from pyspark.sql import HiveContext

# Create a local Spark context and a Hive context on top of it
sc = SparkContext('local', 'example')
hc = HiveContext(sc)

# Read a plain text file from HDFS (optional, just as a check)
tf1 = sc.textFile("/user/BigData/nooo/SparkTest/train.csv")

# Select the database before querying, then read the Hive table
hc.sql("use default")
spf = hc.sql("SELECT * FROM tempaz LIMIT 100")
spf.show(5)

Thanks,
HadoopHelp
03-11-2020 01:09 AM
Hi. I think the link below will be helpful to you; just try it:

https://community.cloudera.com/t5/Support-Questions/Quickstart-VM/m-p/290564#M214948

Thanks,
HadoopHelp
03-03-2020 06:06 AM
Dear all,

I created a Hive temporary table as below, but how can we identify whether a table is a temporary table or not?

CREATE TEMPORARY TABLE IF NOT EXISTS employee (eid int, name String,
salary String, destination String)
COMMENT 'Employee details'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;

I used the command below to describe the table, but it does not show whether it is temporary:
describe formatted employee;

| col_name | data_type | comment |
|---|---|---|
| eid | int | |
| name | string | |
| salary | string | |
| destination | string | |
| # Detailed Table Information | NULL | NULL |
| Database: | h7 | NULL |
| OwnerType: | USER | NULL |
| Owner: | **** | NULL |
| CreateTime: | Tue Mar 03 08:50:28 EST 2020 | NULL |
| LastAccessTime: | UNKNOWN | NULL |
| Retention: | 0 | NULL |
| Location: | hdfs:*********** | NULL |
| Table Type: | MANAGED_TABLE | NULL |
| Table Parameters: | NULL | NULL |
| | comment | Employee details |
| # Storage Information | NULL | NULL |
| SerDe Library: | org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe | NULL |
| InputFormat: | org.apache.hadoop.mapred.TextInputFormat | NULL |
| OutputFormat: | org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat | NULL |
| Compressed: | No | NULL |
| Num Buckets: | -1 | NULL |
| Bucket Columns: | [] | NULL |
| Sort Columns: | [] | NULL |
| Storage Desc Params: | NULL | NULL |
| | field.delim | \t |
| | line.delim | \n |

Thanks,
HadoopHelp
Labels:
- Apache Hive
- Apache Impala
03-02-2020 10:50 PM
Hello all. Is the HDP file really 29 GB in size?

https://www.cloudera.com/downloads/hortonworks-sandbox/hdp.html

I started downloading the HDP file from the link above, and it shows a file size of approximately 29 GB.

HadoopHelp
03-02-2020 10:46 PM
Hello all. Is the HDP file really 29 GB in size?

https://www.cloudera.com/downloads/hortonworks-sandbox/hdp.html

I started downloading the HDP file from the link above, and it shows a file size of approximately 29 GB. Sorry, this is not part of this thread's subject, but I would appreciate a comment.

HadoopHelp
02-26-2020 05:34 AM
1 Kudo
Hi Mat. Please try:

https://www.cloudera.com/downloads/hortonworks-sandbox/hdp.html

Thanks,
HadoopHelp
02-12-2020 11:06 PM
Hi @vignesh_radhakr. You can simply access Hive by using the connection below:

conn = hive.Connection(host="masterIP", port=10000, username="cdh123")

Note: the master node's IP needs to be passed as the host, together with port 10000.

Thanks,
HadoopHelp
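To round that out (a minimal sketch, assuming the PyHive package is installed and HiveServer2 is listening on port 10000; the host, username, and table name are placeholders), a full query round-trip could look like this:

from pyhive import hive

# Connect to HiveServer2 (host and username are placeholders for your cluster)
conn = hive.Connection(host="masterIP", port=10000, username="cdh123")

# Run a query and print a few rows
cursor = conn.cursor()
cursor.execute("SELECT * FROM tempaz LIMIT 5")
for row in cursor.fetchall():
    print(row)

cursor.close()
conn.close()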