Member since 09-24-2015
			
      
527 Posts | 136 Kudos Received | 19 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2875 | 06-30-2017 03:15 PM |
| | 4341 | 10-14-2016 10:08 AM |
| | 9572 | 09-07-2016 06:04 AM |
| | 11612 | 08-26-2016 11:27 AM |
| | 1908 | 08-23-2016 02:09 PM |
			
    
	
		
		
08-11-2020 01:40 AM
I did this as the root user: I found the file and changed it there. But how do I change it on each node?
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
09-18-2019 06:55 AM
@myoung, can you please give the syntax for writing this sort of query?
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
01-29-2019 06:05 AM
@Roberto Sancho, good to know that it helped. Please accept the answer to close the thread.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
11-21-2018 03:40 PM
It works from the pyspark shell but not from Jupyter with the PySpark kernel.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
01-15-2019 05:10 PM (1 Kudo)
Same issue here, help please:

Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.atlas.catalog.VertexWrapper
        at org.apache.atlas.catalog.query.BaseQueryExpression.evaluate(BaseQueryExpression.java:52)
        at org.apache.atlas.catalog.query.BaseQueryExpression$1.compute(BaseQueryExpression.java:71)
        at org.apache.atlas.catalog.query.BaseQueryExpression$1.compute(BaseQueryExpression.java:68)
        at com.tinkerpop.pipes.filter.FilterFunctionPipe.processNextStart(FilterFunctionPipe.java:24)
        at com.tinkerpop.pipes.AbstractPipe.hasNext(AbstractPipe.java:98)
        at com.tinkerpop.pipes.util.Pipeline.hasNext(Pipeline.java:105)
        at com.tinkerpop.pipes.filter.BackFilterPipe.processNextStart(BackFilterPipe.java:33)
        at com.tinkerpop.pipes.AbstractPipe.next(AbstractPipe.java:89)
        at com.tinkerpop.pipes.util.Pipeline.next(Pipeline.java:115)
        at com.tinkerpop.pipes.util.PipeHelper.fillCollection(PipeHelper.java:52)
        at com.tinkerpop.gremlin.java.GremlinPipeline.toList(GremlinPipeline.java:1564)
        at org.apache.atlas.catalog.query.BaseQuery.executeQuery(BaseQuery.java:106)
        at org.apache.atlas.catalog.query.BaseQuery.execute(BaseQuery.java:67)
        at org.apache.atlas.catalog.TaxonomyResourceProvider.doGetResources(TaxonomyResourceProvider.java:163)
        at org.apache.atlas.catalog.TaxonomyResourceProvider.getResources(TaxonomyResourceProvider.java:75)
        at org.apache.atlas.web.resources.BaseService.getResources(BaseService.java:66)
        ... 89 more
2019-01-15 16:58:34,818 INFO  - [pool-2-thread-10 - ba8dd7bd-2d8a-4401-9f73-a33bef04b1a0:] ~ LuceneQuery: hierarchy__slash__path:__slash__ (QueryFactory:98)
2019-01-15 16:58:34,841 ERROR - [pool-2-thread-8 - 5366fbbb-e72a-497d-9bc6-07c358e3576f:] ~ graph rollback due to exception AtlasBaseException:Instance __AtlasUserProfile with unique attribute {name=admin} does not exist (GraphTransactionInterceptor:73)
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
08-07-2017 07:49 PM (3 Kudos)
@Roberto Sancho
1. If you need any detail from the backup, then you must migrate the Hive metastore backup to the new Postgres instance.
2. To ensure that the backup is applied correctly and that there are no inconsistencies, you must shut down the Hive instance/metastore, apply the backup, do quick consistency checks, and then restart the Hive metastore and the instance.
3. You only need to shut down the Hive metastore/instance; no other component needs to be shut down.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
06-30-2017 03:15 PM
The problem was solved by running the command: hdfs debug recoverLease -path <path-of-the-file> [-retries <retry-times>]
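As a purely illustrative sketch, the same invocation can be assembled programmatically, which makes it easy to script over many stuck files. The helper name, the optional-retries handling, and the example path in the comment are assumptions for illustration, not part of the original command.

```python
import subprocess

def build_recover_lease_cmd(path, retries=None):
    """Assemble the `hdfs debug recoverLease` argument list.

    The -retries flag is optional, matching the [-retries <retry-times>]
    syntax shown above.
    """
    cmd = ["hdfs", "debug", "recoverLease", "-path", path]
    if retries is not None:
        cmd += ["-retries", str(retries)]
    return cmd

# To actually run it on a node with the hdfs client on PATH, e.g. for a
# hypothetical file /user/example/stuck-file:
# subprocess.run(build_recover_lease_cmd("/user/example/stuck-file", retries=3), check=True)
```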
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
05-10-2017 03:30 PM
Hi: but it would make sense to install all the elements on HDFS, I mean Sqoop, Flume, Kafka, Storm, etc. So please, can anyone explain why it is not good to install Flume and Sqoop on HDF?
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
11-17-2016 07:25 AM
Hi: after restarting HiveServer2 and setting hive.execution.engine=mr, it is working. Thanks.
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
10-27-2016 07:08 PM
Hi: I have resolved the problem, but I think there is a bug or something; let me explain. When V1=11.88 is declared as DoubleType or DecimalType it doesn't work, but if I declare it as StringType it works, so please, could you confirm that my test is correct?

{"CAMPO1":"xxxx","CAMPO2":"xxx","VARIABLE":{"V1":"11.88"}}

schema = StructType([
    StructField("CAMPO1", StringType(), True),
    StructField("CAMPO2", StringType(), True),
    StructField("VARIABLE", StructType([
        StructField("V1", StringType(), True),
        StructField("V2", StringType(), True),
        StructField("V3", StringType(), True)]))
])

Thanks.
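The behavior described above is consistent with the sample record: V1 arrives quoted ("11.88"), i.e. as a JSON string, so a numeric field type in the schema finds no matching number, while StringType does. A minimal sketch in plain Python (outside Spark, just to illustrate the point) parses the same record and shows that the value really is a string, which a cast then recovers as a number:

```python
import json

# V1 is quoted in the JSON, so it is parsed as a string, not a number.
record = json.loads('{"CAMPO1":"xxxx","CAMPO2":"xxx","VARIABLE":{"V1":"11.88"}}')
v1 = record["VARIABLE"]["V1"]
print(type(v1).__name__)  # str: this is why StringType matches

# Reading it as a string and casting afterwards recovers the numeric value.
v1_num = float(v1)
print(v1_num)  # 11.88
```

In Spark itself, a common pattern under this constraint is to keep StringType in the schema (as in the snippet above) and cast the column afterwards, e.g. with something like `col("VARIABLE.V1").cast("double")`; the column names here are just the ones from the sample.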
				
			
			
			
			
			
			
			
			
			
		 
        













