Member since 09-11-2015

269 Posts · 281 Kudos Received · 55 Solutions
My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
| | 4187 | 03-15-2017 07:12 AM |
| | 2503 | 03-14-2017 07:08 PM |
| | 3025 | 03-14-2017 03:36 PM |
| | 2482 | 02-28-2017 04:32 PM |
| | 1713 | 02-28-2017 10:02 AM |
02-20-2017 10:16 AM · 1 Kudo
@Bilal Arshad The exception says the connection was reset. This can happen when Atlas has not come up properly. Can you please verify that Atlas started correctly by checking its status?

    curl -v http://localhost:21000/api/atlas/admin/version

To load the quick-start model (sample model and data):

    bin/quick_start.py [<atlas endpoint>]
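The check above can be folded into a small wait-and-load script, a sketch assuming the default endpoint http://localhost:21000 and that it is run from the Atlas home directory (the retry count and sleep interval are arbitrary choices, not from the original post):

```shell
#!/bin/sh
# Poll the Atlas admin/version endpoint until it responds, then load
# the quick-start sample model. Endpoint and retry limits are assumptions.
ATLAS_URL="${ATLAS_URL:-http://localhost:21000}"
i=0
while [ "$i" -lt 30 ]; do
    if curl -sf "$ATLAS_URL/api/atlas/admin/version" > /dev/null; then
        echo "Atlas is up at $ATLAS_URL"
        bin/quick_start.py "$ATLAS_URL"
        exit 0
    fi
    sleep 10
    i=$((i + 1))
done
echo "Atlas did not come up in time" >&2
exit 1
```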
02-16-2017 05:11 PM · 1 Kudo
@Bilal Arshad Can you check what you have under ATLAS_HOME/conf/solr?
02-14-2017 06:38 PM · 12 Kudos
Recently, I have observed a lot of questions on HCC about how to reset Atlas, and whether there is a way to delete the registered types in Atlas. So I thought of sharing this article with the community to clarify these queries.

To give some context, Atlas uses HBase as its default datastore when it is managed by Ambari. It uses two HBase tables to store all of its metadata:

- 'atlas_titan': stores all the metadata from the various sources.
- 'ATLAS_ENTITY_AUDIT_EVENTS': stores the audit information of the entities in Atlas.

These two table names can be changed via the properties "atlas.graph.storage.hbase.table" and "atlas.audit.hbase.tablename" respectively in atlas-application.properties.

Now, coming back to the actual question: how to wipe out the metadata from Atlas? Follow the steps below:

1. Stop Atlas via Ambari.
2. In the HBase shell, disable the table: disable 'atlas_titan'
3. In the HBase shell, drop the table: drop 'atlas_titan'
4. Start Atlas via Ambari.

The same steps can be repeated for the 'ATLAS_ENTITY_AUDIT_EVENTS' table if there is a requirement to wipe out the audit data as well.

These steps should reset Atlas and start it as if it were a fresh installation. Let me know if there are any queries. Thanks.
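The disable/drop steps can be sketched as a single non-interactive HBase shell session. This assumes the default table names; adjust them if you changed atlas.graph.storage.hbase.table or atlas.audit.hbase.tablename:

```shell
# Run this only after stopping Atlas via Ambari. Drops the default Atlas
# tables; keep the ATLAS_ENTITY_AUDIT_EVENTS lines only if you also want
# to wipe the audit data.
hbase shell <<'EOF'
disable 'atlas_titan'
drop 'atlas_titan'
disable 'ATLAS_ENTITY_AUDIT_EVENTS'
drop 'ATLAS_ENTITY_AUDIT_EVENTS'
EOF
```

Start Atlas again afterwards and it will recreate the tables on startup.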
02-14-2017 05:16 PM · 2 Kudos
@Brandon Wilson Currently there is no simple way to delete Atlas tags using the REST API. To wipe out the Atlas database and start fresh, follow the steps below:

1. Stop Atlas.
2. In the HBase shell, disable the table: disable 'atlas_titan'
3. In the HBase shell, drop the table: drop 'atlas_titan'
4. Restart Atlas.

This should remove all the existing types from the backend store (HBase) and reset the database. Hope this helps.
02-14-2017 05:50 AM
Check the logs of the HBase master and region servers for any exceptions. ZooKeeper has to be up and running for HBase to work properly.
02-13-2017 08:17 PM · 1 Kudo
@Tariq The documentation below has step-by-step details on how to use Sqoop to move data from any RDBMS into Hive:

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_data-access/content/using_sqoop_to_move_data_into_hive.html

You can refer to this as well: http://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html#_literal_sqoop_import_all_tables_literal

To get the metadata of all this Sqoop-imported data into Atlas, make sure the configurations below are set properly:

http://atlas.incubator.apache.org/Bridge-Sqoop.html

Please note that the above configuration step is not needed if your cluster configuration is managed by Ambari. Hope this helps.
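As an illustration, a minimal Sqoop import of one table into Hive might look like this. The JDBC URL, username, and table names here are placeholders, not from the original question:

```shell
# Import a single RDBMS table into Hive; --hive-import creates the Hive
# table and loads the data, and -P prompts for the password interactively.
# All connection details below are hypothetical.
sqoop import \
    --connect jdbc:mysql://db.example.com/sales \
    --username sqoop_user -P \
    --table orders \
    --hive-import \
    --hive-table default.orders
```

With the Atlas Sqoop bridge configured, each such import is also reported to Atlas as lineage metadata.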
02-13-2017 08:01 PM · 1 Kudo
@Hitesh Rajpurohit HBase is the default datastore for Atlas, so as part of its startup script Atlas sets up the HBase tables required to store its metadata.

From the logs, it looks like the HBase master is still in its initialization phase and is not fully running yet. I would suggest waiting for the HBase master and region servers to come up completely. If they appear stuck, try restarting the HBase service. You can check the status of the HBase services through Ambari. Hope this helps.

ERROR Java::OrgApacheHadoopHbaseIpc::RemoteWithExtrasException: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
	at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2402)
	at org.apache.hadoop.hbase.master.MasterRpcServices.getTableNames(MasterRpcServices.java:901)
	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:57172)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108) 
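A quick way to check from the command line whether the master has finished initializing (a sketch, assuming the `hbase` CLI is on the PATH):

```shell
# 'status' fails with PleaseHoldException while the master is still
# initializing, and reports live/dead region server counts once ready.
echo "status" | hbase shell
```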
02-13-2017 05:55 PM
Glad it worked! Please accept the answer.
02-13-2017 05:28 PM
							@nick _ Did you try the mentioned solution? Is it working for you? 
02-13-2017 05:28 PM
							@subash sharma Is this issue resolved? 