Member since 04-03-2017

164 Posts | 8 Kudos Received | 4 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2905 | 03-09-2021 10:47 PM |
|  | 3913 | 12-10-2018 10:59 AM |
|  | 6879 | 12-02-2018 08:55 PM |
|  | 11445 | 11-28-2018 10:38 AM |
			
    
	
		
		
12-04-2018 08:55 PM
Hi,

As this is a different issue, I would request you to open a new thread; multiple issues have been discussed here, which is making this thread complicated.

Please open a new thread and kindly share the link so we can work on it.

Regards
Nitish
						
					
12-04-2018 08:54 PM
It does, though. If it is mentioned that on your CDH version you can apply this patch on top of it to make the job run fine, then you need to apply the patch yourself.

"It is resolved" means a patch has been created, and on these CDH versions it has to be applied.

Hope this info helps.

Regards
Nitish
						
					
12-04-2018 01:48 AM
Hi,

You can create an Oozie workflow job (which will run the Sqoop action) and schedule it with a coordinator.

Link: https://oozie.apache.org/docs/4.1.0/DG_SqoopActionExtension.html (to create workflow.xml)
Link: https://oozie.apache.org/docs/3.1.3-incubating/CoordinatorFunctionalSpec.html (to create the coordinator)

Regards
Nitish
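The two links above can be combined into a minimal sketch along these lines (host names, table, paths, and dates are placeholders, not values from this thread):

```xml
<!-- workflow.xml: a workflow with a single Sqoop action -->
<workflow-app name="sqoop-wf" xmlns="uri:oozie:workflow:0.4">
    <start to="sqoop-node"/>
    <action name="sqoop-node">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <command>import --connect jdbc:mysql://db.example.com/mydb --table MYTABLE --target-dir /user/myuser/out -m 1</command>
        </sqoop>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Sqoop failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
```

```xml
<!-- coordinator.xml: run the workflow above once a day -->
<coordinator-app name="sqoop-coord" frequency="${coord:days(1)}"
                 start="2018-12-04T00:00Z" end="2019-12-04T00:00Z" timezone="UTC"
                 xmlns="uri:oozie:coordinator:0.1">
    <action>
        <workflow>
            <app-path>${nameNode}/user/myuser/apps/sqoop-wf</app-path>
        </workflow>
    </action>
</coordinator-app>
```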
						
					
12-04-2018 01:41 AM
Hi,

I think that after applying the patch there is no need to put the table name in []. As per the comment, the workaround is to put the table name in quotes or []; but once the patch is applied you don't have to put the table name in quotes or brackets.

The reason I am saying this is that you are on version 1.4.6, and if this patch were already in Sqoop you wouldn't have faced this error.

So it looks like you need to apply the patch on your CDH version.

Hope this helps.

Regards
Nitish
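As a hypothetical illustration of the workaround (the connect string, user, and table name below are made up, not from this thread): without the patch, a reserved-word table name has to be escaped on the command line:

```shell
# Workaround sketch: bracket a reserved-word table name.
sqoop import \
  --connect "jdbc:sqlserver://db.example.com;database=mydb" \
  --username myuser -P \
  --table "[ORDER]" \
  --target-dir /user/myuser/order_import
# After the patch is applied, plain --table ORDER should work without escaping.
```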
						
					
12-03-2018 03:30 AM
1 Kudo
Hi,

Whenever you create a Sqoop job, it gets created in a local metastore under the $HOME/.sqoop folder.

I would request you to configure a global metastore so that all the jobs can be accessed from other nodes as well.

Use the [Best Practices for Sqoop1 Metastore] instructions below to manage the Sqoop1 Metastore.

[Best Practices for Sqoop1 Metastore]
1. Identify a cluster host (e.g. metastore1.cloudera.com) upon which to run the Metastore process
2. Use the [Deploy Sqoop1 Client Gateway] instructions to deploy it on the Metastore host
3. Use the [Configure Sqoop1 Metastore Directory] instructions to create a directory for the Metastore database
4. Use the [Set Sqoop1 Safety Valve sqoop-site.xml ( metastore )] instructions to configure the Metastore
5. Use the [Start Sqoop1 Metastore Java Process] instructions to start the Metastore process
6. Use the [Stop Sqoop1 Metastore Java Process] instructions to gracefully stop the Metastore process

~~~~~

[Deploy Sqoop1 Client Gateway]
1. Log in to Cloudera Manager
2. Navigate to: (Home > Cluster)
3. Click the button to the right of ClusterName
4. Select "Add a Service"
5. Select "Sqoop 1 Client"
6. Deploy the Gateway on each host that will run Sqoop1 CLI commands
7. Complete the wizard

Also see Cloudera Documentation (Deploy Sqoop Client Gateway):
http://www.cloudera.com/content/www/en-us/documentation/enterprise/latest/topics/cm_mc_sqoop1_client...

~~~~~

[Configure Sqoop1 Metastore Directory]
1. Execute [commands] to set up the Sqoop1 Metastore directory

[commands]
ssh root@metastore.cloudera.com
mkdir /var/lib/sqoop-metastore   (can be anywhere)
chown sqoop:sqoop /var/lib/sqoop-metastore

~~~~~

[Set Sqoop1 Safety Valve sqoop-site.xml ( metastore )]
1. Log in to Cloudera Manager
2. Navigate to: (Home > Cluster > Sqoop1 Client Gateway > Configuration)
3. Search for "Sqoop 1 Client Client Advanced Configuration Snippet (Safety Valve) for sqoop-conf/sqoop-site.xml"
4. Add the [xml properties] below
5. Save Changes
6. Redeploy the Sqoop1 Client Gateway

[xml properties]
<!-- START SQOOP1 CLIENT GATEWAY SQOOP-SITE.XML SAFETY VALVE CONFIGURATION -->
<property>
  <name>sqoop.metastore.client.autoconnect.url</name>
  <value>jdbc:hsqldb:hsql://metastore1.cloudera.com:16000/sqoop</value>
  <description>THIS TELLS SQOOP1 WHERE TO GO TO CONNECT TO THE SHARED METASTORE</description>
</property>
<property>
  <name>sqoop.metastore.server.location</name>
  <value>/var/lib/sqoop-metastore/metastore.db</value>
  <description>THIS TELLS SQOOP1 THE LINUX PATH AND METASTORE NAME FOR THE DATABASE</description>
</property>
<property>
  <name>sqoop.metastore.server.port</name>
  <value>16000</value>
  <description>THIS IS THE LISTEN PORT FOR HSQLDB</description>
</property>
<property>
  <name>sqoop.metastore.client.record.password</name>
  <value>false</value>
  <description>IF TRUE, SAVING PASSWORDS IN THE METASTORE IS ALLOWED</description>
</property>
<property>
  <name>sqoop.metastore.client.enable.autoconnect</name>
  <value>true</value>
  <description>IF TRUE, SQOOP WILL USE THE LOCAL METASTORE WHEN NO OTHER METASTORE ARGUMENTS ARE PROVIDED</description>
</property>
<!-- END SQOOP1 CLIENT GATEWAY SQOOP-SITE.XML SAFETY VALVE CONFIGURATION -->

~~~~~

[Start Sqoop1 Metastore Java Process]
1. Execute [commands] to start the Sqoop1 Metastore process

[commands]
ssh root@metastore.cloudera.com
sudo -u sqoop nohup sqoop-metastore &
ps -aux | egrep -i metastore

Output:
root      3021  0.1  0.1 169388  3024 pts/1    S    01:49   0:00 sudo -u sqoop sqoop-metastore
sqoop     3022 11.8  3.9 1535656 75732 pts/1   Sl   01:49   0:02 /usr/java/jdk1.7.0_67-cloudera/bin/java -Xmx1000m -Dhadoop.log.dir=/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=/opt/cloudera/parcels/GPLEXTRAS-5.7.1-1.cdh5.7.1.p0.11/lib/hadoop/lib/native:::/opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,NullAppender org.apache.sqoop.Sqoop metastore
root      3215  0.0  0.0 101012   860 pts/1    S+   01:49   0:00 egrep -i metastore

~~~~~

[Stop Sqoop1 Metastore Java Process]
1. Execute [commands] to stop the Sqoop1 Metastore

[commands]
ssh root@metastore.cloudera.com
sudo -u sqoop sqoop-metastore --shutdown
ps -aux | egrep -i metastore

Hope this helps.

Regards
Nitish
						
					
12-03-2018 01:33 AM
1 Kudo
Hi,

You can create Sqoop saved jobs, which will store the last value in the embedded metastore database.

Link: https://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html#_saved_jobs

Kindly go through the link above; it will help you achieve this.

Regards
Nitish
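A minimal saved-job sketch based on the Saved Jobs section linked above (the connect string, table, and column names are placeholders):

```shell
# Create a saved job; the metastore remembers the last --check-column value.
sqoop job --create daily_orders -- import \
  --connect jdbc:mysql://db.example.com/mydb \
  --username myuser -P \
  --table ORDERS \
  --incremental append \
  --check-column id \
  --last-value 0 \
  --target-dir /user/myuser/orders

# Each run resumes from the stored last-value and updates it on success.
sqoop job --exec daily_orders
```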
						
					
12-02-2018 08:55 PM
Hi,

You can use double quotes or brackets around the table name that you want to import.

Coming to the HDFS error: you can change the directory name by using --target-dir "<any other name>".

Eventually all the data goes into the Hive table, and the location of that table will be "/user/hive/warehouse/database/tablename".

The target dir is just a temporary path; the data is finally moved to the Hive path. So to overcome this error you can give any correctly formatted, unused path as the target-dir.

Regards
Nitish
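For example (all names below are placeholders), any free HDFS path works as the temporary target dir when the final destination is a Hive table:

```shell
# --target-dir here is only a staging path; with --hive-import the data
# ends up under the Hive warehouse location for mydb.mytable.
sqoop import \
  --connect jdbc:mysql://db.example.com/mydb \
  --username myuser -P \
  --table MYTABLE \
  --target-dir /user/myuser/staging/mytable \
  --hive-import --hive-table mydb.mytable
```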
						
					
11-28-2018 10:38 AM
Hi,

There are 2 options:

1. Sqoop import with --hive-import (one-step Hive import)
2. Sqoop import to HDFS, then create a Hive table on top of it

Either option will reach the desired result.

Let me know if you have any questions.

Regards
Nitish
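The two options might look like this (the connect string, table, and paths are placeholders):

```shell
# Option 1: one-step Hive import; Sqoop creates and loads the Hive table.
sqoop import --connect jdbc:mysql://db.example.com/mydb --username myuser -P \
  --table ORDERS --hive-import --hive-table default.orders -m 1

# Option 2: import to HDFS first, then define a Hive table over the files.
sqoop import --connect jdbc:mysql://db.example.com/mydb --username myuser -P \
  --table ORDERS --target-dir /user/myuser/orders \
  --fields-terminated-by ',' -m 1
# Then, in Hive:
#   CREATE EXTERNAL TABLE orders (id INT, ...) ROW FORMAT DELIMITED
#   FIELDS TERMINATED BY ',' LOCATION '/user/myuser/orders';
```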
						
					
11-28-2018 09:47 AM
Hi,

Thank you for the update.

The reason it is failing is that you haven't passed the credentials for the Hive Metastore, which are required to connect to HMS. You have passed the hive-site.xml file, which contains the address of your HMS backend, but to connect to HMS the workflow has to pass a <credentials> tag so that a delegation token for HMS can be obtained and used for connectivity.

Link: https://oozie.apache.org/docs/4.2.0/DG_ActionAuthentication.html

I would request you to go through this link, add <credentials> to the workflow as mentioned there, and reference that credential from the action.

Once you add this, try your workflow again; it should run fine.

Regards
Nitish
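Based on the ActionAuthentication page linked above, the addition might look like this (the metastore URI and Kerberos principal are placeholders for your cluster's values):

```xml
<!-- Declare an HCatalog (HMS) credential in the workflow -->
<credentials>
  <credential name="hms_creds" type="hcat">
    <property>
      <name>hcat.metastore.uri</name>
      <value>thrift://metastore.example.com:9083</value>
    </property>
    <property>
      <name>hcat.metastore.principal</name>
      <value>hive/_HOST@EXAMPLE.COM</value>
    </property>
  </credential>
</credentials>

<!-- Reference the credential from the action so the token is fetched -->
<action name="sqoop-node" cred="hms_creds">
  <!-- sqoop action body unchanged -->
</action>
```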
						
					