Member since: 08-08-2016
Posts: 43
Kudos Received: 32
Solutions: 8
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4653 | 09-22-2017 09:10 PM
 | 2998 | 06-16-2017 11:29 AM
 | 2380 | 06-14-2017 09:27 PM
 | 3305 | 02-28-2017 03:51 PM
 | 1020 | 11-02-2016 02:00 PM
12-07-2016
04:37 AM
@Rene Sluiter - can you check whether the jar is in the share folder? ls /usr/share/java/mysql-connector-java.jar
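If the jar is missing, a minimal sketch for getting it in place (assuming a RHEL/CentOS host and the stock distro package):

# Install the MySQL JDBC connector from the distro repo, then verify the jar exists
yum install mysql-connector-java
ls -l /usr/share/java/mysql-connector-java.jar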
11-02-2016
02:00 PM
@Peter Coates - look for the parameters fs.s3a.multipart.threshold and fs.s3a.multipart.size
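For illustration, one hypothetical way to override both parameters for a single job from the command line (the byte values and the s3a bucket/paths here are assumptions, not recommendations):

# Raise the multipart threshold to 128 MB and the part size to 64 MB for this run only
hadoop distcp -Dfs.s3a.multipart.threshold=134217728 -Dfs.s3a.multipart.size=67108864 /data/src s3a://mybucket/dest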
09-20-2016
02:50 PM
It might be due to a network connectivity issue. Please check the network configuration for packet loss.
09-20-2016
10:58 AM
1 Kudo
You must supply the generic arguments -conf, -D, and so on after the tool name but before any tool-specific arguments (such as --connect). Note that generic Hadoop arguments are preceded by a single dash character (-), whereas tool-specific arguments start with two dashes (--), unless they are single-character arguments such as -P. https://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html#_using_generic_and_specific_arguments
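A quick sketch of that ordering (the connect string and table name are hypothetical):

# Generic -D arguments come right after the tool name; tool-specific -- arguments follow
sqoop import -Dmapreduce.job.queuename=default \
  --connect jdbc:mysql://db.example.com/sales --table orders -P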
09-20-2016
10:54 AM
6 Kudos
@Gayathri Reddy G - pass generic arguments such as -D immediately after sqoop job, e.g. sqoop job -Dhadoop.security.credential.provider.path=jceks... The general syntax is sqoop-job (generic-args) (job-args) [-- [subtool-name] (subtool-args)]
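As a hedged sketch (the jceks path, password alias, connect string, and table below are all hypothetical):

# The generic -D argument goes immediately after 'sqoop job', before the job-specific arguments
sqoop job -Dhadoop.security.credential.provider.path=jceks://hdfs/user/sqoop/mydb.jceks \
  --create myjob -- import \
  --connect jdbc:mysql://db.example.com/sales --table orders \
  --password-alias mydb.password.alias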
09-12-2016
04:22 PM
2 Kudos
You have to specify 'MM' for the month; 'mm' is for the minutes.
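For example, with Java SimpleDateFormat patterns (which Hive and Sqoop rely on), the difference is easy to demonstrate; the timestamp in the query below is just an illustration:

# 'MM' extracts the month (09), 'mm' extracts the minutes (22)
hive -e "SELECT from_unixtime(unix_timestamp('2016-09-12 04:22', 'yyyy-MM-dd HH:mm'), 'MM'),
                from_unixtime(unix_timestamp('2016-09-12 04:22', 'yyyy-MM-dd HH:mm'), 'mm');"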
09-11-2016
09:38 PM
5 Kudos
Using Sqoop with MySQL as the metastore

To set up MySQL for use with Sqoop:

1. On the Sqoop Server host, install the connector.

   RHEL/CentOS/Oracle Linux:
   yum install mysql-connector-java

   SLES:
   zypper install mysql-connector-java

2. Confirm that the .jar is in the Java share directory:

   ls /usr/share/java/mysql-connector-java.jar

3. Make sure the .jar file has the appropriate permissions (644).

4. Create a user for Sqoop and grant it permissions. For example, using the MySQL database admin utility:

   # mysql -u root -p
   CREATE USER '<SQOOPUSER>'@'%' IDENTIFIED BY '<SQOOPPASSWORD>';
   GRANT ALL PRIVILEGES ON *.* TO '<SQOOPUSER>'@'%';
   CREATE USER '<SQOOPUSER>'@'localhost' IDENTIFIED BY '<SQOOPPASSWORD>';
   GRANT ALL PRIVILEGES ON *.* TO '<SQOOPUSER>'@'localhost';
   CREATE USER '<SQOOPUSER>'@'<SQOOPSERVERFQDN>' IDENTIFIED BY '<SQOOPPASSWORD>';
   GRANT ALL PRIVILEGES ON *.* TO '<SQOOPUSER>'@'<SQOOPSERVERFQDN>';
   FLUSH PRIVILEGES;

   where <SQOOPUSER> is the Sqoop user name, <SQOOPPASSWORD> is the Sqoop user password, and <SQOOPSERVERFQDN> is the fully qualified domain name of the Sqoop Server host.

5. Configure sqoop-site.xml to create the sqoop database and load the Sqoop Server database schema:

   <configuration>
     <property>
       <name>sqoop.metastore.client.enable.autoconnect</name>
       <value>true</value>
     </property>
     <property>
       <name>sqoop.metastore.client.autoconnect.url</name>
       <value>jdbc:mysql://<<MYSQLHOSTNAME>>/sqoop?createDatabaseIfNotExist=true</value>
     </property>
     <property>
       <name>sqoop.metastore.client.autoconnect.username</name>
       <value><SQOOPUSER></value>
     </property>
     <property>
       <name>sqoop.metastore.client.autoconnect.password</name>
       <value><SQOOPPASSWORD></value>
     </property>
     <property>
       <name>sqoop.metastore.client.record.password</name>
       <value>true</value>
     </property>
     <property>
       <name>sqoop.metastore.server.location</name>
       <value>/usr/lib/sqoop/metastore/</value>
     </property>
     <property>
       <name>sqoop.metastore.server.port</name>
       <value>16000</value>
     </property>
   </configuration>

6. Execute the following command to create the initial database and tables:

   sqoop job --list

   If you get any error or exception, pre-load the Sqoop tables with the mandatory values:

   mysql -u <SQOOPUSER> -p
   USE <SQOOPDATABASE>;
   -- Insert the following row
   INSERT INTO SQOOP_ROOT VALUES( NULL, 'sqoop.hsqldb.job.storage.version', '0' );

   where <SQOOPUSER> is the Sqoop user name and <SQOOPDATABASE> is the Sqoop database name.

7. Execute the command one more time to create all the required Sqoop internal meta tables:

   sqoop job --list

Once all the necessary Sqoop tables are created, sqoop job will use the metastore for Sqoop job execution.
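Once the metastore is in place, a quick sanity check is to save a job and list it back; the job name, connect string, and table below are hypothetical:

# With autoconnect configured in sqoop-site.xml, saved jobs land in the MySQL metastore
sqoop job --create test_job -- import --connect jdbc:mysql://db.example.com/sales --table orders -m 1
sqoop job --list            # test_job should now appear
sqoop job --show test_job   # inspect the stored definition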
09-01-2016
03:05 PM
@Jon Roberts Could you please elaborate on how external tables support secured clusters? I'm not sure how HAWQ handles an HDFS write to a different secured Hadoop cluster using a writable external table. Thanks in advance.
09-01-2016
02:49 PM
@Jon Roberts Sqoop can be run in parallel based on a split-by column (e.g. an id column) or by externally providing the number of mappers. In most places HAWQ will be managed by a different team, so creating the external table involves a lot of process changes. I'm not sure how HAWQ will handle the HDFS write in the case of a secured cluster.
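For reference, a hedged example of both parallelism knobs mentioned above (the connect string, table, and target path are hypothetical):

# Split the source table on the id column across 8 parallel mappers
sqoop import --connect jdbc:mysql://db.example.com/sales --table orders \
  --split-by id -m 8 --target-dir /data/orders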