Member since: 05-19-2016
Posts: 93
Kudos Received: 17
Solutions: 2
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 5286 | 01-30-2017 07:34 AM |
 | 3645 | 09-14-2016 10:31 AM |
09-21-2016 11:08 AM
@Pierre Villard This works with the -Dmapreduce.job.user.classpath.first=true option. Thanks a lot.
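For anyone who hits the same problem: this is a generic Hadoop option, so it has to come right after sqoop import, before the tool-specific arguments. A sketch of how it slots into the import from my question below (hosts and credentials masked as before):

sqoop import -Dmapreduce.job.user.classpath.first=true --connection-manager org.apache.sqoop.teradata.TeradataConnManager --connect jdbc:teradata://***.***.***.***/DATABASE=***** --username ***** --password **** --table mytable --target-dir /user/aps/test2 --as-parquetfile -m 1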
09-21-2016 07:05 AM
I am trying to execute the Sqoop command below:

sqoop import --connection-manager org.apache.sqoop.teradata.TeradataConnManager --connect jdbc:teradata://***.***.***.***/DATABASE=***** --username ***** --password **** --table mytable --target-dir /user/aps/test2 --as-parquetfile -m 1

Output:

-rw-r--r-- 3 ****** hdfs 0 2016-09-21 12:25 /user/aps/test2/_SUCCESS
-rw-r--r-- 3 ****** hdfs 18 2016-09-21 12:25 /user/aps/test2/part-m-00000

The output above is not in Parquet format. If I use com.teradata.jdbc.TeraDriver it works, but I have to use org.apache.sqoop.teradata.TeradataConnManager for the connection. Please help.
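For reference, a quick way to check whether a part file is really Parquet (a Parquet file starts and ends with the 4-byte magic PAR1; path taken from the run above):

hdfs dfs -cat /user/aps/test2/part-m-00000 | head -c 4
# a genuine Parquet file prints: PAR1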
Labels:
- Apache Sqoop
09-15-2016 11:12 AM
The Sqoop commands below are working for me.

For Snappy:

sqoop import -Dmapreduce.output.fileoutputformat.compress=true -Dmapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec --connection-manager org.apache.sqoop.teradata.TeradataConnManager --connect jdbc:teradata://**.***.***.***/DATABASE=****** --username ****** --password **** --table mytable --target-dir /user/aps/test95 -m 1

For BZip2:

sqoop import -Dmapreduce.output.fileoutputformat.compress=true -Dmapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.BZip2Codec --connection-manager org.apache.sqoop.teradata.TeradataConnManager --connect jdbc:teradata://**.***.***.***/DATABASE=****** --username ****** --password **** --table mytable --target-dir /user/aps/test96 -m 1

For LZO:

sqoop import -Dmapreduce.output.fileoutputformat.compress=true -Dmapreduce.output.fileoutputformat.compress.codec=com.hadoop.compression.lzo.LzopCodec --connection-manager org.apache.sqoop.teradata.TeradataConnManager --connect jdbc:teradata://**.***.***.***/DATABASE=****** --username ****** --password **** --table mytable --target-dir /user/aps/test98 -m 1
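To confirm the codecs actually kicked in, listing the target directories is enough; a sketch (directories from the runs above; each codec appends its default extension to the part files):

hdfs dfs -ls /user/aps/test95 /user/aps/test96 /user/aps/test98
# expected part files: part-m-00000.snappy, part-m-00000.bz2, part-m-00000.lzo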
09-15-2016 10:54 AM
@Nitin Shelke This works after adding this configuration. Thanks a lot.
09-15-2016 09:59 AM
@Nitin Shelke I have already tried org.apache.hadoop.io.compress.SnappyCodec; it is not working for me.

Sqoop command:

sqoop import --connection-manager org.apache.sqoop.teradata.TeradataConnManager --connect jdbc:teradata://**.***.***.***/DATABASE=****** --username ****** --password **** --table mytable --target-dir /user/aps/test85 --compress --compression-codec org.apache.hadoop.io.compress.SnappyCodec -m 1

Output:

-rw-r--r-- 3 ****** hdfs 0 2016-09-15 13:39 /user/aps/test85/_SUCCESS
-rw-r--r-- 3 ****** hdfs 18 2016-09-15 13:39 /user/aps/test85/part-m-00000

Please help.
09-15-2016 08:28 AM
1 Kudo
I am trying to import data from Teradata to HDFS using both the Teradata connection manager and the JDBC driver. With the JDBC driver it works fine, but with the Teradata connection manager it does not work as expected, and I am not getting any error. Below are the Sqoop commands.

Using the JDBC driver:

sqoop import --driver com.teradata.jdbc.TeraDriver --connect jdbc:teradata://**.***.***.***/DATABASE=****** --username ****** --password **** --table mytable --target-dir /user/aps/test87 --compress -m 1

Output:

-rw-r--r-- 3 ***** hdfs 0 2016-09-15 13:45 /user/aps/test87/_SUCCESS
-rw-r--r-- 3 ***** hdfs 38 2016-09-15 13:45 /user/aps/test87/part-m-00000.gz

Using the Teradata connection manager:

sqoop import --connection-manager org.apache.sqoop.teradata.TeradataConnManager --connect jdbc:teradata://**.***.***.***/DATABASE=****** --username ****** --password **** --table mytable --target-dir /user/aps/test88 --compress -m 1

Output:

-rw-r--r-- 3 ****** hdfs 0 2016-09-15 13:46 /user/aps/test88/_SUCCESS
-rw-r--r-- 3 ****** hdfs 18 2016-09-15 13:46 /user/aps/test88/part-m-00000

With the Teradata connection manager the output should also be a .gz file. Am I doing something wrong? I am facing the same issue with Snappy, Parquet, BZip2, and Avro. Please help.
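A quick way to tell whether a part file is really gzip-compressed regardless of its name (a gzip stream always starts with the magic bytes 1f 8b; path from the run above):

hdfs dfs -cat /user/aps/test88/part-m-00000 | head -c 2 | od -An -tx1
# gzip output prints: 1f 8b; plain text prints the hex of its first two characters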
Labels:
- Apache Sqoop
09-15-2016 06:56 AM
@Steven O'Neill Thanks a lot 🙂. This is working for me.
09-14-2016 03:17 PM
Below is from the YARN log:

2016-09-14 15:49:29,345 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.NoSuchMethodError: org.apache.avro.generic.GenericData.createDatumWriter(Lorg/apache/avro/Schema;)Lorg/apache/avro/io/DatumWriter;
at org.apache.avro.mapreduce.AvroKeyRecordWriter.<init>(AvroKeyRecordWriter.java:53)
at org.apache.avro.mapreduce.AvroKeyOutputFormat$RecordWriterFactory.create(AvroKeyOutputFormat.java:78)
at org.apache.avro.mapreduce.AvroKeyOutputFormat.getRecordWriter(AvroKeyOutputFormat.java:104)
at com.teradata.connector.hdfs.HdfsAvroOutputFormat.getRecordWriter(HdfsAvroOutputFormat.java:49)
at com.teradata.connector.common.ConnectorOutputFormat$ConnectorFileRecordWriter.<init>(ConnectorOutputFormat.java:89)
at com.teradata.connector.common.ConnectorOutputFormat.getRecordWriter(ConnectorOutputFormat.java:38)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:647)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:767)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
2016-09-14 15:49:29,351 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping MapTask metrics system...
2016-09-14 15:49:29,351 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system stopped.
2016-09-14 15:49:29,352 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system shutdown complete.
End of LogType:syslog
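For context, this NoSuchMethodError usually means an older Avro jar on the task classpath is shadowing the newer one the Avro output format needs. A sketch of the usual workaround, forcing the user-supplied jars ahead of the cluster's (this is the same option that resolved a similar classpath issue for me in the 09-21 reply above; the target directory here is just a placeholder):

sqoop import -Dmapreduce.job.user.classpath.first=true --connection-manager org.apache.sqoop.teradata.TeradataConnManager --connect jdbc:teradata://**.***.***.***/DATABASE=****** --username ****** --password **** --table mytable --target-dir /user/aps/test_avro --as-avrodatafile -m 1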
09-14-2016 11:35 AM
Below is the full stack trace.

16/09/14 15:49:10 INFO mapreduce.Job: Running job: job_1473774257007_0002
16/09/14 15:49:19 INFO mapreduce.Job: Job job_1473774257007_0002 running in uber mode : false
16/09/14 15:49:19 INFO mapreduce.Job: map 0% reduce 0%
16/09/14 15:49:22 INFO mapreduce.Job: Task Id : attempt_1473774257007_0002_m_000000_0, Status : FAILED
Error: org.apache.avro.generic.GenericData.createDatumWriter(Lorg/apache/avro/Schema;)Lorg/apache/avro/io/DatumWriter;
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
16/09/14 15:49:25 INFO mapreduce.Job: Task Id : attempt_1473774257007_0002_m_000000_1, Status : FAILED
Error: org.apache.avro.generic.GenericData.createDatumWriter(Lorg/apache/avro/Schema;)Lorg/apache/avro/io/DatumWriter;
16/09/14 15:49:29 INFO mapreduce.Job: Task Id : attempt_1473774257007_0002_m_000000_2, Status : FAILED
Error: org.apache.avro.generic.GenericData.createDatumWriter(Lorg/apache/avro/Schema;)Lorg/apache/avro/io/DatumWriter;
16/09/14 15:49:35 INFO mapreduce.Job: map 100% reduce 0%
16/09/14 15:49:36 INFO mapreduce.Job: Job job_1473774257007_0002 failed with state FAILED due to: Task failed task_1473774257007_0002_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
16/09/14 15:49:36 INFO mapreduce.Job: Counters: 12
Job Counters
Failed map tasks=4
Launched map tasks=4
Other local map tasks=3
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=8818
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=8818
Total vcore-seconds taken by all map tasks=8818
Total megabyte-seconds taken by all map tasks=18059264
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
16/09/14 15:49:36 INFO processor.TeradataInputProcessor: input postprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor starts at: 1473848376584
16/09/14 15:49:37 INFO processor.TeradataInputProcessor: input postprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor ends at: 1473848376584
16/09/14 15:49:37 INFO processor.TeradataInputProcessor: the total elapsed time of input postprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor is: 0s
16/09/14 15:49:37 INFO teradata.TeradataSqoopImportHelper: Teradata import job completed with exit code 1
16/09/14 15:49:37 ERROR tool.ImportTool: Error during import: Import Job failed

Schema:

{
  "type" : "record",
  "namespace" : "avronamespace",
  "name" : "Employee",
  "fields" : [
    { "name" : "Id" , "type" : "string" },
    { "name" : "Name" , "type" : "string" }
  ]
}

My other concern is why an Avro schema file is required here at all. I am trying to import data from Teradata to HDFS using the Avro file format. Please help.
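As a sanity check, the schema JSON itself can be validated with the avro-tools jar (assuming the schema above is saved as Employee.avsc; the jar version in the filename is just an example):

java -jar avro-tools-1.7.7.jar compile schema Employee.avsc .
# writes avronamespace/Employee.java only if the schema parses cleanly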
09-14-2016 10:43 AM
Thanks a lot. This is working for lowercase.