Member since: 09-25-2015
Posts: 23
Kudos Received: 108
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1690 | 09-08-2017 11:48 PM |
| | 3500 | 05-23-2017 10:16 PM |
| | 3209 | 04-28-2017 09:57 PM |
| | 2709 | 03-23-2017 12:53 AM |
09-08-2017
11:48 PM
5 Kudos
Found the answer to this. This was happening because yarn.timeline-service.entity-group-fs-store.group-id-plugin-classes in yarn-site.xml was blank. I updated the cluster with the correct config below and restarted the required services, and now it's working! yarn.timeline-service.entity-group-fs-store.group-id-plugin-classes = org.apache.tez.dag.history.logging.ats.TimelineCachePluginImpl
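For reference, this is how the property ends up in yarn-site.xml (a minimal sketch; on an Ambari-managed cluster set it through Ambari so the change is not overwritten):

```xml
<!-- yarn-site.xml: Tez plugin for the Timeline Server v1.5 entity-group file-system store -->
<property>
  <name>yarn.timeline-service.entity-group-fs-store.group-id-plugin-classes</name>
  <value>org.apache.tez.dag.history.logging.ats.TimelineCachePluginImpl</value>
</property>
```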
09-08-2017
11:41 PM
1 Kudo
That explains it, thanks!
09-08-2017
11:25 PM
6 Kudos
I was upgrading my HDP 2.5 cluster to 2.6 and noticed that hive.llap.io.enabled is set to false in my upgraded 2.6 cluster, whereas it is true by default on a freshly installed HDP 2.6 cluster.
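The effective value can be compared on both clusters from a Hive session (a minimal sketch; on an LLAP-enabled cluster this would typically be run through beeline against HiveServer2 Interactive):

```sql
-- Prints the current value of the property for this session
SET hive.llap.io.enabled;

-- Session-level override if needed; the cluster-wide default lives in hive-site.xml / Ambari
SET hive.llap.io.enabled=true;
```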
09-08-2017
11:11 PM
13 Kudos
I'm trying out some things on the Tez UI: I run a Tez example jar, get the relevant application ID and DAG ID, and check that the various Tez UI pages reflect the expected status. However, the Task Attempts tab in Tez View keeps displaying "no records available".
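For context, the kind of run referred to here looks roughly like this (a sketch; the jar path and example name are assumptions based on a typical HDP client layout):

```sh
# Run one of the bundled Tez examples so there is an application and DAG to inspect in Tez UI
# (the exact tez-examples jar location varies with the HDP version installed)
hadoop jar /usr/hdp/current/tez-client/tez-examples-*.jar orderedwordcount \
    /tmp/tez-wc-input /tmp/tez-wc-output

# The client output includes the YARN application ID (application_<timestamp>_<nnnn>);
# the corresponding DAG ID follows the pattern dag_<timestamp>_<nnnn>_1.
```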
Labels: Apache Tez
05-23-2017
10:16 PM
10 Kudos
@Prabhat Ratnala This worked for me. Two changes were required:
1. table_cols="`cat t_cols.txt`"
2. Remove the '\' before '${hiveconf:t_name}'.
The resulting command: hive --hiveconf t_name="`cat t_cols.txt`" -e 'create table leap_frog_snapshot.LINKED_OBJ_TRACKING (${hiveconf:t_name}) stored as orc tblproperties ("orc.compress"="SNAPPY");'
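Put together as a small script, it looks roughly like this (a sketch; t_cols.txt is assumed to hold a comma-separated column list such as "id int, name string", and this version reuses the table_cols variable from change 1):

```sh
#!/bin/bash
# Read the column definitions (e.g. "id int, name string") from a file
table_cols="$(cat t_cols.txt)"

# Pass them to Hive via --hiveconf; note there is no backslash before ${hiveconf:t_name},
# so Hive (not the shell) performs the substitution inside the single-quoted script.
hive --hiveconf t_name="$table_cols" -e '
  create table leap_frog_snapshot.LINKED_OBJ_TRACKING (${hiveconf:t_name})
  stored as orc
  tblproperties ("orc.compress"="SNAPPY");
'
```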
05-05-2017
11:56 PM
5 Kudos
@Raj B I am not sure there is a way to recover or repair this table.
Here's another thread which discusses a similar issue - https://community.hortonworks.com/questions/68669/csv-query-to-run-from-hivedefaultfileformat-is-orc.html
However, I can suggest two approaches to achieve this:
1. Import from the RDBMS into Hive in ORC format via HCatalog, which Sqoop supports. First create a Hive table stored as ORC, for example: $ hive -e "CREATE TABLE cust (id int, name string) STORED AS ORCFILE;"
Then, when running the Sqoop import to Hive, use the --hcatalog-database and --hcatalog-table options instead of the --hive-table option, as described in https://cwiki.apache.org/confluence/download/attachments/27361435/SqoopHCatIntegration-HadoopWorld2013.pptx (see the sketch after this list).
2. The same way you did the first time, use Sqoop to import the data into a temporary Hive managed table. Then create an ORC table in Hive ($ hive -e "CREATE TABLE cust (id int, name string) STORED AS ORCFILE;") and finally insert from the temporary table into the ORC table ($ hive -e "INSERT OVERWRITE TABLE mydrivers SELECT * FROM drivers;").
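A sketch of approach 1, putting the HCatalog options together (the JDBC URL, credentials, and table names below are placeholders, not values from this thread):

```sh
# Create the ORC-backed Hive table first
hive -e "CREATE TABLE cust (id int, name string) STORED AS ORCFILE;"

# Import straight into it through HCatalog instead of --hive-table
# (connection string, user, and source table are placeholders)
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username sqoop_user -P \
  --table CUST \
  --hcatalog-database default \
  --hcatalog-table cust \
  -m 1
```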
04-28-2017
09:57 PM
7 Kudos
@Karan Alang Looks like you are missing a semicolon (;) at the end of the query.
03-23-2017
12:53 AM
11 Kudos
Found this in the Hortonworks Teradata Connector support doc: "If you will run Avro jobs, download avro-mapred-1.7.4-hadoop2.jar and place it under $SQOOP_HOME/lib." I had two versions of the Avro jar in my $SQOOP_HOME/lib; after removing all of them except avro-mapred-1.7.4-hadoop2.jar, the import succeeded.
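In practice the cleanup looked roughly like this (a sketch; the 1.7.5 jar name comes from the --libjars list in the failing command below, and SQOOP_HOME is assumed to point at the Sqoop install):

```sh
# List the Avro jars Sqoop will put on the job classpath
ls "$SQOOP_HOME/lib"/avro*.jar

# Remove the conflicting copy, keeping only avro-mapred-1.7.4-hadoop2.jar
rm "$SQOOP_HOME/lib/avro-mapred-1.7.5-hadoop2.jar"

# If avro-mapred-1.7.4-hadoop2.jar is missing, download it and place it under $SQOOP_HOME/lib
```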
03-22-2017
10:23 PM
6 Kudos
I am trying to copy a table from Teradata to HDFS as Avro data files (using an Avro schema file) with Sqoop, but the import job is failing with the error below:
sqoop import --libjars "SQOOP_HOME/lib/avro-mapred-1.7.5-hadoop2.jar,SQOOP_HOME/lib/avro-mapred-1.7.4-hadoop2.jar,SQOOP_HOME/lib/paranamer-2.3.jar" --connect jdbc:teradata://xx.xx.xx.xxx/Database=xxxx --connection-manager org.apache.sqoop.teradata.TeradataConnManager --username xxx --password xxx --table xx --target-dir xx --as-avrodatafile -m 1 -- --usexview --accesslock --avroschemafile xx.avsc
INFO impl.YarnClientImpl: Submitted application application_1455051872611_0127
INFO mapreduce.Job: The url to track the job: http://teradata-sqoop-ks-re-sec-4.novalocal:8088/proxy/application_1455051872611_0127/
INFO mapreduce.Job: Running job: job_1455051872611_0127
INFO mapreduce.Job: Job job_1455051872611_0127 running in uber mode : false
INFO mapreduce.Job: map 0% reduce 0%
INFO mapreduce.Job: map 100% reduce 0%
INFO mapreduce.Job: Task Id : attempt_1455051872611_0127_m_000000_0, Status : FAILED
Error: org.apache.avro.generic.GenericData.createDatumWriter(Lorg/apache/avro/Schema;)Lorg/apache/avro/io/DatumWriter;
INFO mapreduce.Job: map 0% reduce 0%
INFO mapreduce.Job: Task Id : attempt_1455051872611_0127_m_000000_1, Status : FAILED
Error: org.apache.avro.generic.GenericData.createDatumWriter(Lorg/apache/avro/Schema;)Lorg/apache/avro/io/DatumWriter;
INFO mapreduce.Job: Task Id : attempt_1455051872611_0127_m_000000_2, Status : FAILED
Error: org.apache.avro.generic.GenericData.createDatumWriter(Lorg/apache/avro/Schema;)Lorg/apache/avro/io/DatumWriter;
INFO mapreduce.Job: map 100% reduce 0%
INFO mapreduce.Job: Job job_1455051872611_0127 failed with state FAILED due to: Task failed task_1455051872611_0127_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
.
.
.
INFO processor.TeradataInputProcessor: input postprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor starts at: 1455147607714
INFO processor.TeradataInputProcessor: input postprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor ends at: 1455147607714
INFO processor.TeradataInputProcessor: the total elapsed time of input postprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor is: 0s
INFO teradata.TeradataSqoopImportHelper: Teradata import job completed with exit code 1
ERROR tool.ImportTool: Error during import: Import Job failed
FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.NoSuchMethodError: org.apache.avro.generic.GenericData.createDatumWriter(Lorg/apache/avro/Schema;)Lorg/apache/avro/io/DatumWriter;
at org.apache.avro.mapreduce.AvroKeyRecordWriter.<init>(AvroKeyRecordWriter.java:53)
at org.apache.avro.mapreduce.AvroKeyOutputFormat$RecordWriterFactory.create(AvroKeyOutputFormat.java:78)
at org.apache.avro.mapreduce.AvroKeyOutputFormat.getRecordWriter(AvroKeyOutputFormat.java:104)
at com.teradata.connector.hdfs.HdfsAvroOutputFormat.getRecordWriter(HdfsAvroOutputFormat.java:49)
at com.teradata.connector.common.ConnectorOutputFormat$ConnectorFileRecordWriter.<init>(ConnectorOutputFormat.java:89)
at com.teradata.connector.common.ConnectorOutputFormat.getRecordWriter(ConnectorOutputFormat.java:38)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:647)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:767)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Labels: Apache Sqoop
03-07-2017
12:15 AM
1 Kudo
Also ensure all of the below ACID properties are set accordingly (see the sketch after this list):
- hive.support.concurrency = true
- hive.enforce.bucketing = true
- hive.exec.dynamic.partition.mode = nonstrict
- hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
- hive.compactor.initiator.on = true
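A minimal sketch of applying these in a Hive session (note that hive.txn.manager and hive.compactor.initiator.on are normally configured cluster-wide in hive-site.xml / Ambari rather than per session):

```sql
-- Session-level settings for transactional (ACID) table operations
SET hive.support.concurrency=true;
SET hive.enforce.bucketing=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;

-- The compactor initiator runs in the metastore; set this one in hive-site.xml, not per session:
-- hive.compactor.initiator.on=true
```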