Member since: 04-13-2016
Posts: 422
Kudos Received: 150
Solutions: 55
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1937 | 05-23-2018 05:29 AM |
| | 4972 | 05-08-2018 03:06 AM |
| | 1686 | 02-09-2018 02:22 AM |
| | 2716 | 01-24-2018 08:37 PM |
| | 6175 | 01-24-2018 05:43 PM |
08-27-2016
02:23 AM
@mqureshi I have seen this link, but it doesn't provide much information about the problem.
08-27-2016
02:22 AM
@Frank Lu This doesn't solve my problem. I'm even unable to execute the sqoop command. I'm using the 1.4.1 Teradata connector.
08-27-2016
02:19 AM
@Kit Menke Can you please share all the findings you have documented? Thanks in advance.
08-25-2016
08:51 PM
@Frank Lu Sorry, I didn't get you. Are you saying it's working or not working? If it's working, can you please rewrite the above sqoop command and let me know how to pass the create-hive-table parameter?
08-25-2016
06:39 PM
1 Kudo
Hi,

Agenda: When I sqoop data from Teradata to Hive, it should import the data and also create the table schema in Hive, matching the Teradata schema.

Issue: When I sqoop the data from a Teradata table to Hive, the data is successfully copied from Teradata into the database directory on HDFS, but the table schema is not created in Hive.

Below is my sqoop command:

sqoop import -libjars /usr/hdp/current/sqoop-server/lib/ --connect jdbc:teradata://example/Database=ABCDE --create-hive-table --connection-manager org.apache.sqoop.teradata.TeradataConnManager --create-hive-table --username user --password xxxxx --table cricket_location_store --split-by storeid --target-dir /apps/hive/warehouse/user.db/cricket_location_store

Below is the error message I'm getting:

16/08/25 15:58:40 ERROR teradata.TeradataSqoopImportHelper: Exception running Teradata import job
com.teradata.connector.common.exception.ConnectorException: Import Hive table's column schema is missing
    at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:140)
    at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:56)
    at org.apache.sqoop.teradata.TeradataSqoopImportHelper.runJob(TeradataSqoopImportHelper.java:370)
    at org.apache.sqoop.teradata.TeradataConnManager.importTable(TeradataConnManager.java:504)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:148)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:184)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:226)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:235)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:244)
16/08/25 15:58:40 INFO teradata.TeradataSqoopImportHelper: Teradata import job completed with exit code 1
16/08/25 15:58:40 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Exception running Teradata import job
    at org.apache.sqoop.teradata.TeradataSqoopImportHelper.runJob(TeradataSqoopImportHelper.java:373)
    at org.apache.sqoop.teradata.TeradataConnManager.importTable(TeradataConnManager.java:504)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:148)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:184)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:226)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:235)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:244)
Caused by: com.teradata.connector.common.exception.ConnectorException: Import Hive table's column schema is missing
    at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:140)
    at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:56)
    at org.apache.sqoop.teradata.TeradataSqoopImportHelper.runJob(TeradataSqoopImportHelper.java:370)
    ... 9 more

Any help is highly appreciated; thanks in advance.
Labels:
- Apache Hive
- Apache Sqoop
08-23-2016
09:18 PM
1 Kudo
@Rajinder Kaur
Yes, it's a valid link. You can find all the Hortonworks tutorials here: http://hortonworks.com/tutorials/
08-18-2016
06:59 PM
1 Kudo
Run hdfs dfsadmin -setBalancerBandwidth 100000000 on all the DataNodes, and on the client we ran the command below:

hdfs balancer -Dfs.defaultFS=hdfs://<NN_HOSTNAME>:8020 -Ddfs.balancer.movedWinWidth=5400000 -Ddfs.balancer.moverThreads=1000 -Ddfs.balancer.dispatcherThreads=200 -Ddfs.datanode.balance.max.concurrent.moves=5 -Ddfs.balance.bandwidthPerSec=100000000 -Ddfs.balancer.max-size-to-move=10737418240 -threshold 5

This will balance your HDFS data between DataNodes faster; run it when the cluster is not heavily used. Hope this helps you.
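Since the balancer can run for hours, a small sketch of how it is commonly launched so it survives the SSH session and can be monitored; the log path here is an arbitrary choice for illustration, not part of the original answer.

```bash
# Assumed sketch: run the balancer in the background and follow its progress.
# /tmp/balancer.log is just an example location.
nohup hdfs balancer -threshold 5 > /tmp/balancer.log 2>&1 &
tail -f /tmp/balancer.log
```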
08-13-2016
01:15 AM
@venkat v Can you please check the logs on that DataNode? Also, run hdfs dfsadmin -report to check whether the DataNode is really down or it's just an Ambari glitch.
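A quick sketch of both checks; the log path assumes a typical HDP layout and the hostname placeholder is ours, so adjust them for your cluster.

```bash
# Assumed HDP-style log location; substitute your DataNode's actual hostname.
tail -n 200 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-$(hostname -f).log

# Ask the NameNode for its view of the node; "Last contact" shows whether it is alive.
hdfs dfsadmin -report | grep -A 10 "<datanode_hostname>"
```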
08-11-2016
03:49 PM
@Sunile Manjee May I know whether you are trying to get the service accounts from LDAP or create them locally? If you are creating them locally, just specify the service accounts you want to use on the Misc tab during installation; Ambari will take care of the rest, such as making them part of the right groups. If you are getting them from LDAP, create the service accounts in LDAP and make sure they are part of the appropriate groups. By default, all service accounts are part of the hadoop group, and a few service accounts (such as ranger, spark, and hdfs) have their own groups; they should also be part of those groups. Example:

id ranger
uid=4728(ranger) gid=831(hadoop) groups=848(ranger),831(hadoop)
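For the local-account case, a rough sketch of creating such an account by hand so its groups match the example above; the names and the need to pre-create the groups are assumptions, since Ambari normally handles this for you.

```bash
# Assumed sketch: pre-create the groups, then add the service account with hadoop
# as its primary group and its own group as a supplementary one.
groupadd hadoop
groupadd ranger
useradd -g hadoop -G ranger ranger

# Verify the memberships look like the example in the post above.
id ranger
```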
08-11-2016
03:21 PM
2 Kudos
@Sunile Manjee I have used the same, and that won't be a problem. But make sure that the service account is bound to the appropriate groups. Example:

uid=221(hdfsuser) gid=831(hadoop) groups=347(hdfsgroup),831(hadoop)
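A small sketch of checking and correcting the group membership for a custom account like the one above; hdfsuser is just the example name from this post.

```bash
# Check the current memberships, add the hadoop group if it is missing
# (-a keeps the groups the account already has), then re-check.
id hdfsuser
usermod -a -G hadoop hdfsuser
id hdfsuser
```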