08-31-2016 10:32 PM
@Pengqing Bao I haven't run into that error yet. Maybe something is up with the schema? If you import the table to HDFS, are you able to load it into your Hive table and query it without issues?
08-29-2016 06:14 AM
Overview
There are a few different options for importing data from Teradata into Hadoop:
Sqoop and the Teradata JDBC driver (documentation)
Hortonworks Connector for Teradata (documentation)
Teradata Connector for Hadoop (TDCH) (download README here or get the PDF doc)
This article will explore examples of each of the three options above along with some of their pros and cons. The aim is to help you understand when to use each option, keeping the following in mind:
Goal is to create a schema which is as close to Teradata as possible.
Hive tables will usually end up as ORC but may need intermediate tables stored as TEXTFILE. The examples below use TEXTFILE.
I have not tested all possible column types. If you have more information, please let me know.
Versions tested with: HDP 2.3.2, Sqoop 1.4.6, TDCH 1.4.1 or 1.4.4.
All of the examples use the following common environment variables, which you would define at the top of a bash script. The variables are plugged into the various commands below:
HADOOP_CLASSPATH=$(hcat -classpath)
export HADOOP_CLASSPATH
SQOOP_HOME=/usr/hdp/current/sqoop-client
TDUSER=myuser
#TDPASS=****** <-- use an env var instead?
TDHOST=mytdhost
TDDB=METRICS
TDTABLE=ERD
HIVEDB=thirty_day_tables
HIVETABLE=SQOOP_ERD
JDBCURL=jdbc:teradata://$TDHOST/DATABASE=$TDDB
TDCHJAR=/usr/lib/tdch/1.4/lib/teradata-connector-1.4.4.jar
TDCHJARS=$TDCHJAR,/usr/lib/tdch/1.4/lib/tdgssconfig.jar,/usr/lib/tdch/1.4/lib/terajdbc4.jar
TDWHERE="STATUS IN ('A','B','C') and GROUP = 'USA'"
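The import commands below also reference ${LIB_JARS}, which is not defined above; a plausible definition (an assumption on my part — TDCH setups typically put the Hive client jars on the classpath) would be:
# Assumption: LIB_JARS is not defined elsewhere in this post; TDCH usually
# needs the Hive client jars, so comma-join everything in the Hive lib dir
HIVE_LIB=/usr/hdp/current/hive-client/lib
LIB_JARS=$(echo $HIVE_LIB/*.jar | tr ' ' ',')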
Sqoop and the Teradata JDBC driver
Import schema and data from Teradata to Hive using plain sqoop and JDBC.
Required components:
Sqoop (comes with HDP out of the box)
Teradata JDBC driver (download from Teradata)
Example command:
sqoop import \
-libjars ${LIB_JARS},${TDCHJARS} \
--driver com.teradata.jdbc.TeraDriver \
--connect $JDBCURL \
--table $TDTABLE \
--hive-table ${HIVEDB}.${HIVETABLE} \
--where "${TDWHERE}" \
--username $TDUSER \
--password $TDPASS \
--map-column-java EFF_TSP=String \
--num-mappers 1 \
--map-column-hive EFF_TSP=STRING \
--hive-import
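Once the import finishes, a quick sanity check from the shell (a minimal sketch, assuming the hive CLI is available on the node) could be:
# confirm the row count and eyeball a few records
hive -e "SELECT COUNT(*) FROM ${HIVEDB}.${HIVETABLE};"
hive -e "SELECT * FROM ${HIVEDB}.${HIVETABLE} LIMIT 5;"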
Pros
This only requires the Teradata JDBC driver to be installed, so it is easy to get started with.
Schema can be created for you automatically (although with issues... see cons)
Cons
Plain JDBC doesn't have some of the smarts that are built into TDCH.
Some field types are not supported and have to be mapped. For example, TIMESTAMP and DATE fields in Teradata get mapped to Hive STRING (see the sketch below for converting them back).
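Since dates and timestamps land as STRING, here is a hedged sketch of recovering the types when copying the TEXTFILE staging table into ORC (the _ORC table name is mine, and only EFF_TSP and STATUS appear elsewhere in this article, so treat the column list as hypothetical):
hive -e "CREATE TABLE ${HIVEDB}.${HIVETABLE}_ORC STORED AS ORC AS
SELECT CAST(EFF_TSP AS TIMESTAMP) AS EFF_TSP,  -- string back to a real TIMESTAMP
       STATUS                                  -- add the table's remaining columns here
FROM ${HIVEDB}.${HIVETABLE};"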
Hortonworks Connector for Teradata
Import data from Teradata to Hive using sqoop and the Hortonworks Connector for Teradata. The Hortonworks Connector for Teradata wraps TDCH.
Required components:
Sqoop (comes with HDP out of the box)
Hortonworks Connector for Teradata (download addon)
Comes with version 1.4.1 of TDCH and the Teradata JDBC drivers, installed into SQOOP_HOME/lib.
If you want the schema created, it is a separate command:
sqoop create-hive-table \
-libjars ${LIB_JARS} \
--connect $JDBCURL \
--connection-manager org.apache.sqoop.teradata.TeradataConnManager \
--username $TDUSER \
--password $TDPASS \
--table $TDTABLE \
--map-column-hive EFF_TSP=STRING \
--hive-table ${HIVEDB}.${HIVETABLE}
Example command (requires the schema to have been created in Hive already):
sqoop import \
-libjars ${LIB_JARS} \
-Dtdch.input.teradata.conditions="${TDWHERE}" \
--connect $JDBCURL \
--connection-manager org.apache.sqoop.teradata.TeradataConnManager \
--username $TDUSER \
--password $TDPASS \
--table $TDTABLE \
--hive-import \
--hive-table ${HIVEDB}.${HIVETABLE} \
--num-mappers 1
Pros
Use sqoop but get some of the smarts that come with TDCH.
Can create the Hive schema using sqoop create-hive-table.
Cons
Can't create the Hive schema in one step, but it is possible in two: create-hive-table and then import.
Dates and timestamps are converted to STRING automatically or throw an error, which requires using --map-column-hive.
Some Sqoop options are not supported in this mode, like --where and --hive-overwrite. If the equivalent is supported in TDCH, you have to pass the underlying Java property (as with -Dtdch.input.teradata.conditions above), which can get messy.
The bundled version of TDCH is out of date (version 1.4.1 is included right now; the newest available is 1.4.4).
Teradata Connector for Hadoop (TDCH)
Import data from Teradata to Hive using TDCH.
Required components:
Teradata Connector for Hadoop (TDCH)
Example command (requires the schema to have been created in Hive already):
hadoop jar $TDCHJAR com.teradata.connector.common.tool.ConnectorImportTool \
-libjars $LIB_JARS \
-url ${JDBCURL} \
-username $TDUSER \
-password $TDPASS \
-jobtype hive \
-fileformat textfile \
-nummappers 1 \
-sourcetable ${TDTABLE} \
-sourceconditions "${TDWHERE}" \
-targetdatabase ${HIVEDB} \
-targettable ${HIVETABLE}
Pros
TDCH supports a bunch of different methods for getting data in and out of Teradata (see the sketch below).
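A hedged sketch of choosing the extraction method with TDCH's -method flag (split.by.hash, split.by.value, split.by.partition, and split.by.amp are the import methods in the 1.4 README — verify against your version; the HDFS target path here is made up):
hadoop jar $TDCHJAR com.teradata.connector.common.tool.ConnectorImportTool \
-libjars $LIB_JARS \
-url ${JDBCURL} \
-username $TDUSER \
-password $TDPASS \
-jobtype hdfs \
-fileformat textfile \
-method split.by.amp \
-nummappers 4 \
-sourcetable ${TDTABLE} \
-targetpaths /tmp/${TDTABLE}_staging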
Cons
Can't use TDCH to create the Hive schema automatically, so the target table has to exist first (see the sketch below).
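A minimal sketch of pre-creating the target table by hand before running the TDCH import (only EFF_TSP and STATUS are mentioned in this article, so the column list is mostly hypothetical):
hive -e "CREATE TABLE ${HIVEDB}.${HIVETABLE} (
  EFF_TSP STRING,  -- Teradata TIMESTAMP staged as STRING
  STATUS  STRING   -- hypothetical; replace with the real column list
) STORED AS TEXTFILE;"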
08-11-2016 09:55 PM
For Postgres, I needed slightly different steps. Start psql and create the database and user:
psql
create database grafana;
create user grafana with password 'grafana';
GRANT ALL PRIVILEGES ON DATABASE grafana to grafana;
Connect to the grafana database:
\c grafana
Create the session table:
CREATE TABLE session (
key CHAR(16) NOT NULL,
data bytea,
expiry INT NOT NULL,
PRIMARY KEY (key)
);
Edited /var/lib/pgsql/data/pg_hba.conf to add the following lines:
host    all    grafana    0.0.0.0/0    trust
local   all    grafana    trust
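Note that pg_hba.conf changes only take effect after PostgreSQL reloads its config; on the CentOS/RHEL layout implied by the path above, something like:
sudo systemctl reload postgresql    # or: sudo service postgresql reload on older init systems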
In Ambari, under “Advanced ams-grafana-ini” the content was changed to use Postgres:
#################################### Database ####################################
[database]
# Either "mysql", "postgres" or "sqlite3", it's your choice
type = postgres
host = YOURSERVER.EXAMPLE.COM:5432
name = grafana
user = grafana
password = grafana
# For "postgres" only, either "disable", "require" or "verify-full"
ssl_mode = disable
# For "sqlite3" only, path relative to data_path setting
;path = grafana.db
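After saving the config, restart the Ambari Metrics Grafana component from Ambari so it comes back up against the Postgres settings above.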
Hope this helps someone!