Member since: 10-01-2018
Posts: 7
Kudos Received: 0
Solutions: 0
06-29-2019 02:16 PM
I am trying to exchange a partition from a staging database after merging the incremental data with the existing data, as below:

1. Created the staging table with a partition:

CREATE TABLE stg.customers_testcontrol_staging(
  customer_id bigint,
  customer_name string,
  customer_number string,
  status string,
  attribute_category string,
  attribute1 string,
  attribute2 string,
  attribute3 string,
  attribute4 string,
  attribute5 string)
PARTITIONED BY (source_name string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION '/apps/hive/warehouse/stg.db/customers_testcontrol_staging';

2. Inserted data into the above table after merging it with the base table data:

INSERT OVERWRITE TABLE stg.customers_testcontrol_staging PARTITION (source_name)
SELECT t1.*
FROM (SELECT * FROM base.customers WHERE source_name='ORACLE'
      UNION ALL
      SELECT * FROM external.customers_incremental_data) t1
JOIN (SELECT customer_id, source_name, max(updated_date) max_modified
      FROM (SELECT * FROM base.customers WHERE source_name='ORACLE'
            UNION ALL
            SELECT * FROM external.customers_incremental_data) t2
      GROUP BY customer_id, source_name) s
ON t1.customer_id = s.customer_id AND t1.source_name = s.source_name;

The primary keys of the table I am joining on are customer_id and source_name.

3. Exchange partition step:

ALTER TABLE base.customers EXCHANGE PARTITION (source_name = 'ORACLE') WITH TABLE stg.customers_testcontrol_staging;

But the exchange partition step fails with the exception:

Error: Error while compiling statement: FAILED: SemanticException [Error 10118]: Partition already exists [customers(source_name=ORACLE)]

I took the syntax from the Hive Confluence page. Is there anything I missed in the EXCHANGE PARTITION statement? Could anyone tell me what mistake I am making here and how I can fix it?
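For reference, my reading of the Hive wiki is that the partition must not already exist in the destination table before the exchange, so the sequence I am considering is to drop it from base.customers first and then exchange. This is only a sketch run over Hive JDBC; the connection URL, user and password are placeholders, not the real ones, and I have not verified this resolves the error.

// Sketch only (assumption): drop the destination partition, then exchange.
// Connection details below are placeholders.
import java.sql.DriverManager

Class.forName("org.apache.hive.jdbc.HiveDriver")
val conn = DriverManager.getConnection("jdbc:hive2://hiveserver2-host:10000/default", "user", "pwd")
val stmt = conn.createStatement()
try {
  // Destination partition has to be absent for EXCHANGE PARTITION to compile.
  stmt.execute("ALTER TABLE base.customers DROP IF EXISTS PARTITION (source_name = 'ORACLE')")
  stmt.execute("ALTER TABLE base.customers EXCHANGE PARTITION (source_name = 'ORACLE') " +
    "WITH TABLE stg.customers_testcontrol_staging")
} finally {
  stmt.close()
  conn.close()
}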
Labels:
- Apache Hive
02-15-2019 01:13 PM
I am trying to load a dataframe into a Hive table by following the below steps:
1. Read the source table (the dataframe will be saved as a CSV file on HDFS in step 3):

val yearDF = spark.read.format("jdbc")
  .option("url", connectionUrl)
  .option("dbtable", s"(${execQuery}) as year2016")
  .option("user", devUserName)
  .option("password", devPassword)
  .option("partitionColumn", "header_id")
  .option("lowerBound", 199199)
  .option("upperBound", 284058)
  .option("numPartitions", 10)
  .load()
2. Order the columns as per my Hive table columns. My Hive table columns are present in a string, in the format:

val hiveCols = "col1:coldatatype|col2:coldatatype|col3:coldatatype|col4:coldatatype...col200:datatype"

val schemaList = hiveCols.split("\\|")
val hiveColumnOrder = schemaList.map(e => e.split("\\:")).map(e => e(0)).toSeq
val finalDF = yearDF.selectExpr(hiveColumnOrder:_*)
The order of the columns that I read via "execQuery" is the same as "hiveColumnOrder", and just to make sure of the order I select the columns in yearDF once again using selectExpr (see the sanity-check sketch after step 4).
3. Save the dataframe as a CSV file on HDFS:

newDF.write.format("CSV").save("hdfs://username/apps/hive/warehouse/database.db/lines_test_data56/")
4. Once I save the dataframe, I take the same columns from "hiveCols" and prepare a DDL to create a Hive table on the same location, with the values being comma separated, as given below:

create table if not exists schema.tablename(col1 coldatatype, col2 coldatatype, col3 coldatatype, col4 coldatatype...col200 datatype)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION 'hdfs://username/apps/hive/warehouse/database.db/lines_test_data56/';
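This is the sanity check mentioned in step 2; it is only a sketch (the job does not currently do this) showing how I would compare the dataframe's column order against hiveColumnOrder before writing the file:

// Sketch only: compare the dataframe column order with the order derived from hiveCols.
// finalDF and hiveColumnOrder are the values defined in step 2.
val mismatched = finalDF.columns.zip(hiveColumnOrder).filter { case (dfCol, hiveCol) => dfCol != hiveCol }
if (mismatched.nonEmpty)
  println(s"Columns out of order (dataframe vs hive): ${mismatched.mkString(", ")}")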
After I load the dataframe into the table created this way, the problem I am facing is that when I query the table, I get improper output.
For example, if I apply the below query on the dataframe before saving it as a file:

finalDF.createOrReplaceTempView("tmpTable")

select header_id,line_num,debit_rate,debit_rate_text,credit_rate,credit_rate_text,activity_amount,activity_amount_text,exchange_rate,exchange_rate_text,amount_cr,amount_cr_text from tmpTable where header_id=19924598 and line_num=2

I get the output properly, and all the values are correctly aligned to the columns:

[19924598,2,null,null,381761.40000000000000000000,381761.4,-381761.40000000000000000000,-381761.4,0.01489610000000000000,0.014896100000000,5686.76000000000000000000,5686.76]

But after saving the dataframe as a CSV file and creating a table on top of it (step 4), applying the same query on the created table shows the data jumbled and improperly mapped to the columns:

select header_id,line_num,debit_rate,debit_rate_text,credit_rate,credit_rate_text,activity_amount,activity_amount_text,exchange_rate,exchange_rate_text,amount_cr,amount_cr_text from schema.tablename where header_id=19924598 and line_num=2

| header_id | line_num | debit_rate | debit_rate_text | credit_rate | credit_rate_text | activity_amount | activity_amount_text | exchange_rate | exchange_rate_text | amount_cr | amount_cr_text |
| 19924598  | 2        | NULL       |                 | 381761.4    |                  | 5686.76         | 5686.76              | NULL          | -5686.76           | NULL      |                |
So I tried a different approach, where I create the Hive table upfront and insert data into it from the dataframe: I run the DDL from step 4 above, then

finalDF.createOrReplaceTempView("tmpTable")
spark.sql("insert into schema.table select * from tmpTable")

Even this way fails when I run the aforementioned select query once the job is completed.
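For completeness, the same insert can also be expressed through the DataFrameWriter API instead of a temp view. This is only a sketch and assumes the table created by the step-4 DDL already exists with its columns in the same order as finalDF:

// Sketch only: append finalDF into the existing Hive table by position.
// Assumes schema.table was already created by the step-4 DDL.
finalDF.write.mode("append").insertInto("schema.table")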
I tried to refresh the table using refresh table schema.table and msck repair table schema.table, just to see whether there was any problem with the metadata, but nothing seems to work. Could anyone let me know what is causing this, and whether there is a problem with the way I am handling the data here?
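For reference, here is the step-3 write again with the CSV options spelled out explicitly, in case the default null handling or quoting is what misaligns the columns. This is a sketch; the option values are assumptions on my part, not what the original job used:

// Sketch only: same CSV write as step 3, with the relevant options made explicit.
// The values below are assumptions for illustration, not what the job actually used.
finalDF.write
  .format("csv")
  .option("sep", ",")           // must match FIELDS TERMINATED BY ',' in the step-4 DDL
  .option("nullValue", "\\N")   // token that Hive's default text SerDe reads back as NULL
  .option("quote", "\"")        // quote character Spark uses for values containing commas
  .mode("overwrite")
  .save("hdfs://username/apps/hive/warehouse/database.db/lines_test_data56/")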
Labels:
- Apache Hive
- Apache Spark
02-06-2019 10:02 AM
I am trying to move data from the table system_releases from Greenplum to Hive in the below manner:

val yearDF = spark.read.format("jdbc")
  .option("url", "urltemplate;MaxNumericScale=30;MaxNumericPrecision=40;")
  .option("dbtable", s"(${execQuery}) as year2016")
  .option("user", "user")
  .option("password", "pwd")
  .option("partitionColumn", "release_number")
  .option("lowerBound", 306)
  .option("upperBound", 500)
  .option("numPartitions", 2)
  .load()
Schema of the dataframe yearDF as inferred by Spark:

description:string
status_date:timestamp
time_zone:string
table_refresh_delay_min:decimal(38,30)
online_patching_enabled_flag:string
release_number:decimal(38,30)
change_number:decimal(38,30)
interface_queue_enabled_flag:string
rework_enabled_flag:string
smart_transfer_enabled_flag:string
patch_number:decimal(38,30)
threading_enabled_flag:string
drm_gl_source_name:string
reverted_flag:string
table_refresh_delay_min_text:string
release_number_text:string
change_number_text:string
I have the same table on Hive with the following datatypes:

val hiveCols = "string,status_date:timestamp,time_zone:string,table_refresh_delay_min:double,online_patching_enabled_flag:string,release_number:double,change_number:double,interface_queue_enabled_flag:string,rework_enabled_flag:string,smart_transfer_enabled_flag:string,patch_number:double,threading_enabled_flag:string,drm_gl_source_name:string,reverted_flag:string,table_refresh_delay_min_text:string,release_number_text:string,change_number_text:string"
The columns table_refresh_delay_min, release_number, change_number and patch_number are coming through with far more decimal places than they have in GP. So I saved the dataframe as a CSV file to take a look at how the data is being read by Spark. For example, the maximum release_number in GP is 306.00, but in the CSV file I saved from the dataframe yearDF the value becomes 306.000000000000000000.

I tried to take the Hive table schema, convert it to a StructType and apply it to yearDF, as below:

def convertDatatype(datatype: String): DataType = {
  val convert = datatype match {
    case "string"    => StringType
    case "bigint"    => LongType
    case "int"       => IntegerType
    case "double"    => DoubleType
    case "date"      => TimestampType
    case "boolean"   => BooleanType
    case "timestamp" => TimestampType
  }
  convert
}

val schemaList = hiveCols.split(",")
val schemaStructType = new StructType(schemaList.map(col => col.split(":")).map(e => StructField(e(0), convertDatatype(e(1)), true)))
val newDF = spark.createDataFrame(yearDF.rdd, schemaStructType)
newDF.write.format("csv").save("hdfs/location")
But I am getting the error:

Caused by: java.lang.RuntimeException: java.math.BigDecimal is not a valid external type for schema of double
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.evalIfFalseExpr8$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply_2$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
    at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:287)
    ... 17 more
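For what it's worth, my reading of this error is that createDataFrame(rdd, schema) only re-labels the existing rows with the new schema and does not convert the values, so the rows still hold java.math.BigDecimal while the schema says double. That is an assumption on my part; a tiny self-contained sketch of the same mismatch:

// Sketch only (assumption): a BigDecimal value under a DoubleType schema reproduces
// the same "not a valid external type" failure when the rows are encoded.
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{DoubleType, StructField, StructType}

val rows = spark.sparkContext.parallelize(Seq(Row(new java.math.BigDecimal("306.00"))))
val badSchema = StructType(Seq(StructField("release_number", DoubleType, true)))
// Expected to throw "java.math.BigDecimal is not a valid external type for schema of double":
spark.createDataFrame(rows, badSchema).show()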
I tried to cast the decimal columns into DoubleType in the below manner, but I still face the same exception:

val pattern = """DecimalType\(\d+,(\d+)\)""".r
val df2 = dataDF.dtypes.
  collect { case (dn, dt) if pattern.findFirstMatchIn(dt).map(_.group(1)).getOrElse("0") != "0" => dn }.
  foldLeft(dataDF)((accDF, c) => accDF.withColumn(c, col(c).cast("Double")))

Caused by: java.lang.RuntimeException: java.math.BigDecimal is not a valid external type for schema of double
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.evalIfFalseExpr8$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply_2$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
    at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:287)
    ... 17 more
I am out of ideas after trying to implement the above two approaches. Could anyone let me know how I can properly cast the columns of a dataframe to the required datatypes?
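For reference, this is a sketch of the kind of cast I am trying to achieve, applied directly to the dataframe columns and written out without going back through createDataFrame. It is an assumption on my part, not something I have gotten working yet:

// Sketch only: cast each column on the dataframe itself, so Spark converts the
// underlying Decimal values, instead of re-labelling the existing rows with a new
// schema via createDataFrame. Entries in hiveCols without an explicit "name:type"
// pair are skipped.
import org.apache.spark.sql.functions.col

val castedDF = hiveCols.split(",")
  .map(_.split(":"))
  .collect { case Array(name, datatype) => (name, datatype) }
  .foldLeft(yearDF) { case (df, (name, datatype)) =>
    df.withColumn(name, col(name).cast(datatype))
  }

castedDF.write.format("csv").save("hdfs/location")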
Labels:
- Apache Hive
- Apache Spark