SPARK 1.6 - PHOENIX 4.4.0 ERROR 203 (22005): Type mismatch. VARCHAR cannot be coerced to DOUBLE

Master Collaborator

After creating a table in Phoenix with a 3-column primary key (row key):

CREATE TABLE IF NOT EXISTS tabla1 (
  c1 VARCHAR NOT NULL,
  c2 VARCHAR NOT NULL,
  c3 VARCHAR NOT NULL,
  c4 DOUBLE,
  c5 VARCHAR,
  CONSTRAINT pk PRIMARY KEY (c1, c2, c3)
);

and trying to insert data using the example from the official site:

df.write \
  .format("org.apache.phoenix.spark") \
  .mode("overwrite") \
  .option("table", "XXXXXX") \
  .option("zkUrl", "XXXXXXX:2181") \
  .save()

I receive this error:

Caused by: org.apache.phoenix.schema.ConstraintViolationException: org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type mismatch. VARCHAR cannot be coerced to DOUBLE
at org.apache.phoenix.schema.types.PDataType.throwConstraintViolationException(PDataType.java:282)
at org.apache.phoenix.schema.types.PDouble.toObject(PDouble.java:129)
at org.apache.phoenix.jdbc.PhoenixPreparedStatement.setObject(PhoenixPreparedStatement.java:442)
at org.apache.phoenix.spark.PhoenixRecordWritable$$anonfun$write$1.apply(PhoenixRecordWritable.scala:53)
at org.apache.phoenix.spark.PhoenixRecordWritable$$anonfun$write$1.apply(PhoenixRecordWritable.scala:44)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.phoenix.spark.PhoenixRecordWritable.write(PhoenixRecordWritable.scala:44)
at org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(PhoenixRecordWriter.java:78)
at org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(PhoenixRecordWriter.java:39)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply$mcV$sp(PairRDDFunctions.scala:1113)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1111)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1111)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1250)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1119)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1091)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
Caused by: org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type mismatch. VARCHAR cannot be coerced to DOUBLE
at org.apache.phoenix.exception.SQLExceptionCode$1.newException(SQLExceptionCode.java:71)
at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
... 22 more

Is there any problem with the row keys?

Regards

1 ACCEPTED SOLUTION

Super Guru

VARCHARs cannot be naturally converted to numbers. That is what is meant by "VARCHAR cannot be coerced to DOUBLE".

Use the TO_NUMBER function when inserting data: http://phoenix.apache.org/language/functions.html#to_number
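
If the problem is that the DataFrame column mapped to c4 holds strings, an equivalent fix on the Spark side is to cast it to a double before writing. A minimal sketch (assuming PySpark and that the offending column is named c4):

from pyspark.sql.functions import col

# Cast the string column to DOUBLE so Phoenix receives a numeric value for c4
df_casted = df.withColumn("c4", col("c4").cast("double"))

df_casted.write \
  .format("org.apache.phoenix.spark") \
  .mode("overwrite") \
  .option("table", "TABLA1") \
  .option("zkUrl", "XXXXXXX:2181") \
  .save()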


2 REPLIES

Super Guru

VARCHARs cannot be naturally converted to numbers. That is what is meant by "VARCHAR cannot be coerced to DOUBLE".

Use the TO_NUMBER function when inserting data: http://phoenix.apache.org/language/functions.html#to_number

Master Collaborator

It finally works; the problem was that the column was a String, not a number.

Thanks.