
Kafka connector Hive integration issue

New Contributor

Hi,
I have integrated Kafka with Hadoop and am successfully writing data to HDFS. However, when I enable the Hive integration, I get the following error and cannot find the cause:

[2020-04-09 14:23:04,954] INFO Kafka version: 5.4.1-ce (org.apache.kafka.common.utils.AppInfoParser:117)
[2020-04-09 14:23:04,954] INFO Kafka commitId: 27f41d1c0f80868f (org.apache.kafka.common.utils.AppInfoParser:118)
[2020-04-09 14:23:04,954] INFO Kafka startTimeMs: 1586431384953 (org.apache.kafka.common.utils.AppInfoParser:119)
[2020-04-09 14:23:04,955] INFO interceptor=confluent.monitoring.interceptor.connector-consumer-hdfs3-sink-0 created for client_id=connector-consumer-hdfs3-sink-0 client_type=CONSUMER session= cluster=BJ2hAs1sR4-j-lOSrHpx1w group=connect-hdfs3-sink (io.confluent.monitoring.clients.interceptor.MonitoringInterceptor:153)
[2020-04-09 14:23:04,955] INFO [Producer clientId=confluent.monitoring.interceptor.connector-consumer-hdfs3-sink-0] Cluster ID: BJ2hAs1sR4-j-lOSrHpx1w (org.apache.kafka.clients.Metadata:259)
[2020-04-09 14:23:06,604] INFO Opening record writer for: hdfs://localhost:9000/topics//+tmp/test_hdfs/partition=0/c6284b4c-e689-4696-8788-635ced927ab2_tmp.avro (io.confluent.connect.hdfs3.avro.AvroRecordWriterProvider:56)
[2020-04-09 14:23:06,720] ERROR Adding Hive partition threw unexpected error (io.confluent.connect.hdfs3.TopicPartitionWriter:828)
io.confluent.connect.storage.errors.HiveMetaStoreException: Invalid partition for default.test_hdfs: partition=0
at io.confluent.connect.storage.hive.HiveMetaStore$1.call(HiveMetaStore.java:122)
at io.confluent.connect.storage.hive.HiveMetaStore$1.call(HiveMetaStore.java:107)
at io.confluent.connect.storage.hive.HiveMetaStore.doAction(HiveMetaStore.java:97)
at io.confluent.connect.storage.hive.HiveMetaStore.addPartition(HiveMetaStore.java:132)
at io.confluent.connect.hdfs3.TopicPartitionWriter$3.call(TopicPartitionWriter.java:826)
at io.confluent.connect.hdfs3.TopicPartitionWriter$3.call(TopicPartitionWriter.java:822)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: InvalidObjectException(message:default.test_hdfs table not found)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$append_partition_by_name_result$append_partition_by_name_resultStandardScheme.read(ThriftHiveMetastore.java)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$append_partition_by_name_result$append_partition_by_name_resultStandardScheme.read(ThriftHiveMetastore.java)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$append_partition_by_name_result.read(ThriftHiveMetastore.java)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_append_partition_by_name(ThriftHiveMetastore.java:2557)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.append_partition_by_name(ThriftHiveMetastore.java:2542)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.appendPartition(HiveMetaStoreClient.java:722)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.appendPartition(HiveMetaStoreClient.java:716)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:208)
at com.sun.proxy.$Proxy58.appendPartition(Unknown Source)
at io.confluent.connect.storage.hive.HiveMetaStore$1.call(HiveMetaStore.java:114)
... 9 more

Can you help me?


Re: Kafka connector Hive integration issue

@gokhandroid There are two errors to pay attention to:

message:default.test_hdfs table not found

Invalid partition for default.test_hdfs: partition=0
If the table truly exists, does your user have permissions to view the table or the partition?
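To check both, you can run something like the following in beeline (a sketch; the database name `default` and table name `test_hdfs` are taken from your error message, adjust if your connector targets something else):

```sql
-- Does the table exist at all?
SHOW TABLES IN default LIKE 'test_hdfs';

-- If it exists, inspect its definition, location, and owner
DESCRIBE FORMATTED default.test_hdfs;

-- And see whether any partitions have been registered
SHOW PARTITIONS default.test_hdfs;
```

If `SHOW TABLES` returns nothing, the table genuinely does not exist in the metastore; if it exists but `DESCRIBE` fails for your user, it is a permissions issue.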

If this answer resolves your issue or allows you to move forward, please ACCEPT this solution and close this topic. If you have further dialogue on this topic, please comment here or feel free to send me a private message. If you have new questions related to your use case, please create a separate topic and feel free to tag me in your post.

Thanks,
Steven

Re: Kafka connector Hive integration issue

New Contributor

The problem is that the table is never created. Normally, the table should be created automatically when the connector starts, but there is no such table in the Hive database.
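For reference, automatic table creation in the Confluent HDFS 3 sink is driven by the Hive-related connector settings. A minimal sketch of such a configuration might look like the following (the metastore URI, topic name, and flush size here are illustrative assumptions, not values from your setup):

```properties
name=hdfs3-sink
connector.class=io.confluent.connect.hdfs3.Hdfs3SinkConnector
topics=test_hdfs
hdfs.url=hdfs://localhost:9000
flush.size=3

# Hive integration: when enabled, the connector itself creates and
# updates the Hive table and registers partitions in the metastore
hive.integration=true
hive.metastore.uris=thrift://localhost:9083
hive.database=default

# Hive integration requires a schema compatibility mode other than NONE
schema.compatibility=BACKWARD
```

It may be worth double-checking that `hive.integration` is set to `true`, that `hive.metastore.uris` points at the same metastore your Hive client queries, and that `schema.compatibility` is not left at `NONE`, since any of these can prevent the table from being created.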
