
Apache Kudu

Contributor

Hi, I am working with Kudu and Oracle. I have more than 5 million records, and I have been asked to read them from Oracle and write them into a Kudu table. What I did was open an OJDBC connection, fetch the records from Oracle, and insert them into the Kudu table using PartialRow and the insert method. I just want to know whether I can do bulk inserts to avoid spending so much time on writes.
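For reference, a minimal sketch of the approach described in the question, assuming a hypothetical Oracle table SRC_TABLE with columns id and name and placeholder connection details. With the Java client's default AUTO_FLUSH_SYNC mode, each apply() is a synchronous round trip to the tablet server, which is what makes this slow for millions of rows:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.kudu.client.Insert;
import org.apache.kudu.client.KuduClient;
import org.apache.kudu.client.KuduSession;
import org.apache.kudu.client.KuduTable;
import org.apache.kudu.client.PartialRow;

public class OracleToKudu {
  public static void main(String[] args) throws Exception {
    // Placeholder connection details and schema; adjust for your environment.
    try (Connection oracle = DriverManager.getConnection(
             "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
         KuduClient kudu = new KuduClient.KuduClientBuilder("kudu-master:7051").build()) {

      KuduTable table = kudu.openTable("my_kudu_table");
      KuduSession session = kudu.newSession();  // default flush mode: AUTO_FLUSH_SYNC

      try (Statement stmt = oracle.createStatement();
           ResultSet rs = stmt.executeQuery("SELECT id, name FROM SRC_TABLE")) {
        while (rs.next()) {
          Insert insert = table.newInsert();
          PartialRow row = insert.getRow();
          row.addLong("id", rs.getLong("id"));
          row.addString("name", rs.getString("name"));
          session.apply(insert);  // one synchronous round trip per row in this mode
        }
      }
      session.close();
    }
  }
}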


9 REPLIES

Contributor

Can I do a bulk insert? If so, please tell me how.

Contributor

If I do the writes as per the program given in https://github.com/cloudera/kudu-examples/tree/master/java/java-sample/src/main/java/org/kududb/exam... it takes an hour to insert the data into the Kudu table.

How can I insert the records in less time?

Super Collaborator

One option is to export the data to Parquet on HDFS using Sqoop, then use Impala to CREATE TABLE AS SELECT * FROM your Parquet table into your Kudu table.

Unfortunately, Sqoop does not support Kudu at this time.

Super Collaborator (accepted solution)

Another option is to write a Spark job that uses multiple tasks to read from Oracle and write to Kudu in parallel, or something equivalent using multiple processes or threads.
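A rough sketch of that idea in Java, using Spark's JDBC source to read Oracle in parallel range scans and the kudu-spark data source to write. The table names, bounds, and addresses are placeholders, and the format("kudu") append path assumes a kudu-spark version that supports DataFrame writes:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class OracleToKuduSpark {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder().appName("oracle-to-kudu").getOrCreate();

    // Parallel JDBC read: Spark issues numPartitions range queries on the
    // partition column, so the Oracle side is scanned concurrently.
    Dataset<Row> df = spark.read().format("jdbc")
        .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")
        .option("dbtable", "SRC_TABLE")
        .option("user", "user")
        .option("password", "password")
        .option("partitionColumn", "ID")
        .option("lowerBound", "1")
        .option("upperBound", "5000000")
        .option("numPartitions", "8")
        .load();

    // Each Spark task writes its partition to Kudu concurrently.
    df.write().format("kudu")
        .option("kudu.master", "kudu-master:7051")
        .option("kudu.table", "my_kudu_table")
        .mode(SaveMode.Append)
        .save();

    spark.stop();
  }
}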

Contributor

Thank you, but I just want to use Java and do batch inserts. Is there any way to perform faster writes to a Kudu table using Java?

Super Collaborator (accepted solution)

Are you sure the bottleneck is Kudu? Maybe the bottleneck is reading from Oracle?

Using the Kudu AUTO_FLUSH_BACKGROUND mode should give pretty fast write throughput. See https://kudu.apache.org/apidocs/org/apache/kudu/client/SessionConfiguration.FlushMode.html

You can also try increasing the KuduSession.setMutationBufferSpace() value, and consider your partitioning scheme as well.
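For example, a minimal sketch of that session setup with the Java client; the buffer size and flush interval are illustrative values, not tuned recommendations:

import org.apache.kudu.client.KuduClient;
import org.apache.kudu.client.KuduSession;
import org.apache.kudu.client.SessionConfiguration;

public class BackgroundFlushSession {
  static KuduSession newBackgroundSession(KuduClient client) {
    KuduSession session = client.newSession();
    // Buffer rows client-side and flush them in the background instead of
    // round-tripping to the tablet server on every apply().
    session.setFlushMode(SessionConfiguration.FlushMode.AUTO_FLUSH_BACKGROUND);
    session.setMutationBufferSpace(10000);  // max buffered operations (illustrative)
    session.setFlushInterval(1000);         // background flush interval in ms (illustrative)
    return session;
  }
}

Note that in this mode apply() returns before the write is durable, so call session.flush() and check session.getPendingErrors() before closing the session.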

If you want more parallelism, you can also scan different ranges in Oracle with different processes or threads, on the same or a different client machine, and perform more parallelized writes to Kudu.
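A sketch of that multi-threaded variant, splitting an assumed numeric id key space into ranges and giving each worker its own JDBC connection and Kudu session (all names and sizes are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.kudu.client.KuduClient;
import org.apache.kudu.client.KuduSession;
import org.apache.kudu.client.SessionConfiguration;

public class ParallelOracleToKudu {
  public static void main(String[] args) throws Exception {
    final int threads = 8;              // illustrative degree of parallelism
    final long maxId = 5_000_000L;      // assumed key space; adjust to your data
    final long chunk = (maxId + threads) / threads;

    ExecutorService pool = Executors.newFixedThreadPool(threads);
    for (int i = 0; i < threads; i++) {
      final long lo = i * chunk;
      final long hi = Math.min(lo + chunk, maxId + 1);
      pool.submit(() -> {
        // Each worker owns one JDBC connection and one Kudu session.
        try (Connection oracle = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
             KuduClient kudu = new KuduClient.KuduClientBuilder("kudu-master:7051").build()) {
          KuduSession session = kudu.newSession();
          session.setFlushMode(SessionConfiguration.FlushMode.AUTO_FLUSH_BACKGROUND);
          // SELECT ... WHERE id >= lo AND id < hi, then build and apply()
          // inserts exactly as in the single-threaded sketch above.
          session.flush();              // drain buffered rows before closing
          session.close();
        } catch (Exception e) {
          e.printStackTrace();
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);
  }
}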

New Contributor

Hi,

I need to load data from Hive into a Kudu table using PySpark code. I am able to insert one record using table.new_insert, but I am not able to load all the records at once. The approach I am looking at is getting the data into a DataFrame and loading that DataFrame into the Kudu table. I found an example using Java but not with Python. Will you please help?

Thx.

Rising Star

Hi,

I don't know much about Kudu + PySpark except that there is a lot of room for improvement there, but maybe a couple of examples in the following patch-in-flight could be useful: https://gerrit.cloudera.org/#/c/13102/

New Contributor

I am able to Sqoop the data from Oracle to HDFS and then do a CREATE TABLE AS SELECT * FROM on Impala to write into Kudu. I am able to run the queries manually here, but what is the best way to automate this when I move the code to production?