
phoenix bulk load

I am trying to use the Phoenix bulk load tool to load a CSV file into a Phoenix table, like this:

bin/psql.py -t EXAMPLE localhost data.csv

I have 5 fields in the CSV file and 3 columns in the Phoenix table:

file - field1, field2, field3, field4, field5

Phoenix table - field2, field4, field5

How can I frame the import for this case? That is, I need to skip a few columns in the file, and the order of columns in the Phoenix table does not match the order of fields in the file.
4 Replies

@ARUN

You can specify the column order with the -h option, for

ex:

psql -t MY_TABLE -h COL1,COL2,COL3 -d : my_cluster:1825 my_table2012-Q3.csv

Trailing columns in the data can be skipped, but columns in the middle cannot.

@Rajeshbabu Chintaguntla

I raised the question precisely because I need to skip a few middle columns ;). Let us see if someone has any alternatives. Also, if the file order and the table order don't match, how does that work?

You can map which CSV field goes to which table column; in your case:

-h field2,field4,field5,field1,field3

@ARUN, there is no support for attaching a header mapping to the file and selecting which columns to upload. You could raise a JIRA and, if possible, upload a patch; it should be a trivial change.
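Until such support exists, one workaround is to preprocess the CSV so it contains only the table's columns, in the table's order, and then bulk load the rewritten file. A minimal sketch in Python; the file names and the zero-based indices (1, 3, 4 for field2, field4, field5) are assumptions based on the layout described in the question:

```python
import csv

def project_columns(src_path, dst_path, indices):
    """Copy a CSV, keeping only the fields at the given zero-based
    indices, emitted in the order the indices are listed."""
    with open(src_path, newline="") as src, \
         open(dst_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            writer.writerow([row[i] for i in indices])

# Hypothetical usage for the layout above:
# field2, field4, field5 sit at zero-based positions 1, 3, 4.
# project_columns("data.csv", "data_for_phoenix.csv", [1, 3, 4])
```

The rewritten file can then be loaded as in the original command, e.g. bin/psql.py -t EXAMPLE localhost data_for_phoenix.csv, without any column mapping needed.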