@jayes
Unfortunately, Hive's IMPORT/EXPORT commands only support HDFS locations. The only workaround I know of to get the table and its data into S3 is the following; see the example below.
First, create a table whose LOCATION points to your S3 bucket and directory (the example uses the s3n:// scheme; newer Hadoop versions use s3a:// instead):
CREATE TABLE tests3 (
  id BIGINT, time STRING, log STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION 's3n://bucket/directory/';
Next, insert data into the S3-backed table. When the insert completes, the directory will contain one or more CSV files:
INSERT OVERWRITE TABLE tests3
SELECT id, time, log
FROM testcsvimport;
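For reference, here is a small Python sketch of what the files Hive writes to that S3 directory look like, given the ROW FORMAT clause above. The row values are hypothetical; the point is that fields are joined with ',' and rows end with '\n', and that Hive's delimited TEXTFILE format does not quote or escape delimiters, so a comma inside the log column would corrupt the row.

```python
# Hypothetical rows matching the tests3 schema (id, time, log).
rows = [
    (1, "2016-01-01 00:00:00", "service started"),
    (2, "2016-01-01 00:05:00", "request handled"),
]

# Serialize the way Hive's delimited TEXTFILE format does:
# fields terminated by ',', lines terminated by '\n', no quoting.
data = "\n".join(",".join(str(field) for field in row) for row in rows) + "\n"
print(data)

# Reading the export back is a plain split on the same delimiters.
parsed = [line.split(",") for line in data.strip().split("\n")]
print(parsed)
```

If your log strings can contain commas, pick a delimiter that cannot appear in the data (Hive's default field delimiter is the control character '\001') or declare ESCAPED BY in the ROW FORMAT clause.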