Sqoop can be used to bring data from an RDBMS into HDFS, but a limitation of Sqoop is that it stores the imported data in a single folder. If a partitioned table is needed in Hive for further queries, the user has to write a Hive script to redistribute the data into the appropriate partitions; there is no direct option in Sqoop for creating a partitioned Hive table.
However, we can use Sqoop's ability to write output to a specific directory to simulate a partitioned table layout in HDFS. Since a partitioned table is stored in HDFS as <table name>/<partition column>=<value>, we can run one Sqoop import per partition, selecting the rows that belong to that partition and writing them to the matching directory:
sqoop import --connect <jdbc connect string> --table <table1> --where "pt=0" --target-dir /home/user1/table1/pt=0
sqoop import --connect <jdbc connect string> --table <table1> --where "pt=1" --target-dir /home/user1/table1/pt=1
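If the table has many partition values, running the imports by hand gets tedious. The commands above can be generated in a small shell loop; this is a sketch only, and the table name, JDBC URL, and partition values (0, 1, 2) are all assumptions to substitute with your own. It prints each command instead of executing it, so it can be inspected first.

```shell
#!/bin/sh
# Sketch: build one "sqoop import" command per partition value.
# TABLE, the JDBC URL, and the partition values are placeholders.
TABLE=table1
CMDS=""
for pt in 0 1 2; do
  CMD="sqoop import --connect jdbc:mysql://dbhost/sourcedb --table $TABLE --where \"pt=$pt\" --target-dir /home/user1/$TABLE/pt=$pt"
  CMDS="$CMDS$CMD
"
  # Prints the command for review; replace echo with eval "$CMD"
  # (on a host with Sqoop installed) to actually run the import.
  echo "$CMD"
done
```

Each iteration selects only the rows where pt has the given value and lands them in the pt=<value> subdirectory, which is exactly the layout Hive expects for a partition.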
Now, an external Hive table can be created on top of the /home/user1/table1 directory with pt as the partition column. Note that Hive will not see the data until each partition directory is registered with the metastore, either with ALTER TABLE ... ADD PARTITION for each value or with MSCK REPAIR TABLE, which works here because the directories follow the pt=<value> naming convention.
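The Hive side might look like the following sketch. The column names and types are hypothetical (substitute the real schema of table1), and it assumes Sqoop imported the data as comma-delimited text, which is its default text format.

```sql
-- Hypothetical schema: replace the columns with those of the source table.
CREATE EXTERNAL TABLE table1 (
    id INT,
    name STRING
)
PARTITIONED BY (pt INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','  -- Sqoop's default text delimiter
LOCATION '/home/user1/table1';

-- Register each imported directory as a partition:
ALTER TABLE table1 ADD PARTITION (pt=0) LOCATION '/home/user1/table1/pt=0';
ALTER TABLE table1 ADD PARTITION (pt=1) LOCATION '/home/user1/table1/pt=1';

-- Or, since the directories follow the pt=<value> convention,
-- discover them all at once:
-- MSCK REPAIR TABLE table1;
```

Because the table is EXTERNAL, dropping it in Hive leaves the Sqoop-imported files in HDFS intact.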