Created 10-09-2015 08:30 PM
Thank you, Deepesh. The reply is appreciated. Large objects may be one way to handle generic binary object archival.
-- Adding a few more details to this topic.
Sqoop 1.4.6 introduces a mainframe import option.
http://blog.syncsort.com/2014/06/big-iron-big-data-mainframe-hadoop-apache-sqoop/
The Syncsort blog above suggests that several capabilities, such as using COBOL copybook metadata, reading VSAM file formats, handling packed-decimal fields, and translating EBCDIC-encoded fixed-length binary data into ASCII-encoded variable-length text in HDFS, may not be available in open-source Sqoop.
If there are any prior projects where we have experience handling this type of mainframe dataset, please add details of the implementation.
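For anyone landing here, a minimal invocation of the new tool looks roughly like the following (the host, dataset, user, and target directory are placeholder values). Per the Sqoop 1.4.6 user guide, it connects over FTP and imports the members of a partitioned dataset as text records, which is consistent with the blog's point that copybook-driven binary decoding is not handled:

    # Hypothetical host/dataset/user, shown only to illustrate the tool's shape.
    # Each record of each PDS member lands as one text line in HDFS.
    sqoop import-mainframe \
      --connect zos.example.com \
      --dataset MYUSER.MYDATA \
      --username myuser -P \
      --target-dir /data/mainframe/mydata

Because the transfer is text-oriented, binary fields such as packed-decimal would not survive it intact, which is where the gaps described above come in.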
Created 12-01-2015 07:10 PM
I'm also wondering if anyone has used Syncsort's open source contribution, http://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html#_literal_sqoop_import_mainframe_literal, or their proprietary software, http://www.syncsort.com/en/Solutions/Hadoop-Solutions/Mainframe-Offload-to-Hadoop, to get data off the mainframe.
Created 11-25-2015 07:10 PM
Hey,
We have two more clients who want to ingest data from the mainframe VSAM file format. Is Syncsort a partner of Hortonworks? How do we proceed with the Syncsort route? Or are there any other suggestions?
Created 11-25-2015 08:43 PM
Yes, Syncsort is a partner; look here.
Created 11-25-2015 08:41 PM
@pbalasundaram I've seen a customer use JRecord to build a MapReduce InputFormat for HDFS, MapReduce, etc. Look around on GitHub and you should find examples; a rough sketch of the approach is below. Obviously this takes more work than using something off the shelf. Besides Syncsort, there are capabilities within Attunity as well.
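To illustrate the shape of that approach (not JRecord's actual API, which I'd check on GitHub), the sketch below uses Hadoop's built-in FixedLengthInputFormat to split a fixed-length EBCDIC file and decodes each record with the JVM's Cp037 EBCDIC charset. The record length, paths, and class names are assumptions; a real implementation would use JRecord with the COBOL copybook to parse individual fields, including packed-decimal, instead of decoding whole records as text.

    // Minimal sketch: read fixed-length EBCDIC records, emit ASCII text lines.
    // Assumed 80-byte records; in practice the length comes from the copybook.
    import java.io.IOException;
    import java.nio.charset.Charset;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.FixedLengthInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

    public class EbcdicToAscii {

      public static class DecodeMapper
          extends Mapper<LongWritable, BytesWritable, NullWritable, Text> {
        // IBM EBCDIC (US/Canada) charset shipped with the JVM.
        private static final Charset EBCDIC = Charset.forName("Cp037");

        @Override
        protected void map(LongWritable key, BytesWritable value, Context ctx)
            throws IOException, InterruptedException {
          // Decode one fixed-length EBCDIC record; TextOutputFormat then
          // writes it back out as a UTF-8 text line.
          String record = new String(value.copyBytes(), EBCDIC);
          ctx.write(NullWritable.get(), new Text(record));
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FixedLengthInputFormat.setRecordLength(conf, 80);

        Job job = Job.getInstance(conf, "ebcdic-to-ascii");
        job.setJarByClass(EbcdicToAscii.class);
        job.setInputFormatClass(FixedLengthInputFormat.class);
        job.setMapperClass(DecodeMapper.class);
        job.setNumReduceTasks(0); // map-only conversion
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

This only handles the plain-text case; the value of JRecord is precisely that it understands the copybook layout, so fields like COMP-3 packed-decimal get converted properly rather than decoded as characters.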
Created 12-15-2015 07:14 PM
Hi - We tried this import, but we are facing an issue with the GDG format; the basic open-source Sqoop 1272 doesn't seem to support many of these features. Are there any alternatives?