Support Questions


How can I read mainframe files that are in EBCDIC format?

Explorer

Hi, we have a huge number of mainframe files in EBCDIC format. These files were created by mainframe systems and are now stored in HDFS as EBCDIC files. I need to read these files (copybooks are available), split them into multiple files based on record type, and store them as ASCII files in HDFS.

14 REPLIES

Super Guru
@karthick baskaran

You can use the following project. It uses JRecord to do the conversion.

https://github.com/tmalaska/CopybookInputFormat

You can use Spark to read your EBCDIC files from Hadoop and convert them to ASCII using the above library.
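Roughly, you would wire that input format into Spark's newAPIHadoopFile call and write the decoded records back out as text. The sketch below is only illustrative: the package path of CopybookInputFormat and the copybook-path configuration key are assumptions based on the project's naming, so check the repository's README and examples for the exact identifiers.

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.spark.{SparkConf, SparkContext}
// Assumed package path -- verify against the CopybookInputFormat repository.
import com.cloudera.sa.copybook.mapreduce.CopybookInputFormat

object EbcdicToAscii {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("EbcdicToAscii"))

    val hadoopConf = new Configuration()
    // Assumed configuration key: tells the input format where the copybook lives on HDFS.
    hadoopConf.set("copybook.inputformat.cbl.hdfs.path", "/copybooks/customer.cbl")

    // JRecord decodes each EBCDIC record; the value arrives as readable text.
    val records = sc.newAPIHadoopFile(
      "/data/ebcdic/customer",          // placeholder input path
      classOf[CopybookInputFormat],
      classOf[LongWritable],
      classOf[Text],
      hadoopConf
    ).map { case (_, value) => value.toString }

    // Splitting by record type can be done here with a filter on the decoded record,
    // then each subset written out as plain ASCII text.
    records.saveAsTextFile("/data/ascii/customer")
  }
}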

Explorer

Hi Mqureshi

I'm very new to this, so I don't know how to do this yet. I will check some online resources and give it a try. If I struggle, I will come back and ask you for help; if it works, I will let you know as well. Thanks.

Explorer

But this will work only if the file on the mainframe is a normal text file, right? In my case, the files are in EBCDIC format (with multiple occurrences) and contain some junk values, so can we still do this with the Sqoop connector? I went over the details in the link and couldn't see anything related to EBCDIC files, but if you think this is going to work, please share more details; I'm interested in learning more about it.

The connector is a contribution from Syncsort, which has decades of experience building tools for mainframe data ingestion.

I have used Sqoop extensively, though never for mainframe data. Syncsort states: "Each data set will be stored as a separate HDFS file and EBCDIC encoded fixed length data will be stored as ASCII encoded variable length text on HDFS".

http://blog.syncsort.com/2014/06/big-data/big-iron-big-data-mainframe-hadoop-apache-sqoop/
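For reference, recent Sqoop releases (1.4.5 and later) ship an import-mainframe tool that pulls a mainframe data set over an FTP connection and lands it as text on HDFS. A rough invocation looks like the following; the host, data set name, and paths are placeholders, and the exact options available depend on your Sqoop version:

sqoop import-mainframe \
  --connect mainframe.example.com \
  --dataset CUSTOMER.DATA \
  --username myuser -P \
  --target-dir /data/mainframe/customer \
  --as-textfile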

There is also a Spark connector to import Mainframe data:

https://github.com/Syncsort/spark-mainframe-connector


Super Guru

Fair enough. See the new answer by @bpreachuk. I was assuming you are looking for free tools, but if you can get Syncsort, or if you already have it, that's the easiest way to do this.

@karthick baskaran

I am not sure if you have the ability to use a third-party tool, but one of our trusted partners is Syncsort. If you've used the mainframe before, you'll know who they are. Dealing with EBCDIC conversions, copybooks, etc. are features that they excel at and provide in their flagship tool. It's called DMX-h, and it would do what you need (in fact, it can be your data integration tool for all data, not just mainframe). http://www.syncsort.com/en/Products/BigData/DMXh

Explorer

Hi Bpreachuk

Thanks for the answer. No, we do not have the option to buy Syncsort.

New Contributor

@karthick baskaran If you are still looking for an adaptor, ping me at arjun.mahajan@bitwiseglobal.com - we have recently released a Hadoop adaptor for mainframe data. Thanks.

@Karthik Narayanan, you can use Cobrix to parse the EBCDIC files through Spark and store them on HDFS in whatever format you want. It is open-source.

DISCLAIMER: I work for ABSA and I am one of the developers behind this library. Our focus has been: 1) ease of use, 2) performance.
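For reference, a minimal spark-cobol read looks roughly like this; the copybook path, data path, and the record-type field name are placeholders, so check the Cobrix README for the dependency coordinates and the full option list:

import org.apache.spark.sql.SparkSession

object CobrixExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("CobrixExample").getOrCreate()

    // Read fixed-length EBCDIC records described by a COBOL copybook.
    val df = spark.read
      .format("cobol")                               // data source provided by spark-cobol
      .option("copybook", "/copybooks/customer.cpy") // placeholder copybook path
      .load("/data/ebcdic/customer")                 // placeholder EBCDIC data path

    // Split by record type and store as ASCII text (CSV here) on HDFS.
    df.filter(df("RECORD_TYPE") === "A")             // RECORD_TYPE is a placeholder field name
      .write
      .option("header", "true")
      .csv("/data/ascii/customer_type_a")
  }
}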

New Contributor

Hello,

 

Does Cobrix support Python? I see only Scala APIs at https://github.com/AbsaOSS/cobrix

Please advise.

 

Thanks

Sreedhar Y

New Contributor

@Binu Mathew Hi Binu, have you been able to resolve your issue? If yes, could you please share the solution? I'm in the same boat.

@Raghavendra Gupta, have you tried Cobrix?
