
Patterns for batch processing time-series data?


What patterns or practices exist for dealing with time-series data specifically in batch mode, i.e., Tez or MR as opposed to Spark? Sorting orders the data within a block or ORC split, but how are boundaries between blocks usually handled? For instance, finding derivatives, inflection points, etc. breaks down at file boundaries. Are there standard patterns or libraries to deal with this?

1 ACCEPTED SOLUTION

One option that comes to mind is to leverage a custom InputFormat. HDFS doesn't care where it splits a file, so it is the InputFormat that ensures there are no awkward breaks between blocks when reading. With this approach, you can define your own notion of a record, whether that is a line of text (as in TextInputFormat) or a window that encapsulates multiple records.
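The Hadoop InputFormat/RecordReader classes themselves aren't shown here, but the core idea can be sketched in plain Java: have each "split" read one extra point past its end (a small overlap, or halo), so a finite-difference derivative computed per split has no gap at the split boundary. The class and method names below are illustrative, not part of any Hadoop API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the overlap idea behind a custom record/window boundary:
// each split of `size` points also consumes the first point of the
// next split, so forward differences can cross the boundary.
public class OverlappingSplits {

    // Returns one derivative array per split; no boundary is lost.
    static List<double[]> splitDerivatives(double[] series, int size) {
        List<double[]> result = new ArrayList<>();
        for (int start = 0; start < series.length - 1; start += size) {
            // Halo: index `end` is still readable from this split.
            int end = Math.min(start + size, series.length - 1);
            double[] d = new double[end - start];
            for (int i = start; i < end; i++) {
                // Forward difference reaches into the next split's data.
                d[i - start] = series[i + 1] - series[i];
            }
            result.add(d);
        }
        return result;
    }

    public static void main(String[] args) {
        double[] series = {0, 1, 4, 9, 16, 25, 36};
        for (double[] part : splitDerivatives(series, 3)) {
            System.out.println(java.util.Arrays.toString(part));
        }
        // Prints [1.0, 3.0, 5.0] then [7.0, 9.0, 11.0]:
        // six differences from seven points, none dropped at the boundary.
    }
}
```

In a real custom InputFormat, the same trick means the RecordReader for one split seeks slightly past its nominal end (and the next split skips what its predecessor consumed), just as TextInputFormat does to avoid splitting a line in half.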

You can then use this custom InputFormat to read the data into an MR job, or you can use it to develop your own custom Pig loader to work with your data in Pig.

I am not personally aware of any libraries that have been built to address time-series specifically.


3 REPLIES 3

@Brandon Wilson Thanks! 🙂
