Support Questions
Find answers, ask questions, and share your expertise

Data Mining - Data Pre-Processing Use Case


Do you know of any good tutorial or use case using Hadoop that shows a good approach to cleaning data (especially the outlier-detection phase)?




@Pedro Rodgers

We have a few examples at Hortonworks of doing outlier or anomaly detection.

For data cleansing, you could look at HDF, or Apache NiFi, to get started. For simple event processing, such as trimming or otherwise modifying incoming data, NiFi can handle most use cases. There is even a DetectDuplicate processor in NiFi that may be of use if deduplication is part of your cleansing process. When you start looking at aggregations/windowing or complex cleaning/transformations, Apache Storm (part of HDF) or Spark may be your best bet.
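To make the outlier-detection phase concrete, here is a minimal sketch of Tukey's IQR fence method, the kind of simple check that could run inside a Spark job or a NiFi scripting processor. The sample readings are made up for illustration; it is not taken from any of the demos mentioned here.

```python
# Minimal outlier-detection sketch using Tukey's fences (IQR method).
# Sample data is hypothetical; in practice this logic would be applied
# per-column inside a Spark or NiFi cleansing step.
from statistics import quantiles

def iqr_outliers(values, k=1.5):
    """Return values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = quantiles(values, n=4)  # quartiles of the data
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 85.0]  # 85.0 is an obvious spike
print(iqr_outliers(readings))  # → [85.0]
```

The IQR method is a common choice for a first cleansing pass because, unlike a z-score cutoff, it is not itself skewed by the extreme values it is trying to find.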

This demo actually shows HDF in action doing cleaning/transformations for Anomaly Detection:

Showing cleaned data with HDF & Hadoop:

If you have more specific questions, I'm sure we can narrow things down and provide more detailed help.


With HDF, Spark, Sqoop, Flume, and some Python scripts, you can clean pretty much any messy data.

I like to keep the raw data though, just in case.

See previous answers.