Support Questions
Find answers, ask questions, and share your expertise

Data Mining - Data Pre-Processing Use Case




Do you know of any good tutorial or use case using Hadoop that shows a solid approach to cleaning data (especially the outlier-detection phase)?



Re: Data Mining - Data Pre-Processing Use Case


@Pedro Rodgers

We have a few examples at Hortonworks of doing outlier or anomaly detection.

For data cleansing, you could look at HDF, or Apache NiFi, to get started. For simple event processing, such as trimming or modifying incoming data, NiFi can handle most use cases. There is even a DetectDuplicate processor in NiFi that may be of use if deduplication is part of your cleansing process. Once you need aggregations/windowing or complex cleaning/transformations, Apache Storm (part of HDF) or Spark may be your best bet.
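To make the outlier-detection side concrete, here is a minimal sketch of the common 1.5 × IQR filtering rule in plain Python. In practice this logic would run inside a Spark job or a NiFi scripting processor rather than standalone; the sample readings below are made up for illustration.

```python
# Minimal sketch of IQR-based outlier filtering (the 1.5 * IQR rule).
# In a real pipeline this step would run inside a Spark job or a NiFi
# scripting processor; the sample data here is purely illustrative.

def iqr_bounds(values):
    """Return (low, high) cutoffs using the 1.5 * IQR rule."""
    data = sorted(values)
    n = len(data)
    q1 = data[n // 4]          # rough first quartile
    q3 = data[(3 * n) // 4]    # rough third quartile
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

def remove_outliers(values):
    """Keep only values inside the IQR fence."""
    low, high = iqr_bounds(values)
    return [v for v in values if low <= v <= high]

readings = [10, 12, 11, 13, 12, 11, 250, 12, 10, -99, 11]
print(remove_outliers(readings))  # 250 and -99 are dropped
```

The IQR rule is a simple, distribution-free starting point; for streaming data you would typically compute the quartiles over a sliding window instead of the full dataset.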

This demo shows HDF in action doing cleaning/transformations for anomaly detection:

Showing cleaned data with HDF & Hadoop:

If you have more specific questions, I'm sure we can narrow things down and provide more detailed help.


Re: Data Mining - Data Pre-Processing Use Case


With HDF, Spark, Sqoop, Flume, and some Python scripts, you can clean pretty much any messy data.

I like to keep the raw data though, just in case.
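As a sketch of the "Python scripts plus keep the raw data" approach, here is a small cleaning pass over messy CSV-like records that only emits cleaned copies, leaving the raw input untouched. The field names and cleaning rules are illustrative, not from any particular dataset.

```python
# Hedged sketch: a small cleaning pass over messy CSV records.
# The raw input is never modified; only cleaned copies are produced.
# Field names and rules here are illustrative assumptions.

import csv
import io

RAW = """name, age,city
 Alice ,34,Lisbon
Bob,,Porto
 Carol,twenty,Faro
Dave ,41, Braga
"""

def clean_rows(text):
    reader = csv.DictReader(io.StringIO(text), skipinitialspace=True)
    cleaned = []
    for row in reader:
        # Trim stray whitespace around every field.
        row = {k.strip(): (v or "").strip() for k, v in row.items()}
        # Drop records with a missing or non-numeric age.
        if not row["age"].isdigit():
            continue
        cleaned.append(row)
    return cleaned

print(clean_rows(RAW))  # Bob (missing age) and Carol (non-numeric) are dropped
```

Keeping the raw file alongside the cleaned output means you can always re-run the cleaning with different rules later, which is exactly why retaining the raw data is worthwhile.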

See previous answers:
