Created on 03-27-2018 04:02 AM - edited 09-16-2022 06:02 AM
Created 03-27-2018 01:23 PM
If you are a beginner and want to start with ML, I'd suggest ditching Mahout and learning Spark instead. Mahout is an older project that uses MapReduce; Spark, on the other hand, does in-memory processing and is far more actively developed.
Hardly anyone uses Mahout any more... everyone is focusing on Spark:
https://hortonworks.com/apache/spark/
Check out some nice Spark tutorials we have:
https://hortonworks.com/tutorial/hands-on-tour-of-apache-spark-in-5-minutes/
https://hortonworks.com/hadoop-tutorial/interacting-with-data-on-hdp-using-scala-and-apache-spark/
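If you want a quick taste before the tutorials, here is a minimal word-count sketch as it might look in spark-shell, where sc is predefined (the HDFS path is a placeholder):

// Load a text file into an RDD; the path below is a placeholder.
val lines = sc.textFile("hdfs:///tmp/sample.txt")

// Classic word count. Spark keeps the intermediate RDDs in memory,
// which is the big speedup over Mahout's disk-bound MapReduce passes.
val counts = lines
  .flatMap(_.split("\\s+"))
  .map(word => (word, 1))
  .reduceByKey(_ + _)

counts.take(10).foreach(println)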
Created 03-28-2018 02:23 AM
Thanks.
As you suggested, I am using Spark now. Must I use MLlib?
I have a CSV file in Spark now: val mk = sc.textFile("hdfs://path/filename.csv")
I have 4 string columns and 3 double columns.
I need to detect the outliers now and apply some prediction modelling, with the results visualized.
What can I use now? Any suggestions? Thanks.
Created 03-28-2018 06:06 AM
This depends on the data you have. Your data may be labelled or unlabelled, and different algorithms apply to each case.
Assuming your data is labelled, you then have to work out whether you are solving a regression problem or a classification problem, and choose your algorithms based on that.
Since you have written that you want to find outliers, I'm assuming it is a regression problem. Then you can use algorithms like Linear Regression, Support Vector Regression, Decision Tree Regression, Random Forest Regression, etc.
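As a rough sketch of that route (not a definitive recipe), here is how you might fit a linear regression in spark-shell with Spark 2.x MLlib and rank rows by residual, since unusually large residuals are one way to flag outliers. The CSV path and the column names num1, num2, num3 are placeholders for your own schema:

import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.sql.functions.{abs, col, desc}

// Read the CSV with a header row; inferSchema picks up the double columns.
val df = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("hdfs:///path/filename.csv")     // placeholder path

// MLlib expects the inputs assembled into a single vector column.
val assembler = new VectorAssembler()
  .setInputCols(Array("num1", "num2"))  // placeholder feature columns
  .setOutputCol("features")
val prepared = assembler.transform(df)

// Fit a plain linear regression with the remaining double column as label.
val lr = new LinearRegression()
  .setLabelCol("num3")                  // placeholder label column
  .setFeaturesCol("features")
val model = lr.fit(prepared)

// Rows whose predictions are far from the actual value are outlier candidates.
model.transform(prepared)
  .withColumn("residual", abs(col("num3") - col("prediction")))
  .orderBy(desc("residual"))
  .show(10)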
If your data is unlabelled, you have to use an unsupervised learning method. There you have algorithms like K-Means clustering, Hierarchical clustering, etc.
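Along the unsupervised route, a minimal K-Means sketch with the same placeholder CSV and column names might look like this; k = 3 is an arbitrary starting value you would tune for your data:

import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.feature.VectorAssembler

val df = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("hdfs:///path/filename.csv")     // placeholder path

// Cluster on the numeric columns only; names are placeholders.
val assembler = new VectorAssembler()
  .setInputCols(Array("num1", "num2", "num3"))
  .setOutputCol("features")
val features = assembler.transform(df)

val kmeans = new KMeans().setK(3).setSeed(1L).setFeaturesCol("features")
val model = kmeans.fit(features)

// Each row gets a cluster id in the "prediction" column; very small clusters
// are often worth inspecting as potential outliers.
val clustered = model.transform(features)
clustered.groupBy("prediction").count().show()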
The main part of solving any machine learning problem is understanding your data and choosing the right algorithm for it, so expect to spend most of your time on analysing the data and picking the algorithm.
Here are a few links for the concepts mentioned above. You can find these algorithms in Spark.
https://spark.apache.org/docs/latest/ml-guide.html
https://machinelearningmastery.com/classification-versus-regression-in-machine-learning/
https://machinelearningmastery.com/supervised-and-unsupervised-machine-learning-algorithms/
Happy machine learning 🙂
-Aditya