A Tour of Machine Learning Algorithms

Originally published by Jason Brownlee in 2013, this article remains a goldmine for machine learning professionals. The algorithms are broken down into several categories. Here we provide a high-level summary; a much longer and more detailed version can be found here. You can even download an algorithm map from the original article. Below is a much smaller version.


It would be interesting to list, for each algorithm,

  • examples of real world applications,
  • in which contexts it performs well,
  • if it can be used as a black box,
  • ease of use and interpretation,
  • how it handles missing data,
  • whether an enterprise version is available,
  • integration with existing analytics platforms or real-time systems,
  • constraints on data (e.g. Naive Bayes performs poorly on correlated variables),
  • maintenance/scalability issues,
  • distributed implementation,
  • speed or computational complexity,
  • whether it can easily be blended with other algorithms

For more on how machine learning detects anomalies, behavior, and sentiment, click here.

Bayesian Algorithms

  • Naive Bayes
  • Gaussian Naive Bayes
  • Multinomial Naive Bayes
  • Averaged One-Dependence Estimators (AODE)
  • Bayesian Belief Network (BBN)
  • Bayesian Network (BN)
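To make the Bayesian family concrete, here is a minimal sketch of Gaussian Naive Bayes using scikit-learn and the Iris dataset (this example is not part of the original article). Note the constraint mentioned above: Naive Bayes assumes features are conditionally independent, so strongly correlated variables can hurt it.

```python
# Minimal sketch: Gaussian Naive Bayes on the classic Iris dataset.
# Assumes scikit-learn is installed; not from the original article.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = GaussianNB()          # assumes conditional independence of features
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"Test accuracy: {accuracy:.2f}")
```

Despite the strong independence assumption, Naive Bayes is fast to train, easy to use as a near black box, and often a solid baseline on text and other high-dimensional data.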

Clustering Algorithms

  • k-Means
  • k-Medians
  • Expectation Maximisation (EM)
  • Hierarchical Clustering
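As a quick illustration of the clustering category, the following sketch runs k-Means on synthetic blob data with scikit-learn (again, an added example, not from the original article; the number of clusters must be chosen up front).

```python
# Minimal sketch: k-Means clustering on synthetic data.
# Assumes scikit-learn is installed; not from the original article.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Generate 300 points around 3 well-separated centers.
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

km = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = km.fit_predict(X)          # cluster index (0..2) for each point
print(km.cluster_centers_.shape)    # one centroid per cluster
```

k-Medians and Expectation Maximisation follow the same iterate-and-reassign pattern, substituting the median or soft (probabilistic) assignments for the mean update.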

Deep Learning Algorithms

  • Deep Boltzmann Machine (DBM)
  • Deep Belief Networks (DBN)
  • Convolutional Neural Network (CNN)
  • Stacked Auto-Encoders
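To hint at what an auto-encoder does without a deep learning framework, here is a hedged sketch of a single auto-encoder layer built from scikit-learn's generic `MLPRegressor` (a stand-in, not a true stacked auto-encoder, and not from the original article): the network is trained to reconstruct its own input through a narrow bottleneck.

```python
# Hedged sketch: a single auto-encoder layer approximated with
# MLPRegressor by fitting X -> X through a 2-unit bottleneck.
# A real stacked auto-encoder trains several such layers greedily.
# Assumes scikit-learn and NumPy are installed.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=200)  # redundant feature

ae = MLPRegressor(hidden_layer_sizes=(2,),  # the bottleneck
                  max_iter=2000, random_state=0)
ae.fit(X, X)                 # target equals input: learn to reconstruct
recon = ae.predict(X)        # reconstruction from the 2-d code
```

Deep Belief Networks and Deep Boltzmann Machines are likewise built by stacking simple building blocks (restricted Boltzmann machines) and training them layer by layer before fine-tuning.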


This article originally appeared on datasciencecentral.com. To read the full article, click here.