Prerequisites: None. Knowledge of some undergraduate-level mathematics would help but is not mandatory. A working knowledge of Python is helpful if you want to run the source code that is provided.
Taught by a Stanford-educated ex-Googler and an IIT/IIM-educated ex-Flipkart lead analyst. This team has decades of practical experience in quant trading, analytics and e-commerce.
This course is a down-to-earth, shy but confident take on machine learning techniques that you can put to work today.
Let’s parse that.
The course is down-to-earth: it makes everything as simple as possible, but not simpler.
The course is shy but confident: it is authoritative, drawn from decades of practical experience, but shies away from needlessly complicating things.
You can put ML to work today: if Machine Learning is a car, this course will have you driving today. It won't teach you what the carburetor is.
The course is very visual: most of the techniques are explained with the help of animations so you understand them better.
The course is practical as well: there are hundreds of lines of commented source code that you can use directly to implement natural language processing and machine learning tasks such as text summarization and text classification in Python.
The course is also quirky. The examples are irreverent. Lots of little touches: repetition, zooming out so we remember the big picture, active learning with plenty of quizzes. There’s also a peppy soundtrack, and art - all shown by studies to improve cognition and recall.
What's Covered:
Machine Learning:
Supervised/Unsupervised learning, Classification, Clustering, Association Detection, Anomaly Detection, Dimensionality Reduction, Regression.
Naive Bayes, K-Nearest Neighbours, Support Vector Machines, Artificial Neural Networks, K-Means, Hierarchical clustering, Principal Components Analysis, Linear regression, Logistic regression, Random variables, Bayes theorem, Bias-variance tradeoff
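To make the list above concrete, here is a minimal Python sketch of two of the listed techniques, Naive Bayes classification and K-Means clustering. It assumes the scikit-learn library and its built-in iris dataset, which are illustrative choices rather than the course's own code-alongs.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.cluster import KMeans

# Supervised learning: Gaussian Naive Bayes on the iris dataset
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
nb = GaussianNB().fit(X_train, y_train)
print("Naive Bayes accuracy:", round(nb.score(X_test, y_test), 3))

# Unsupervised learning: K-Means clustering of the same data into 3 clusters
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("Cluster sizes:", [int((km.labels_ == c).sum()) for c in range(3)])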
Natural Language Processing with Python:
Corpora, stopwords, sentence and word parsing, auto-summarization, sentiment analysis (as a special case of classification), TF-IDF, Document Distance, Text Summarization, Text classification with Naive Bayes and K-Nearest Neighbours and Clustering with K-Means
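As a taste of these topics, here is a minimal sketch of TF-IDF and document distance. It assumes scikit-learn and three made-up toy documents; the course itself works with real corpora.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "machine learning with python",
]

# Build TF-IDF vectors, dropping English stopwords
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)

# Document distance: similar documents have cosine similarity close to 1
similarity = cosine_similarity(tfidf)
print("doc 0 vs doc 1:", round(similarity[0, 1], 3))
print("doc 0 vs doc 2:", round(similarity[0, 2], 3))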
Sentiment Analysis:
Why it's useful, Approaches to solving - Rule-Based, ML-Based, Training, Feature Extraction, Sentiment Lexicons, Regular Expressions, Twitter API, Sentiment Analysis of Tweets with Python
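For a flavour of the rule-based approach, here is a minimal sketch that scores sentiment with a tiny hand-made lexicon and a regular-expression tokenizer. The word lists and scoring rule are illustrative assumptions; real sentiment lexicons are far larger and more nuanced.

import re

# Tiny illustrative sentiment lexicons; real lexicons are much larger
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def sentiment(text):
    # Tokenize with a simple regular expression
    words = re.findall(r"[a-z']+", text.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this course, it is great"))   # positive
print(sentiment("The traffic today was terrible"))    # negative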
Mitigating Overfitting with Ensemble Learning:
Decision trees and decision tree learning, Overfitting in decision trees, Techniques to mitigate overfitting (cross-validation, regularization), Ensemble learning and Random forests
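The contrast between a single, easily overfit decision tree and a random forest can be seen in a few lines. The sketch below assumes scikit-learn and its built-in breast-cancer dataset, and compares the two with 5-fold cross-validation; it is an illustration, not the course's own code.

from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# One fully grown decision tree tends to overfit the training data
tree = DecisionTreeClassifier(random_state=0)
# A random forest averages many randomized trees to reduce overfitting
forest = RandomForestClassifier(n_estimators=100, random_state=0)

print("Decision tree CV accuracy:", round(cross_val_score(tree, X, y, cv=5).mean(), 3))
print("Random forest CV accuracy:", round(cross_val_score(forest, X, y, cv=5).mean(), 3))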
Recommendations: Content-based filtering, Collaborative filtering and Association Rules learning
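As a small illustration of the collaborative-filtering idea, the sketch below computes item-to-item cosine similarities on a made-up user-item ratings matrix using NumPy; the data and the scoring are simplified assumptions.

import numpy as np

# Rows are users, columns are items; 0 means "not rated" (made-up data)
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Item-based collaborative filtering: how similar is item 0 to every item?
sims = [cosine(ratings[:, 0], ratings[:, j]) for j in range(ratings.shape[1])]
print("Items ranked by similarity to item 0:", np.argsort(sims)[::-1])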
Get started with Deep learning: Apply Multi-layer perceptrons to the MNIST Digit recognition problem
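Here is a minimal sketch of that exercise, assuming scikit-learn's MLPClassifier and its small 8x8 digits dataset as a stand-in for the full 28x28 MNIST images; the course's own deep-learning section may use a different stack.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# load_digits is a small 8x8 stand-in for the full 28x28 MNIST images
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A multi-layer perceptron with two hidden layers
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)
print("MLP test accuracy:", round(mlp.score(X_test, y_test), 3))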
A Note on Python: The code-alongs in this class all use Python 2.7. Source code (with copious amounts of comments) is attached as a resource with all the code-alongs. The source code has been provided for both Python 2 and Python 3 wherever possible.
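As a small, hypothetical illustration (not taken from the attached source), code can often be written to run unchanged under both Python 2.7 and Python 3 by using __future__ imports:

# Future imports make print and division behave the same in Python 2.7 and 3
from __future__ import print_function, division

print("7 / 2 =", 7 / 2)    # true division: 3.5 in both versions
print("7 // 2 =", 7 // 2)  # floor division: 3 in both versions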