

A minimum viable learning framework for self-learning AI (machine learning and deep learning) 
Thursday, October 21, 2021, 10:25 AM
Posted by Administrator
AI is a complex subject and hard to learn


Often, in the early stages, people make mistakes such as:

a) They try to learn everything

b) They do not know in which order to learn

c) They go too deep into one subtopic initially

Hence, I created a minimum viable learning framework for self-learning AI (machine learning and deep learning).

Because it is concise and minimal, it does not include topics like GANs, reinforcement learning, etc. It also does not cover Bayesian approaches in detail.

However, this syllabus should cover about 80 percent of what you need for a typical data science role.

Statistics

Central limit theorem

Sampling methods

Type I vs Type II errors

Selection bias

Non-Gaussian distributions

Bias variance tradeoff

Confusion matrix

Normal distribution

Correlation

Covariance

Point estimates and confidence interval

A/B testing

p-value

re-sampling

Methods to combat overfitting and underfitting

Treatment of outliers

Treatment of missing values

Confounding variables

Entropy and information gain

Cross validation
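
Many of the statistics topics above can be explored in a few lines of code. As a hedged sketch (pure Python, standard library only, with made-up parameters), here is a simulation of the central limit theorem: means of samples drawn from a uniform (non-Gaussian) distribution cluster around the population mean in a roughly normal shape.

```python
import random
import statistics

random.seed(42)

# Draw many samples from a uniform(0, 1) distribution and record each sample mean.
# The CLT says these means are approximately normal around the population mean 0.5.
sample_means = [
    statistics.mean(random.random() for _ in range(50))
    for _ in range(2000)
]

overall = statistics.mean(sample_means)
spread = statistics.stdev(sample_means)
print(f"mean of sample means: {overall:.3f}")  # close to 0.5
print(f"std of sample means:  {spread:.3f}")   # roughly sqrt(1/12) / sqrt(50)
```

Changing the inner sample size (50 here) shrinks or widens the spread of the means, which is the same idea behind standard errors and confidence intervals.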

Basic concepts

Difference between a validation set and a test set

Supervised learning

Unsupervised learning

Parameters vs. hyperparameters

Cost function

Regression

Linear regression

Assumptions required for linear regression

Limitations of linear regression
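
To make the regression items concrete, here is a minimal, hedged sketch of simple (one-variable) linear regression fitted by ordinary least squares, using only the standard library; the data points are made up for illustration.

```python
# Ordinary least squares for y = a + b*x with one feature.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]  # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope = covariance(x, y) / variance(x); intercept recovered from the means.
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

print(f"intercept={a:.2f}, slope={b:.2f}")
```

Note how the slope formula directly ties together two items from the statistics list: covariance and variance.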

Deep learning

What is the difference between machine learning and deep learning?

Basic working of Neural networks

Softmax

ReLU

Learning rate

Epoch / batch and iteration

The convolution operation

Layers of a CNN

Pooling operation

Kernels and Parameter sharing

Backpropagation

Gradient descent

Vanishing gradients

Activation functions

LSTM
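
A few of the deep-learning building blocks above are short enough to write out directly. As a hedged sketch in plain Python, here are ReLU and softmax (using the numerically stable form that subtracts the max before exponentiating):

```python
import math

def relu(x: float) -> float:
    """ReLU activation: passes positive values through, zeroes out negatives."""
    return max(0.0, x)

def softmax(logits: list[float]) -> list[float]:
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print([round(p, 3) for p in probs])  # largest logit gets the largest probability
```

Softmax typically sits at the output layer of a classifier, while ReLU is the default hidden-layer activation in most modern networks.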

Models

Regression

Classification

Logistic regression

SVM

Tree-based models

Clustering

PCA / Dimensionality reduction

MLP

CNN

Autoencoders

Regularization

Lasso

Ridge

Regularization in deep learning (e.g., dropout)
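
The effect of regularization is easy to see in the simplest possible case. As a hedged sketch (made-up data, one feature, no intercept), here is the closed-form ridge estimate, showing how a larger penalty shrinks the coefficient toward zero:

```python
def ridge_slope(xs, ys, lam):
    """Closed-form ridge estimate for y ≈ w*x (no intercept):
    w = sum(x*y) / (sum(x^2) + lambda). Larger lambda shrinks w toward 0."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # exactly y = 2x

for lam in (0.0, 1.0, 10.0):
    print(f"lambda={lam}: slope={ridge_slope(xs, ys, lam):.3f}")
```

With lambda = 0 this reduces to ordinary least squares; lasso behaves similarly but can shrink coefficients exactly to zero, which is why it is also used for feature selection.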

Ensemble methods

Boosting

Bagging

Optimization techniques

Matrix optimization techniques (in contrast to iterative methods such as gradient descent)

Gradient descent, including specific optimizers like Adam

Backpropagation
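
Gradient descent itself fits in a few lines. As a hedged sketch on a made-up one-dimensional objective, here it is minimizing f(w) = (w - 3)^2, whose minimum is at w = 3:

```python
def grad(w: float) -> float:
    """Gradient of f(w) = (w - 3)^2."""
    return 2.0 * (w - 3.0)

w = 0.0    # starting point
lr = 0.1   # learning rate

for _ in range(100):
    w -= lr * grad(w)  # step in the direction opposite the gradient

print(round(w, 4))  # converges toward the minimum at w = 3
```

Optimizers like Adam build on this same update by adapting the step size per parameter; in a neural network, backpropagation is what supplies the gradient for each weight.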

Statistical inference

Models

Parametric models

Non-Parametric models

Paradigms

Frequentist

Bayesian

Statistical proposition/outcome

A point estimate

An interval estimate

A credible interval

Rejection of a hypothesis

Clustering or classification of data points into groups.

Parameter Estimation techniques

Ordinary least squares estimation

Maximum likelihood estimators

Hyperparameter tuning techniques

Grid search

Random search
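
The two tuning strategies above can be contrasted in a toy sketch. This is hedged and illustrative only: the `score` function below is made up to stand in for a model's validation score, peaking at lr = 0.1, depth = 5.

```python
import itertools
import random

def score(lr: float, depth: int) -> float:
    """A made-up validation score peaking at lr=0.1, depth=5 (illustration only)."""
    return -((lr - 0.1) ** 2) - 0.01 * (depth - 5) ** 2

# Grid search: evaluate every combination on a fixed grid.
grid_lr = [0.01, 0.1, 1.0]
grid_depth = [3, 5, 7]
best = max(itertools.product(grid_lr, grid_depth), key=lambda p: score(*p))
print("grid search best:", best)

# Random search: sample combinations at random from wider ranges.
random.seed(0)
candidates = [(random.uniform(0.001, 1.0), random.randint(2, 10))
              for _ in range(20)]
best_random = max(candidates, key=lambda p: score(*p))
print("random search best:", best_random)
```

Grid search is exhaustive but grows combinatorially with each new hyperparameter; random search covers wide ranges with a fixed budget, which is why it often works better in practice.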

Feature Engineering

Feature Extraction (e.g., PCA)

Feature Transformation (e.g., binning, log transforms)

Feature Selection (e.g., filter methods, wrapper methods)

Model evaluation

Regression metrics

(R)MSE

MAE
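
Both regression metrics above are one-liners once you have the errors. A hedged sketch on made-up predictions:

```python
import math

y_true = [3.0, 5.0, 2.0, 7.0]
y_pred = [2.5, 5.0, 3.0, 6.0]

errors = [t - p for t, p in zip(y_true, y_pred)]
mse = sum(e * e for e in errors) / len(errors)   # mean squared error
rmse = math.sqrt(mse)                            # same units as the target
mae = sum(abs(e) for e in errors) / len(errors)  # mean absolute error

print(f"MSE={mse:.3f}  RMSE={rmse:.3f}  MAE={mae:.3f}")
```

Because squaring amplifies large errors, (R)MSE penalizes outliers more heavily than MAE; which one to report depends on how costly large misses are in your application.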



Classification metrics

Accuracy

Recall

Precision

F1 score

Confusion matrix
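
All four classification metrics above fall out of the confusion matrix. As a hedged sketch on made-up binary labels:

```python
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# The four cells of the binary confusion matrix.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)  # of predicted positives, how many were right
recall = tp / (tp + fn)     # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(f"acc={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
```

On imbalanced data, accuracy alone can be misleading, which is why precision, recall, and F1 are usually reported alongside it.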

I hope you find it useful.

By ajitjaokar







