ML
Definitions
- Conventions:
- [math]x_j^{(i)}[/math]: the value of feature j in the ith training example
- [math]x^{(i)}[/math]: the input (features) of the ith training example
- [math]m[/math]: the number of training examples
- [math]n[/math]: the number of features
Week1 - Basic Concepts of Machine Learning
Cost Function
Squared error function / Mean squared error: [math]J(θ)=\frac{1}{2m}\sum_{i=1}^m(h_θ(x^{(i)})-y^{(i)})^2[/math]
Cross entropy: [math]J(θ)=-\frac{1}{m}\sum_{i=1}^m[y^{(i)}*logh_θ(x^{(i)})+(1-y^{(i)})*log(1-h_θ(x^{(i)}))][/math]
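A minimal NumPy sketch of both cost functions, assuming h and y are length-m vectors of predictions and 0/1 labels (the function names are illustrative):
```python
import numpy as np

# Mean squared error: J(θ) = (1/2m) Σ (h - y)^2
def mse_cost(h, y):
    m = len(y)
    return np.sum((h - y) ** 2) / (2 * m)

# Cross entropy: J(θ) = -(1/m) Σ [y log(h) + (1-y) log(1-h)]
def cross_entropy_cost(h, y):
    # y holds 0/1 labels; h must lie strictly in (0, 1) for the logs to be finite
    m = len(y)
    return -np.sum(y * np.log(h) + (1 - y) * np.log(1 - h)) / m
```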
Gradient Descent
[math]θ_j:=θ_j-α\frac{∂}{∂θ_j}J(θ)[/math]
For a linear regression model the cost function is the mean squared error, so:
[math]\frac{∂}{∂θ_j}J(θ)= \frac{∂}{∂θ_j}(\frac{1}{2m}\sum_{i=1}^m(h_θ(x^{(i)})-y^{(i)})^2)[/math]
- [math]= \frac{1}{2m}\frac{∂}{∂θ_j}(\sum_{i=1}^m(h_θ(x^{(i)})-y^{(i)})^2)[/math]
- [math]= \frac{1}{2m}\sum_{i=1}^m( \frac{∂}{∂θ_j}(h_θ(x^{(i)})-y^{(i)})^2 )[/math]
- [math]= \frac{1}{m}\sum_{i=1}^m( (h_θ(x^{(i)})-y^{(i)}) \frac{∂}{∂θ_j}h_θ(x^{(i)}) ) // chain rule[/math]
- [math]= \frac{1}{m}\sum_{i=1}^m( (h_θ(x^{(i)})-y^{(i)}) \frac{∂}{∂θ_j}(θ^Tx^{(i)}) ) [/math]
- [math]= \frac{1}{m}\sum_{i=1}^m( (h_θ(x^{(i)})-y^{(i)}) \frac{∂}{∂θ_j}\sum_{k=0}^{n}x_k^{(i)}θ_k ) [/math]
Since only the k = j term of the inner sum depends on [math]θ_j[/math] (with [math]x_0^{(i)}=1[/math], this holds for every j, including j = 0):
- [math]= \frac{1}{m}\sum_{i=1}^m( (h_θ(x^{(i)})-y^{(i)}) x_j^{(i)} ) [/math]
- [math]= \frac{1}{m} (h_θ(x)-y)^T x_j [/math] // vectorized: x_j is the m-vector of feature j across all training examples
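Putting the update rule and this gradient together, here is a minimal NumPy sketch of batch gradient descent for linear regression; the m x (n+1) design-matrix layout with a leading column of ones is an assumption of this sketch:
```python
import numpy as np

# Batch gradient descent: θ_j := θ_j - (α/m) Σ (h_θ(x^{(i)}) - y^{(i)}) x_j^{(i)}
# Assumed layout: X is m x (n+1), one row per example, first column all ones.
def gradient_descent(X, y, alpha=0.01, iters=1000):
    m = X.shape[0]
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        h = X @ theta                                  # predictions for all m examples
        theta = theta - (alpha / m) * (X.T @ (h - y))  # simultaneous update of every θ_j
    return theta

# Toy usage: data generated by y = 1 + 2x, so θ should converge to ≈ [1, 2]
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
theta = gradient_descent(X, y, alpha=0.1, iters=5000)
```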
Week2 - Multivariate Linear Regression
Computing the Multivariate Linear Regression model
[math]h_θ(x) = θ_0x_0 + θ_1x_1 + θ_2x_2 + ... + θ_nx_n[/math]
- [math] = [θ_0x_0^{(1)}, θ_0x_0^{(2)}, ..., θ_0x_0^{(m)}] + [θ_1x_1^{(1)}, θ_1x_1^{(2)}, ..., θ_1x_1^{(m)}] + ... + [θ_nx_n^{(1)}, θ_nx_n^{(2)}, ..., θ_nx_n^{(m)}] [/math]
- [math] = [θ_0x_0^{(1)}+θ_1x_1^{(1)}+...+θ_nx_n^{(1)}, \ \ \ θ_0x_0^{(2)}+θ_1x_1^{(2)}+...+θ_nx_n^{(2)}, \ \ \ ..., \ \ \ θ_0x_0^{(m)}+θ_1x_1^{(m)}+...+θ_nx_n^{(m)}] [/math]
- [math] = θ^Tx[/math]
where
[math]
x=\begin{bmatrix}
x_0 \\
x_1 \\
x_2 \\
\vdots \\
x_n
\end{bmatrix}
= \begin{bmatrix}
x_0^{(1)} & x_0^{(2)} & \cdots & x_0^{(m)} \\
x_1^{(1)} & x_1^{(2)} & \cdots & x_1^{(m)} \\
x_2^{(1)} & x_2^{(2)} & \cdots & x_2^{(m)} \\
\vdots & \vdots & \ddots & \vdots \\
x_n^{(1)} & x_n^{(2)} & \cdots & x_n^{(m)} \\
\end{bmatrix}
,\quad
θ=\begin{bmatrix}
θ_0 \\
θ_1 \\
θ_2 \\
\vdots \\
θ_n
\end{bmatrix}
[/math]
- m is the number of training examples and n is the number of features (by convention, for convenience, we set [math]x_0^{(i)}=1,\ i=1,2,...,m[/math]).
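Under the column-per-example convention above, the hypothesis for all m examples is computed in a single product; a small sketch with toy sizes (all data here is illustrative):
```python
import numpy as np

# h_θ(x) = θ^T x for all m examples at once.
# Per the convention above: x is (n+1) x m with x_0^{(i)} = 1, θ is (n+1) x 1.
m, n = 4, 2                              # toy sizes, for illustration only
rng = np.random.default_rng(0)
x = np.vstack([np.ones((1, m)),          # row of ones: x_0^{(i)} = 1
               rng.normal(size=(n, m))]) # n feature rows, one column per example
theta = rng.normal(size=(n + 1, 1))

h = theta.T @ x                          # 1 x m row of predictions
```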
Data Normalization: Feature Scaling & Standard Normalization
[math]
x_i := \frac{x_i-μ_i}{s_i}
[/math]
where [math]μ_i[/math] is the mean of feature [math]x_i[/math] over the training data, and [math]s_i[/math] depends on the method:
- Feature Scaling: [math]s_i[/math] is the range of [math]x_i[/math] (max - min);
- Standard Normalization: [math]s_i[/math] is the standard deviation of [math]x_i[/math].
Note: after training a model with normalized features, the same normalization (using the training-set [math]μ_i[/math] and [math]s_i[/math]) must also be applied to the input features at prediction time, as in the sketch below.
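A small sketch of this workflow with toy numbers; the point is that μ and s are computed once from the training set and reused:
```python
import numpy as np

# Fit μ and s on the training features, reuse them at prediction time.
X_train = np.array([[2104.0, 3], [1416.0, 2], [1534.0, 3]])  # toy m x n data

mu = X_train.mean(axis=0)                # μ_i per feature
s = X_train.std(axis=0)                  # standard normalization; feature
                                         # scaling would use max - min instead
X_train_norm = (X_train - mu) / s        # normalized features used for training

x_new = np.array([1850.0, 4])
x_new_norm = (x_new - mu) / s            # SAME μ and s applied at prediction time
```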
Normal Equation
[math]θ = (X^TX)^{-1}X^Ty[/math]
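A one-line NumPy sketch of the normal equation on toy data; pinv is used instead of inv in case [math]X^TX[/math] is singular:
```python
import numpy as np

# Normal equation: θ = (X^T X)^{-1} X^T y, no iteration or feature scaling needed.
X = np.array([[1.0, 2104.0],             # m x (n+1) design matrix, x_0 = 1
              [1.0, 1416.0],
              [1.0, 1534.0],
              [1.0,  852.0]])
y = np.array([460.0, 232.0, 315.0, 178.0])

theta = np.linalg.pinv(X.T @ X) @ X.T @ y  # pinv tolerates a singular X^T X
```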
Week3 - Logistic Regression & Overfitting
Logistic Regression
Sigmoid Function
[math]h_θ(x)=g(θ^Tx)[/math]
[math]z = θ^Tx[/math]
[math]g(z) = \frac{1}{1+e^{-z}}[/math]
Cost Function
[math]J(θ)=-\frac{1}{m}\sum_{i=1}^m[y^{(i)}*logh_θ(x^{(i)})+(1-y^{(i)})*log(1-h_θ(x^{(i)}))][/math]
Vectorized form:
[math]
J(θ) = \frac{1}{m}( -y^Tlog(h) - (1-y)^Tlog(1-h) )
[/math]
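A minimal sketch of the sigmoid hypothesis and this vectorized cost; X as an m x (n+1) design matrix (rows are examples) is an assumed layout:
```python
import numpy as np

# g(z) = 1 / (1 + e^{-z})
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# J(θ) = (1/m)( -y^T log(h) - (1-y)^T log(1-h) ), with h = g(Xθ)
def logistic_cost(theta, X, y):
    m = len(y)
    h = sigmoid(X @ theta)               # predictions in (0, 1)
    return (-(y @ np.log(h)) - ((1 - y) @ np.log(1 - h))) / m
```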
Gradient Descent
[math]θ_j:=θ_j-α\frac{∂}{∂θ_j}J(θ)[/math]
- [math]= θ_j-\frac{α}{m}\sum_{i=1}^m( (h_θ(x^{(i)})-y^{(i)}) x_j^{(i)} ) [/math]
[math]\frac{∂}{∂θ_j}J(θ) = \frac{∂}{∂θ_j}\{-\frac{1}{m}\sum_{i=1}^m[y^{(i)}*logh_θ(x^{(i)})+(1-y^{(i)})*log(1-h_θ(x^{(i)}))]\}[/math]
- [math]=-\frac{1}{m}\sum_{i=1}^m\frac{∂}{∂θ_j}[y^{(i)}*logh_θ(x^{(i)})+(1-y^{(i)})*log(1-h_θ(x^{(i)}))][/math]
- where (log is the natural logarithm, so [math](log\,u)' = 1/u[/math]):
- [math]\frac{∂}{∂θ_j}[y^{(i)}*logh_θ(x^{(i)})] = y^{(i)}*\frac{1}{h_θ(x^{(i)})}*\frac{∂}{∂θ_j}h_θ(x^{(i)})[/math]
- and, with [math]z = θ^Tx^{(i)}[/math]: [math] \frac{∂}{∂θ_j}h_θ(x^{(i)}) = g'(z)*\frac{∂z}{∂θ_j} = ((1+e^{-z})^{-1})'*\frac{∂z}{∂θ_j}[/math]
- [math] = \frac{e^{-z}}{(1+e^{-z})^{2}}*\frac{∂}{∂θ_j}(θ^Tx^{(i)})[/math]
- [math] = g(z)*(1-g(z))*x_j^{(i)} = h_θ(x^{(i)})*(1-h_θ(x^{(i)}))*x_j^{(i)}[/math]
- similarly, [math]\frac{∂}{∂θ_j}[(1-y^{(i)})*log(1-h_θ(x^{(i)}))] = -(1-y^{(i)})*h_θ(x^{(i)})*x_j^{(i)}[/math]
- combining both terms: [math]\frac{∂}{∂θ_j}J(θ) = -\frac{1}{m}\sum_{i=1}^m[y^{(i)}*(1-h_θ(x^{(i)})) - (1-y^{(i)})*h_θ(x^{(i)})]*x_j^{(i)} = \frac{1}{m}\sum_{i=1}^m(h_θ(x^{(i)})-y^{(i)})*x_j^{(i)}[/math]
Vectorized form:
[math]
θ = θ - \frac{α}{m}X^T(g(Xθ) - \vec y)
[/math]
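The same update as a NumPy sketch; it is structurally identical to the linear-regression loop, only the hypothesis changes to g(Xθ) (the m x (n+1) design-matrix layout is again assumed):
```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# θ := θ - (α/m) X^T (g(Xθ) - y), all components updated simultaneously
def logistic_gradient_descent(X, y, alpha=0.1, iters=1000):
    m = X.shape[0]
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        theta = theta - (alpha / m) * (X.T @ (sigmoid(X @ theta) - y))
    return theta
```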
Addressing Overfitting
To counter overfitting in the hypothesis function, introduce a regularization parameter ([math]λ[/math]) into the cost function:
[math]J(θ)=\frac{1}{2m}[\sum_{i=1}^m(h_θ(x^{(i)})-y^{(i)})^2 + λ\sum_{j=1}^nθ_j^2][/math] (the penalty sum starts at j = 1, so [math]θ_0[/math] is not regularized).
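A sketch of this regularized cost; note that the bias term θ_0 is excluded from the penalty, matching the sum starting at j = 1:
```python
import numpy as np

# Regularized cost: J(θ) = (1/2m)[ Σ (h - y)^2 + λ Σ_{j>=1} θ_j^2 ]
def regularized_cost(theta, X, y, lam):
    m = len(y)
    err = X @ theta - y
    penalty = lam * np.sum(theta[1:] ** 2)   # θ_0 is not penalized
    return (err @ err + penalty) / (2 * m)
```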