ML

Definitions

Conventions:
[math]x_j^{(i)}[/math]: value of feature j in the ith training example
[math]x^{(i)}[/math]: the input (features) of the ith training example
[math]m[/math]: the number of training examples
[math]n[/math]: the number of features

Week1 - Basic Machine Learning Concepts

Cost Function

Squared error function / mean squared error: [math]J(θ)=\frac{1}{2m}\sum_{i=1}^m(h_θ(x^{(i)})-y^{(i)})^2[/math]
Cross entropy: [math]J(θ)=-\frac{1}{m}\sum_{i=1}^m[y^{(i)}*logh_θ(x^{(i)})+(1-y^{(i)})*log(1-h_θ(x^{(i)}))][/math]
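
A minimal NumPy sketch of the two cost functions above (the function names, array shapes, and the use of a design matrix X with a leading column of ones are assumptions for illustration, not part of the original notes):

<syntaxhighlight lang="python">
import numpy as np

def mse_cost(theta, X, y):
    """Mean squared error: J(theta) = 1/(2m) * sum((h - y)^2).

    X: (m, n+1) design matrix with a leading column of ones,
    theta: (n+1,) parameter vector, y: (m,) targets.
    """
    m = len(y)
    h = X @ theta                      # linear hypothesis h_theta(x) = theta^T x
    return np.sum((h - y) ** 2) / (2 * m)

def cross_entropy_cost(theta, X, y):
    """Cross entropy: J(theta) = -1/m * sum(y*log(h) + (1-y)*log(1-h))."""
    m = len(y)
    h = 1.0 / (1.0 + np.exp(-(X @ theta)))   # sigmoid hypothesis, used for classification
    return -np.sum(y * np.log(h) + (1 - y) * np.log(1 - h)) / m
</syntaxhighlight>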

Gradient Descent

[math]θ_j:=θ_j-α\frac{∂}{∂θ_j}J(θ)[/math]
For a linear regression model, the cost function is the mean squared error, so:
[math]\frac{∂}{∂θ_j}J(θ)= \frac{∂}{∂θ_j}(\frac{1}{2m}\sum_{i=1}^m(h_θ(x^{(i)})-y^{(i)})^2)[/math]

[math]= \frac{1}{2m}\frac{∂}{∂θ_j}(\sum_{i=1}^m(h_θ(x^{(i)})-y^{(i)})^2)[/math]
[math]= \frac{1}{2m}\sum_{i=1}^m( \frac{∂}{∂θ_j}(h_θ(x^{(i)})-y^{(i)})^2 )[/math]
[math]= \frac{1}{m}\sum_{i=1}^m( (h_θ(x^{(i)})-y^{(i)}) \frac{∂}{∂θ_j}h_θ(x^{(i)}) )[/math] //chain rule
[math]= \frac{1}{m}\sum_{i=1}^m( (h_θ(x^{(i)})-y^{(i)}) \frac{∂}{∂θ_j}x^{(i)}θ ) [/math]
[math]= \frac{1}{m}\sum_{i=1}^m( (h_θ(x^{(i)})-y^{(i)}) \frac{∂}{∂θ_j}\sum_{k=0}^{n}x_k^{(i)}θ_k ) [/math]

For j >= 1:

[math]= \frac{1}{m}\sum_{i=1}^m( (h_θ(x^{(i)})-y^{(i)}) x_j^{(i)} ) [/math]
[math]= \frac{1}{m} (h_θ(x)-y) x_{j} [/math]
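
The result above is the batch gradient descent update for linear regression; a hedged NumPy sketch (the learning rate alpha, iteration count, and zero initialization are illustrative assumptions):

<syntaxhighlight lang="python">
import numpy as np

def gradient_descent(X, y, alpha=0.01, num_iters=1000):
    """Batch gradient descent for linear regression.

    X: (m, n+1) design matrix (first column all ones), y: (m,) targets.
    Implements theta_j := theta_j - alpha/m * sum_i (h(x_i) - y_i) * x_ij
    for all j simultaneously.
    """
    m, n_plus_1 = X.shape
    theta = np.zeros(n_plus_1)
    for _ in range(num_iters):
        h = X @ theta                     # predictions for all m examples
        grad = (X.T @ (h - y)) / m        # vector of partial derivatives dJ/dtheta_j
        theta = theta - alpha * grad      # simultaneous update of every theta_j
    return theta
</syntaxhighlight>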

Week2 - Multivariate Linear Regression

Computing the Multivariate Linear Regression model

[math]h_θ(x) = θ_0x_0 + θ_1x_1 + θ_2x_2 + ... + θ_nx_n[/math]

[math] = [θ_0x_0^{(1)}, θ_0x_0^{(2)}, ..., θ_0x_0^{(m)}] + [θ_1x_1^{(1)}, θ_1x_1^{(2)}, ..., θ_1x_1^{(m)}] + ... + [θ_nx_n^{(1)}, θ_nx_n^{(2)}, ..., θ_nx_n^{(m)}] [/math]
[math] = [θ_0x_0^{(1)}+θ_1x_1^{(1)}+...+θ_nx_n^{(1)}, \ \ \ θ_0x_0^{(2)}+θ_1x_1^{(2)}+...+θ_nx_n^{(2)}, \ \ \ ..., \ \ \ θ_0x_0^{(m)}+θ_1x_1^{(m)}+...+θ_nx_n^{(m)}] [/math]
[math] = θ^Tx[/math]

where
[math] x=\begin{vmatrix} x_0 \\ x_1 \\ x_2 \\ ... \\ x_n \end{vmatrix} = \begin{vmatrix} x_0^{(1)} & x_0^{(2)} & ... & x_0^{(m)} \\ x_1^{(1)} & x_1^{(2)} & ... & x_1^{(m)} \\ x_2^{(1)} & x_2^{(2)} & ... & x_2^{(m)} \\ ... & ... & ... & ...\\ x_n^{(1)} & x_n^{(2)} & ... & x_n^{(m)} \\ \end{vmatrix} , θ=\begin{vmatrix} θ_0 \\ θ_1\\ θ_2\\ ...\\ θ_n \end{vmatrix} [/math]

m is the number of training examples and n is the number of features (for convenience, one usually sets [math]x_0^{(i)}=1, i=1,2,...,m[/math]).
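
A small sketch of the convention above, with training examples stored as the columns of x and a row of ones prepended as x_0; the concrete numbers and variable names are placeholders, not data from the notes:

<syntaxhighlight lang="python">
import numpy as np

# x_raw: (n, m) matrix, one training example per column (n = 2 features, m = 3 examples)
x_raw = np.array([[2104.0, 1416.0, 1534.0],
                  [5.0, 3.0, 3.0]])
x = np.vstack([np.ones(x_raw.shape[1]), x_raw])   # prepend x_0 = 1 -> shape (n+1, m)

theta = np.array([1.0, 0.5, -2.0])                # shape (n+1,)
h = theta @ x                                     # h_theta(x) = theta^T x, one prediction per column
print(h)                                          # three predicted values
</syntaxhighlight>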

Data Normalization: Feature Scaling & Standard Normalization

[math] x_i := \frac{x_i-μ_i}{s_i} [/math]
where [math]μ_i[/math] is the mean of feature [math]x_i[/math] over the training data, and [math]s_i[/math] depends on the method:

  • Feature Scaling: [math]s_i[/math] is the range of [math]x_i[/math] (max - min);
  • Standard Normalization: [math]s_i[/math] is the standard deviation of [math]x_i[/math].

Note in particular that after a model is trained with feature scaling, the input features must be normalized in the same way when making predictions.
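
A sketch of both normalization variants; returning mu and s so they can be reused on new inputs is exactly the point of the note above (function and parameter names are assumptions):

<syntaxhighlight lang="python">
import numpy as np

def fit_normalizer(X, method="standard"):
    """Compute per-feature mu and s from the training data.

    X: (m, n) matrix, one training example per row.
    method: "scaling" uses max - min, "standard" uses the standard deviation.
    """
    mu = X.mean(axis=0)
    if method == "scaling":
        s = X.max(axis=0) - X.min(axis=0)
    else:
        s = X.std(axis=0)
    return mu, s

def normalize(X, mu, s):
    """Apply x_i := (x_i - mu_i) / s_i using the training-set statistics."""
    return (X - mu) / s

# At prediction time, reuse the same mu and s on the new inputs:
# X_new_norm = normalize(X_new, mu, s)
</syntaxhighlight>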

Normal Equation

[math]θ = (X^TX)^{-1}X^Ty[/math]
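
The normal equation in one line of NumPy; using the pseudo-inverse instead of a plain inverse is my own choice for numerical robustness, not something the notes specify:

<syntaxhighlight lang="python">
import numpy as np

def normal_equation(X, y):
    """theta = (X^T X)^(-1) X^T y, with X an (m, n+1) design matrix and y an (m,) target vector."""
    return np.linalg.pinv(X.T @ X) @ X.T @ y
</syntaxhighlight>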

Week3 - Logistic Regression & Overfitting

Logistic Regression

Sigmoid Function

[math]h_θ(x)=g(θ^Tx)[/math]
[math]z = θ^Tx[/math]
[math]g(z) = \frac{1}{1+e^{-z}}[/math]
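
A direct sketch of the sigmoid hypothesis (function names are assumptions):

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    """g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def h(theta, x):
    """h_theta(x) = g(theta^T x)."""
    return sigmoid(theta @ x)
</syntaxhighlight>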

Cost Function

[math]J(θ)=-\frac{1}{m}\sum_{i=1}^m[y^{(i)}*logh_θ(x^{(i)})+(1-y^{(i)})*log(1-h_θ(x^{(i)}))][/math]
Vectorized form:
[math] J(θ) = \frac{1}{m}( -y^Tlog(h) - (1-y)^Tlog(1-h) ) [/math]
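
The vectorized cost in NumPy, assuming X is the (m, n+1) design matrix and y the (m,) label vector (a sketch, not code from the notes):

<syntaxhighlight lang="python">
import numpy as np

def logistic_cost(theta, X, y):
    """J(theta) = 1/m * ( -y^T log(h) - (1-y)^T log(1-h) ), with h = g(X theta)."""
    m = len(y)
    h = 1.0 / (1.0 + np.exp(-(X @ theta)))
    return (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m
</syntaxhighlight>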

Gradient Descent

[math]θ_j:=θ_j-α\frac{∂}{∂θ_j}J(θ)[/math]

[math]= θ_j-\frac{α}{m}\sum_{i=1}^m( (h_θ(x^{(i)})-y^{(i)}) x_j^{(i)} ) [/math]

The derivation is as follows:

[math]\frac{∂}{∂θ_j}J(θ) = \frac{∂}{∂θ_j}\{-\frac{1}{m}\sum_{i=1}^m[y^{(i)}*logh_θ(x^{(i)})+(1-y^{(i)})*log(1-h_θ(x^{(i)}))]\}[/math]
[math]=-\frac{1}{m}\sum_{i=1}^m\frac{∂}{∂θ_j}[y^{(i)}*logh_θ(x^{(i)})+(1-y^{(i)})*log(1-h_θ(x^{(i)}))][/math] (Eq. 1)
where
[math]\frac{∂}{∂θ_j}[y^{(i)}*logh_θ(x^{(i)})] = y^{(i)}*\frac{∂}{∂θ_j}[logh_θ(x^{(i)})] = \frac{y^{(i)}}{h_θ(x^{(i)})*ln(e)}*\frac{∂}{∂θ_j}h_θ(x^{(i)})[/math]
[math]\frac{∂}{∂θ_j}[(1-y^{(i)})*log(1-h_θ(x^{(i)}))] = (1-y^{(i)})*\frac{∂}{∂θ_j}[log(1-h_θ(x^{(i)}))] = \frac{(1-y^{(i)})}{(1-h_θ(x^{(i)}))*ln(e)}*\frac{∂}{∂θ_j}(1-h_θ(x^{(i)}))[/math]
Since [math] \frac{∂}{∂θ_j}(1-h_θ(x^{(i)})) = -\frac{∂}{∂θ_j}h_θ(x^{(i)})[/math], we have:
[math]\frac{∂}{∂θ_j}[y^{(i)}*logh_θ(x^{(i)})+(1-y^{(i)})*log(1-h_θ(x^{(i)}))] = \frac{y^{(i)}}{h_θ(x^{(i)})*ln(e)}*\frac{∂}{∂θ_j}h_θ(x^{(i)}) + \frac{(1-y^{(i)})}{(1-h_θ(x^{(i)}))*ln(e)}*\frac{∂}{∂θ_j}(1-h_θ(x^{(i)}))[/math]
[math] = \frac{y^{(i)}}{h_θ(x^{(i)})*ln(e)}*\frac{∂}{∂θ_j}h_θ(x^{(i)}) - \frac{(1-y^{(i)})}{(1-h_θ(x^{(i)}))*ln(e)}*\frac{∂}{∂θ_j}h_θ(x^{(i)})[/math]
[math] = (\frac{y^{(i)}}{h_θ(x^{(i)})*ln(e)}- \frac{(1-y^{(i)})}{(1-h_θ(x^{(i)}))*ln(e)})*\frac{∂}{∂θ_j}h_θ(x^{(i)}) [/math]
[math] = \frac{y^{(i)}-h_θ(x^{(i)})}{h_θ(x^{(i)})*(1-h_θ(x^{(i)}))*ln(e)}*\frac{∂}{∂θ_j}h_θ(x^{(i)}) [/math] //substituting [math]h_θ(x^{(i)})=g(z)=\frac{1}{1+e^{-z}}[/math]
[math] = \frac{y^{(i)}*(1+e^{-z})^2-(1+e^{-z})}{e^{-z}*ln(e)} * \frac{∂}{∂θ_j}h_θ(x^{(i)}) [/math]
[math] = \frac{y^{(i)}*(1+e^{-z})^2-(1+e^{-z})}{e^{-z}} * \frac{∂}{∂θ_j}h_θ(x^{(i)}) [/math] (Eq. 2)


[math] \frac{∂}{∂θ_j}h_θ(x^{(i)}) = g'(z)*z'(θ^Tx^{(i)}) = (\frac{1}{1+e^{-z}})'*z'(θ^Tx^{(i)})[/math]
[math] = ((1+e^{-z})^{-1})'*z'(θ^Tx^{(i)})[/math]
[math] = \frac{e^{-z}}{(1+e^{-z})^{2}}*z'(θ^Tx^{(i)})[/math]
[math] = \frac{e^{-z}}{(1+e^{-z})^{2}}*\frac{∂}{∂θ_j}(θ^Tx^{(i)})[/math]
[math] = \frac{e^{-z}}{(1+e^{-z})^{2}}*\frac{∂}{∂θ_j}(θ_0*x_0^{(i)} + θ_1*x_1^{(i)} + θ_2*x_2^{(i)} +...+ θ_j*x_j^{(i)} +...+ θ_n*x_n^{(i)} )[/math]
[math] = \frac{e^{-z}}{(1+e^{-z})^{2}}*x_j^{(i)}[/math] (Eq. 3)
Substituting (Eq. 3) into (Eq. 2):
[math]\frac{∂}{∂θ_j}[y^{(i)}*logh_θ(x^{(i)})+(1-y^{(i)})*log(1-h_θ(x^{(i)}))] = (y^{(i)} - \frac{1}{1+e^{-z}})*x_j^{(i)}[/math]
[math] = (y^{(i)} - h_θ(x^{(i)}))*x_j^{(i)}[/math] (Eq. 4)
Substituting (Eq. 4) into (Eq. 1) gives the gradient, and hence the update rule:
[math]θ_j:= θ_j-\frac{α}{m}\sum_{i=1}^m( (h_θ(x^{(i)})-y^{(i)}) x_j^{(i)} ) [/math]


Vectorized form:
[math] θ = θ - \frac{α}{m}X^T(g(Xθ) - \vec y) [/math]
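
The vectorized update as a single NumPy step (a sketch; alpha is assumed to be a given learning rate):

<syntaxhighlight lang="python">
import numpy as np

def logistic_gradient_step(theta, X, y, alpha):
    """theta := theta - alpha/m * X^T (g(X theta) - y)."""
    m = len(y)
    g = 1.0 / (1.0 + np.exp(-(X @ theta)))   # predictions for all m examples
    return theta - (alpha / m) * (X.T @ (g - y))
</syntaxhighlight>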

Addressing Overfitting

To keep the hypothesis function from overfitting, introduce a regularization parameter ([math]λ[/math]) into the cost function:
[math]J(θ)=\frac{1}{2m}\sum_{i=1}^m(h_θ(x^{(i)})-y^{(i)})^2 + λ\sum_{j=1}^nθ_j^2[/math]
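
A sketch of the regularized cost exactly as written above, keeping λ outside the 1/(2m) factor and leaving θ_0 unpenalized (the regularization sum starts at j = 1); the function and parameter names are assumptions:

<syntaxhighlight lang="python">
import numpy as np

def regularized_cost(theta, X, y, lam):
    """J = 1/(2m) * sum((h - y)^2) + lambda * sum(theta_j^2 for j >= 1)."""
    m = len(y)
    h = X @ theta
    return np.sum((h - y) ** 2) / (2 * m) + lam * np.sum(theta[1:] ** 2)
</syntaxhighlight>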

Week4 - Neural Networks

[Figure Neural_netorwk.png: a three-layer network with inputs x_1, x_2, x_3 plus a bias unit x_0, a hidden layer a_1^(2)..a_3^(2), and a single output a_1^(3)]

For the neural network above, the activations of each layer are computed as follows (a code sketch follows below):
[math]a_1^{(2)} = g( θ_{10}^{(1)}x_0 + θ_{11}^{(1)}x_1 + θ_{12}^{(1)}x_2 + θ_{13}^{(1)}x_3 )[/math]
[math]a_2^{(2)} = g( θ_{20}^{(1)}x_0 + θ_{21}^{(1)}x_1 + θ_{22}^{(1)}x_2 + θ_{23}^{(1)}x_3 )[/math]
[math]a_3^{(2)} = g( θ_{30}^{(1)}x_0 + θ_{31}^{(1)}x_1 + θ_{32}^{(1)}x_2 + θ_{33}^{(1)}x_3 )[/math]
[math]h_θ(x) = a_1^{(3)} = g( θ_{10}^{(2)}a_0^{(2)} + θ_{11}^{(2)}a_1^{(2)} + θ_{12}^{(2)}a_2^{(2)} + θ_{13}^{(2)}a_3^{(2)} )[/math]
  • If a neural network has [math]s_j[/math] units in layer [math]j[/math] and [math]s_{j+1}[/math] units in layer [math]j+1[/math], then [math]θ^{(j)}[/math] is a matrix of dimension [math]s_{j+1} \times (s_j+1)[/math].
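
A forward-propagation sketch of the layer equations above; the weight matrices follow the dimension rule in the bullet (θ^(1) is 3x4, θ^(2) is 1x4), and the zero weights are placeholders only:

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, theta1, theta2):
    """Forward propagation for the three-layer network above.

    x: (3,) input features; theta1: (3, 4) weights mapping layer 1 -> 2;
    theta2: (1, 4) weights mapping layer 2 -> 3 (each of size s_{j+1} x (s_j + 1)).
    """
    a1 = np.concatenate(([1.0], x))      # add bias unit x_0 = 1
    a2 = sigmoid(theta1 @ a1)            # hidden activations a_1^(2)..a_3^(2)
    a2 = np.concatenate(([1.0], a2))     # add bias unit a_0^(2) = 1
    a3 = sigmoid(theta2 @ a2)            # output h_theta(x) = a_1^(3)
    return a3

theta1 = np.zeros((3, 4))                # placeholder weights
theta2 = np.zeros((1, 4))
print(forward(np.array([1.0, 2.0, 3.0]), theta1, theta2))   # -> [0.5]
</syntaxhighlight>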