===Gradient Descent===
 
<math>J(\theta)=-\frac{1}{m}\sum_{i=1}^m\left[y^{(i)}\log h_\theta(x^{(i)})+(1-y^{(i)})\log\left(1-h_\theta(x^{(i)})\right)\right]</math>
 
 
<math>\theta_j:=\theta_j-\alpha\frac{\partial}{\partial\theta_j}J(\theta)</math>

:<math>= \theta_j-\frac{\alpha}{m}\sum_{i=1}^m\left(h_\theta(x^{(i)})-y^{(i)}\right)x_j^{(i)}</math>
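For reference, below is a minimal NumPy sketch of batch gradient descent with this cost and update rule. The helper names (<code>sigmoid</code>, <code>cost</code>, <code>gradient_descent_step</code>) and the toy data are illustrative, not from the course materials.

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    # J(theta) = -1/m * sum[ y*log(h) + (1-y)*log(1-h) ]
    h = sigmoid(X @ theta)
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

def gradient_descent_step(theta, X, y, alpha):
    # theta_j := theta_j - alpha/m * sum_i (h(x_i) - y_i) * x_ij, for all j at once
    m = len(y)
    h = sigmoid(X @ theta)
    return theta - (alpha / m) * (X.T @ (h - y))

# X has a leading column of ones (x_0 = 1); theta has n+1 entries.
X = np.array([[1.0, 0.5], [1.0, 2.3], [1.0, -1.1], [1.0, 3.0]])
y = np.array([0.0, 1.0, 0.0, 1.0])
theta = np.zeros(2)
for _ in range(1000):
    theta = gradient_descent_step(theta, X, y, alpha=0.1)
print(cost(theta, X, y), theta)
</syntaxhighlight>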
  
The derivation of the update rule above is as follows:

:::<math>\frac{\partial}{\partial\theta_j}J(\theta) = \frac{\partial}{\partial\theta_j}\left\{-\frac{1}{m}\sum_{i=1}^m\left[y^{(i)}\log h_\theta(x^{(i)})+(1-y^{(i)})\log\left(1-h_\theta(x^{(i)})\right)\right]\right\}</math>

::::::<math>=-\frac{1}{m}\sum_{i=1}^m\frac{\partial}{\partial\theta_j}\left[y^{(i)}\log h_\theta(x^{(i)})+(1-y^{(i)})\log\left(1-h_\theta(x^{(i)})\right)\right]</math> (Eq. 1)

:::where (the logarithm is the natural log, so the <math>\ln(e)</math> factor from differentiating it equals 1):

::::<math>\frac{\partial}{\partial\theta_j}\left[y^{(i)}\log h_\theta(x^{(i)})\right] = y^{(i)}\frac{\partial}{\partial\theta_j}\left[\log h_\theta(x^{(i)})\right] = \frac{y^{(i)}}{h_\theta(x^{(i)})\ln(e)}\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math>

::::<math>\frac{\partial}{\partial\theta_j}\left[(1-y^{(i)})\log\left(1-h_\theta(x^{(i)})\right)\right] = (1-y^{(i)})\frac{\partial}{\partial\theta_j}\left[\log\left(1-h_\theta(x^{(i)})\right)\right] = \frac{1-y^{(i)}}{\left(1-h_\theta(x^{(i)})\right)\ln(e)}\frac{\partial}{\partial\theta_j}\left(1-h_\theta(x^{(i)})\right)</math>

:::Since <math>\frac{\partial}{\partial\theta_j}\left(1-h_\theta(x^{(i)})\right) = -\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math>, we have:

::::<math>\frac{\partial}{\partial\theta_j}\left[y^{(i)}\log h_\theta(x^{(i)})+(1-y^{(i)})\log\left(1-h_\theta(x^{(i)})\right)\right] = \frac{y^{(i)}}{h_\theta(x^{(i)})\ln(e)}\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)}) + \frac{1-y^{(i)}}{\left(1-h_\theta(x^{(i)})\right)\ln(e)}\frac{\partial}{\partial\theta_j}\left(1-h_\theta(x^{(i)})\right)</math>

::::::<math> = \frac{y^{(i)}}{h_\theta(x^{(i)})\ln(e)}\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)}) - \frac{1-y^{(i)}}{\left(1-h_\theta(x^{(i)})\right)\ln(e)}\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math>

::::::<math> = \left(\frac{y^{(i)}}{h_\theta(x^{(i)})\ln(e)} - \frac{1-y^{(i)}}{\left(1-h_\theta(x^{(i)})\right)\ln(e)}\right)\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math>

::::::<math> = \frac{y^{(i)}-h_\theta(x^{(i)})}{h_\theta(x^{(i)})\left(1-h_\theta(x^{(i)})\right)\ln(e)}\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math> (substituting <math>h_\theta(x^{(i)})=g(z)=\frac{1}{1+e^{-z}}</math>)

::::::<math> = \frac{y^{(i)}(1+e^{-z})^2-(1+e^{-z})}{e^{-z}\ln(e)}\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math>

::::::<math> = \frac{y^{(i)}(1+e^{-z})^2-(1+e^{-z})}{e^{-z}}\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math> (using <math>\ln(e)=1</math>) (Eq. 2)
  
  
::::Here, with <math>z=\theta^Tx^{(i)}</math>, the chain rule gives <math>\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)}) = g'(z)\frac{\partial z}{\partial\theta_j} = \left(\frac{1}{1+e^{-z}}\right)'\frac{\partial z}{\partial\theta_j}</math>

::::::<math> = \left((1+e^{-z})^{-1}\right)'\frac{\partial z}{\partial\theta_j}</math>

::::::<math> = \frac{e^{-z}}{(1+e^{-z})^{2}}\frac{\partial z}{\partial\theta_j}</math>

::::::<math> = \frac{e^{-z}}{(1+e^{-z})^{2}}\frac{\partial}{\partial\theta_j}\left(\theta_0 x_0^{(i)} + \theta_1 x_1^{(i)} + \theta_2 x_2^{(i)} + \dots + \theta_j x_j^{(i)} + \dots + \theta_n x_n^{(i)}\right)</math>

::::::<math> = \frac{e^{-z}}{(1+e^{-z})^{2}}\,x_j^{(i)}</math> (Eq. 3)
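As a quick sanity check on the sigmoid derivative in Eq. 3, the closed form <math>\frac{e^{-z}}{(1+e^{-z})^2}</math> can be compared against a finite-difference approximation; a small sketch assuming NumPy, with illustrative names:

<syntaxhighlight lang="python">
import numpy as np

def g(z):
    # sigmoid: g(z) = 1 / (1 + e^{-z})
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-5, 5, 11)
analytic = np.exp(-z) / (1.0 + np.exp(-z)) ** 2       # e^{-z} / (1+e^{-z})^2, as in Eq. 3
eps = 1e-6
numeric = (g(z + eps) - g(z - eps)) / (2 * eps)       # central finite difference
print(np.max(np.abs(analytic - numeric)))             # should be ~0 (floating-point noise)
print(np.max(np.abs(analytic - g(z) * (1 - g(z)))))   # same quantity written as g(z)*(1-g(z))
</syntaxhighlight>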
::::Substituting Eq. 3 into Eq. 2:

::::::<math>\frac{\partial}{\partial\theta_j}\left[y^{(i)}\log h_\theta(x^{(i)})+(1-y^{(i)})\log\left(1-h_\theta(x^{(i)})\right)\right] = \left(y^{(i)} - \frac{1}{1+e^{-z}}\right)x_j^{(i)}</math>

::::::::<math> = \left(y^{(i)} - h_\theta(x^{(i)})\right)x_j^{(i)}</math> (Eq. 4)

::::Substituting Eq. 4 into Eq. 1 gives <math>\frac{\partial}{\partial\theta_j}J(\theta) = \frac{1}{m}\sum_{i=1}^m\left(h_\theta(x^{(i)})-y^{(i)}\right)x_j^{(i)}</math>, and therefore:

::::::<math>\theta_j:= \theta_j-\frac{\alpha}{m}\sum_{i=1}^m\left(h_\theta(x^{(i)})-y^{(i)}\right)x_j^{(i)}</math>
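The end result of the derivation, <math>\frac{\partial}{\partial\theta_j}J(\theta) = \frac{1}{m}\sum_{i=1}^m\left(h_\theta(x^{(i)})-y^{(i)}\right)x_j^{(i)}</math>, can also be verified numerically against finite differences of <math>J(\theta)</math>; a minimal sketch assuming NumPy, with illustrative names and random toy data:

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    h = sigmoid(X @ theta)
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

def analytic_grad(theta, X, y):
    # dJ/dtheta_j = 1/m * sum_i (h(x_i) - y_i) * x_ij  (Eq. 4 substituted into Eq. 1)
    m = len(y)
    return (X.T @ (sigmoid(X @ theta) - y)) / m

rng = np.random.default_rng(0)
X = np.hstack([np.ones((20, 1)), rng.normal(size=(20, 2))])  # leading column of ones (x_0 = 1)
y = (rng.random(20) > 0.5).astype(float)
theta = rng.normal(size=3)

eps = 1e-6
numeric = np.array([
    (cost(theta + eps * e, X, y) - cost(theta - eps * e, X, y)) / (2 * eps)
    for e in np.eye(3)
])
print(np.max(np.abs(numeric - analytic_grad(theta, X, y))))  # should be tiny (finite-difference noise)
</syntaxhighlight>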
  
  
 
For the hypothesis function, a '''regularization parameter''' (<math>\lambda</math>) is introduced into the cost function:

<math>J(\theta)=\frac{1}{2m}\left[\sum_{i=1}^m\left(h_\theta(x^{(i)})-y^{(i)}\right)^2 + \lambda\sum_{j=1}^n\theta_j^2\right]</math>
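A minimal NumPy sketch of this regularized cost, assuming the squared-error (linear-hypothesis) form above and the convention that <math>\theta_0</math> is not regularized (the penalty sum starts at <math>j=1</math>); names and toy data are illustrative:

<syntaxhighlight lang="python">
import numpy as np

def regularized_cost(theta, X, y, lam):
    """J(theta) = 1/(2m) * [ sum_i (h(x_i) - y_i)^2 + lambda * sum_{j>=1} theta_j^2 ]"""
    m = len(y)
    residual = X @ theta - y                  # h_theta(x) = theta^T x (linear hypothesis)
    penalty = lam * np.sum(theta[1:] ** 2)    # theta_0 (bias term) is not penalized
    return (residual @ residual + penalty) / (2 * m)

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # leading column of ones (x_0 = 1)
y = np.array([1.1, 1.9, 3.2])
print(regularized_cost(np.array([0.1, 0.9]), X, y, lam=1.0))
</syntaxhighlight>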
=Week4 - Neural Networks=

[[文件:Neural_netorwk.png|400px]]

:For the neural network shown above, the activations of each layer are computed as follows:

::<math>a_1^{(2)} = g\left( \Theta_{10}^{(1)}x_0 + \Theta_{11}^{(1)}x_1 + \Theta_{12}^{(1)}x_2 + \Theta_{13}^{(1)}x_3 \right)</math>

::<math>a_2^{(2)} = g\left( \Theta_{20}^{(1)}x_0 + \Theta_{21}^{(1)}x_1 + \Theta_{22}^{(1)}x_2 + \Theta_{23}^{(1)}x_3 \right)</math>

::<math>a_3^{(2)} = g\left( \Theta_{30}^{(1)}x_0 + \Theta_{31}^{(1)}x_1 + \Theta_{32}^{(1)}x_2 + \Theta_{33}^{(1)}x_3 \right)</math>

::<math>h_\Theta(x) = a_1^{(3)} = g\left( \Theta_{10}^{(2)}a_0^{(2)} + \Theta_{11}^{(2)}a_1^{(2)} + \Theta_{12}^{(2)}a_2^{(2)} + \Theta_{13}^{(2)}a_3^{(2)} \right)</math>

*If a neural network has <math>s_j</math> units in layer <math>j</math> and <math>s_{j+1}</math> units in layer <math>j+1</math>, then <math>\Theta^{(j)}</math> is a matrix of dimension <math>s_{j+1} \times (s_j + 1)</math>; a forward-propagation sketch is given below.
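As referenced above, here is a minimal NumPy sketch of forward propagation for this architecture (3 inputs plus bias, one hidden layer of 3 units plus bias, 1 output unit); the weights are random placeholders and the function names are illustrative:

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, Theta1, Theta2):
    """Forward propagation for the 3-input / 3-hidden-unit / 1-output network above."""
    a1 = np.concatenate(([1.0], x))      # add bias unit x_0 = 1
    a2 = sigmoid(Theta1 @ a1)            # hidden layer activations a^{(2)}
    a2 = np.concatenate(([1.0], a2))     # add bias unit a_0^{(2)} = 1
    return sigmoid(Theta2 @ a2)          # output h_Theta(x) = a^{(3)}

# Theta^{(1)} is s_2 x (s_1 + 1) = 3 x 4, Theta^{(2)} is s_3 x (s_2 + 1) = 1 x 4
rng = np.random.default_rng(0)
Theta1 = rng.normal(size=(3, 4))
Theta2 = rng.normal(size=(1, 4))
print(forward(np.array([0.2, -0.5, 1.3]), Theta1, Theta2))
</syntaxhighlight>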
