=Definitions=

Conventions:

::<math>x_j^{(i)}</math>: value of feature <math>j</math> in the <math>i</math>-th training example
::<math>x^{(i)}</math>: the input (features) of the <math>i</math>-th training example
::<math>m</math>: the number of training examples
::<math>n</math>: the number of features

=Week1 - Machine Learning Basics=

==Cost Function==

Squared error function / mean squared error: <math>J(\theta)=\frac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2</math>

Cross entropy: <math>J(\theta)=-\frac{1}{m}\sum_{i=1}^m[y^{(i)}\log h_\theta(x^{(i)})+(1-y^{(i)})\log(1-h_\theta(x^{(i)}))]</math>
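A minimal NumPy sketch of the two cost functions (an illustrative addition, not from the course; `h` holds the predictions <math>h_\theta(x^{(i)})</math>, `y` the labels, and the data is made up):

<syntaxhighlight lang="python">
import numpy as np

def mse_cost(h, y):
    """Mean squared error: J = 1/(2m) * sum((h - y)^2)."""
    m = len(y)
    return np.sum((h - y) ** 2) / (2 * m)

def cross_entropy_cost(h, y):
    """Cross entropy: J = -1/m * sum(y*log(h) + (1-y)*log(1-h))."""
    m = len(y)
    return -np.sum(y * np.log(h) + (1 - y) * np.log(1 - h)) / m

h = np.array([0.9, 0.2, 0.7])   # predictions in (0, 1)
y = np.array([1.0, 0.0, 1.0])   # binary labels
print(mse_cost(h, y), cross_entropy_cost(h, y))
</syntaxhighlight>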

==Gradient Descent==

<math>\theta_j := \theta_j - \alpha\frac{\partial}{\partial\theta_j}J(\theta)</math>

For the '''linear regression model''', whose cost function is the mean squared error, this gives:

<math>\frac{\partial}{\partial\theta_j}J(\theta) = \frac{\partial}{\partial\theta_j}\left(\frac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2\right)</math>

:<math>= \frac{1}{2m}\frac{\partial}{\partial\theta_j}\left(\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2\right)</math>

:<math>= \frac{1}{2m}\sum_{i=1}^m\left(\frac{\partial}{\partial\theta_j}(h_\theta(x^{(i)})-y^{(i)})^2\right)</math>

:<math>= \frac{1}{m}\sum_{i=1}^m\left((h_\theta(x^{(i)})-y^{(i)})\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})\right)</math> (chain rule; the factor 2 cancels the <math>\frac{1}{2}</math>)

:<math>= \frac{1}{m}\sum_{i=1}^m\left((h_\theta(x^{(i)})-y^{(i)})\frac{\partial}{\partial\theta_j}\theta^Tx^{(i)}\right)</math>

:<math>= \frac{1}{m}\sum_{i=1}^m\left((h_\theta(x^{(i)})-y^{(i)})\frac{\partial}{\partial\theta_j}\sum_{k=0}^n x_k^{(i)}\theta_k\right)</math>

For <math>j \ge 1</math>:

:<math>= \frac{1}{m}\sum_{i=1}^m\left((h_\theta(x^{(i)})-y^{(i)})x_j^{(i)}\right)</math>

:<math>= \frac{1}{m}(h_\theta(x)-y)x_j</math>
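The resulting batch update, as a minimal NumPy sketch (illustrative; it assumes `X` is the <math>m\times(n+1)</math> design matrix with <math>x_0=1</math> prepended and `y` the target vector):

<syntaxhighlight lang="python">
import numpy as np

def gradient_descent(X, y, alpha=0.1, iters=1000):
    """Batch gradient descent for linear regression.

    theta_j := theta_j - alpha/m * sum((h(x^(i)) - y^(i)) * x_j^(i)),
    computed for all j at once via X^T (X theta - y).
    """
    m, n1 = X.shape
    theta = np.zeros(n1)
    for _ in range(iters):
        grad = X.T @ (X @ theta - y) / m   # 1/m * sum((h - y) x_j)
        theta -= alpha * grad
    return theta

# Toy data: y = 1 + 2x, with the column x_0 = 1 prepended
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = 1 + 2 * x
print(gradient_descent(X, y))   # approaches [1, 2]
</syntaxhighlight>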

=Week2 - Multivariate Linear Regression=

==Computing the Multivariate Linear Regression Model==

<math>h_\theta(x) = \theta_0x_0 + \theta_1x_1 + \theta_2x_2 + \ldots + \theta_nx_n</math>

::<math>= [\theta_0x_0^{(1)}, \theta_0x_0^{(2)}, \ldots, \theta_0x_0^{(m)}] + [\theta_1x_1^{(1)}, \theta_1x_1^{(2)}, \ldots, \theta_1x_1^{(m)}] + \ldots + [\theta_nx_n^{(1)}, \theta_nx_n^{(2)}, \ldots, \theta_nx_n^{(m)}]</math>

::<math>= [\theta_0x_0^{(1)}+\theta_1x_1^{(1)}+\ldots+\theta_nx_n^{(1)},\ \ \theta_0x_0^{(2)}+\theta_1x_1^{(2)}+\ldots+\theta_nx_n^{(2)},\ \ \ldots,\ \ \theta_0x_0^{(m)}+\theta_1x_1^{(m)}+\ldots+\theta_nx_n^{(m)}]</math>

::<math>= \theta^Tx</math>

where

:<math>x = \begin{vmatrix} x_0 \\ x_1 \\ x_2 \\ \vdots \\ x_n \end{vmatrix} = \begin{vmatrix} x_0^{(1)} & x_0^{(2)} & \ldots & x_0^{(m)} \\ x_1^{(1)} & x_1^{(2)} & \ldots & x_1^{(m)} \\ x_2^{(1)} & x_2^{(2)} & \ldots & x_2^{(m)} \\ \vdots & \vdots & \ddots & \vdots \\ x_n^{(1)} & x_n^{(2)} & \ldots & x_n^{(m)} \end{vmatrix}, \quad \theta = \begin{vmatrix} \theta_0 \\ \theta_1 \\ \theta_2 \\ \vdots \\ \theta_n \end{vmatrix}</math>

:<math>m</math> is the number of training examples and <math>n</math> the number of features (usually, for convenience, we set <math>x_0^{(i)}=1,\ i=1,2,\ldots,m</math>).
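A small illustrative sketch of this vectorized computation, using the <math>(n+1)\times m</math> layout above (data is made up):

<syntaxhighlight lang="python">
import numpy as np

m, n = 4, 2                               # 4 training examples, 2 features
rng = np.random.default_rng(0)
x = np.vstack([np.ones(m),                # prepend the row x_0 = 1 ...
               rng.normal(size=(n, m))])  # ... above the (n, m) feature matrix
theta = np.array([0.5, 1.0, -2.0])        # (n+1,) parameter vector

h = theta.T @ x   # theta^T x: all m predictions in a single product
print(h)          # one h_theta(x^(i)) per training example
</syntaxhighlight>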

==Normalization: Feature Scaling & Standard Normalization==

<math>x_i := \frac{x_i-\mu_i}{s_i}</math>

where <math>\mu_i</math> is the mean of feature <math>x_i</math>, and <math>s_i</math> depends on the method:

* '''Feature scaling''': <math>s_i</math> is the range of <math>x_i</math> (max - min);
* '''Standard normalization''': <math>s_i</math> is the standard deviation of <math>x_i</math>.

Note in particular that after a model is trained on scaled features, the same normalization must be applied to the input features at prediction time; see the sketch below.
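A minimal sketch of both variants (illustrative; the `normalize` helper and the housing-style numbers are made up):

<syntaxhighlight lang="python">
import numpy as np

def normalize(X, method="standard"):
    """Scale each feature (column) to (x - mu) / s.

    method="scaling":  s = max - min            (feature scaling)
    method="standard": s = standard deviation   (standard normalization)
    Returns mu and s as well: they must be reused unchanged on any
    input fed to the model at prediction time.
    """
    mu = X.mean(axis=0)
    s = X.max(axis=0) - X.min(axis=0) if method == "scaling" else X.std(axis=0)
    return (X - mu) / s, mu, s

X = np.array([[2104.0, 3.0], [1600.0, 3.0], [2400.0, 4.0]])
X_scaled, mu, s = normalize(X)
x_new = (np.array([1800.0, 2.0]) - mu) / s   # same transform at prediction time
print(X_scaled, x_new, sep="\n")
</syntaxhighlight>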

==Normal Equation==

<math>\theta = (X^TX)^{-1}X^Ty</math>
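A one-line sketch (illustrative; `np.linalg.pinv` is used so a singular <math>X^TX</math> does not break the computation):

<syntaxhighlight lang="python">
import numpy as np

def normal_equation(X, y):
    """theta = (X^T X)^-1 X^T y; pinv keeps this defined if X^T X is singular."""
    return np.linalg.pinv(X.T @ X) @ X.T @ y

x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])   # design matrix with x_0 = 1
y = 1 + 2 * x                               # exactly linear targets
print(normal_equation(X, y))                # recovers [1, 2]
</syntaxhighlight>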

=Week3 - Logistic Regression & Overfitting=

==Logistic Regression==

===Sigmoid Function===

<math>h_\theta(x)=g(\theta^Tx)</math>

<math>z = \theta^Tx</math>

<math>g(z) = \frac{1}{1+e^{-z}}</math>

===Cost Function===

<math>J(\theta)=-\frac{1}{m}\sum_{i=1}^m[y^{(i)}\log h_\theta(x^{(i)})+(1-y^{(i)})\log(1-h_\theta(x^{(i)}))]</math>

Vectorized form:

<math>J(\theta) = \frac{1}{m}\left(-y^T\log(h) - (1-y)^T\log(1-h)\right)</math>
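A minimal sketch of the sigmoid and the vectorized cost (illustrative; `X` is <math>m\times(n+1)</math>, `y` holds 0/1 labels, and the data is made up):

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_cost(theta, X, y):
    """J = 1/m * (-y^T log(h) - (1-y)^T log(1-h)) with h = g(X theta)."""
    m = len(y)
    h = sigmoid(X @ theta)
    return (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m

X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.5], [1.0, 3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
print(logistic_cost(np.array([0.0, 1.0]), X, y))
</syntaxhighlight>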

===Gradient Descent===

<math>\theta_j:=\theta_j-\alpha\frac{\partial}{\partial\theta_j}J(\theta)</math>

:<math>= \theta_j-\frac{\alpha}{m}\sum_{i=1}^m\left((h_\theta(x^{(i)})-y^{(i)})x_j^{(i)}\right)</math>

The derivation is as follows (taking log as the natural logarithm):

:::<math>\frac{\partial}{\partial\theta_j}J(\theta) = \frac{\partial}{\partial\theta_j}\left\{-\frac{1}{m}\sum_{i=1}^m[y^{(i)}\log h_\theta(x^{(i)})+(1-y^{(i)})\log(1-h_\theta(x^{(i)}))]\right\}</math>

::::::<math>=-\frac{1}{m}\sum_{i=1}^m\frac{\partial}{\partial\theta_j}[y^{(i)}\log h_\theta(x^{(i)})+(1-y^{(i)})\log(1-h_\theta(x^{(i)}))]</math> (Eq. 1)

:::where

::::<math>\frac{\partial}{\partial\theta_j}[y^{(i)}\log h_\theta(x^{(i)})] = y^{(i)}\frac{\partial}{\partial\theta_j}[\log h_\theta(x^{(i)})] = \frac{y^{(i)}}{h_\theta(x^{(i)})}\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math>

::::<math>\frac{\partial}{\partial\theta_j}[(1-y^{(i)})\log(1-h_\theta(x^{(i)}))] = (1-y^{(i)})\frac{\partial}{\partial\theta_j}[\log(1-h_\theta(x^{(i)}))] = \frac{1-y^{(i)}}{1-h_\theta(x^{(i)})}\frac{\partial}{\partial\theta_j}(1-h_\theta(x^{(i)}))</math>

:::Since <math>\frac{\partial}{\partial\theta_j}(1-h_\theta(x^{(i)})) = -\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math>, we have:

::::<math>\frac{\partial}{\partial\theta_j}[y^{(i)}\log h_\theta(x^{(i)})+(1-y^{(i)})\log(1-h_\theta(x^{(i)}))] = \frac{y^{(i)}}{h_\theta(x^{(i)})}\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)}) + \frac{1-y^{(i)}}{1-h_\theta(x^{(i)})}\frac{\partial}{\partial\theta_j}(1-h_\theta(x^{(i)}))</math>

:::::<math> = \frac{y^{(i)}}{h_\theta(x^{(i)})}\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)}) - \frac{1-y^{(i)}}{1-h_\theta(x^{(i)})}\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math>

:::::<math> = \left(\frac{y^{(i)}}{h_\theta(x^{(i)})} - \frac{1-y^{(i)}}{1-h_\theta(x^{(i)})}\right)\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math>

:::::<math> = \frac{y^{(i)}-h_\theta(x^{(i)})}{h_\theta(x^{(i)})(1-h_\theta(x^{(i)}))}\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math> (substituting <math>h_\theta(x^{(i)})=g(z)=\frac{1}{1+e^{-z}}</math>)

:::::<math> = \frac{y^{(i)}(1+e^{-z})^2-(1+e^{-z})}{e^{-z}}\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math> (Eq. 2)

::::Moreover, with <math>z = \theta^Tx^{(i)}</math>, <math>\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)}) = g'(z)\frac{\partial z}{\partial\theta_j} = \left(\frac{1}{1+e^{-z}}\right)'\frac{\partial z}{\partial\theta_j}</math>

:::::<math> = ((1+e^{-z})^{-1})'\frac{\partial z}{\partial\theta_j}</math>

:::::<math> = \frac{e^{-z}}{(1+e^{-z})^{2}}\frac{\partial z}{\partial\theta_j}</math>

:::::<math> = \frac{e^{-z}}{(1+e^{-z})^{2}}\frac{\partial}{\partial\theta_j}(\theta^Tx^{(i)})</math>

:::::<math> = \frac{e^{-z}}{(1+e^{-z})^{2}}\frac{\partial}{\partial\theta_j}(\theta_0x_0^{(i)} + \theta_1x_1^{(i)} + \theta_2x_2^{(i)} + \ldots + \theta_jx_j^{(i)} + \ldots + \theta_nx_n^{(i)})</math>

:::::<math> = \frac{e^{-z}}{(1+e^{-z})^{2}}x_j^{(i)}</math> (Eq. 3)

::::Substituting Eq. 3 into Eq. 2:

:::::<math>\frac{\partial}{\partial\theta_j}[y^{(i)}\log h_\theta(x^{(i)})+(1-y^{(i)})\log(1-h_\theta(x^{(i)}))] = \left(y^{(i)} - \frac{1}{1+e^{-z}}\right)x_j^{(i)}</math>

::::::<math> = (y^{(i)} - h_\theta(x^{(i)}))x_j^{(i)}</math> (Eq. 4)

::::Substituting Eq. 4 into Eq. 1:

:::::<math>\theta_j := \theta_j-\frac{\alpha}{m}\sum_{i=1}^m\left((h_\theta(x^{(i)})-y^{(i)})x_j^{(i)}\right)</math>
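The end result (Eq. 4) can be sanity-checked numerically: a central finite-difference estimate of <math>\frac{\partial}{\partial\theta_j}J(\theta)</math> should match <math>\frac{1}{m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})x_j^{(i)}</math>. A minimal sketch (illustrative data):

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def J(theta, X, y):
    """Logistic cost, as defined in the Cost Function section."""
    h = sigmoid(X @ theta)
    return (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / len(y)

X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.5], [1.0, 3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = np.array([0.2, -0.5])

# analytic gradient from Eq. 4: 1/m * X^T (g(X theta) - y)
grad = X.T @ (sigmoid(X @ theta) - y) / len(y)

eps = 1e-6
for j in range(len(theta)):
    d = np.zeros_like(theta)
    d[j] = eps
    fd = (J(theta + d, X, y) - J(theta - d, X, y)) / (2 * eps)
    print(j, grad[j], fd)   # the two values agree closely
</syntaxhighlight>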


Vectorized form:

<math>\theta = \theta - \frac{\alpha}{m}X^T(g(X\theta) - \vec y)</math>
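A minimal sketch of this vectorized update loop (illustrative; the step size and iteration count are arbitrary):

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_gradient_descent(X, y, alpha=0.5, iters=5000):
    """Repeated vectorized update: theta -= alpha/m * X^T (g(X theta) - y)."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        theta -= alpha / X.shape[0] * (X.T @ (sigmoid(X @ theta) - y))
    return theta

X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.5], [1.0, 3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = logistic_gradient_descent(X, y)
print(sigmoid(X @ theta).round(2))   # predictions approach the labels
</syntaxhighlight>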

==Addressing Overfitting==

Add a '''regularization parameter''' <math>\lambda</math> to the cost function of the hypothesis:

<math>J(\theta)=\frac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2 + \lambda\sum_{j=1}^n\theta_j^2</math>
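A minimal sketch of this regularized cost exactly as written above (illustrative; note that <math>\theta_0</math> is excluded from the penalty, matching the sum starting at <math>j=1</math>):

<syntaxhighlight lang="python">
import numpy as np

def regularized_cost(theta, X, y, lam):
    """J = 1/(2m) * sum((h - y)^2) + lambda * sum_{j>=1} theta_j^2."""
    m = len(y)
    h = X @ theta
    return np.sum((h - y) ** 2) / (2 * m) + lam * np.sum(theta[1:] ** 2)

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
print(regularized_cost(np.array([1.0, 2.0]), X, y, lam=0.1))  # 0.0 + 0.1*4 = 0.4
</syntaxhighlight>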

=Week4 - Neural Networks=

[[文件:Neural_netorwk.png|400px]]

:For the neural network above, each layer is computed as follows:

::<math>a_1^{(2)} = g\left( \theta_{10}^{(1)}x_0 + \theta_{11}^{(1)}x_1 + \theta_{12}^{(1)}x_2 + \theta_{13}^{(1)}x_3 \right)</math>

::<math>a_2^{(2)} = g\left( \theta_{20}^{(1)}x_0 + \theta_{21}^{(1)}x_1 + \theta_{22}^{(1)}x_2 + \theta_{23}^{(1)}x_3 \right)</math>

::<math>a_3^{(2)} = g\left( \theta_{30}^{(1)}x_0 + \theta_{31}^{(1)}x_1 + \theta_{32}^{(1)}x_2 + \theta_{33}^{(1)}x_3 \right)</math>

::<math>h_\theta(x) = a_1^{(3)} = g\left( \theta_{10}^{(2)}a_0^{(2)} + \theta_{11}^{(2)}a_1^{(2)} + \theta_{12}^{(2)}a_2^{(2)} + \theta_{13}^{(2)}a_3^{(2)} \right)</math>

* If a neural network has <math>s_j</math> units in layer <math>j</math> and <math>s_{j+1}</math> units in layer <math>j+1</math>, then <math>\theta^{(j)}</math> is a matrix of dimension <math>s_{j+1} \times (s_j+1)</math>; a sketch of the forward pass follows below.
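A minimal sketch of forward propagation for the 3-3-1 network above (illustrative; `Theta1` and `Theta2` are random stand-ins with the dimensions just described):

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, Theta1, Theta2):
    """Forward propagation for a 3-3-1 network with sigmoid activations."""
    a1 = np.concatenate(([1.0], x))    # input layer with bias unit x_0 = 1
    a2 = sigmoid(Theta1 @ a1)          # hidden activations a^(2)
    a2 = np.concatenate(([1.0], a2))   # add bias unit a_0^(2) = 1
    return sigmoid(Theta2 @ a2)        # output h_theta(x)

rng = np.random.default_rng(0)
Theta1 = rng.normal(size=(3, 4))   # s_2 x (s_1 + 1) = 3 x 4
Theta2 = rng.normal(size=(1, 4))   # s_3 x (s_2 + 1) = 1 x 4
print(forward(np.array([1.0, 0.5, -0.5]), Theta1, Theta2))
</syntaxhighlight>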