ML
Definition

Conventions:
<math>x_j^{(i)}</math>: value of feature j in the i-th training example
<math>x^{(i)}</math>: the input (features) of the i-th training example
m: the number of training examples
n: the number of features

Week1 - Basic Machine Learning Concepts

Cost Function

Squared error function / Mean squared error: <math>J(\theta)=\frac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2</math>
Cross entropy: <math>J(\theta)=-\frac{1}{m}\sum_{i=1}^m[y^{(i)}\log h_\theta(x^{(i)})+(1-y^{(i)})\log(1-h_\theta(x^{(i)}))]</math>
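As a concrete illustration, here is a minimal NumPy sketch of the two cost functions above; the function and variable names are illustrative, not part of the course material:

<syntaxhighlight lang="python">
import numpy as np

def mse_cost(theta, X, y):
    """Mean squared error: J(theta) = 1/(2m) * sum((h - y)^2).
    X is an m x (n+1) design matrix, y an m-vector, theta an (n+1)-vector."""
    m = len(y)
    h = X @ theta                      # linear hypothesis h_theta(x) = theta^T x
    return np.sum((h - y) ** 2) / (2 * m)

def cross_entropy_cost(theta, X, y):
    """Cross entropy: J(theta) = -1/m * sum(y*log(h) + (1-y)*log(1-h))."""
    m = len(y)
    h = 1.0 / (1.0 + np.exp(-(X @ theta)))   # sigmoid hypothesis
    return -np.sum(y * np.log(h) + (1 - y) * np.log(1 - h)) / m
</syntaxhighlight>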

Gradient Descent

<math>\theta_j:=\theta_j-\alpha\frac{\partial}{\partial\theta_j}J(\theta)</math>
For the linear regression model the cost function is the mean squared error, so:
<math>\frac{\partial}{\partial\theta_j}J(\theta)=\frac{\partial}{\partial\theta_j}(\frac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2)</math>

<math>=\frac{1}{2m}\frac{\partial}{\partial\theta_j}(\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2)</math>
<math>=\frac{1}{2m}\sum_{i=1}^m(\frac{\partial}{\partial\theta_j}(h_\theta(x^{(i)})-y^{(i)})^2)</math>
<math>=\frac{1}{m}\sum_{i=1}^m((h_\theta(x^{(i)})-y^{(i)})*\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)}))</math> //chain rule
<math>=\frac{1}{m}\sum_{i=1}^m((h_\theta(x^{(i)})-y^{(i)})*\frac{\partial}{\partial\theta_j}(\theta^Tx^{(i)}))</math>
<math>=\frac{1}{m}\sum_{i=1}^m((h_\theta(x^{(i)})-y^{(i)})*\frac{\partial}{\partial\theta_j}\sum_{k=0}^n x_k^{(i)}\theta_k)</math>

For j >= 1:

<math>=\frac{1}{m}\sum_{i=1}^m((h_\theta(x^{(i)})-y^{(i)})*x_j^{(i)})</math>
<math>=\frac{1}{m}(h_\theta(x)-y)\cdot x_j</math>
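The update rule above maps directly onto a vectorized batch gradient descent loop. A minimal sketch with NumPy follows; the function name, learning rate and iteration count are illustrative assumptions:

<syntaxhighlight lang="python">
import numpy as np

def gradient_descent_linreg(X, y, alpha=0.01, iters=1000):
    """Batch gradient descent for linear regression.
    X: m x (n+1) design matrix (first column all ones), y: m-vector."""
    m, n_plus_1 = X.shape
    theta = np.zeros(n_plus_1)
    for _ in range(iters):
        grad = (X.T @ (X @ theta - y)) / m   # (1/m) * sum((h - y) * x_j) for every j
        theta = theta - alpha * grad          # theta_j := theta_j - alpha * dJ/dtheta_j
    return theta
</syntaxhighlight>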

Week2 - Multivariate Linear Regression

Computing the Multivariate Linear Regression Model

<math>h_\theta(x)=\theta_0x_0+\theta_1x_1+\theta_2x_2+...+\theta_nx_n</math>

<math>=[\theta_0x_0^{(1)},\theta_0x_0^{(2)},...,\theta_0x_0^{(m)}]+[\theta_1x_1^{(1)},\theta_1x_1^{(2)},...,\theta_1x_1^{(m)}]+...+[\theta_nx_n^{(1)},\theta_nx_n^{(2)},...,\theta_nx_n^{(m)}]</math>
<math>=[\theta_0x_0^{(1)}+\theta_1x_1^{(1)}+...+\theta_nx_n^{(1)},\ \ \theta_0x_0^{(2)}+\theta_1x_1^{(2)}+...+\theta_nx_n^{(2)},\ \ ...,\ \ \theta_0x_0^{(m)}+\theta_1x_1^{(m)}+...+\theta_nx_n^{(m)}]</math>
<math>=\theta^Tx</math>

where
<math>x=\begin{vmatrix}x_0\\x_1\\x_2\\...\\x_n\end{vmatrix}=\begin{vmatrix}x_0^{(1)}&x_0^{(2)}&...&x_0^{(m)}\\x_1^{(1)}&x_1^{(2)}&...&x_1^{(m)}\\x_2^{(1)}&x_2^{(2)}&...&x_2^{(m)}\\...&...&...&...\\x_n^{(1)}&x_n^{(2)}&...&x_n^{(m)}\end{vmatrix},\ \theta=\begin{vmatrix}\theta_0\\\theta_1\\\theta_2\\...\\\theta_n\end{vmatrix}</math>

m is the number of training examples and n is the number of features (usually, for convenience, we set <math>x_0^{(i)}=1,\ i=1,2,...,m</math>).
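A small sketch of this vectorized computation, assuming x is stored as an (n+1) x m matrix as above; the toy numbers are illustrative only:

<syntaxhighlight lang="python">
import numpy as np

# Toy data: n = 2 features, m = 3 examples, with x_0 = 1 prepended.
x = np.array([[1.0, 1.0, 1.0],      # x_0
              [2.0, 3.0, 4.0],      # x_1
              [5.0, 6.0, 7.0]])     # x_2  -> shape (n+1, m)
theta = np.array([0.5, 1.0, -2.0])  # shape (n+1,)

h = theta.T @ x                     # h_theta(x) = theta^T x, one prediction per example
print(h)                            # shape (m,)
</syntaxhighlight>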

Data Normalization: Feature Scaling & Standard Normalization

<math>x_i:=\frac{x_i-\mu_i}{s_i}</math>
where <math>\mu_i</math> is the mean of the i-th feature <math>x_i</math>, and <math>s_i</math> depends on the method:

  • Feature Scaling: <math>s_i</math> is the range of <math>x_i</math> (max - min);
  • Standard Normalization: <math>s_i</math> is the standard deviation of <math>x_i</math>.

Note in particular that once a model has been trained on scaled features, the same normalization must be applied to the input features at prediction time.
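A minimal sketch of both variants that also returns the mean and scale so they can be reused at prediction time; function and parameter names are illustrative:

<syntaxhighlight lang="python">
import numpy as np

def normalize(X, method="standard"):
    """Column-wise normalization of an m x n feature matrix X.
    Returns the normalized data plus (mu, s) so new inputs can be scaled the same way."""
    mu = X.mean(axis=0)
    if method == "scaling":                 # Feature Scaling: divide by (max - min)
        s = X.max(axis=0) - X.min(axis=0)
    else:                                   # Standard Normalization: divide by std dev
        s = X.std(axis=0)
    return (X - mu) / s, mu, s

# At prediction time, reuse the stored statistics:
# x_new_scaled = (x_new - mu) / s
</syntaxhighlight>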

Normal Equation

<math>\theta=(X^TX)^{-1}X^Ty</math>
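A one-line sketch with NumPy; the pseudo-inverse is used here as a defensive choice in case <math>X^TX</math> is not invertible, and the function name is illustrative:

<syntaxhighlight lang="python">
import numpy as np

def normal_equation(X, y):
    """Closed-form solution theta = (X^T X)^(-1) X^T y.
    X: m x (n+1) design matrix, y: m-vector."""
    return np.linalg.pinv(X.T @ X) @ X.T @ y
</syntaxhighlight>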

Week3 - Logistic Regression & Overfitting

Logistic Regression

Sigmoid Function

<math>h_\theta(x)=g(\theta^Tx)</math>
<math>z=\theta^Tx</math>
<math>g(z)=\frac{1}{1+e^{-z}}</math>
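A minimal sketch of the sigmoid hypothesis (illustrative names):

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    """g(z) = 1 / (1 + e^(-z)), applied element-wise."""
    return 1.0 / (1.0 + np.exp(-z))

def h(theta, X):
    """Logistic hypothesis h_theta(x) = g(theta^T x) for every row of X."""
    return sigmoid(X @ theta)
</syntaxhighlight>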

Cost Function

<math>J(\theta)=-\frac{1}{m}\sum_{i=1}^m[y^{(i)}\log h_\theta(x^{(i)})+(1-y^{(i)})\log(1-h_\theta(x^{(i)}))]</math>
Vectorized form:
<math>J(\theta)=\frac{1}{m}(-y^T\log(h)-(1-y)^T\log(1-h))</math>
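The vectorized form translates directly into code; a minimal sketch assuming NumPy and illustrative names:

<syntaxhighlight lang="python">
import numpy as np

def logistic_cost(theta, X, y):
    """Vectorized J(theta) = (1/m) * (-y^T log(h) - (1-y)^T log(1-h))."""
    m = len(y)
    hx = 1.0 / (1.0 + np.exp(-(X @ theta)))      # h = g(X theta)
    return (-y @ np.log(hx) - (1 - y) @ np.log(1 - hx)) / m
</syntaxhighlight>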

Gradient Descent

<math>J(\theta)=-\frac{1}{m}\sum_{i=1}^m[y^{(i)}\log h_\theta(x^{(i)})+(1-y^{(i)})\log(1-h_\theta(x^{(i)}))]</math>
<math>\theta_j:=\theta_j-\alpha\frac{\partial}{\partial\theta_j}J(\theta)</math>

<math>=\theta_j-\frac{\alpha}{m}\sum_{i=1}^m((h_\theta(x^{(i)})-y^{(i)})x_j^{(i)})</math>

The derivation is as follows:
<math>\frac{\partial}{\partial\theta_j}J(\theta)=\frac{\partial}{\partial\theta_j}\{-\frac{1}{m}\sum_{i=1}^m[y^{(i)}*\log h_\theta(x^{(i)})+(1-y^{(i)})*\log(1-h_\theta(x^{(i)}))]\}</math>

<math>=-\frac{1}{m}\sum_{i=1}^m\frac{\partial}{\partial\theta_j}[y^{(i)}*\log h_\theta(x^{(i)})+(1-y^{(i)})*\log(1-h_\theta(x^{(i)}))]</math> ------Eq.1)
where
<math>\frac{\partial}{\partial\theta_j}[y^{(i)}*\log h_\theta(x^{(i)})]=y^{(i)}*\frac{\partial}{\partial\theta_j}[\log h_\theta(x^{(i)})]=\frac{y^{(i)}}{h_\theta(x^{(i)})*\ln(e)}*\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math>
<math>\frac{\partial}{\partial\theta_j}[(1-y^{(i)})*\log(1-h_\theta(x^{(i)}))]=(1-y^{(i)})*\frac{\partial}{\partial\theta_j}[\log(1-h_\theta(x^{(i)}))]=\frac{(1-y^{(i)})}{(1-h_\theta(x^{(i)}))*\ln(e)}*\frac{\partial}{\partial\theta_j}(1-h_\theta(x^{(i)}))</math>
Since <math>\frac{\partial}{\partial\theta_j}(1-h_\theta(x^{(i)}))=-\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math>, we have:
<math>\frac{\partial}{\partial\theta_j}[y^{(i)}*\log h_\theta(x^{(i)})+(1-y^{(i)})*\log(1-h_\theta(x^{(i)}))]=\frac{y^{(i)}}{h_\theta(x^{(i)})*\ln(e)}*\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})+\frac{(1-y^{(i)})}{(1-h_\theta(x^{(i)}))*\ln(e)}*\frac{\partial}{\partial\theta_j}(1-h_\theta(x^{(i)}))</math>
<math>=\frac{y^{(i)}}{h_\theta(x^{(i)})*\ln(e)}*\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})-\frac{(1-y^{(i)})}{(1-h_\theta(x^{(i)}))*\ln(e)}*\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math>
<math>=(\frac{y^{(i)}}{h_\theta(x^{(i)})*\ln(e)}-\frac{(1-y^{(i)})}{(1-h_\theta(x^{(i)}))*\ln(e)})*\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math>
<math>=\frac{y^{(i)}-h_\theta(x^{(i)})}{h_\theta(x^{(i)})*(1-h_\theta(x^{(i)}))*\ln(e)}*\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math> //substitute <math>h_\theta(x^{(i)})=g(z)=\frac{1}{1+e^{-z}}</math>
<math>=\frac{y^{(i)}*(1+e^{-z})^2-(1+e^{-z})}{e^{-z}*\ln(e)}*\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math>
<math>=\frac{y^{(i)}*(1+e^{-z})^2-(1+e^{-z})}{e^{-z}}*\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math> ------Eq.2)


Next, compute the remaining factor <math>\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})</math>:
<math>\frac{\partial}{\partial\theta_j}h_\theta(x^{(i)})=\frac{\partial g(z)}{\partial z}*\frac{\partial}{\partial\theta_j}(\theta^Tx^{(i)})=\frac{\partial}{\partial z}(\frac{1}{1+e^{-z}})*\frac{\partial}{\partial\theta_j}(\theta^Tx^{(i)})</math>
<math>=\frac{\partial}{\partial z}((1+e^{-z})^{-1})*\frac{\partial}{\partial\theta_j}(\theta^Tx^{(i)})</math>
<math>=\frac{e^{-z}}{(1+e^{-z})^{2}}*\frac{\partial}{\partial\theta_j}(\theta^Tx^{(i)})</math>
<math>=\frac{e^{-z}}{(1+e^{-z})^{2}}*\frac{\partial}{\partial\theta_j}(\theta_0*x_0^{(i)}+\theta_1*x_1^{(i)}+\theta_2*x_2^{(i)}+...+\theta_j*x_j^{(i)}+...+\theta_n*x_n^{(i)})</math>
<math>=\frac{e^{-z}}{(1+e^{-z})^{2}}*x_j^{(i)}</math> ------Eq.3)
Substituting Eq.3) into Eq.2):
<math>\frac{\partial}{\partial\theta_j}[y^{(i)}*\log h_\theta(x^{(i)})+(1-y^{(i)})*\log(1-h_\theta(x^{(i)}))]=(y^{(i)}-\frac{1}{1+e^{-z}})*x_j^{(i)}</math>
<math>=(y^{(i)}-h_\theta(x^{(i)}))*x_j^{(i)}</math> ------Eq.4)
Substituting Eq.4) into Eq.1):
<math>\theta_j:=\theta_j-\frac{\alpha}{m}\sum_{i=1}^m((h_\theta(x^{(i)})-y^{(i)})x_j^{(i)})</math>


Vectorized form:
<math>\theta=\theta-\frac{\alpha}{m}X^T(g(X\theta)-y)</math>
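The vectorized update can be written as a short loop; a minimal sketch where the learning rate, iteration count and function names are illustrative assumptions:

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent_logreg(X, y, alpha=0.1, iters=1000):
    """Vectorized logistic regression: theta := theta - (alpha/m) * X^T (g(X theta) - y)."""
    m, n_plus_1 = X.shape
    theta = np.zeros(n_plus_1)
    for _ in range(iters):
        theta = theta - (alpha / m) * (X.T @ (sigmoid(X @ theta) - y))
    return theta
</syntaxhighlight>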

Addressing Overfitting

For the hypothesis function, introduce a regularization parameter (λ) into the cost function:
<math>J(\theta)=\frac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2+\lambda\sum_{j=1}^n\theta_j^2</math>
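A minimal sketch of this regularized cost; note that <math>\theta_0</math> is excluded from the penalty since the sum starts at j = 1 (function and parameter names are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def regularized_mse_cost(theta, X, y, lam=1.0):
    """J(theta) = 1/(2m) * sum((h - y)^2) + lambda * sum_{j>=1}(theta_j^2)."""
    m = len(y)
    h = X @ theta
    penalty = lam * np.sum(theta[1:] ** 2)    # skip theta_0 (j starts at 1)
    return np.sum((h - y) ** 2) / (2 * m) + penalty
</syntaxhighlight>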