=Definitions=
*Conventions:
**<math>x_j^{(i)}</math>: value of feature <math>j</math> in the <math>i</math>-th training example
**<math>x^{(i)}</math>: the input (features) of the <math>i</math>-th training example
**<math>m</math>: the number of training examples
**<math>n</math>: the number of features
=Week1 - Machine Learning Basics=
==Cost Function==
'''Squared error function''' (mean squared error):
:<math>J(θ)=\frac{1}{2m}\sum_{i=1}^m(h_θ(x^{(i)})-y^{(i)})^2</math>
'''Cross entropy''':
:<math>J(θ)=-\frac{1}{m}\sum_{i=1}^m[y^{(i)}*logh_θ(x^{(i)})+(1-y^{(i)})*log(1-h_θ(x^{(i)}))]</math>
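As an illustration (not part of the course material), a minimal NumPy sketch of both cost functions; <code>X</code> is assumed to be an <math>m \times (n+1)</math> design matrix with one example per row and <math>x_0=1</math> prepended, <code>y</code> a length-<math>m</math> vector:
<syntaxhighlight lang="python">
import numpy as np

def squared_error_cost(theta, X, y):
    """J(θ) = 1/(2m) * Σ (h_θ(x^(i)) − y^(i))², with h_θ(x) = θᵀx."""
    m = len(y)
    residuals = X @ theta - y               # h_θ(x^(i)) − y^(i) for every example
    return residuals @ residuals / (2 * m)

def cross_entropy_cost(theta, X, y):
    """J(θ) = −1/m * Σ [y·log h + (1 − y)·log(1 − h)], with h = sigmoid(θᵀx)."""
    m = len(y)
    h = 1.0 / (1.0 + np.exp(-(X @ theta)))  # logistic hypothesis
    return -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
</syntaxhighlight>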
==Gradient Descent==
:<math>θ_j:=θ_j-α\frac{∂}{∂θ_j}J(θ)</math>
For the linear regression model the cost function is the mean squared error, so:
:<math>\frac{∂}{∂θ_j}J(θ) = \frac{∂}{∂θ_j}(\frac{1}{2m}\sum_{i=1}^m(h_θ(x^{(i)})-y^{(i)})^2)</math>
::<math>= \frac{1}{2m}\frac{∂}{∂θ_j}(\sum_{i=1}^m(h_θ(x^{(i)})-y^{(i)})^2)</math>
::<math>= \frac{1}{2m}\sum_{i=1}^m(\frac{∂}{∂θ_j}(h_θ(x^{(i)})-y^{(i)})^2)</math>
::<math>= \frac{1}{m}\sum_{i=1}^m( (h_θ(x^{(i)})-y^{(i)})\frac{∂}{∂θ_j}h_θ(x^{(i)}) )</math> // chain rule
::<math>= \frac{1}{m}\sum_{i=1}^m( (h_θ(x^{(i)})-y^{(i)})\frac{∂}{∂θ_j}(θ^Tx^{(i)}) )</math>
::<math>= \frac{1}{m}\sum_{i=1}^m( (h_θ(x^{(i)})-y^{(i)})\frac{∂}{∂θ_j}\sum_{k=0}^n x_k^{(i)}θ_k )</math>
:For <math>j \ge 1</math>:
::<math>= \frac{1}{m}\sum_{i=1}^m( (h_θ(x^{(i)})-y^{(i)})x_j^{(i)} )</math>
::<math>= \frac{1}{m}(h_θ(x)-y)x_j</math>
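The resulting update rule can be sketched in NumPy as follows (an illustration; <code>alpha</code> and <code>iters</code> are arbitrary hypothetical defaults):
<syntaxhighlight lang="python">
import numpy as np

def gradient_descent(X, y, alpha=0.01, iters=1000):
    """Repeat θ_j := θ_j − α/m * Σ (h_θ(x^(i)) − y^(i)) x_j^(i), all j at once."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ theta - y) / len(y)  # the gradient derived above
        theta = theta - alpha * grad           # simultaneous update of every θ_j
    return theta
</syntaxhighlight>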
=Week2 - Multivariate Linear Regression=
==Computing the Multivariate Linear Regression model==
:<math>h_θ(x)=θ_0x_0+θ_1x_1+θ_2x_2+...+θ_nx_n</math>
::<math>= [θ_0x_0^{(1)}, θ_0x_0^{(2)}, ..., θ_0x_0^{(m)}] + [θ_1x_1^{(1)}, θ_1x_1^{(2)}, ..., θ_1x_1^{(m)}] + ... + [θ_nx_n^{(1)}, θ_nx_n^{(2)}, ..., θ_nx_n^{(m)}]</math>
::<math>= [θ_0x_0^{(1)}+θ_1x_1^{(1)}+...+θ_nx_n^{(1)},\ θ_0x_0^{(2)}+θ_1x_1^{(2)}+...+θ_nx_n^{(2)},\ ...,\ θ_0x_0^{(m)}+θ_1x_1^{(m)}+...+θ_nx_n^{(m)}]</math>
::<math>= θ^Tx</math>
:where
::<math>x=\begin{bmatrix}x_0\\x_1\\x_2\\...\\x_n\end{bmatrix}=\begin{bmatrix}x_0^{(1)}&x_0^{(2)}&...&x_0^{(m)}\\x_1^{(1)}&x_1^{(2)}&...&x_1^{(m)}\\x_2^{(1)}&x_2^{(2)}&...&x_2^{(m)}\\...&...&...&...\\x_n^{(1)}&x_n^{(2)}&...&x_n^{(m)}\end{bmatrix},\ θ=\begin{bmatrix}θ_0\\θ_1\\θ_2\\...\\θ_n\end{bmatrix}</math>
*<math>m</math> is the number of training examples and <math>n</math> is the number of features (for convenience, one usually sets <math>x_0^{(i)}=1,\ i=1,2,...,m</math>).
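For illustration, a NumPy sketch of the vectorized hypothesis; note it uses the common <math>m \times (n+1)</math> row layout (the transpose of the column layout above), and all values are made up:
<syntaxhighlight lang="python">
import numpy as np

# m = 3 examples, n = 2 features; x_0 ≡ 1 is prepended as the bias term.
X = np.array([[1.0, 2104.0, 3.0],
              [1.0, 1600.0, 3.0],
              [1.0, 2400.0, 4.0]])   # shape (m, n+1), one example per row
theta = np.array([0.5, 1.0, -1.0])   # shape (n+1,)

h = X @ theta                        # h_θ(x^(i)) = θᵀx^(i) for all examples at once
print(h)
</syntaxhighlight>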
==Data normalization: Feature Scaling & Standard Normalization==
:<math>x_i:=\frac{x_i-μ_i}{s_i}</math>
where <math>μ_i</math> is the mean of the <math>i</math>-th feature <math>x_i</math>, and <math>s_i</math> depends on the method:
*'''Feature Scaling''': <math>s_i</math> is the range (max − min) of <math>x_i</math>;
*'''Standard Normalization''': <math>s_i</math> is the standard deviation of <math>x_i</math>.
Note that once a model has been trained on scaled features, the input features must be normalized the same way (with the training-set <math>μ_i</math> and <math>s_i</math>) when making predictions.
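A sketch of how <math>μ_i</math> and <math>s_i</math> can be fitted on the training set and reused at prediction time (the function names and API are hypothetical):
<syntaxhighlight lang="python">
import numpy as np

def fit_normalizer(X, method="standard"):
    """Compute per-feature μ_i and s_i: s is max − min for feature scaling,
    the standard deviation for standard normalization."""
    mu = X.mean(axis=0)
    if method == "scaling":
        s = X.max(axis=0) - X.min(axis=0)
    else:
        s = X.std(axis=0)
    return mu, s

def normalize(X, mu, s):
    """x_i := (x_i − μ_i) / s_i — reuse the training-set μ and s at prediction time."""
    return (X - mu) / s
</syntaxhighlight>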
==Normal Equation==
:<math>θ=(X^TX)^{-1}X^Ty</math>
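A one-line NumPy sketch of this closed-form solution (the pseudo-inverse is used so that a singular <math>X^TX</math> is also tolerated):
<syntaxhighlight lang="python">
import numpy as np

def normal_equation(X, y):
    """θ = (XᵀX)⁻¹ Xᵀ y, solved in closed form (no iteration, no feature scaling)."""
    return np.linalg.pinv(X.T @ X) @ X.T @ y
</syntaxhighlight>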
=Week3 - Logistic Regression & Overfitting=
==Logistic Regression==
===Sigmoid Function===
:<math>h_θ(x)=g(θ^Tx)</math>
:<math>z=θ^Tx</math>
:<math>g(z)=\frac{1}{1+e^{-z}}</math>
===Cost Function===
:<math>J(θ)=-\frac{1}{m}\sum_{i=1}^m[y^{(i)}*logh_θ(x^{(i)})+(1-y^{(i)})*log(1-h_θ(x^{(i)}))]</math>
Vectorized form:
:<math>J(θ)=\frac{1}{m}(-y^Tlog(h)-(1-y)^Tlog(1-h))</math>
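A NumPy sketch of the vectorized cost (illustrative only; <code>X</code>, <code>y</code>, and <code>theta</code> are the assumed placeholders used in the earlier sketches):
<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_cost(theta, X, y):
    """Vectorized J(θ) = 1/m * (−yᵀ log(h) − (1 − y)ᵀ log(1 − h))."""
    h = sigmoid(X @ theta)
    return (-(y @ np.log(h)) - (1 - y) @ np.log(1 - h)) / len(y)
</syntaxhighlight>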
===Gradient Descent===
:<math>θ_j:=θ_j-α\frac{∂}{∂θ_j}J(θ)</math>
::<math>= θ_j-\frac{α}{m}\sum_{i=1}^m( (h_θ(x^{(i)})-y^{(i)}) x_j^{(i)} )</math>
The derivation is as follows:
:::<math>\frac{∂}{∂θ_j}J(θ) = \frac{∂}{∂θ_j}\{-\frac{1}{m}\sum_{i=1}^m[y^{(i)}*logh_θ(x^{(i)})+(1-y^{(i)})*log(1-h_θ(x^{(i)}))]\}</math>
::::::<math>=-\frac{1}{m}\sum_{i=1}^m\frac{∂}{∂θ_j}[y^{(i)}*logh_θ(x^{(i)})+(1-y^{(i)})*log(1-h_θ(x^{(i)}))]</math> (Eq. 1)
:::where
::::<math>\frac{∂}{∂θ_j}[y^{(i)}*logh_θ(x^{(i)})] = y^{(i)}*\frac{∂}{∂θ_j}[logh_θ(x^{(i)})] = \frac{y^{(i)}}{h_θ(x^{(i)})*ln(e)}*\frac{∂}{∂θ_j}h_θ(x^{(i)})</math>
::::<math>\frac{∂}{∂θ_j}[(1-y^{(i)})*log(1-h_θ(x^{(i)}))] = (1-y^{(i)})*\frac{∂}{∂θ_j}[log(1-h_θ(x^{(i)}))] = \frac{(1-y^{(i)})}{(1-h_θ(x^{(i)}))*ln(e)}*\frac{∂}{∂θ_j}(1-h_θ(x^{(i)}))</math>
:::Since <math>\frac{∂}{∂θ_j}(1-h_θ(x^{(i)})) = -\frac{∂}{∂θ_j}h_θ(x^{(i)})</math>, we have:
::::<math>\frac{∂}{∂θ_j}[y^{(i)}*logh_θ(x^{(i)})+(1-y^{(i)})*log(1-h_θ(x^{(i)}))] = \frac{y^{(i)}}{h_θ(x^{(i)})*ln(e)}*\frac{∂}{∂θ_j}h_θ(x^{(i)}) + \frac{(1-y^{(i)})}{(1-h_θ(x^{(i)}))*ln(e)}*\frac{∂}{∂θ_j}(1-h_θ(x^{(i)}))</math>
:::::<math> = \frac{y^{(i)}}{h_θ(x^{(i)})*ln(e)}*\frac{∂}{∂θ_j}h_θ(x^{(i)}) - \frac{(1-y^{(i)})}{(1-h_θ(x^{(i)}))*ln(e)}*\frac{∂}{∂θ_j}h_θ(x^{(i)})</math>
:::::<math> = (\frac{y^{(i)}}{h_θ(x^{(i)})*ln(e)}- \frac{(1-y^{(i)})}{(1-h_θ(x^{(i)}))*ln(e)})*\frac{∂}{∂θ_j}h_θ(x^{(i)})</math>
:::::<math> = \frac{y^{(i)}-h_θ(x^{(i)})}{h_θ(x^{(i)})*(1-h_θ(x^{(i)}))*ln(e)}*\frac{∂}{∂θ_j}h_θ(x^{(i)})</math> // substitute <math>h_θ(x^{(i)})=g(z)=\frac{1}{1+e^{-z}}</math>
:::::<math> = \frac{y^{(i)}*(1+e^{-z})^2-(1+e^{-z})}{e^{-z}*ln(e)} * \frac{∂}{∂θ_j}h_θ(x^{(i)})</math>
:::::<math> = \frac{y^{(i)}*(1+e^{-z})^2-(1+e^{-z})}{e^{-z}} * \frac{∂}{∂θ_j}h_θ(x^{(i)})</math> (Eq. 2)
:::Meanwhile, <math>\frac{∂}{∂θ_j}h_θ(x^{(i)}) = g'(z)*z'(θ^Tx^{(i)}) = (\frac{1}{1+e^{-z}})'*z'(θ^Tx^{(i)})</math>
:::::<math> = ((1+e^{-z})^{-1})'*z'(θ^Tx^{(i)})</math>
:::::<math> = \frac{e^{-z}}{(1+e^{-z})^{2}}*z'(θ^Tx^{(i)})</math>
:::::<math> = \frac{e^{-z}}{(1+e^{-z})^{2}}*\frac{∂}{∂θ_j}(θ^Tx^{(i)})</math>
:::::<math> = \frac{e^{-z}}{(1+e^{-z})^{2}}*\frac{∂}{∂θ_j}(θ_0*x_0^{(i)} + θ_1*x_1^{(i)} + θ_2*x_2^{(i)} +...+ θ_j*x_j^{(i)} +...+ θ_n*x_n^{(i)} )</math>
:::::<math> = \frac{e^{-z}}{(1+e^{-z})^{2}}*x_j^{(i)}</math> (Eq. 3)
:::Substituting Eq. 3 into Eq. 2:
:::::<math>\frac{∂}{∂θ_j}[y^{(i)}*logh_θ(x^{(i)})+(1-y^{(i)})*log(1-h_θ(x^{(i)}))] = (y^{(i)} - \frac{1}{1+e^{-z}})*x_j^{(i)}</math>
::::::<math> = (y^{(i)} - h_θ(x^{(i)}))*x_j^{(i)}</math> (Eq. 4)
:::Substituting Eq. 4 into Eq. 1:
:::::<math>θ_j:= θ_j-\frac{α}{m}\sum_{i=1}^m( (h_θ(x^{(i)})-y^{(i)}) x_j^{(i)} )</math>
Vectorized form:
:<math>θ=θ-\frac{α}{m}X^T(g(Xθ)-\vec{y})</math>
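The vectorized update can be sketched as follows (hypothetical defaults for <code>alpha</code> and <code>iters</code>):
<syntaxhighlight lang="python">
import numpy as np

def logistic_gradient_descent(X, y, alpha=0.1, iters=1000):
    """Repeat θ := θ − α/m * Xᵀ(g(Xθ) − y): identical in shape to the linear
    case, except that the sigmoid g is applied to Xθ."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        h = 1.0 / (1.0 + np.exp(-(X @ theta)))       # g(Xθ)
        theta = theta - alpha * X.T @ (h - y) / len(y)
    return theta
</syntaxhighlight>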
==Addressing Overfitting==
Introduce a '''regularization parameter''' (<math>λ</math>) into the cost function to penalize the parameters of the hypothesis function:
:<math>J(θ)=\frac{1}{2m}\sum_{i=1}^m(h_θ(x^{(i)})-y^{(i)})^2 + λ\sum_{j=1}^nθ_j^2</math>
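A sketch of this regularized cost, following the formula above (the sum starts at <math>j=1</math>, so <math>θ_0</math> is not penalized; <code>lam</code> is a hypothetical default):
<syntaxhighlight lang="python">
import numpy as np

def regularized_cost(theta, X, y, lam=1.0):
    """Squared-error cost plus λ * Σ_{j>=1} θ_j²; θ_0 is not penalized."""
    m = len(y)
    residuals = X @ theta - y
    return residuals @ residuals / (2 * m) + lam * (theta[1:] @ theta[1:])
</syntaxhighlight>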
=Week4 - Neural networks=
[[文件:Neural_netorwk.png|400px]]
:For the neural network above, each layer is computed as follows:
::<math>a_1^{(2)} = g( θ_{10}^{(1)}x_0 + θ_{11}^{(1)}x_1 + θ_{12}^{(1)}x_2 + θ_{13}^{(1)}x_3 )</math>
::<math>a_2^{(2)} = g( θ_{20}^{(1)}x_0 + θ_{21}^{(1)}x_1 + θ_{22}^{(1)}x_2 + θ_{23}^{(1)}x_3 )</math>
::<math>a_3^{(2)} = g( θ_{30}^{(1)}x_0 + θ_{31}^{(1)}x_1 + θ_{32}^{(1)}x_2 + θ_{33}^{(1)}x_3 )</math>
::<math>h_θ(x) = a_1^{(3)} = g( θ_{10}^{(2)}a_0^{(2)} + θ_{11}^{(2)}a_1^{(2)} + θ_{12}^{(2)}a_2^{(2)} + θ_{13}^{(2)}a_3^{(2)} )</math>
*If a neural network has <math>s_j</math> units in layer <math>j</math> and <math>s_{j+1}</math> units in layer <math>j+1</math>, then <math>θ^{(j)}</math> is a matrix of dimension <math>s_{j+1} \times (s_j+1)</math>.
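A forward-propagation sketch for a network like the one above (the weight values and the 3-unit hidden layer are assumed for illustration, not taken from the figure):
<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, thetas):
    """a^(j+1) = g(θ^(j) · a^(j)), prepending the bias unit a_0 ≡ 1 at each layer.
    Each θ^(j) has shape (s_{j+1}, s_j + 1)."""
    a = x
    for theta in thetas:
        a = np.concatenate(([1.0], a))  # add the bias unit a_0 = 1
        a = sigmoid(theta @ a)
    return a

# A 3-3-1 network as above: θ^(1) is 3×4, θ^(2) is 1×4 (made-up weights).
theta1 = np.full((3, 4), 0.1)
theta2 = np.full((1, 4), 0.1)
h = forward(np.array([0.5, -1.0, 2.0]), [theta1, theta2])  # h_θ(x) = a_1^(3)
</syntaxhighlight>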