ML
Cost Function
Squared error function / mean squared error: [math]J(θ)=\frac{1}{2m}\sum_{i=1}^m(h_θ(x^{(i)})-y^{(i)})^2[/math]
Cross entropy: [math]J(θ)=-\frac{1}{m}\sum_{i=1}^m[y^{(i)}\log h_θ(x^{(i)})+(1-y^{(i)})\log(1-h_θ(x^{(i)}))][/math]
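As a minimal sketch of how the two costs are computed (an illustration, not part of the original page): the code below assumes a linear hypothesis h_θ(x)=θᵀx for the squared error and a sigmoid hypothesis, as in logistic regression, for the cross entropy; the names mse_cost and cross_entropy_cost are hypothetical.

 import numpy as np

 def mse_cost(theta, X, y):
     # J(theta) = 1/(2m) * sum_i (h_theta(x_i) - y_i)^2,
     # assuming a linear hypothesis h_theta(x) = theta^T x.
     m = len(y)
     residuals = X @ theta - y
     return (residuals @ residuals) / (2 * m)

 def sigmoid(z):
     return 1.0 / (1.0 + np.exp(-z))

 def cross_entropy_cost(theta, X, y):
     # J(theta) = -1/m * sum_i [y_i*log(h_i) + (1-y_i)*log(1-h_i)],
     # assuming h_theta(x) = sigmoid(theta^T x) as in logistic regression.
     m = len(y)
     h = sigmoid(X @ theta)
     eps = 1e-12  # clip to avoid log(0)
     return -(y @ np.log(h + eps) + (1 - y) @ np.log(1 - h + eps)) / m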
Gradient Descent
[math]θ_j:=θ_j-α\frac{∂}{∂θ_j}J(θ)[/math], where α is the learning rate and all components θ_j are updated simultaneously.
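A minimal sketch of the update loop (again an illustration, not from the original page), applied to the mean squared error cost above, whose gradient is [math]\frac{∂}{∂θ_j}J(θ)=\frac{1}{m}\sum_{i=1}^m(h_θ(x^{(i)})-y^{(i)})x_j^{(i)}[/math]; the function name gradient_descent and the default hyperparameters are assumptions.

 import numpy as np

 def gradient_descent(X, y, alpha=0.01, num_iters=1000):
     # Repeats theta_j := theta_j - alpha * dJ/dtheta_j, updating all
     # components of theta simultaneously each step; J here is the
     # mean squared error, with gradient (1/m) * X^T (X @ theta - y).
     m, n = X.shape
     theta = np.zeros(n)
     for _ in range(num_iters):
         grad = X.T @ (X @ theta - y) / m
         theta -= alpha * grad
     return theta

Because the squared error cost is convex, a sufficiently small learning rate α makes the iterates converge toward the least-squares solution.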