
=Cost Function=

Squared error function / mean squared error: <math>J(\theta)=\frac{1}{2m}\sum_{i=1}^m\left(h_\theta(x^{(i)})-y^{(i)}\right)^2</math>
Cross entropy: <math>J(\theta)=-\frac{1}{m}\sum_{i=1}^m\left[y^{(i)}\log h_\theta(x^{(i)})+\left(1-y^{(i)}\right)\log\left(1-h_\theta(x^{(i)})\right)\right]</math>
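
A minimal NumPy sketch of both costs, assuming a linear hypothesis <math>h_\theta(x)=\theta^Tx</math> for the squared error and a sigmoid hypothesis for cross entropy; the function names and the epsilon clamp are illustrative choices, not from this page.

<syntaxhighlight lang="python">
import numpy as np

def mse_cost(theta, X, y):
    """Squared error J(theta) = 1/(2m) * sum((h(x_i) - y_i)^2)
    with a linear hypothesis h_theta(x) = X @ theta."""
    m = len(y)
    residuals = X @ theta - y
    return np.sum(residuals ** 2) / (2 * m)

def cross_entropy_cost(theta, X, y):
    """Cross entropy J(theta) = -1/m * sum(y*log(h) + (1-y)*log(1-h))
    with a sigmoid hypothesis h_theta(x) = sigmoid(X @ theta)."""
    m = len(y)
    h = 1.0 / (1.0 + np.exp(-(X @ theta)))
    h = np.clip(h, 1e-12, 1 - 1e-12)  # clamp to avoid log(0)
    return -np.sum(y * np.log(h) + (1 - y) * np.log(1 - h)) / m
</syntaxhighlight>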

=Gradient Descent=

Repeat until convergence, updating all <math>\theta_j</math> simultaneously: <math>\theta_j := \theta_j - \alpha\frac{\partial}{\partial\theta_j}J(\theta)</math>
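
A minimal sketch of batch gradient descent on the squared error cost above, again with a linear hypothesis; <code>alpha</code> and <code>num_iters</code> are arbitrary illustrative values.

<syntaxhighlight lang="python">
import numpy as np

def gradient_descent(X, y, alpha=0.1, num_iters=500):
    """theta_j := theta_j - alpha * dJ/dtheta_j, where for the squared
    error cost dJ/dtheta = (1/m) * X^T (X @ theta - y)."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_iters):
        gradient = X.T @ (X @ theta - y) / m
        theta -= alpha * gradient  # simultaneous update of every theta_j
    return theta

# Toy usage: fit y = 2x with a bias column of ones.
X = np.column_stack([np.ones(5), np.arange(5.0)])
y = 2.0 * np.arange(5.0)
theta = gradient_descent(X, y)  # theta approaches [0, 2]
</syntaxhighlight>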