Provide MATLAB code that optimizes the Huber regression loss function using Newton's method
Here is MATLAB code that minimizes the Huber regression loss using Newton's method:
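For reference, the per-sample loss implemented by the code below is the standard Huber loss with threshold alpha, applied to the residual r_i = y_i - x_i' * theta:

```latex
\ell_\alpha(r_i) =
\begin{cases}
  \tfrac{1}{2} r_i^2, & |r_i| \le \alpha, \\[4pt]
  \alpha \, |r_i| - \tfrac{1}{2}\alpha^2, & |r_i| > \alpha.
\end{cases}
```

The quadratic branch behaves like least squares near zero, while the linear branch grows only linearly in the residual, which is what makes the estimator robust to outliers.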
function [theta, J_history] = huber_regression(X, y, alpha, tol, max_iter)
% HUBER_REGRESSION performs Huber regression on the input data X and output
% variable y using Newton's method.
%
% INPUTS:
%   X:        Input data matrix of size (m x n), where m is the number of
%             training examples and n is the number of features.
%   y:        Output variable vector of size (m x 1).
%   alpha:    Huber loss threshold.
%   tol:      Tolerance value for convergence.
%   max_iter: Maximum number of iterations.
%
% OUTPUTS:
%   theta:     Learned parameters of size (n x 1).
%   J_history: Vector containing the history of the loss function value
%              after each iteration.
% Initialize parameters
[m, n] = size(X);
theta = zeros(n, 1);
J_history = zeros(max_iter, 1);

% Define the Huber loss function and its gradient
loss = @(theta) huber_loss(X, y, theta, alpha);
grad = @(theta) huber_grad(X, y, theta, alpha);
% Perform Newton's method to minimize the loss function
for i = 1:max_iter
    % Compute the Hessian matrix
    H = hessian(X, y, theta, alpha);

    % Compute the Newton direction update. Solve the linear system with
    % the backslash operator instead of forming inv(H) explicitly.
    delta_theta = -H \ grad(theta);

    % Update the parameters
    theta = theta + delta_theta;

    % Compute the new loss function value
    J_history(i) = loss(theta);

    % Check for convergence
    if norm(delta_theta) < tol
        break;
    end
end

% Trim unused entries of the loss history
J_history = J_history(1:i);
end
function H = hessian(X, y, theta, alpha)
% HESSIAN computes a positive-definite approximation to the Hessian of the
% Huber loss. In the linear region the exact second derivative is zero, so
% an IRLS-style weight alpha / |residual| is used there to keep the matrix
% invertible.

% Initialize variables
[m, n] = size(X);
H = zeros(n, n);

% Compute the (approximate) Hessian matrix
for i = 1:m
    xi = X(i, :)';
    yi = y(i);
    zi = xi' * theta;
    if abs(yi - zi) <= alpha
        w = 1;
    else
        w = alpha / abs(yi - zi);
    end
    H = H + w * (xi * xi');
end
end
function J = huber_loss(X, y, theta, alpha)
% HUBER_LOSS computes the Huber loss function.

% Initialize variables
[m, n] = size(X);
J = 0;

% Compute the loss function
for i = 1:m
    xi = X(i, :)';
    yi = y(i);
    zi = xi' * theta;
    if abs(yi - zi) <= alpha
        J = J + 0.5 * (yi - zi)^2;
    else
        J = J + alpha * abs(yi - zi) - 0.5 * alpha^2;
    end
end
end
function g = huber_grad(X, y, theta, alpha)
% HUBER_GRAD computes the gradient of the Huber loss function.

% Initialize variables
[m, n] = size(X);
g = zeros(n, 1);

% Compute the gradient
for i = 1:m
    xi = X(i, :)';
    yi = y(i);
    zi = xi' * theta;
    if abs(yi - zi) <= alpha
        g = g + xi * (zi - yi);
    else
        g = g + alpha * xi * sign(zi - yi);
    end
end
end
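A minimal usage sketch, assuming the functions above are saved on the MATLAB path; the data here are synthetic and purely illustrative:

```matlab
% Synthetic regression problem with a few outliers
rng(0);
m = 100;
X = [ones(m, 1), randn(m, 1)];      % intercept column plus one feature
theta_true = [2; -3];
y = X * theta_true + 0.1 * randn(m, 1);
y(1:5) = y(1:5) + 10;               % inject outliers into the first 5 targets

% Fit with Huber threshold 1.0, tolerance 1e-8, at most 50 iterations
[theta, J_history] = huber_regression(X, y, 1.0, 1e-8, 50);

disp(theta);       % should land near [2; -3] despite the outliers
plot(J_history);   % loss should decrease as the iterations proceed
```

Because the Huber loss downweights large residuals, the fitted theta should stay close to theta_true even with the injected outliers, whereas an ordinary least-squares fit would be pulled toward them.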
Original source: https://www.cveoy.top/t/topic/dHOV. Copyright belongs to the author. Do not repost or scrape!