CostFunctionModeling: Non-linear Least Squares


  • NormalPrior
  • LossFunction
  • LocalParameterization
  • AutoDiffLocalParameterization

NormalPrior
class NormalPrior : public CostFunction {
 public:
  // Check that the number of rows in the vector b are the same as the
  // number of columns in the matrix A, crash otherwise.
  NormalPrior(const Matrix& A, const Vector& b);
  virtual bool Evaluate(double const* const* parameters,
                        double* residuals,
                        double** jacobians) const;
};

This cost function has the form: cost(x) = ‖A(x − b)‖²
More often we are interested in costs of the form: cost(x) = (x − μ)ᵀS⁻¹(x − μ)
where μ is a vector and S is the covariance matrix; hence A = S^{-1/2}.
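As a quick sanity check on the identity A = S^{-1/2}, the following standalone sketch (plain C++ without Ceres; the function names are made up for illustration) evaluates both forms of the cost for a diagonal covariance and confirms they agree:

```cpp
#include <cmath>

// ||A(x - b)||^2 with diagonal A (only the diagonal entries are passed in).
double NormalPriorCost(const double* A_diag, const double* b,
                       const double* x, int n) {
  double cost = 0.0;
  for (int i = 0; i < n; ++i) {
    const double r = A_diag[i] * (x[i] - b[i]);  // residual A(x - b)
    cost += r * r;
  }
  return cost;
}

// (x - mu)^T S^{-1} (x - mu) with diagonal covariance S.
double MahalanobisCost(const double* S_diag, const double* mu,
                       const double* x, int n) {
  double cost = 0.0;
  for (int i = 0; i < n; ++i) {
    cost += (x[i] - mu[i]) * (x[i] - mu[i]) / S_diag[i];
  }
  return cost;
}
```

With S = diag(4, 9) the whitening matrix is A = diag(1/2, 1/3), and the two costs coincide for any x.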
LossFunction
In least squares problems the input terms may contain outliers, so a loss function is added to reduce the influence of those outliers.
class LossFunction {
 public:
  virtual void Evaluate(double s, double out[3]) const = 0;
};

The loss functions predefined in Ceres are TrivialLoss, HuberLoss, SoftLOneLoss, CauchyLoss, ArctanLoss, and TolerantLoss.
In addition there are ComposedLoss (which composes two loss functions), ScaledLoss, and LossFunctionWrapper.
LossFunctionWrapper allows the scale of the loss function to be changed during optimization.
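For concreteness, here is a sketch of the Evaluate convention using the Huber loss: s is the squared residual, and out receives ρ(s), ρ′(s), ρ″(s). This is a plain-C++ illustration following the definition of HuberLoss(a), not the library code itself:

```cpp
#include <cmath>

// Huber loss with parameter a: quadratic for s <= a^2, linear beyond.
// Writes rho(s), rho'(s), rho''(s) into out[0..2], as in the Ceres
// LossFunction::Evaluate convention.
void HuberEvaluate(double a, double s, double out[3]) {
  const double b = a * a;
  if (s > b) {
    const double r = std::sqrt(s);
    out[0] = 2.0 * a * r - b;      // rho(s)  = 2a*sqrt(s) - a^2
    out[1] = a / r;                // rho'(s) = a / sqrt(s)
    out[2] = -out[1] / (2.0 * s);  // rho''(s)
  } else {
    out[0] = s;    // inlier region: identity loss
    out[1] = 1.0;
    out[2] = 0.0;
  }
}
```

Note that ρ′(s) shrinks toward zero as s grows, which is exactly how large (outlier) residuals are down-weighted.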
Problem problem;

// Add parameter blocks

CostFunction* cost_function =
    new AutoDiffCostFunction<UW_Camera_Mapper, 2, 9, 3>(
        new UW_Camera_Mapper(feature_x, feature_y));

LossFunctionWrapper* loss_function =
    new LossFunctionWrapper(new HuberLoss(1.0), TAKE_OWNERSHIP);

problem.AddResidualBlock(cost_function, loss_function, parameters);

Solver::Options options;
Solver::Summary summary;
Solve(options, &problem, &summary);

loss_function->Reset(new HuberLoss(0.5), TAKE_OWNERSHIP);
Solve(options, &problem, &summary);

NOTE: revisit the manual and the theory here.
LocalParameterization
Sometimes the parameter x is overparameterized. If x lives on a manifold, we can parameterize its tangent space instead. For example, a sphere in three-dimensional space is a two-dimensional manifold. At every point on the sphere, the plane tangent to that point defines the tangent space. For a cost function defined on the sphere, given a point x, moving x along the normal direction is useless. A better approach is therefore to optimize a two-dimensional increment Δx in the tangent space at that point, and then project the increment back onto the sphere:
x′ = x ⊞ Δx
x′ has the same dimension as x, while the dimension of Δx is less than or equal to that of x. ⊞ is a generalized vector addition, satisfying x = x ⊞ 0.
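A minimal concrete instance of ⊞ (assuming the unit circle S¹ in R² as the manifold; the names here are hypothetical, not Ceres API): x is a 2-vector with ‖x‖ = 1, Δx is a one-dimensional tangent increment (an angle), and the result stays on the circle:

```cpp
#include <cmath>

// x ⊞ delta on the unit circle: apply the tangent-space increment delta
// (an angle) by rotating x. GlobalSize is 2, LocalSize is 1.
void CirclePlus(const double x[2], double delta, double x_plus_delta[2]) {
  const double c = std::cos(delta), s = std::sin(delta);
  x_plus_delta[0] = c * x[0] - s * x[1];
  x_plus_delta[1] = s * x[0] + c * x[1];
}
```

Since rotation preserves the norm, every increment maps a point of the manifold to another point of the manifold, and delta = 0 gives back x, i.e. x ⊞ 0 = x.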
The LocalParameterization class requires implementing the following functions:
int LocalParameterization::GlobalSize()
int LocalParameterization::LocalSize()
bool LocalParameterization::Plus(const double* x,
                                 const double* delta,
                                 double* x_plus_delta) const
bool LocalParameterization::ComputeJacobian(const double* x,
                                            double* jacobian) const
bool MultiplyByJacobian(const double* x,
                        const int num_rows,
                        const double* global_matrix,
                        double* local_matrix) const

Plus implements x ⊞ Δx.
ComputeJacobian implements J = ∂(x ⊞ Δx)/∂Δx evaluated at Δx = 0; J is stored in row-major order.
MultiplyByJacobian implements local_matrix = global_matrix * jacobian, where global_matrix is a num_rows × GlobalSize matrix and local_matrix is a num_rows × LocalSize matrix.
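The ComputeJacobian convention can be checked numerically: J is the GlobalSize × LocalSize derivative of x ⊞ Δx with respect to Δx at Δx = 0. The sketch below uses a hypothetical unit-circle parameterization (plain C++, not Ceres API) and compares the analytic Jacobian against central finite differences:

```cpp
#include <cmath>

// x ⊞ delta on the unit circle (GlobalSize 2, LocalSize 1).
void CirclePlus(const double x[2], double delta, double out[2]) {
  const double c = std::cos(delta), s = std::sin(delta);
  out[0] = c * x[0] - s * x[1];
  out[1] = s * x[0] + c * x[1];
}

// Analytic J = d(x ⊞ delta)/d(delta) at delta = 0: a 2 x 1 matrix.
// Differentiating the rotation at delta = 0 gives (-x[1], x[0]).
void CircleJacobian(const double x[2], double jacobian[2]) {
  jacobian[0] = -x[1];
  jacobian[1] =  x[0];
}
```

Geometrically the column (−x[1], x[0]) is the tangent direction to the circle at x, which is exactly what the local increment moves along.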
Ceres defines several instances:
IdentityParameterization: x ⊞ Δx = x + Δx
SubsetParameterization: x ⊞ Δx = x + [0; I] Δx
QuaternionParameterization: x ⊞ Δx = [cos(‖Δx‖), sin(‖Δx‖)/‖Δx‖ · Δx] ∗ x
HomogeneousVectorParameterization: x ⊞ Δx = [sin(0.5‖Δx‖)/‖Δx‖ · Δx, cos(0.5‖Δx‖)] ∗ x
ProductParameterization: for example, for SE(3),

ProductParameterization se3_param(new QuaternionParameterization(),
                                  new IdentityParameterization(3));

AutoDiffLocalParameterization
Requires defining a class with a templated operator() (a functor) that computes x ⊞ Δx.
struct QuaternionPlus {
  template <typename T>
  bool operator()(const T* x, const T* delta, T* x_plus_delta) const {
    const T squared_norm_delta =
        delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2];

    T q_delta[4];
    if (squared_norm_delta > T(0.0)) {
      T norm_delta = sqrt(squared_norm_delta);
      const T sin_delta_by_delta = sin(norm_delta) / norm_delta;
      q_delta[0] = cos(norm_delta);
      q_delta[1] = sin_delta_by_delta * delta[0];
      q_delta[2] = sin_delta_by_delta * delta[1];
      q_delta[3] = sin_delta_by_delta * delta[2];
    } else {
      // We do not just use q_delta = [1,0,0,0] here because that is a
      // constant and when used for automatic differentiation will
      // lead to a zero derivative. Instead we take a first order
      // approximation and evaluate it at zero.
      q_delta[0] = T(1.0);
      q_delta[1] = delta[0];
      q_delta[2] = delta[1];
      q_delta[3] = delta[2];
    }

    QuaternionProduct(q_delta, x, x_plus_delta);
    return true;
  }
};

LocalParameterization* local_parameterization =
    new AutoDiffLocalParameterization<QuaternionPlus, 4, 3>;
                                                      |  |
                           Global Size ---------------+  |
                           Local Size -------------------+
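To see the functor in action without the rest of Ceres, here is a plain-double transcription of the same plus operation with the quaternion product written out inline (Ceres itself provides QuaternionProduct in rotation.h; the convention is q = [w, x, y, z]):

```cpp
#include <cmath>

// x ⊞ delta for unit quaternions: build the increment quaternion q_delta
// from the 3-vector delta, then apply the Hamilton product q_delta ⊗ x.
void QuaternionPlusDouble(const double x[4], const double delta[3],
                          double out[4]) {
  const double sq =
      delta[0] * delta[0] + delta[1] * delta[1] + delta[2] * delta[2];
  double q[4];
  if (sq > 0.0) {
    const double n = std::sqrt(sq);
    const double k = std::sin(n) / n;
    q[0] = std::cos(n);
    q[1] = k * delta[0];
    q[2] = k * delta[1];
    q[3] = k * delta[2];
  } else {
    q[0] = 1.0;
    q[1] = delta[0];
    q[2] = delta[1];
    q[3] = delta[2];
  }
  // Hamilton product q ⊗ x.
  out[0] = q[0] * x[0] - q[1] * x[1] - q[2] * x[2] - q[3] * x[3];
  out[1] = q[0] * x[1] + q[1] * x[0] + q[2] * x[3] - q[3] * x[2];
  out[2] = q[0] * x[2] - q[1] * x[3] + q[2] * x[0] + q[3] * x[1];
  out[3] = q[0] * x[3] + q[1] * x[2] - q[2] * x[1] + q[3] * x[0];
}
```

Because q_delta has unit norm (for sq > 0) and the Hamilton product of unit quaternions is a unit quaternion, the increment keeps x on the quaternion manifold, as a LocalParameterization should.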
