DynamicAutoDiffCostFunction

class DynamicAutoDiffCostFunction

AutoDiffCostFunction requires the size of every parameter block to be known at compile time. In some applications this is not realistic, for example Bezier curve fitting or neural network training.

template <typename CostFunctor, int Stride = 4>
class DynamicAutoDiffCostFunction : public CostFunction {
};

In these scenarios DynamicAutoDiffCostFunction can be used instead. As with AutoDiffCostFunction, the user must define a templated functor, but the signature is slightly different. It looks roughly like this:

struct MyCostFunctor {
  template <typename T>
  bool operator()(T const* const* parameters, T* residuals) const {
    // parameters[i] points to the i-th parameter block.
    return true;
  }
};
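
As a concrete illustration (the VectorDifferenceFunctor name and its residual definition are our own sketch, not from the Ceres documentation), here is a functor that measures the elementwise deviation of a single runtime-sized parameter block from a vector of observations. The observations are stored in the functor so that operator() knows how many residuals to write:

#include <utility>
#include <vector>

// Residual i is parameters[0][i] - observed_[i]; the residual count equals
// observed_.size(), which is only known at runtime.
struct VectorDifferenceFunctor {
  explicit VectorDifferenceFunctor(std::vector<double> observed)
      : observed_(std::move(observed)) {}

  template <typename T>
  bool operator()(T const* const* parameters, T* residuals) const {
    const T* x = parameters[0];  // the single parameter block
    for (std::size_t i = 0; i < observed_.size(); ++i) {
      residuals[i] = x[i] - observed_[i];
    }
    return true;
  }

  std::vector<double> observed_;
};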

Because the sizes of the parameter blocks are determined at runtime, the user must also specify them after creating the DynamicAutoDiffCostFunction, for example:

auto* cost_function = new DynamicAutoDiffCostFunction<MyCostFunctor, 4>();
cost_function->AddParameterBlock(5);
cost_function->AddParameterBlock(10);
cost_function->SetNumResiduals(21);
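
Putting the pieces together, registering such a cost function with a ceres::Problem might look like the following sketch. The data values are illustrative; the constructor here passes the functor instance to DynamicAutoDiffCostFunction, which takes ownership of it:

#include <vector>
#include "ceres/ceres.h"

std::vector<double> observed = {1.0, 2.0, 3.0};  // runtime-sized data
std::vector<double> x(observed.size(), 0.0);     // the parameter block

auto* cost_function =
    new ceres::DynamicAutoDiffCostFunction<VectorDifferenceFunctor, 4>(
        new VectorDifferenceFunctor(observed));
cost_function->AddParameterBlock(static_cast<int>(x.size()));
cost_function->SetNumResiduals(static_cast<int>(observed.size()));

ceres::Problem problem;
problem.AddResidualBlock(cost_function, /*loss_function=*/nullptr, x.data());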

Under the hood, the implementation evaluates the cost function multiple times, computing a small set of the derivatives (four by default, controlled by the Stride template parameter) with each pass. There is a performance tradeoff with the size of the passes: smaller sizes are more cache efficient but result in a larger number of passes, while larger stride lengths can destroy cache locality while reducing the number of passes over the cost function. The optimal value depends on the number and sizes of the various parameter blocks. As a rule of thumb, try AutoDiffCostFunction before using DynamicAutoDiffCostFunction.
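
If profiling suggests the default is suboptimal, a different stride can be selected via the second template argument. A hypothetical variant of the construction above:

// Stride = 8: fewer passes over the cost function per Jacobian
// evaluation, at some cost in cache locality.
auto* wide_stride_cost =
    new ceres::DynamicAutoDiffCostFunction<VectorDifferenceFunctor, 8>(
        new VectorDifferenceFunctor(observed));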
