All Classes and Interfaces

Counting sort for floating point numbers.
Counting sort for floating point numbers.
Used to assign a point to a set of clusters.
Given a mixture model, it will compute the hard and soft assignments of a point to the Gaussians in the cluster.
Implementation of AssignCluster for K-Means.
Selects which axis the data should be split along when given a list of variances.
Selects the axis with the largest variance to split.
Randomly selects an axis from among those with the largest variances.
Selects which dimension the set of points should be split by, which point is used to split the lists, and splits the lists into two sets.
Splits the points in K-D Tree node by selecting the axis with the largest variance.
Implementation of BigDogArrayBase for any Object[].
Implementation of BigDogArrayBase for boolean[].
Implementation of BigDogArrayBase for float[].
Implementation of BigDogArrayBase for double[].
Implementation of BigDogArrayBase for int[].
Implementation of BigDogArrayBase for long[].
Implementation of BigDogArrayBase for byte[].
A growable array that is composed of internal blocks.
Assigns values to elements in a newly constructed array.
Specifies which strategy is used to select internal array size when adding new blocks.
Implements GradientLineFunction from separate classes for the function and gradient computations.
Numerically computes the gradient and line derivative.
A circular queue which can grow as needed.
A circular queue which can grow as needed.
A circular queue which can grow as needed.
A circular queue which can grow as needed.
Computes the combinations of size k given a set S of size N.
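To illustrate what such a combinations enumerator does (the class and method names below are a hypothetical sketch, not the DDogleg API), the k-element subsets of {0, ..., N-1} can be generated iteratively with an index array:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch (not the DDogleg API): enumerates every k-element
// combination of {0, ..., n-1} iteratively using an index array.
public class Combinations {
    public static List<int[]> all(int n, int k) {
        List<int[]> out = new ArrayList<>();
        int[] idx = new int[k];
        for (int i = 0; i < k; i++) idx[i] = i;          // first combination
        while (true) {
            out.add(idx.clone());
            int i = k - 1;                               // rightmost index that can grow
            while (i >= 0 && idx[i] == n - k + i) i--;
            if (i < 0) break;                            // all combinations visited
            idx[i]++;
            for (int j = i + 1; j < k; j++) idx[j] = idx[j - 1] + 1;
        }
        return out;
    }
}
```

For example, `all(4, 2)` yields the six pairs [0,1], [0,2], [0,3], [1,2], [1,3], [2,3].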
Given a set of points in N-dimensional space, computes a unique assignment of each point to a cluster.
Abstract way to update the cluster mean values from a set of points which have been assigned to a single cluster.
Configuration for GaussNewtonBase_F64.
Configuration for K-Means clustering.
Configuration for LevenbergMarquardt_F64.
Generic configuration for all NearestNeighbor algorithms.
Configuration for QuasiNewtonBFGS.
Configuration parameters for Trust Region.
An O(N) sorting routine for integer valued elements with a known upper and lower bound.
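The idea behind such an O(N) bounded-range sort can be sketched as a standalone counting sort (illustrative code, not the library's implementation):

```java
// Hypothetical sketch of an O(N) counting sort for integers in a known
// range [lo, hi]; not the DDogleg implementation itself.
public class CountingSortSketch {
    public static void sort(int[] data, int lo, int hi) {
        int[] count = new int[hi - lo + 1];
        for (int v : data) count[v - lo]++;      // histogram the values
        int out = 0;
        for (int i = 0; i < count.length; i++)   // write values back in order
            for (int c = 0; c < count[i]; c++)
                data[out++] = i + lo;
    }
}
```

The two passes touch each element a constant number of times, which is why the bound on the value range is essential.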
Used to turn on and off functions/classes which can automatically switch between concurrent operations.
Automatically generated file containing build version information.
Used to validate an algebraic Jacobian numerically.
Computes the distance a sample point is from the provided model.
Growable array which automatically creates, recycles, and resets its elements.
Growable array composed of booleans.
Growable array composed of floats.
Growable array composed of doubles.
Growable array composed of shorts.
Growable array composed of ints.
Growable array composed of longs.
Growable array composed of bytes.
Wrapper around a queue which allows it to act as a List.
Interface for growable queues of primitive types.
Common lambdas used in DDogleg.
A doubly linked list.
Function which processes an object.
Default implementation which does nothing.
Function which processes an object and passes in an integer that represents the index.
Default implementation which does nothing.
Equations for updating the approximate Hessian matrix using BFGS equations.
Returns Euclidean distance squared for double[] points.
Exhaustively finds the nearest-neighbor to an n-dimensional point by considering every possibility.
Standard expectation maximization based approach to fitting mixture-of-Gaussian models to a set of data.
Factory for creating an object with no arguments.
Factory for creating clustering algorithms.
Factory for creating implementations of NearestNeighbor.
Functions for creating numerical derivatives.
Creates optimization algorithms using easy-to-use interfaces.
Factory for sparse optimization algorithms.
Base class for FastArray and DogArray.
A growable array which provides access to the raw array but does not own the elements inside of the array.
Wrapper around FastArray which allows it to act as a List.
Uses a Sturm sequence to bound the location of real roots in the polynomial.
Computes the mean error and prunes points based on the number of standard deviations they are away.
Computes the median error and prunes points if they have more than the specified percentile error.
Fits the coefficients for a quadratic polynomial to a set of evenly spaced data in an array.
Quadratic solver for an arbitrary 2D region.
Fit observations to a 2D quadratic around a 3x3 region.
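A common use of quadratic fits like these is sub-sample peak interpolation over three evenly spaced samples; a minimal sketch (names are illustrative, not the library's API):

```java
// Sketch of the classic 3-point parabola peak interpolation that this kind
// of quadratic fit enables (illustrative names, not DDogleg's API).
public class QuadraticPeak {
    // Given samples y(-1), y(0), y(1) at unit spacing, returns the offset of
    // the fitted parabola's extremum from the center sample, in (-1, 1).
    public static double offset(double ym, double y0, double yp) {
        return 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp);
    }
}
```

For samples taken from y = -(x - 0.3)^2 the returned offset is exactly 0.3, recovering the true extremum.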
Base class for functions which take in an array of inputs and produce an array of outputs.
Function which takes in N parameters as input and outputs M elements.
Function that takes in a vector of length N and outputs a matrix with dimension M x N.
Function with N inputs and N outputs.
Function for non-linear optimization that has a single output and N inputs.
Function with a single scalar input and a single scalar output.
A Gaussian in a Gaussian Mixture Model.
Computes the likelihood of a Gaussian distribution.
Base class for Gauss-Newton based approaches for unconstrained optimization.
Optimization mode.
Contains functions for optimization algorithms that perform a line search and require the function's value and its gradient.
Handles creating and recycling data in a graph.
Hessian update using BFGS.
Uses DFP to estimate the Hessian and BFGS to estimate the inverse Hessian.
Abstraction layer for operations related to the Hessian matrix.
Hessian is represented as a dense matrix.
Hessian is represented as a sparse compressed column matrix.
Given the already computed Jacobian (broken up into a left and right side), compute the decomposed approximate Hessian matrix.
The approximate Hessian matrix (J'*J) is assumed to have the following block triangle form: [A B;C D].
Takes two functions which independently compute the function's value and derivative and allows them to be used in a coupled function.
Selects initial Gaussians for EM algorithm.
Selects the initial cluster positions for k-means.
Implementation of the seeding strategy described in [1].
Concurrent implementation of InitializePlusPlus
Seeds are selected by randomly picking points.
Specifies the inlier set as a fraction.
Specifies a fit threshold for an inlier set based upon a maximum allowed error.
Interface for iterative optimization classes.
K-D tree search which searches through multiple trees.
K-D Tree is short for k-dimensional tree and is a binary tree data structure used for quickly finding the nearest-neighbor of a k-dimensional point in a set.
Data type for each node in the binary tree.
Creates a new KD-Tree from a list of points and (optional) associated data.
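As a compact illustration of the structure described above, here is a minimal 2D K-D tree with median-split construction and pruned nearest-neighbor search (a hypothetical sketch, not DDogleg's KdTree):

```java
import java.util.Arrays;
import java.util.Comparator;

// Compact 2D K-D tree sketch (build + nearest-neighbor search); illustrative
// only, not the DDogleg implementation.
public class KdTree2D {
    static class Node {
        double[] point; Node left, right;
        Node(double[] p) { point = p; }
    }

    static Node build(double[][] points, int lo, int hi, int axis) {
        if (lo > hi) return null;
        Arrays.sort(points, lo, hi + 1, Comparator.comparingDouble(p -> p[axis]));
        int mid = (lo + hi) / 2;                         // median splits the node
        Node n = new Node(points[mid]);
        n.left = build(points, lo, mid - 1, (axis + 1) % 2);
        n.right = build(points, mid + 1, hi, (axis + 1) % 2);
        return n;
    }

    static double[] best;
    static double bestDist;

    static void nearest(Node n, double[] target, int axis) {
        if (n == null) return;
        double d = dist2(n.point, target);
        if (d < bestDist) { bestDist = d; best = n.point; }
        double delta = target[axis] - n.point[axis];
        Node near = delta < 0 ? n.left : n.right;        // descend toward target
        Node far = delta < 0 ? n.right : n.left;
        nearest(near, target, (axis + 1) % 2);
        if (delta * delta < bestDist)                    // visit far side only if it could help
            nearest(far, target, (axis + 1) % 2);
    }

    static double dist2(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1];
        return dx * dx + dy * dy;
    }

    public static double[] search(double[][] points, double[] target) {
        Node root = build(points, 0, points.length - 1, 0);
        best = null; bestDist = Double.MAX_VALUE;
        nearest(root, target, 0);
        return best;
    }
}
```

The pruning test on `delta * delta` is what gives the expected logarithmic average search time in low dimensions.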
Computes the distance between two points.
Euclidean squared distance.
Euclidean squared distance.
Memory management for recycling KdTree data structures.
Wrapper around KdTree for NearestNeighbor
Storage for the results of a K-D Tree search.
Interface for searching a single tree for the nearest-neighbor
Implementation of KdTreeSearchBestBinFirst which searches for the single best nearest-neighbor.
Standard algorithm for searching a KdTree for the nearest-neighbor of a search point.
Approximate search for K-D Trees using the best-bin-first method [1] that supports multiple trees.
Contains information on a node
Interface for searching a single tree for the N nearest-neighbors.
Implementation of KdTreeSearchBestBinFirst which searches for the N nearest-neighbors.
Standard algorithm for searching a KdTree for the nearest-neighbor of a search point.
List of different initialization techniques for k-means.
Provides access to the elements inside very large or small arrays.
Another technique similar to RANSAC known as Least Median of Squares (LMedS).
Concurrent version of LeastMedianOfSquares
Implementation of Levenberg-Marquardt non-linear least squares optimization.
Line search for nonlinear optimization.
Line search which meets the strong Wolfe line condition.
Line search algorithm that provides a guaranteed sufficient decrease according to the Wolfe condition.
Wrapper around List for LArrayAccessor.
Converts a least squares function into a nonlinear optimization function.
Convert the Jacobian of a least squares function into a nonlinear optimization gradient.
Operations on matrices that abstracts away the matrix type.
Computes the mean for points composed of double[].
Used to convert a model to and from an array parameterized format.
Computes a model from a set of points and optionally an initial estimate.
Given a set of points create a model hypothesis.
Can be used to create new instances of a model and copy the value of one model into another.
Default model manager.
Given a set of points, it finds a set of model parameters which fit the data robustly.
Given a set of points and a set of models, it selects which model and model parameters best fit the points robustly.
Extension of ModelMatcher where the models are specified after construction.
Draws a number from a multivariate Gaussian distribution.
Abstract interface for finding the nearest neighbor to a user specified point inside of a set of points in K-dimensional space.
An independent search instance.
Results from a Nearest-Neighbor search.
Finite difference numerical derivative calculation using the forward+backwards equation.
Finite difference numerical gradient calculation using forward equation.
Finite difference numerical gradient calculation using the forward+backwards equation.
Finite difference numerical gradient calculation using forward equation.
Finite difference numerical jacobian calculation using the forward+backwards equation.
Finite difference numerical gradient calculation using forward equation.
Finite difference numerical gradient calculation using forward equation.
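The forward+backward (central) difference used by these classes can be sketched in a few lines; the step-size choice below is a common heuristic, not a library constant:

```java
import java.util.function.DoubleUnaryOperator;

// Illustrative central (forward+backward) finite difference for a scalar
// function; not the DDogleg implementation.
public class NumericalDerivative {
    public static double central(DoubleUnaryOperator f, double x) {
        // relative step, floored so it stays nonzero near x = 0
        double h = Math.max(1e-8, Math.abs(x) * 1e-8);
        return (f.applyAsDouble(x + h) - f.applyAsDouble(x - h)) / (2.0 * h);
    }
}
```

The central form cancels the first-order truncation error, which is why it is typically more accurate than the one-sided forward equation at the cost of one extra function evaluation.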
Interface for computing the gradient of a set of functions given a set of model parameters.
This exception is thrown if something bad happens while optimizing that would make the results invalid.
Exhaustively computes all the permutations of a set, without recursion.
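Non-recursive permutation enumeration is commonly done with the classic lexicographic next-permutation step; a sketch (illustrative, not the library's exact method):

```java
// Sketch of non-recursive permutation enumeration via the lexicographic
// next-permutation step; illustrative, not DDogleg's API.
public class Permutations {
    // Advances a to its next lexicographic permutation in place; returns
    // false when a was already the last (fully descending) permutation.
    public static boolean next(int[] a) {
        int i = a.length - 2;
        while (i >= 0 && a[i] >= a[i + 1]) i--;   // skip the descending suffix
        if (i < 0) return false;
        int j = a.length - 1;
        while (a[j] <= a[i]) j--;                 // rightmost element > a[i]
        int t = a[i]; a[i] = a[j]; a[j] = t;
        for (int l = i + 1, r = a.length - 1; l < r; l++, r--) {
            t = a[l]; a[l] = a[r]; a[r] = t;      // reverse the suffix
        }
        return true;
    }
}
```

Starting from a sorted array, repeatedly calling `next` visits all N! orderings exactly once.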
Computes the distance between two points.
Contains a reference to a point and the original index of the point.
Data structure for storing polynomials.
Interface for finding the roots of a polynomial.
Provides functions for finding the roots of polynomials.
Various functions for manipulating primitive arrays.
Quasi-Newton nonlinear optimization using BFGS update on the approximate inverse Hessian with a line search.
QuickSelect searches for the k-th largest item in the list.
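A sketch of the quickselect idea using Lomuto partitioning, written here for the k-th smallest element (the description above notes the library's version looks for the k-th largest, which is the same routine with the comparison flipped):

```java
// Illustrative quickselect for the k-th smallest element (0-indexed);
// partially reorders the input array, like the real algorithm.
public class QuickSelectSketch {
    public static int kthSmallest(int[] a, int k) {
        int lo = 0, hi = a.length - 1;
        while (true) {
            int p = partition(a, lo, hi);
            if (p == k) return a[p];
            if (p < k) lo = p + 1; else hi = p - 1;   // recurse into one side only
        }
    }

    private static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi], store = lo;                 // Lomuto partition scheme
        for (int i = lo; i < hi; i++) {
            if (a[i] < pivot) { int t = a[i]; a[i] = a[store]; a[store] = t; store++; }
        }
        int t = a[store]; a[store] = a[hi]; a[hi] = t;
        return store;
    }
}
```

Because only one side of each partition is revisited, the expected cost is O(N), versus O(N log N) for a full sort.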
An implementation of the quick sort algorithm from Numerical Recipes Third Edition that is specialized for arrays of floats.
An implementation of the quick sort algorithm from Numerical Recipes Third Edition that is specialized for arrays of doubles.
An implementation of the quick sort algorithm from Numerical Recipes Third Edition that is specialized for arrays of doubles.
An implementation of the quick sort algorithm from Numerical Recipes Third Edition that is specialized for arrays of floats.
An implementation of the quick sort algorithm from Numerical Recipes Third Edition that is specialized for arrays of floats.
An implementation of the quick sort algorithm from Numerical Recipes Third Edition that is specialized for arrays of floats.
An implementation of the quick sort algorithm from Numerical Recipes Third Edition that is specialized for arrays of floats.
RANSAC is an abbreviation for "RANdom SAmple Consensus" and is an iterative algorithm.
Concurrent implementation of Ransac.
Modification of RANSAC that finds the best fit model and model parameters to a set of data.
Describes a model and RANSAC fit parameters for a specific type of object.
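The RANSAC loop (sample a minimal set, hypothesize a model, count inliers, keep the best) can be sketched for a simple 2D line model; the threshold, iteration count, and class names below are illustrative, not DDogleg defaults:

```java
import java.util.Random;

// Minimal RANSAC sketch fitting a line y = m*x + b from 2-point samples;
// illustrative only, not the library's implementation.
public class RansacLine {
    public static double[] fit(double[] x, double[] y, double tol, int iters, long seed) {
        Random rand = new Random(seed);
        double[] best = null;
        int bestInliers = -1;
        for (int it = 0; it < iters; it++) {
            int i = rand.nextInt(x.length), j = rand.nextInt(x.length);
            if (i == j || x[i] == x[j]) continue;        // degenerate sample
            double m = (y[j] - y[i]) / (x[j] - x[i]);    // model from minimal sample
            double b = y[i] - m * x[i];
            int inliers = 0;
            for (int k = 0; k < x.length; k++)           // score by inlier count
                if (Math.abs(y[k] - (m * x[k] + b)) <= tol) inliers++;
            if (inliers > bestInliers) { bestInliers = inliers; best = new double[]{m, b}; }
        }
        return best;                                      // {slope, intercept}
    }
}
```

Because each hypothesis comes from a minimal sample, a handful of gross outliers cannot drag the chosen model away from the consensus of the inliers.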
Simple class which reduces the need to declare new objects by helping you recycle them.
RecycleManager which maintains a used list.
Finds the roots of a polynomial using a companion matrix and eigenvalue decomposition.
Types of polynomial root finding algorithms that can be used.
Jacobian calculation for use in Schur Complement.
Provides a way to convert a regular Jacobian matrix into a SchurJacobian
Contains interpolation functions for use in line searches.
Initializes the mixture models by applying K-Means first.
Implementation of the shell sort algorithm from Numerical Recipes Third Edition.
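Shell sort is insertion sort run over a shrinking sequence of gaps; a sketch using the common 3h+1 gap sequence (illustrative, not the library's exact code):

```java
// Sketch of shell sort with the 3h+1 gap sequence (1, 4, 13, 40, ...);
// illustrative only, not the library's implementation.
public class ShellSortSketch {
    public static void sort(double[] a) {
        int gap = 1;
        while (gap < a.length / 3) gap = 3 * gap + 1;   // largest useful gap
        for (; gap >= 1; gap /= 3) {
            for (int i = gap; i < a.length; i++) {      // gapped insertion sort
                double v = a[i];
                int j = i;
                for (; j >= gap && a[j - gap] > v; j -= gap) a[j] = a[j - gap];
                a[j] = v;
            }
        }
    }
}
```

The early large-gap passes move elements long distances cheaply, so the final gap-1 pass (plain insertion sort) runs on nearly sorted data.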
Class which can be extended and allows the object to be sorted faster than a generic Comparable.
Class which can be extended and allows the object to be sorted faster than a generic Comparable.
Standard implementation of k-means [1].
Concurrent implementation of StandardKMeans
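The core assign/update iteration of standard k-means can be sketched in 1D; seeding and convergence testing are omitted and the names are illustrative, not DDogleg's API:

```java
// Minimal sketch of the k-means (Lloyd) iteration in 1D: assign each point
// to its nearest mean, then recompute each mean. Illustrative only.
public class KMeans1D {
    public static double[] run(double[] points, double[] means, int iterations) {
        means = means.clone();
        for (int it = 0; it < iterations; it++) {
            double[] sum = new double[means.length];
            int[] count = new int[means.length];
            for (double p : points) {
                int best = 0;                            // assignment step
                for (int k = 1; k < means.length; k++)
                    if (Math.abs(p - means[k]) < Math.abs(p - means[best])) best = k;
                sum[best] += p; count[best]++;
            }
            for (int k = 0; k < means.length; k++)       // update step
                if (count[k] > 0) means[k] = sum[k] / count[k];
        }
        return means;
    }
}
```

Each iteration can only lower the total within-cluster distance, which is why the loop converges; the result still depends on the initial seeds, hence the seeding strategies listed above.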
Outliers are removed by first fitting a model to all the data points.
Interface for computing error metrics and pruning features.
Provides a mechanism to stop the processing before completion.
Exception which is thrown when a process has been stopped early.
An implementation of the straight insert sort algorithm.
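Straight insertion sort itself fits in a few lines (an illustrative sketch, not the library's code):

```java
// Sketch of straight insertion sort; simple and efficient for small or
// nearly sorted arrays.
public class InsertionSortSketch {
    public static void sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int v = a[i], j = i - 1;
            while (j >= 0 && a[j] > v) { a[j + 1] = a[j]; j--; }  // shift larger right
            a[j + 1] = v;
        }
    }
}
```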
A Sturm sequence has the property that the number of sign changes can be used to compute the number of real roots in a polynomial.
Convenience class for swapping elements in arrays and lists.
Base class for all trust region implementations.
The Cauchy point is an approximate solution to the Trust Region subproblem.
Approximates the optimal solution to the Trust Region's sub-problem using a piecewise linear approach [1,2].
Simple data structure for passing a pair of data.
Simple data structure for passing a triple of data.
Simple data structure for passing a quad of data.
Implementations of Trust Region for UnconstrainedLeastSquares.
Implementations of UnconstrainedLeastSquaresSchur.
Non-linear least squares problems have a special structure which can be taken advantage of for optimization.
A variant on UnconstrainedLeastSquares for solving large scale systems which can be simplified using the Schur Complement.
Optimization algorithm which seeks to minimize F(X) ∈ ℜ, where X ∈ ℜ^N.
Performs common optimization tasks.
Generic interface for implementing verbose output to a PrintStream.
Vantage point tree implementation for nearest neighbor search.