diff --git a/modules/contrib/doc/openfabmap.rst b/modules/contrib/doc/openfabmap.rst
index d93b4ee1a86636b7924459760e5061ddba6e852c..6e8a9f6d54f559dae068aafcedcb876461599f59 100644
--- a/modules/contrib/doc/openfabmap.rst
+++ b/modules/contrib/doc/openfabmap.rst
@@ -9,10 +9,10 @@ FAB-MAP is an approach to appearance-based place recognition. FAB-MAP compares i

 openFABMAP requires training data (e.g. a collection of images from a similar but not identical environment) to construct a visual vocabulary for the visual bag-of-words model, along with a Chow-Liu tree representation of feature likelihood and for use in the Sampled new place method (see below).

-FabMap
+of2::FabMap
 --------------------

-.. ocv:class:: FabMap
+.. ocv:class:: of2::FabMap

 The main FabMap class performs the comparison between visual bags-of-words extracted from one or more images. The FabMap class is instantiated as one of the four inherited FabMap classes (FabMap1, FabMapLUT, FabMapFBO, FabMap2). Each inherited class performs the comparison differently based on algorithm iterations as published (see each class below for specifics). A Chow-Liu tree, detector model parameters and some option flags are common to all Fabmap variants and are supplied on class creation. Training data (visual bag-of-words) is supplied to the class if using the SAMPLED new place method. Test data (visual bag-of-words) is supplied as images to which query bag-of-words are compared against. The common flags are listed below: ::

@@ -149,10 +149,10 @@ The inverted index FAB-MAP as in [IJRR2010]_. This version of FAB-MAP is the fas

 .. [ICRA2011] A. Glover, et al., "OpenFABMAP: An Open Source Toolbox for Appearance-based Loop Closure Detection," in IEEE International Conference on Robotics and Automation, St Paul, Minnesota, 2011

-ImageMatch
+of2::IMatch
 --------------------

-.. ocv:struct:: IMatch
+.. ocv:struct:: of2::IMatch

 FAB-MAP comparison results are stored in a vector of IMatch structs. Each IMatch structure provides the index of the provided query bag-of-words, the index of the test bag-of-words, the raw log-likelihood of the match (independent of other comparisons), and the match probability (normalised over other comparison likelihoods).

@@ -180,48 +180,48 @@ FAB-MAP comparison results are stored in a vector of IMatch structs. Each IMatch

     };

-Chow-Liu Tree
+of2::ChowLiuTree
 --------------------

-.. ocv:class:: ChowLiuTree
+.. ocv:class:: of2::ChowLiuTree

 The Chow-Liu tree is a probabilistic model of the environment in terms of feature occurance and co-occurance. The Chow-Liu tree is a form of Bayesian network. FAB-MAP uses the model when calculating bag-of-words similarity by taking into account feature saliency. Training data is provided to the ChowLiuTree class in the form of bag-of-words image descriptors. The make function produces a cv::Mat that encodes the tree structure.

-.. ocv:function:: ChowLiuTree::ChowLiuTree()
+.. ocv:function:: of2::ChowLiuTree::ChowLiuTree()

-.. ocv:function:: void add(const Mat& imgDescriptor)
+.. ocv:function:: void of2::ChowLiuTree::add(const Mat& imgDescriptor)

    :param imgDescriptor: bag-of-words image descriptors stored as rows in a Mat

-.. ocv:function:: void add(const vector<Mat>& imgDescriptors)
+.. ocv:function:: void of2::ChowLiuTree::add(const vector<Mat>& imgDescriptors)

    :param imgDescriptors: a vector containing multiple bag-of-words image descriptors

-.. ocv:function:: const vector<Mat>& getImgDescriptors() const
+.. ocv:function:: const vector<Mat>& of2::ChowLiuTree::getImgDescriptors() const

 Returns a vector containing multiple bag-of-words image descriptors

-.. ocv:function:: Mat make(double infoThreshold = 0.0)
+.. ocv:function:: Mat of2::ChowLiuTree::make(double infoThreshold = 0.0)

    :param infoThreshold: a threshold can be set to reduce the amount of memory used when making the Chow-Liu tree, which can occur with large vocabulary sizes. This function can fail if the threshold is set too high. If memory is an issue the value must be set by trial and error (~0.0005)

-BOWMSCTrainer
+of2::BOWMSCTrainer
 --------------------

-.. ocv:class:: BOWMSCTrainer : public BOWTrainer
+.. ocv:class:: of2::BOWMSCTrainer : public of2::BOWTrainer

 BOWMSCTrainer is a custom clustering algorithm used to produce the feature vocabulary required to create bag-of-words representations. The algorithm is an implementation of [AVC2007]_. Arguments against using K-means for the FAB-MAP algorithm are discussed in [IJRR2010]_. The BOWMSCTrainer inherits from the cv::BOWTrainer class, overwriting the cluster function.

-.. ocv:function:: BOWMSCTrainer::BOWMSCTrainer(double clusterSize = 0.4)
+.. ocv:function:: of2::BOWMSCTrainer::BOWMSCTrainer(double clusterSize = 0.4)

    :param clusterSize: the specificity of the vocabulary produced. A smaller cluster size will instigate a larger vocabulary.

-.. ocv:function:: virtual Mat cluster() const
+.. ocv:function:: virtual Mat of2::BOWMSCTrainer::cluster() const

 Cluster using features added to the class

-.. ocv:function:: virtual Mat cluster(const Mat& descriptors) const
+.. ocv:function:: virtual Mat of2::BOWMSCTrainer::cluster(const Mat& descriptors) const

    :param descriptors: feature descriptors provided as rows of the Mat.
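The renamed ``of2`` entries above form a short pipeline: build a Chow-Liu tree from training bag-of-words descriptors, construct a FabMap variant from it, then compare query descriptors. A minimal sketch, assuming the 2.4-era ``of2::FabMap2`` constructor and ``compare`` overload; the detector-model probabilities and flag combination below are illustrative assumptions, not values taken from this patch: ::

    #include <vector>
    #include <opencv2/core/core.hpp>
    #include <opencv2/contrib/openfabmap.hpp>
    using namespace cv;

    // trainData/testData: bag-of-words image descriptors, one image per row
    void fabmapSketch(const Mat& trainData, const Mat& testData)
    {
        of2::ChowLiuTree treeBuilder;
        treeBuilder.add(trainData);                // rows are bag-of-words descriptors
        Mat clTree = treeBuilder.make();           // tree structure encoded as a cv::Mat

        // Detector-model probabilities (0.39, 0) and the flag combination are
        // illustrative assumptions, not values prescribed by these docs.
        of2::FabMap2 fabmap(clTree, 0.39, 0, of2::FabMap::SAMPLED | of2::FabMap::CHOW_LIU);
        fabmap.addTraining(trainData);             // needed by the SAMPLED new place method

        std::vector<of2::IMatch> matches;
        fabmap.compare(testData, matches, true);   // true: add each query to the test data
    }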
diff --git a/modules/core/doc/operations_on_arrays.rst b/modules/core/doc/operations_on_arrays.rst
index 1489334d3da138cdec51a71372863b569f775b92..371c5413bbbb74be1fc22895b67ca3cc5c12ef63 100644
--- a/modules/core/doc/operations_on_arrays.rst
+++ b/modules/core/doc/operations_on_arrays.rst
@@ -1085,7 +1085,7 @@
 Calculates eigenvalues and eigenvectors of a symmetric matrix.

 .. ocv:function:: bool eigen(InputArray src, OutputArray eigenvalues, OutputArray eigenvectors, int lowindex=-1,int highindex=-1)

-.. ocv:pyfunction:: cv2.eigen(src, calculateEigenvectors[, eigenvalues[, eigenvectors]]) -> retval, eigenvalues, eigenvectors
+.. ocv:pyfunction:: cv2.eigen(src, computeEigenvectors[, eigenvalues[, eigenvectors]]) -> retval, eigenvalues, eigenvectors

 .. ocv:cfunction:: void cvEigenVV( CvArr* mat, CvArr* evects, CvArr* evals, double eps=0, int lowindex=-1, int highindex=-1 )

diff --git a/modules/gpu/doc/feature_detection_and_description.rst b/modules/gpu/doc/feature_detection_and_description.rst
index 4129ba8f09d78759f4064f53b3d83d499457e907..5a6f85c5f70113f36802736b12aefdde55d191c2 100644
--- a/modules/gpu/doc/feature_detection_and_description.rst
+++ b/modules/gpu/doc/feature_detection_and_description.rst
@@ -350,7 +350,7 @@
 gpu::ORB_GPU::downloadKeyPoints
 -------------------------------------
 Download keypoints from GPU to CPU memory.

-.. ocv:function:: void gpu::ORB_GPU::downloadKeyPoints( GpuMat& d_keypoints, std::vector<KeyPoint>& keypoints )
+.. ocv:function:: static void gpu::ORB_GPU::downloadKeyPoints( const GpuMat& d_keypoints, std::vector<KeyPoint>& keypoints )

@@ -358,7 +358,7 @@
 gpu::ORB_GPU::convertKeyPoints
 -------------------------------------
 Converts keypoints from GPU representation to vector of ``KeyPoint``.

-.. ocv:function:: void gpu::ORB_GPU::convertKeyPoints( Mat& d_keypoints, std::vector<KeyPoint>& keypoints )
+.. ocv:function:: static void gpu::ORB_GPU::convertKeyPoints( const Mat& d_keypoints, std::vector<KeyPoint>& keypoints )
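For readers checking the ``cv2.eigen`` parameter rename in the core hunk above, the C++ ``eigen`` signature it documents is unchanged; a small self-contained call, with the eigenvalues worked out by hand for a 2x2 symmetric matrix: ::

    #include <opencv2/core/core.hpp>
    using namespace cv;

    void eigenDemo()
    {
        Mat A = (Mat_<double>(2, 2) << 2, 1,
                                       1, 2);  // symmetric; eigenvalues are 3 and 1
        Mat evals, evecs;
        eigen(A, evals, evecs);                // eigenvalues in descending order,
                                               // eigenvectors stored as rows
        // evals = [3; 1], first eigenvector ~ (0.707, 0.707)
    }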
diff --git a/modules/gpu/doc/image_processing.rst b/modules/gpu/doc/image_processing.rst
index ce05089b97df06383ba3752531c046adae44ca02..69b171e74eae66355a82195202398cc8396e0289 100644
--- a/modules/gpu/doc/image_processing.rst
+++ b/modules/gpu/doc/image_processing.rst
@@ -824,7 +824,7 @@
 gpu::bilateralFilter
 --------------------
 Performs bilateral filtering of passed image

-.. ocv:function:: void gpu::bilateralFilter(const GpuMat& src, GpuMat& dst, int kernel_size, float sigma_color, float sigma_spatial, int borderMode, Stream& stream = Stream::Null())
+.. ocv:function:: void gpu::bilateralFilter( const GpuMat& src, GpuMat& dst, int kernel_size, float sigma_color, float sigma_spatial, int borderMode=BORDER_DEFAULT, Stream& stream=Stream::Null() )

    :param src: Source image. Supports only (channles != 2 && depth() != CV_8S && depth() != CV_32S && depth() != CV_64F).

@@ -849,7 +849,7 @@
 gpu::nonLocalMeans
 -------------------
 Performs pure non local means denoising without any simplification, and thus it is not fast.

-.. ocv:function:: void nonLocalMeans(const GpuMat& src, GpuMat& dst, float h, int search_window = 21, int block_size = 7, int borderMode = BORDER_DEFAULT, Stream& s = Stream::Null())
+.. ocv:function:: void gpu::nonLocalMeans(const GpuMat& src, GpuMat& dst, float h, int search_window = 21, int block_size = 7, int borderMode = BORDER_DEFAULT, Stream& s = Stream::Null())

    :param src: Source image. Supports only CV_8UC1, CV_8UC2 and CV_8UC3.

@@ -877,10 +877,10 @@ gpu::FastNonLocalMeansDenoising
        {
        public:
            //! Simple method, recommended for grayscale images (though it supports multichannel images)
-           void simpleMethod(const GpuMat& src, GpuMat& dst, float h, int search_window = 21, int block_size = 7, Stream& s = Stream::Null());
+           void simpleMethod(const GpuMat& src, GpuMat& dst, float h, int search_window = 21, int block_size = 7, Stream& s = Stream::Null())

            //! Processes luminance and color components separatelly
-           void labMethod(const GpuMat& src, GpuMat& dst, float h_luminance, float h_color, int search_window = 21, int block_size = 7, Stream& s = Stream::Null());
+           void labMethod(const GpuMat& src, GpuMat& dst, float h_luminance, float h_color, int search_window = 21, int block_size = 7, Stream& s = Stream::Null())
        };

 The class implements fast approximate Non Local Means Denoising algorithm.

@@ -889,7 +889,7 @@
 gpu::FastNonLocalMeansDenoising::simpleMethod()
 -------------------------------------
 Perform image denoising using Non-local Means Denoising algorithm http://www.ipol.im/pub/algo/bcm_non_local_means_denoising with several computational optimizations. Noise expected to be a gaussian white noise

-.. ocv:function:: void gpu::FastNonLocalMeansDenoising::simpleMethod(const GpuMat& src, GpuMat& dst, float h, int search_window = 21, int block_size = 7, Stream& s = Stream::Null());
+.. ocv:function:: void gpu::FastNonLocalMeansDenoising::simpleMethod(const GpuMat& src, GpuMat& dst, float h, int search_window = 21, int block_size = 7, Stream& s = Stream::Null())

    :param src: Input 8-bit 1-channel, 2-channel or 3-channel image.

@@ -913,7 +913,7 @@
 gpu::FastNonLocalMeansDenoising::labMethod()
 -------------------------------------
 Modification of ``FastNonLocalMeansDenoising::simpleMethod`` for color images

-.. ocv:function:: void gpu::FastNonLocalMeansDenoising::labMethod(const GpuMat& src, GpuMat& dst, float h_luminance, float h_color, int search_window = 21, int block_size = 7, Stream& s = Stream::Null());
+.. ocv:function:: void gpu::FastNonLocalMeansDenoising::labMethod(const GpuMat& src, GpuMat& dst, float h_luminance, float h_color, int search_window = 21, int block_size = 7, Stream& s = Stream::Null())

    :param src: Input 8-bit 3-channel image.
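Both GPU filters documented in the hunks above operate on device memory, and the new ``borderMode`` default in ``gpu::bilateralFilter`` shortens the common call. A minimal sketch using only the signatures shown above; the filter parameter values are illustrative: ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/gpu/gpu.hpp>
    using namespace cv;

    void gpuFilterDemo(const Mat& frame)           // 8-bit 1- or 3-channel input
    {
        gpu::GpuMat d_src(frame), d_bil, d_nlm;
        // kernel_size, sigma_color, sigma_spatial (values are illustrative);
        // borderMode can now be omitted thanks to the new default
        gpu::bilateralFilter(d_src, d_bil, 9, 30.0f, 7.0f);
        gpu::nonLocalMeans(d_src, d_nlm, 10.0f);   // h only; window sizes keep defaults
        Mat result;
        d_nlm.download(result);                    // copy the denoised image back to host
    }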
diff --git a/modules/gpu/doc/video.rst b/modules/gpu/doc/video.rst
index 31d24d17becd7ada697180d609f08bf5b9af98d7..dbfd93aaa82ef120000e2d7758fc36600e538317 100644
--- a/modules/gpu/doc/video.rst
+++ b/modules/gpu/doc/video.rst
@@ -739,7 +739,7 @@
 gpu::GMG_GPU::operator()
 ------------------------
 Updates the background model and returns the foreground mask

-.. ocv:function:: void gpu::GMG_GPU::operator()(const GpuMat& frame, GpuMat& fgmask, Stream& stream = Stream::Null())
+.. ocv:function:: void gpu::GMG_GPU::operator ()( const GpuMat& frame, GpuMat& fgmask, float learningRate=-1.0f, Stream& stream=Stream::Null() )

    :param frame: Next video frame.

diff --git a/modules/nonfree/doc/feature_detection.rst b/modules/nonfree/doc/feature_detection.rst
index 5d25c1e19b8507efde6e42c95d264e9588b10a02..e4ac357429af377befa398d6326c4dcd269988c4 100644
--- a/modules/nonfree/doc/feature_detection.rst
+++ b/modules/nonfree/doc/feature_detection.rst
@@ -104,8 +104,7 @@
 Detects keypoints and computes SURF descriptors for them.
 .. ocv:function:: void SURF::operator()(InputArray img, InputArray mask, vector<KeyPoint>& keypoints) const
 .. ocv:function:: void SURF::operator()(InputArray img, InputArray mask, vector<KeyPoint>& keypoints, OutputArray descriptors, bool useProvidedKeypoints=false)

-.. ocv:pyfunction:: cv2.SURF.detect(img, mask) -> keypoints
-.. ocv:pyfunction:: cv2.SURF.detect(img, mask[, descriptors[, useProvidedKeypoints]]) -> keypoints, descriptors
+.. ocv:pyfunction:: cv2.SURF.detect(image[, mask]) -> keypoints

 .. ocv:cfunction:: void cvExtractSURF( const CvArr* image, const CvArr* mask, CvSeq** keypoints, CvSeq** descriptors, CvMemStorage* storage, CvSURFParams params )

diff --git a/modules/ocl/doc/structures_and_functions.rst b/modules/ocl/doc/structures_and_functions.rst
index 474558f1df4a53291d5f2f3e3436c9eded47c7b8..7ac30b6430ee910f3da8f1ee973a63ece650b8e8 100644
--- a/modules/ocl/doc/structures_and_functions.rst
+++ b/modules/ocl/doc/structures_and_functions.rst
@@ -13,7 +13,7 @@
 ocl::getDevice
 ------------------
 Returns the list of devices

-.. ocv:function:: int ocl::getDevice(std::vector<ocl::Info>& oclinfo, int devicetype = CVCL_DEVICE_TYPE_GPU)
+.. ocv:function:: int ocl::getDevice( std::vector<ocl::Info> & oclinfo, int devicetype=CVCL_DEVICE_TYPE_GPU )

    :param oclinfo: Output vector of ``ocl::Info`` structures
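Since the Python ``cv2.SURF.detect`` signatures above were consolidated, it may help to show how the corresponding C++ ``SURF::operator()`` overloads documented in the same hunk are called. A minimal sketch; the Hessian threshold of 400 is an assumed typical value, not prescribed by the docs: ::

    #include <vector>
    #include <opencv2/core/core.hpp>
    #include <opencv2/nonfree/features2d.hpp>
    using namespace cv;

    void surfDemo(const Mat& gray)                 // 8-bit grayscale input
    {
        SURF surf(400.0);                          // Hessian threshold; assumed typical value
        std::vector<KeyPoint> keypoints;
        Mat descriptors;
        surf(gray, noArray(), keypoints, descriptors);  // detect and describe in one call
    }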
diff --git a/modules/photo/doc/denoising.rst b/modules/photo/doc/denoising.rst
index 8e53e5a698c1b0d8437c2fd3ff9436a494795ff9..97625d3b31c94d4b653f4b9428179822da9b18f2 100644
--- a/modules/photo/doc/denoising.rst
+++ b/modules/photo/doc/denoising.rst
@@ -4,11 +4,11 @@ Denoising

 .. highlight:: cpp

 fastNlMeansDenoising
------------
+--------------------
 Perform image denoising using Non-local Means Denoising algorithm http://www.ipol.im/pub/algo/bcm_non_local_means_denoising/ with several computational optimizations. Noise expected to be a gaussian white noise

-.. ocv:function:: void fastNlMeansDenoising( Mat& src, Mat& dst, int templateWindowSize, int searchWindowSize, int h )
+.. ocv:function:: void fastNlMeansDenoising( InputArray src, OutputArray dst, float h=3, int templateWindowSize=7, int searchWindowSize=21 )

    :param src: Input 8-bit 1-channel, 2-channel or 3-channel image.

@@ -25,10 +25,10 @@
 Advanced usage of this functions can be manual denoising of colored image in different colorspaces.
 Such approach is used in ``fastNlMeansDenoisingColored`` by converting image to CIELAB colorspace and then separately denoise L and AB components with different h parameter.

 fastNlMeansDenoisingColored
------------
+---------------------------
 Modification of ``fastNlMeansDenoising`` function for colored images

-.. ocv:function:: void fastNlMeansDenoisingColored( Mat& src, Mat& dst, int templateWindowSize, int searchWindowSize, int h, int hForColorComponents )
+.. ocv:function:: void fastNlMeansDenoisingColored( InputArray src, OutputArray dst, float h=3, float hColor=3, int templateWindowSize=7, int searchWindowSize=21 )

    :param src: Input 8-bit 3-channel image.

@@ -45,11 +45,11 @@
 The function converts image to CIELAB colorspace and then separately denoise L and AB components with given h parameters using ``fastNlMeansDenoising`` function.

 fastNlMeansDenoisingMulti
------------
+-------------------------
 Modification of ``fastNlMeansDenoising`` function for images sequence where consequtive images have been captured in small period of time. For example video. This version of the function is for grayscale images or for manual manipulation with colorspaces.
 For more details see http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.131.6394

-.. ocv:function:: void fastNlMeansDenoisingMulti( const std::vector<Mat>& srcImgs, int imgToDenoiseIndex, int temporalWindowSize, Mat& dst, int templateWindowSize, int searchWindowSize, int h)
+.. ocv:function:: void fastNlMeansDenoisingMulti( InputArrayOfArrays srcImgs, OutputArray dst, int imgToDenoiseIndex, int temporalWindowSize, float h=3, int templateWindowSize=7, int searchWindowSize=21 )

    :param srcImgs: Input 8-bit 1-channel, 2-channel or 3-channel images sequence. All images should have the same type and size.

@@ -66,10 +66,10 @@
 :param h: Parameter regulating filter strength for luminance component. Bigger h value perfectly removes noise but also removes image details, smaller h value preserves details but also preserves some noise

 fastNlMeansDenoisingColoredMulti
------------
+--------------------------------
 Modification of ``fastNlMeansDenoisingMulti`` function for colored images sequences

-.. ocv:function:: void fastNlMeansDenoisingColoredMulti( const std::vector<Mat>& srcImgs, int imgToDenoiseIndex, int temporalWindowSize, Mat& dst, int templateWindowSize, int searchWindowSize, int h, int hForColorComponents)
+.. ocv:function:: void fastNlMeansDenoisingColoredMulti( InputArrayOfArrays srcImgs, OutputArray dst, int imgToDenoiseIndex, int temporalWindowSize, float h=3, float hColor=3, int templateWindowSize=7, int searchWindowSize=21 )

    :param srcImgs: Input 8-bit 3-channel images sequence. All images should have the same type and size.
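With all four denoising signatures above moved to ``InputArray``/``OutputArray`` and defaulted parameters, a complete minimal call of the colored variant becomes very short. A sketch using only the signature documented above; the file names are placeholders and the h values illustrative: ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/photo/photo.hpp>
    #include <opencv2/highgui/highgui.hpp>
    using namespace cv;

    int main()
    {
        Mat noisy = imread("noisy.png");     // placeholder file name
        Mat denoised;
        // h (luminance) and hColor trade noise removal against detail preservation
        fastNlMeansDenoisingColored(noisy, denoised, 3.0f, 10.0f);
        imwrite("denoised.png", denoised);
        return 0;
    }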
diff --git a/modules/video/doc/motion_analysis_and_object_tracking.rst b/modules/video/doc/motion_analysis_and_object_tracking.rst
index 53b2a70806cdd69b0791a7da0defc128a05b017e..ca5f005c6ed54e750406fecf061f35c4068df7e0 100644
--- a/modules/video/doc/motion_analysis_and_object_tracking.rst
+++ b/modules/video/doc/motion_analysis_and_object_tracking.rst
@@ -598,12 +598,12 @@
 See :ocv:func:`BackgroundSubtractor::getBackgroundImage`.

 calcOpticalFlowSF
------------
+-----------------
 Calculate an optical flow using "SimpleFlow" algorithm.

-.. ocv:function:: void calcOpticalFlowSF( Mat& prev, Mat& next, Mat& flowX, Mat& flowY, int layers, int averaging_block_size, int max_flow)
+.. ocv:function:: void calcOpticalFlowSF( Mat& from, Mat& to, Mat& flow, int layers, int averaging_block_size, int max_flow )

-.. ocv:function:: void calcOpticalFlowSF( Mat& prev, Mat& next, Mat& flowX, Mat& flowY, int layers, int averaging_block_size, int max_flow, double sigma_dist, double sigma_color, int postprocess_window, double sigma_dist_fix, double sigma_color_fix, double occ_thr, int upscale_averaging_radiud, double upscale_sigma_dist, double upscale_sigma_color, double speed_up_thr)
+.. ocv:function:: void calcOpticalFlowSF( Mat& from, Mat& to, Mat& flow, int layers, int averaging_block_size, int max_flow, double sigma_dist, double sigma_color, int postprocess_window, double sigma_dist_fix, double sigma_color_fix, double occ_thr, int upscale_averaging_radius, double upscale_sigma_dist, double upscale_sigma_color, double speed_up_thr )

    :param prev: First 8-bit 3-channel image.

diff --git a/modules/videostab/doc/fast_marching.rst b/modules/videostab/doc/fast_marching.rst
index 73323025806a3a354069d9577a39072405bcd484..5df4a7227afedcf662a9d7c3ae2ba4a7ed54baf9 100644
--- a/modules/videostab/doc/fast_marching.rst
+++ b/modules/videostab/doc/fast_marching.rst
@@ -39,7 +39,7 @@ videostab::FastMarchingMethod::run

 Template method that runs the Fast Marching Method.

-.. ocv:function:: Inpaint FastMarchingMethod::run(const Mat &mask, Inpaint inpaint)
+.. ocv:function:: template <typename Inpaint> Inpaint videostab::FastMarchingMethod::run(const Mat &mask, Inpaint inpaint)

    :param mask: Image mask. ``0`` value indicates that the pixel value must be inpainted, ``255`` indicates that the pixel value is known, other values aren't acceptable.
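The simplified ``calcOpticalFlowSF`` signature in the motion-analysis hunk above takes a single flow matrix instead of separate X/Y planes. A sketch of a call, assuming the 2.4-era header location; the layer/block/flow values are assumed sample settings, not values from this patch: ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/video/tracking.hpp>   // assumed 2.4-era location of calcOpticalFlowSF
    using namespace cv;

    void simpleFlowDemo(Mat& from, Mat& to)      // two 8-bit 3-channel frames
    {
        Mat flow;                                // receives per-pixel (dx, dy) displacements
        // layers=3, averaging_block_size=2, max_flow=4: assumed sample settings
        calcOpticalFlowSF(from, to, flow, 3, 2, 4);
    }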
diff --git a/modules/videostab/doc/global_motion.rst b/modules/videostab/doc/global_motion.rst
index c4c764f54f34925c147cbae1effbccd13516ba32..f33e9041b49a36fbe6e9ec06adcae19f968035de 100644
--- a/modules/videostab/doc/global_motion.rst
+++ b/modules/videostab/doc/global_motion.rst
@@ -8,8 +8,6 @@ The video stabilization module contains a set of functions and classes for globa
 videostab::MotionModel
 ----------------------

-.. ocv:class:: videostab::MotionModel
-
 Describes motion model between two point clouds.

 ::

@@ -30,7 +28,7 @@ Describes motion model between two point clouds.
 videostab::RansacParams
 -----------------------

-.. ocv:class:: videostab::RansacParams
+.. ocv:struct:: videostab::RansacParams

 Describes RANSAC method parameters.

@@ -55,7 +53,7 @@ Describes RANSAC method parameters.
 videostab::RansacParams::RansacParams
 -------------------------------------

-.. ocv:function:: RansacParams::RansacParams()
+.. ocv:function:: videostab::RansacParams::RansacParams()

    :return: RANSAC method empty parameters object.

@@ -63,7 +61,7 @@
 videostab::RansacParams::RansacParams
 -------------------------------------

-.. ocv:function:: RansacParams::RansacParams(int size, float thresh, float eps, float prob)
+.. ocv:function:: videostab::RansacParams::RansacParams(int size, float thresh, float eps, float prob)

    :param size: Subset size.

@@ -79,7 +77,7 @@
 videostab::RansacParams::niters
 -------------------------------

-.. ocv:function:: int RansacParams::niters() const
+.. ocv:function:: int videostab::RansacParams::niters() const

    :return: Number of iterations that'll be performed by RANSAC method.

@@ -87,7 +85,7 @@
 videostab::RansacParams::default2dMotion
 ----------------------------------------

-.. ocv:function:: static RansacParams RansacParams::default2dMotion(MotionModel model)
+.. ocv:function:: static RansacParams videostab::RansacParams::default2dMotion(MotionModel model)

    :param model: Motion model. See :ocv:class:`videostab::MotionModel`.

@@ -101,7 +99,7 @@ Estimates best global motion between two 2D point clouds in the least-squares se

 .. note:: Works in-place and changes input point arrays.

-.. ocv:function:: Mat estimateGlobalMotionLeastSquares(InputOutputArray points0, InputOutputArray points1, int model = MM_AFFINE, float *rmse = 0)
+.. ocv:function:: Mat videostab::estimateGlobalMotionLeastSquares(InputOutputArray points0, InputOutputArray points1, int model = MM_AFFINE, float *rmse = 0)

    :param points0: Source set of 2D points (``32F``).

@@ -119,7 +117,7 @@ videostab::estimateGlobalMotionRansac

 Estimates best global motion between two 2D point clouds robustly (using RANSAC method).

-.. ocv:function:: Mat estimateGlobalMotionRansac(InputArray points0, InputArray points1, int model = MM_AFFINE, const RansacParams &params = RansacParams::default2dMotion(MM_AFFINE), float *rmse = 0, int *ninliers = 0)
+.. ocv:function:: Mat videostab::estimateGlobalMotionRansac(InputArray points0, InputArray points1, int model = MM_AFFINE, const RansacParams &params = RansacParams::default2dMotion(MM_AFFINE), float *rmse = 0, int *ninliers = 0)

    :param points0: Source set of 2D points (``32F``).

@@ -139,7 +137,7 @@ videostab::getMotion

 Computes motion between two frames assuming that all the intermediate motions are known.

-.. ocv:function:: Mat getMotion(int from, int to, const std::vector<Mat> &motions)
+.. ocv:function:: Mat videostab::getMotion(int from, int to, const std::vector<Mat> &motions)

    :param from: Source frame index.

@@ -176,7 +174,7 @@ videostab::MotionEstimatorBase::setMotionModel

 Sets motion model.

-.. ocv:function:: void MotionEstimatorBase::setMotionModel(MotionModel val)
+.. ocv:function:: void videostab::MotionEstimatorBase::setMotionModel(MotionModel val)

    :param val: Motion model. See :ocv:class:`videostab::MotionModel`.

@@ -185,7 +183,7 @@
 videostab::MotionEstimatorBase::motionModel
 ----------------------------------------------

-.. ocv:function:: MotionModel MotionEstimatorBase::motionModel() const
+.. ocv:function:: MotionModel videostab::MotionEstimatorBase::motionModel() const

    :return: Motion model. See :ocv:class:`videostab::MotionModel`.

@@ -195,7 +193,7 @@ videostab::MotionEstimatorBase::estimate

 Estimates global motion between two 2D point clouds.

-.. ocv:function:: Mat MotionEstimatorBase::estimate(InputArray points0, InputArray points1, bool *ok = 0)
+.. ocv:function:: Mat videostab::MotionEstimatorBase::estimate(InputArray points0, InputArray points1, bool *ok = 0)

    :param points0: Source set of 2D points (``32F``).
@@ -209,7 +207,7 @@
 videostab::MotionEstimatorRansacL2
 ----------------------------------

-.. ocv:class:: videostab::MotionEstimatorRansacL2
+.. ocv:class:: videostab::MotionEstimatorRansacL2 : public videostab::MotionEstimatorBase

 Describes a robust RANSAC-based global 2D motion estimation method which minimizes L2 error.

@@ -233,7 +231,7 @@
 videostab::MotionEstimatorL1
 ----------------------------

-.. ocv:class:: videostab::MotionEstimatorL1
+.. ocv:class:: videostab::MotionEstimatorL1 : public videostab::MotionEstimatorBase

 Describes a global 2D motion estimation method which minimizes L1 error.

@@ -274,7 +272,7 @@ Base class for global 2D motion estimation methods which take frames as input.
 videostab::KeypointBasedMotionEstimator
 ---------------------------------------

-.. ocv:class:: videostab::KeypointBasedMotionEstimator
+.. ocv:class:: videostab::KeypointBasedMotionEstimator : public videostab::ImageMotionEstimatorBase

 Describes a global 2D motion estimation method which uses keypoints detection and optical flow for matching.
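Taken together, the ``global_motion.rst`` hunks above document a coherent point-cloud motion API: ``RansacParams`` configures the robust estimator, and ``estimateGlobalMotionRansac`` returns the fitted transformation. A minimal sketch using only the signatures shown above; the matched point sets are assumed to come from elsewhere, e.g. a feature tracker: ::

    #include <vector>
    #include <opencv2/core/core.hpp>
    #include <opencv2/videostab/global_motion.hpp>
    using namespace cv;
    using namespace cv::videostab;

    // pts0/pts1: matched 2D point sets (32F), e.g. from a feature tracker
    Mat motionDemo(const std::vector<Point2f>& pts0, const std::vector<Point2f>& pts1)
    {
        RansacParams params = RansacParams::default2dMotion(MM_AFFINE);
        float rmse = 0.f;
        int ninliers = 0;
        // returns the 3x3 (32F) transformation documented above
        return estimateGlobalMotionRansac(pts0, pts1, MM_AFFINE, params, &rmse, &ninliers);
    }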