Commit 26594536 authored by Vladislav Vinogradov

merged revisions r7808 from 2.4 branch

Parent 42e0214d
......@@ -446,14 +446,16 @@ gpu::reprojectImageTo3D
---------------------------
Reprojects a disparity image to 3D space.
.. ocv:function:: void gpu::reprojectImageTo3D(const GpuMat& disp, GpuMat& xyzw, const Mat& Q, Stream& stream = Stream::Null())
.. ocv:function:: void gpu::reprojectImageTo3D(const GpuMat& disp, GpuMat& xyzw, const Mat& Q, int dst_cn = 4, Stream& stream = Stream::Null())
:param disp: Input disparity image. ``CV_8U`` and ``CV_16S`` types are supported.
:param xyzw: Output 4-channel floating-point image of the same size as ``disp`` . Each element of ``xyzw(x,y)`` contains 3D coordinates ``(x,y,z,1)`` of the point ``(x,y)`` , computed from the disparity map.
:param xyzw: Output 3- or 4-channel floating-point image of the same size as ``disp`` . Each element of ``xyzw(x,y)`` contains 3D coordinates ``(x,y,z)`` or ``(x,y,z,1)`` of the point ``(x,y)`` , computed from the disparity map.
:param Q: :math:`4 \times 4` perspective transformation matrix that can be obtained via :ocv:func:`stereoRectify` .
:param dst_cn: The number of channels for the output image. Can be 3 or 4.
:param stream: Stream for the asynchronous version.
.. seealso:: :ocv:func:`reprojectImageTo3D`
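A minimal usage sketch, assuming the disparity map was already computed by one of the GPU stereo matchers and that ``Q`` comes from :ocv:func:`stereoRectify` . ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/gpu/gpu.hpp>

    void reproject(const cv::gpu::GpuMat& disp, const cv::Mat& Q)
    {
        // Request a 3-channel result: each pixel holds (x, y, z).
        cv::gpu::GpuMat xyz;
        cv::gpu::reprojectImageTo3D(disp, xyz, Q, 3);

        // Bring the result back to host memory for further processing.
        cv::Mat xyz_host;
        xyz.download(xyz_host);
    }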
......
......@@ -11,18 +11,19 @@ gpu::SURF_GPU
Class used for extracting Speeded Up Robust Features (SURF) from an image. ::
class SURF_GPU : public CvSURFParams
class SURF_GPU
{
public:
enum KeypointLayout
{
SF_X = 0,
SF_Y,
SF_LAPLACIAN,
SF_SIZE,
SF_DIR,
SF_HESSIAN,
SF_FEATURE_STRIDE
X_ROW = 0,
Y_ROW,
LAPLACIAN_ROW,
OCTAVE_ROW,
SIZE_ROW,
ANGLE_ROW,
HESSIAN_ROW,
ROWS_COUNT
};
//! the default constructor
......@@ -69,6 +70,13 @@ Class used for extracting Speeded Up Robust Features (SURF) from an image. ::
void releaseMemory();
// SURF parameters
double hessianThreshold;
int nOctaves;
int nOctaveLayers;
bool extended;
bool upright;
//! max keypoints = keypointsRatio * img.size().area()
float keypointsRatio;
......@@ -82,14 +90,15 @@ Class used for extracting Speeded Up Robust Features (SURF) from an image. ::
The class ``SURF_GPU`` implements Speeded Up Robust Features descriptor. There is a fast multi-scale Hessian keypoint detector that can be used to find the keypoints (which is the default option). But the descriptors can also be computed for the user-specified keypoints. Only 8-bit grayscale images are supported.
The class ``SURF_GPU`` can store results in the GPU and CPU memory. It provides functions to convert results between CPU and GPU version ( ``uploadKeypoints``, ``downloadKeypoints``, ``downloadDescriptors`` ). The format of CPU results is the same as ``SURF`` results. GPU results are stored in ``GpuMat``. The ``keypoints`` matrix is :math:`\texttt{nFeatures} \times 6` matrix with the ``CV_32FC1`` type.
The class ``SURF_GPU`` can store results in the GPU and CPU memory. It provides functions to convert results between CPU and GPU version ( ``uploadKeypoints``, ``downloadKeypoints``, ``downloadDescriptors`` ). The format of CPU results is the same as ``SURF`` results. GPU results are stored in ``GpuMat``. The ``keypoints`` matrix is :math:`\texttt{nFeatures} \times 7` matrix with the ``CV_32FC1`` type.
* ``keypoints.ptr<float>(SF_X)[i]`` contains x coordinate of the i-th feature.
* ``keypoints.ptr<float>(SF_Y)[i]`` contains y coordinate of the i-th feature.
* ``keypoints.ptr<float>(SF_LAPLACIAN)[i]`` contains the laplacian sign of the i-th feature.
* ``keypoints.ptr<float>(SF_SIZE)[i]`` contains the size of the i-th feature.
* ``keypoints.ptr<float>(SF_DIR)[i]`` contains the orientation of the i-th feature.
* ``keypoints.ptr<float>(SF_HESSIAN)[i]`` contains the response of the i-th feature.
* ``keypoints.ptr<float>(X_ROW)[i]`` contains x coordinate of the i-th feature.
* ``keypoints.ptr<float>(Y_ROW)[i]`` contains y coordinate of the i-th feature.
* ``keypoints.ptr<float>(LAPLACIAN_ROW)[i]`` contains the laplacian sign of the i-th feature.
* ``keypoints.ptr<float>(OCTAVE_ROW)[i]`` contains the octave of the i-th feature.
* ``keypoints.ptr<float>(SIZE_ROW)[i]`` contains the size of the i-th feature.
* ``keypoints.ptr<float>(ANGLE_ROW)[i]`` contains the orientation of the i-th feature.
* ``keypoints.ptr<float>(HESSIAN_ROW)[i]`` contains the response of the i-th feature.
The ``descriptors`` matrix is :math:`\texttt{nFeatures} \times \texttt{descriptorSize}` matrix with the ``CV_32FC1`` type.
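A hedged sketch of reading the GPU keypoint matrix by hand using this layout (for most code the provided ``downloadKeypoints`` helper is the simpler route; the only assumptions here are an 8-bit grayscale input already uploaded to a ``GpuMat`` and the standard ``GpuMat::download`` call). ::

    #include <opencv2/gpu/gpu.hpp>

    void detect_surf(const cv::gpu::GpuMat& img)   // 8-bit grayscale
    {
        cv::gpu::SURF_GPU surf;

        // Detect keypoints; the result stays in GPU memory.
        cv::gpu::GpuMat keypointsGPU;
        surf(img, cv::gpu::GpuMat(), keypointsGPU);

        // The matrix has one column per feature and one row per field,
        // so download it and index rows with the KeypointLayout enum.
        cv::Mat kp;
        keypointsGPU.download(kp);
        for (int i = 0; i < kp.cols; ++i)
        {
            float x        = kp.ptr<float>(cv::gpu::SURF_GPU::X_ROW)[i];
            float y        = kp.ptr<float>(cv::gpu::SURF_GPU::Y_ROW)[i];
            float response = kp.ptr<float>(cv::gpu::SURF_GPU::HESSIAN_ROW)[i];
            (void)x; (void)y; (void)response;   // use the values here
        }
    }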
......@@ -258,8 +267,10 @@ Class for extracting ORB features and descriptors from an image. ::
DEFAULT_FAST_THRESHOLD = 20
};
explicit ORB_GPU(size_t n_features = 500,
const ORB::CommonParams& detector_params = ORB::CommonParams());
explicit ORB_GPU(int nFeatures = 500, float scaleFactor = 1.2f,
int nLevels = 8, int edgeThreshold = 31,
int firstLevel = 0, int WTA_K = 2,
int scoreType = 0, int patchSize = 31);
void operator()(const GpuMat& image, const GpuMat& mask,
std::vector<KeyPoint>& keypoints);
......@@ -292,11 +303,17 @@ gpu::ORB_GPU::ORB_GPU
-------------------------------------
Constructor.
.. ocv:function:: gpu::ORB_GPU::ORB_GPU(size_t n_features = 500, const ORB::CommonParams& detector_params = ORB::CommonParams())
.. ocv:function:: gpu::ORB_GPU::ORB_GPU(int nFeatures = 500, float scaleFactor = 1.2f, int nLevels = 8, int edgeThreshold = 31, int firstLevel = 0, int WTA_K = 2, int scoreType = 0, int patchSize = 31)
:param nFeatures: The number of desired features.
:param scaleFactor: Coefficient by which we divide the dimensions from one scale pyramid level to the next.
:param nLevels: The number of levels in the scale pyramid.
:param n_features: Number of features to detect.
:param edgeThreshold: How far from the boundary the points should be.
:param detector_params: ORB detector parameters.
:param firstLevel: The level at which the image is given. If 1, that means we will also look at the image `scaleFactor` times bigger.
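A minimal construction sketch using the parameters above (the non-default values are illustrative only). ::

    #include <vector>
    #include <opencv2/features2d/features2d.hpp>
    #include <opencv2/gpu/gpu.hpp>

    void detect_orb(const cv::gpu::GpuMat& gray)   // 8-bit grayscale
    {
        // 1000 features and a finer pyramid (scale factor 1.1, 12 levels);
        // the remaining parameters keep their default values.
        cv::gpu::ORB_GPU orb(1000, 1.1f, 12);

        std::vector<cv::KeyPoint> keypoints;
        orb(gray, cv::gpu::GpuMat(), keypoints);   // empty mask
    }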
......
......@@ -369,21 +369,19 @@ gpu::createLinearFilter_GPU
-------------------------------
Creates a non-separable linear filter.
.. ocv:function:: Ptr<FilterEngine_GPU> gpu::createLinearFilter_GPU(int srcType, int dstType, const Mat& kernel, const Point& anchor = Point(-1,-1))
.. ocv:function:: Ptr<FilterEngine_GPU> gpu::createLinearFilter_GPU(int srcType, int dstType, const Mat& kernel, Point anchor = Point(-1,-1), int borderType = BORDER_DEFAULT)
.. ocv:function:: Ptr<BaseFilter_GPU> gpu::getLinearFilter_GPU(int srcType, int dstType, const Mat& kernel, const Size& ksize, Point anchor = Point(-1, -1))
:param srcType: Input image type. ``CV_8UC1`` and ``CV_8UC4`` types are supported.
:param srcType: Input image type. Supports ``CV_8U`` , ``CV_16U`` and ``CV_32F`` one and four channel images.
:param dstType: Output image type. The same type as ``src`` is supported.
:param kernel: 2D array of filter coefficients. Floating-point coefficients will be converted to fixed-point representation before the actual processing.
:param ksize: Kernel size. Supports size up to 16. For larger kernels use :ocv:func:`gpu::convolve`.
:param kernel: 2D array of filter coefficients. Floating-point coefficients will be converted to fixed-point representation before the actual processing. Supports size up to 16. For larger kernels use :ocv:func:`gpu::convolve`.
:param anchor: Anchor point. The default value Point(-1, -1) means that the anchor is at the kernel center.
.. note:: This filter does not check out-of-border accesses, so only a proper sub-matrix of a bigger matrix has to be passed to it.
:param borderType: Pixel extrapolation method. For details, see :ocv:func:`borderInterpolate` .
.. seealso:: :ocv:func:`createLinearFilter`
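A hedged sketch of building a reusable filter engine (it assumes ``FilterEngine_GPU::apply`` can be called with only the source and destination, i.e. a default full-image ROI). ::

    #include <opencv2/gpu/gpu.hpp>

    void sharpen(cv::gpu::GpuMat& src, cv::gpu::GpuMat& dst)
    {
        // 3x3 sharpening kernel with floating-point coefficients.
        cv::Mat kernel = (cv::Mat_<float>(3, 3) <<  0, -1,  0,
                                                   -1,  5, -1,
                                                    0, -1,  0);

        // The engine is created once and can be reused for many images
        // of the same type.
        cv::Ptr<cv::gpu::FilterEngine_GPU> filter =
            cv::gpu::createLinearFilter_GPU(src.type(), src.type(), kernel);

        filter->apply(src, dst);
    }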
......@@ -393,9 +391,9 @@ gpu::filter2D
-----------------
Applies the non-separable 2D linear filter to an image.
.. ocv:function:: void gpu::filter2D(const GpuMat& src, GpuMat& dst, int ddepth, const Mat& kernel, Point anchor=Point(-1,-1), Stream& stream = Stream::Null())
.. ocv:function:: void gpu::filter2D(const GpuMat& src, GpuMat& dst, int ddepth, const Mat& kernel, Point anchor=Point(-1,-1), int borderType = BORDER_DEFAULT, Stream& stream = Stream::Null())
:param src: Source image. ``CV_8UC1`` , ``CV_8UC4`` and ``CV_32FC1`` source types are supported.
:param src: Source image. Supports ``CV_8U`` , ``CV_16U`` and ``CV_32F`` one and four channel images.
:param dst: Destination image. The size and the number of channels is the same as ``src`` .
......@@ -405,9 +403,9 @@ Applies the non-separable 2D linear filter to an image.
:param anchor: Anchor of the kernel that indicates the relative position of a filtered point within the kernel. The anchor resides within the kernel. The special default value (-1,-1) means that the anchor is at the kernel center.
:param stream: Stream for the asynchronous version.
:param borderType: Pixel extrapolation method. For details, see :ocv:func:`borderInterpolate` .
.. note:: This filter does not check out-of-border accesses, so only a proper sub-matrix of a bigger matrix has to be passed to it.
:param stream: Stream for the asynchronous version.
.. seealso:: :ocv:func:`filter2D`, :ocv:func:`gpu::convolve`
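The same kind of filtering without an explicit engine; a minimal sketch where passing ``src.depth()`` keeps the source depth. ::

    #include <opencv2/gpu/gpu.hpp>

    void box_blur(const cv::gpu::GpuMat& src, cv::gpu::GpuMat& dst)
    {
        // 5x5 normalized box kernel.
        cv::Mat kernel = cv::Mat::ones(5, 5, CV_32F) / 25.0;

        // Default anchor (kernel center) and default border handling.
        cv::gpu::filter2D(src, dst, src.depth(), kernel);
    }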
......@@ -417,7 +415,7 @@ gpu::Laplacian
------------------
Applies the Laplacian operator to an image.
.. ocv:function:: void gpu::Laplacian(const GpuMat& src, GpuMat& dst, int ddepth, int ksize = 1, double scale = 1, Stream& stream = Stream::Null())
.. ocv:function:: void gpu::Laplacian(const GpuMat& src, GpuMat& dst, int ddepth, int ksize = 1, double scale = 1, int borderType = BORDER_DEFAULT, Stream& stream = Stream::Null())
:param src: Source image. ``CV_8UC1`` and ``CV_8UC4`` source types are supported.
......@@ -429,6 +427,8 @@ Applies the Laplacian operator to an image.
:param scale: Optional scale factor for the computed Laplacian values. By default, no scaling is applied (see :ocv:func:`getDerivKernels` ).
:param borderType: Pixel extrapolation method. For details, see :ocv:func:`borderInterpolate` .
:param stream: Stream for the asynchronous version.
.. note:: This filter does not check out-of-border accesses, so only a proper sub-matrix of a bigger matrix has to be passed to it.
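A minimal call sketch; the scale factor is arbitrary and only serves to keep 8-bit results in range. ::

    #include <opencv2/gpu/gpu.hpp>

    void laplacian_edges(const cv::gpu::GpuMat& src, cv::gpu::GpuMat& dst)
    {
        // 3x3 aperture, same depth as the source, scaled-down response.
        cv::gpu::Laplacian(src, dst, src.depth(), 3, 0.25);
    }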
......
......@@ -495,6 +495,28 @@ Applies an affine transformation to an image.
gpu::buildWarpAffineMaps
------------------------
Builds transformation maps for affine transformation.
.. ocv:function:: void buildWarpAffineMaps(const Mat& M, bool inverse, Size dsize, GpuMat& xmap, GpuMat& ymap, Stream& stream = Stream::Null());
:param M: *2x3* transformation matrix.
:param inverse: Flag specifying that ``M`` is an inverse transformation ( ``dst=>src`` ).
:param dsize: Size of the destination image.
:param xmap: X values with ``CV_32FC1`` type.
:param ymap: Y values with ``CV_32FC1`` type.
:param stream: Stream for the asynchronous version.
.. seealso:: :ocv:func:`gpu::warpAffine` , :ocv:func:`gpu::remap`
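A hedged sketch of the intended build-once, remap-per-frame pattern (it assumes the ``gpu::remap`` overload that takes precomputed ``xmap`` / ``ymap`` , and uses :ocv:func:`getRotationMatrix2D` only to obtain some *2x3* matrix). ::

    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/gpu/gpu.hpp>

    void rotate30(const cv::gpu::GpuMat& src, cv::gpu::GpuMat& dst)
    {
        // Forward (src => dst) rotation around the image center.
        cv::Mat M = cv::getRotationMatrix2D(
            cv::Point2f(src.cols * 0.5f, src.rows * 0.5f), 30.0, 1.0);

        // Build the per-pixel maps once; they can be reused for every
        // frame of the same size.
        cv::gpu::GpuMat xmap, ymap;
        cv::gpu::buildWarpAffineMaps(M, false, src.size(), xmap, ymap);

        cv::gpu::remap(src, dst, xmap, ymap, cv::INTER_LINEAR);
    }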
gpu::warpPerspective
------------------------
Applies a perspective transformation to an image.
......@@ -517,6 +539,28 @@ Applies a perspective transformation to an image.
gpu::buildWarpPerspectiveMaps
-----------------------------
Builds transformation maps for perspective transformation.
.. ocv:function:: void buildWarpPerspectiveMaps(const Mat& M, bool inverse, Size dsize, GpuMat& xmap, GpuMat& ymap, Stream& stream = Stream::Null());
:param M: *3x3* transformation matrix.
:param inverse: Flag specifying that ``M`` is an inverse transformation ( ``dst=>src`` ).
:param dsize: Size of the destination image.
:param xmap: X values with ``CV_32FC1`` type.
:param ymap: Y values with ``CV_32FC1`` type.
:param stream: Stream for the asynchronous version.
.. seealso:: :ocv:func:`gpu::warpPerspective` , :ocv:func:`gpu::remap`
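The perspective variant follows the same pattern; a sketch assuming a precomputed *3x3* homography ``H`` (for example from :ocv:func:`findHomography`). ::

    #include <opencv2/gpu/gpu.hpp>

    void warp_by_homography(const cv::gpu::GpuMat& src, const cv::Mat& H,
                            cv::gpu::GpuMat& dst)
    {
        cv::gpu::GpuMat xmap, ymap;
        cv::gpu::buildWarpPerspectiveMaps(H, false, src.size(), xmap, ymap);

        cv::gpu::remap(src, dst, xmap, ymap, cv::INTER_LINEAR);
    }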
gpu::rotate
---------------
Rotates an image around the origin (0,0) and then shifts it.
......
......@@ -37,6 +37,8 @@ The function performs generalized matrix multiplication similar to the ``gemm``
\texttt{dst} = \texttt{alpha} \cdot \texttt{src1} ^T \cdot \texttt{src2} + \texttt{beta} \cdot \texttt{src3} ^T
.. note:: Transposition operation doesn't support ``CV_64FC2`` input type.
.. seealso:: :ocv:func:`gemm`
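A hedged sketch of the call matching the formula above (it assumes ``gpu::gemm`` mirrors the CPU :ocv:func:`gemm` signature and flag values). ::

    #include <opencv2/core/core.hpp>
    #include <opencv2/gpu/gpu.hpp>

    void gemm_example(const cv::gpu::GpuMat& src1, const cv::gpu::GpuMat& src2,
                      const cv::gpu::GpuMat& src3, cv::gpu::GpuMat& dst)
    {
        // dst = 2.0 * src1^T * src2 + 0.5 * src3^T
        cv::gpu::gemm(src1, src2, 2.0, src3, 0.5, dst,
                      cv::GEMM_1_T | cv::GEMM_3_T);
    }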
......
......@@ -202,6 +202,7 @@ Class used for calculating an optical flow. ::
double derivLambda;
bool useInitialFlow;
float minEigThreshold;
bool getMinEigenVals;
void releaseMemory();
};
......@@ -228,7 +229,7 @@ Calculate an optical flow for a sparse feature set.
:param status: Output status vector (CV_8UC1 type). Each element of the vector is set to 1 if the flow for the corresponding features has been found. Otherwise, it is set to 0.
:param err: Output vector (CV_32FC1 type) that contains min eigen value. It can be NULL, if not needed.
:param err: Output vector (CV_32FC1 type) that contains the difference between patches around the original and moved points, or the minimum eigenvalue if ``getMinEigenVals`` is set. It can be NULL if not needed.
.. seealso:: :ocv:func:`calcOpticalFlowPyrLK`
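A hedged sparse-flow sketch (it assumes the ``sparse`` method of ``gpu::PyrLKOpticalFlow`` and that the input points form a single-row ``CV_32FC2`` ``GpuMat``). ::

    #include <opencv2/gpu/gpu.hpp>

    void track(const cv::gpu::GpuMat& prevImg, const cv::gpu::GpuMat& nextImg,
               const cv::gpu::GpuMat& prevPts)   // 1 x N, CV_32FC2
    {
        cv::gpu::PyrLKOpticalFlow lk;

        cv::gpu::GpuMat nextPts, status, err;
        lk.sparse(prevImg, nextImg, prevPts, nextPts, status, &err);

        // status(i) == 1 means the flow for point i was found; err(i) holds
        // the patch difference, or the minimum eigenvalue when
        // getMinEigenVals is set.
    }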
......@@ -248,7 +249,7 @@ Calculate dense optical flow.
:param v: Vertical component of the optical flow of the same size as input images, 32-bit floating-point, single-channel
:param err: Output vector (CV_32FC1 type) that contains min eigen value. It can be NULL, if not needed.
:param err: Output vector (CV_32FC1 type) that contains the difference between patches around the original and moved points, or the minimum eigenvalue if ``getMinEigenVals`` is set. It can be NULL if not needed.
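A matching dense-flow sketch (it assumes the ``dense`` method producing per-pixel ``u`` / ``v`` components). ::

    #include <opencv2/gpu/gpu.hpp>

    void dense_flow(const cv::gpu::GpuMat& prevImg, const cv::gpu::GpuMat& nextImg)
    {
        cv::gpu::PyrLKOpticalFlow lk;

        // u and v are CV_32FC1 images of the same size as the inputs.
        cv::gpu::GpuMat u, v;
        lk.dense(prevImg, nextImg, u, v);
    }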
......
......@@ -283,7 +283,7 @@ CV_EXPORTS Ptr<FilterEngine_GPU> createMorphologyFilter_GPU(int op, int type, co
const Point& anchor = Point(-1,-1), int iterations = 1);
//! returns 2D filter with the specified kernel
//! supports CV_8UC1 and CV_8UC4 types
//! supports CV_8U, CV_16U and CV_32F one and four channel images
CV_EXPORTS Ptr<BaseFilter_GPU> getLinearFilter_GPU(int srcType, int dstType, const Mat& kernel, Point anchor = Point(-1, -1), int borderType = BORDER_DEFAULT);
//! returns the non-separable linear filter engine
......@@ -1458,12 +1458,13 @@ public:
//! finds the keypoints using fast hessian detector used in SURF
//! supports CV_8UC1 images
//! keypoints will have nFeatures cols and 7 rows
//! keypoints.ptr<float>(SF_X)[i] will contain x coordinate of i'th feature
//! keypoints.ptr<float>(SF_Y)[i] will contain y coordinate of i'th feature
//! keypoints.ptr<float>(SF_LAPLACIAN)[i] will contain laplacian sign of i'th feature
//! keypoints.ptr<float>(SF_SIZE)[i] will contain size of i'th feature
//! keypoints.ptr<float>(SF_DIR)[i] will contain orientation of i'th feature
//! keypoints.ptr<float>(SF_HESSIAN)[i] will contain response of i'th feature
//! keypoints.ptr<float>(X_ROW)[i] will contain x coordinate of i'th feature
//! keypoints.ptr<float>(Y_ROW)[i] will contain y coordinate of i'th feature
//! keypoints.ptr<float>(LAPLACIAN_ROW)[i] will contain laplacian sign of i'th feature
//! keypoints.ptr<float>(OCTAVE_ROW)[i] will contain octave of i'th feature
//! keypoints.ptr<float>(SIZE_ROW)[i] will contain size of i'th feature
//! keypoints.ptr<float>(ANGLE_ROW)[i] will contain orientation of i'th feature
//! keypoints.ptr<float>(HESSIAN_ROW)[i] will contain response of i'th feature
void operator()(const GpuMat& img, const GpuMat& mask, GpuMat& keypoints);
//! finds the keypoints and computes their descriptors.
//! Optionally it can compute descriptors for the user-provided keypoints and recompute keypoints direction
......