// This file is part of OpenCV project.
// It is subject to the license terms in the LICENSE file found in the top-level directory
// of this distribution and at http://opencv.org/license.html

#ifndef OPENCV_CALIB_HPP
#define OPENCV_CALIB_HPP

#include "opencv2/core.hpp"
#include "opencv2/core/types.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/core/affine.hpp"

/**
  @defgroup calib Camera Calibration

The functions in this section use a so-called pinhole camera model. The view of a scene
is obtained by projecting a scene's 3D point \f$P_w\f$ into the image plane using a perspective
transformation which forms the corresponding pixel \f$p\f$. Both \f$P_w\f$ and \f$p\f$ are
represented in homogeneous coordinates, i.e. as 3D and 2D homogeneous vectors, respectively. You will
find a brief introduction to projective geometry, homogeneous vectors and homogeneous
transformations at the end of this section's introduction. For more succinct notation, we often drop
the 'homogeneous' and say vector instead of homogeneous vector.

The distortion-free projective transformation given by a pinhole camera model is shown below.

\f[s \; p = A \begin{bmatrix} R|t \end{bmatrix} P_w,\f]

where \f$P_w\f$ is a 3D point expressed with respect to the world coordinate system,
\f$p\f$ is a 2D pixel in the image plane, \f$A\f$ is the camera intrinsic matrix,
\f$R\f$ and \f$t\f$ are the rotation and translation that describe the change of coordinates from
world to camera coordinate systems (or camera frame) and \f$s\f$ is the projective transformation's
arbitrary scaling and not part of the camera model.

The camera intrinsic matrix \f$A\f$ (notation used as in @cite Zhang2000 and also generally denoted
as \f$K\f$) projects 3D points given in the camera coordinate system to 2D pixel coordinates, i.e.

\f[p = A P_c.\f]

The camera intrinsic matrix \f$A\f$ is composed of the focal lengths \f$f_x\f$ and \f$f_y\f$, which are
expressed in pixel units, and the principal point \f$(c_x, c_y)\f$, that is usually close to the
image center:

\f[A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1},\f]

and thus

\f[s \vecthree{u}{v}{1} = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1} \vecthree{X_c}{Y_c}{Z_c}.\f]

The matrix of intrinsic parameters does not depend on the scene viewed. So, once estimated, it can
be re-used as long as the focal length is fixed (in the case of a zoom lens). If an image from the
camera is scaled by some factor, all of these parameters (\f$f_x\f$, \f$f_y\f$, \f$c_x\f$ and
\f$c_y\f$) need to be scaled by the same factor.
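
A minimal sketch of both operations, projection with \f$A\f$ and rescaling of the intrinsics
(all matrix values below are made up for illustration):
@code
    // Hypothetical intrinsics of a camera calibrated on 640 x 480 images.
    Mat A = (Mat_<double>(3, 3) << 800,   0, 320,
                                     0, 800, 240,
                                     0,   0,   1);
    // A point in camera coordinates, in front of the camera (Z_c > 0).
    Mat Pc = (Mat_<double>(3, 1) << 0.1, -0.05, 2.0);
    Mat p = A * Pc; // homogeneous pixel coordinates (s*u, s*v, s)
    double u = p.at<double>(0) / p.at<double>(2);
    double v = p.at<double>(1) / p.at<double>(2);

    // If the same camera now delivers 1280 x 960 images (2x), scale the intrinsics:
    Mat A2 = A.clone();
    A2.at<double>(0, 0) *= 2.0; // fx
    A2.at<double>(1, 1) *= 2.0; // fy
    A2.at<double>(0, 2) *= 2.0; // cx
    A2.at<double>(1, 2) *= 2.0; // cy
@endcode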

The joint rotation-translation matrix \f$[R|t]\f$ is the matrix product of a projective
transformation and a homogeneous transformation. The 3-by-4 projective transformation maps 3D points
represented in camera coordinates to 2D points in the image plane and represented in normalized
camera coordinates \f$x' = X_c / Z_c\f$ and \f$y' = Y_c / Z_c\f$:

\f[Z_c \begin{bmatrix}
x' \\
y' \\
1
\end{bmatrix} = \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0
\end{bmatrix}
\begin{bmatrix}
X_c \\
Y_c \\
Z_c \\
1
\end{bmatrix}.\f]

The homogeneous transformation is encoded by the extrinsic parameters \f$R\f$ and \f$t\f$ and
represents the change of basis from world coordinate system \f$w\f$ to the camera coordinate system
\f$c\f$. Thus, given the representation of the point \f$P\f$ in world coordinates, \f$P_w\f$, we
obtain \f$P\f$'s representation in the camera coordinate system, \f$P_c\f$, by

\f[P_c = \begin{bmatrix}
R & t \\
0 & 1
\end{bmatrix} P_w.\f]

This homogeneous transformation is composed of \f$R\f$, a 3-by-3 rotation matrix, and \f$t\f$, a
3-by-1 translation vector:

\f[\begin{bmatrix}
R & t \\
0 & 1
\end{bmatrix} = \begin{bmatrix}
r_{11} & r_{12} & r_{13} & t_x \\
r_{21} & r_{22} & r_{23} & t_y \\
r_{31} & r_{32} & r_{33} & t_z \\
0 & 0 & 0 & 1
\end{bmatrix},
\f]

and therefore

\f[\begin{bmatrix}
X_c \\
Y_c \\
Z_c \\
1
\end{bmatrix} = \begin{bmatrix}
r_{11} & r_{12} & r_{13} & t_x \\
r_{21} & r_{22} & r_{23} & t_y \\
r_{31} & r_{32} & r_{33} & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
X_w \\
Y_w \\
Z_w \\
1
\end{bmatrix}.\f]
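
In code, this change of basis can be sketched with cv::Affine3, which stores exactly this
4-by-4 matrix (the rotation and translation values below are placeholders):
@code
    Matx33d R = Matx33d::eye(); // placeholder rotation (assumed known extrinsics)
    Vec3d t(0.0, 0.0, 1.0);     // placeholder translation
    Affine3d T(R, t);           // encodes [R, t; 0, 1]
    Vec3d Pw(0.1, -0.2, 3.0);   // point in world coordinates
    Vec3d Pc = T * Pw;          // Pc = R * Pw + t
@endcode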

Combining the projective transformation and the homogeneous transformation, we obtain the projective
transformation that maps 3D points in world coordinates into 2D points in the image plane and in
normalized camera coordinates:

\f[Z_c \begin{bmatrix}
x' \\
y' \\
1
\end{bmatrix} = \begin{bmatrix} R|t \end{bmatrix} \begin{bmatrix}
X_w \\
Y_w \\
Z_w \\
1
\end{bmatrix} = \begin{bmatrix}
r_{11} & r_{12} & r_{13} & t_x \\
r_{21} & r_{22} & r_{23} & t_y \\
r_{31} & r_{32} & r_{33} & t_z
\end{bmatrix}
\begin{bmatrix}
X_w \\
Y_w \\
Z_w \\
1
\end{bmatrix},\f]

with \f$x' = X_c / Z_c\f$ and \f$y' = Y_c / Z_c\f$. Putting the equations for intrinsics and extrinsics together, we can write out
\f$s \; p = A \begin{bmatrix} R|t \end{bmatrix} P_w\f$ as

\f[s \vecthree{u}{v}{1} = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}
\begin{bmatrix}
r_{11} & r_{12} & r_{13} & t_x \\
r_{21} & r_{22} & r_{23} & t_y \\
r_{31} & r_{32} & r_{33} & t_z
\end{bmatrix}
\begin{bmatrix}
X_w \\
Y_w \\
Z_w \\
1
\end{bmatrix}.\f]

If \f$Z_c \ne 0\f$, the transformation above is equivalent to the following,

\f[\begin{bmatrix}
u \\
v
\end{bmatrix} = \begin{bmatrix}
f_x X_c/Z_c + c_x \\
f_y Y_c/Z_c + c_y
\end{bmatrix}\f]

with

\f[\vecthree{X_c}{Y_c}{Z_c} = \begin{bmatrix}
R|t
\end{bmatrix} \begin{bmatrix}
X_w \\
Y_w \\
Z_w \\
1
\end{bmatrix}.\f]

The following figure illustrates the pinhole camera model.

![Pinhole camera model](pics/pinhole_camera_model.png)

Real lenses usually have some distortion, mostly radial distortion, and slight tangential distortion.
So, the above model is extended as:

\f[\begin{bmatrix}
u \\
v
\end{bmatrix} = \begin{bmatrix}
f_x x'' + c_x \\
f_y y'' + c_y
\end{bmatrix}\f]

where

\f[\begin{bmatrix}
x'' \\
y''
\end{bmatrix} = \begin{bmatrix}
x' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + 2 p_1 x' y' + p_2(r^2 + 2 x'^2) + s_1 r^2 + s_2 r^4 \\
y' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + p_1 (r^2 + 2 y'^2) + 2 p_2 x' y' + s_3 r^2 + s_4 r^4 \\
\end{bmatrix}\f]

with

\f[r^2 = x'^2 + y'^2\f]

and

\f[\begin{bmatrix}
x'\\
y'
\end{bmatrix} = \begin{bmatrix}
X_c/Z_c \\
Y_c/Z_c
\end{bmatrix},\f]

if \f$Z_c \ne 0\f$.

The distortion parameters are the radial coefficients \f$k_1\f$, \f$k_2\f$, \f$k_3\f$, \f$k_4\f$, \f$k_5\f$, and \f$k_6\f$;
\f$p_1\f$ and \f$p_2\f$ are the tangential distortion coefficients; and \f$s_1\f$, \f$s_2\f$, \f$s_3\f$, and \f$s_4\f$
are the thin prism distortion coefficients. Higher-order coefficients are not considered in OpenCV.
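
The full distortion model above is applied by @ref projectPoints. A hedged usage sketch (the
pose, intrinsics and distortion values below are made up for illustration):
@code
    std::vector<Point3f> objectPoints = { Point3f(0.f, 0.f, 0.f), Point3f(0.1f, 0.f, 0.f) };
    Mat rvec = Mat::zeros(3, 1, CV_64F);        // no rotation
    Mat tvec = (Mat_<double>(3, 1) << 0, 0, 2); // 2 units in front of the camera
    Mat A = (Mat_<double>(3, 3) << 800, 0, 320, 0, 800, 240, 0, 0, 1);
    Mat distCoeffs = (Mat_<double>(1, 5) << -0.2, 0.05, 0.001, -0.001, 0.01); // k1 k2 p1 p2 k3
    std::vector<Point2f> imagePoints;
    projectPoints(objectPoints, rvec, tvec, A, distCoeffs, imagePoints);
@endcode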

The next figures show two common types of radial distortion: barrel distortion
(\f$ 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \f$ monotonically decreasing)
and pincushion distortion (\f$ 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \f$ monotonically increasing).
Radial distortion is always monotonic for real lenses,
and if the estimator produces a non-monotonic result,
this should be considered a calibration failure.
More generally, radial distortion must be monotonic and the distortion function must be bijective.
A failed estimation result may look deceptively good near the image center
but will work poorly in e.g. AR/SFM applications.
The optimization method used in OpenCV camera calibration does not include these constraints as
the framework does not support the required integer programming and polynomial inequalities.
See [issue #15992](https://github.com/opencv/opencv/issues/15992) for additional information.

![](pics/distortion_examples.png)
![](pics/distortion_examples2.png)

In some cases, the image sensor may be tilted in order to focus an oblique plane in front of the
camera (Scheimpflug principle). This can be useful for particle image velocimetry (PIV) or
triangulation with a laser fan. The tilt causes a perspective distortion of \f$x''\f$ and
\f$y''\f$. This distortion can be modeled in the following way, see e.g. @cite Louhichi07.

\f[\begin{bmatrix}
u \\
v
\end{bmatrix} = \begin{bmatrix}
f_x x''' + c_x \\
f_y y''' + c_y
\end{bmatrix},\f]

where

\f[s\vecthree{x'''}{y'''}{1} =
\vecthreethree{R_{33}(\tau_x, \tau_y)}{0}{-R_{13}(\tau_x, \tau_y)}
{0}{R_{33}(\tau_x, \tau_y)}{-R_{23}(\tau_x, \tau_y)}
{0}{0}{1} R(\tau_x, \tau_y) \vecthree{x''}{y''}{1}\f]

and the matrix \f$R(\tau_x, \tau_y)\f$ is defined by two rotations with angular parameter
\f$\tau_x\f$ and \f$\tau_y\f$, respectively,

\f[
R(\tau_x, \tau_y) =
\vecthreethree{\cos(\tau_y)}{0}{-\sin(\tau_y)}{0}{1}{0}{\sin(\tau_y)}{0}{\cos(\tau_y)}
\vecthreethree{1}{0}{0}{0}{\cos(\tau_x)}{\sin(\tau_x)}{0}{-\sin(\tau_x)}{\cos(\tau_x)} =
\vecthreethree{\cos(\tau_y)}{\sin(\tau_y)\sin(\tau_x)}{-\sin(\tau_y)\cos(\tau_x)}
{0}{\cos(\tau_x)}{\sin(\tau_x)}
{\sin(\tau_y)}{-\cos(\tau_y)\sin(\tau_x)}{\cos(\tau_y)\cos(\tau_x)}.
\f]

In the functions below the coefficients are passed or returned as

\f[(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f]

vector. That is, if the vector contains four elements, it means that \f$k_3=0\f$. The distortion
coefficients do not depend on the scene viewed. Thus, they also belong to the intrinsic camera
parameters, and they remain the same regardless of the captured image resolution. If, for example, a
camera has been calibrated on images of 320 x 240 resolution, exactly the same distortion
coefficients can be used for 640 x 480 images from the same camera while \f$f_x\f$, \f$f_y\f$,
\f$c_x\f$, and \f$c_y\f$ need to be scaled appropriately.
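
For illustration, a few valid layouts of this vector (the numeric values are placeholders):
@code
    Mat dist4 = (Mat_<double>(1, 4) << -0.2, 0.05, 0.001, -0.001);       // k1, k2, p1, p2 (k3 = 0)
    Mat dist5 = (Mat_<double>(1, 5) << -0.2, 0.05, 0.001, -0.001, 0.01); // ..., k3
    Mat dist8 = (Mat_<double>(1, 8) << -0.2, 0.05, 0.001, -0.001, 0.01,
                                        0.004, -0.001, 0.0002);          // ..., k4, k5, k6
@endcode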

The functions below use the above model to do the following:

-   Project 3D points to the image plane given intrinsic and extrinsic parameters.
-   Compute extrinsic parameters given intrinsic parameters, a few 3D points, and their
projections.
-   Estimate intrinsic and extrinsic camera parameters from several views of a known calibration
pattern (every view is described by several 3D-2D point correspondences).
-   Estimate the relative position and orientation of the stereo camera "heads" and compute the
*rectification* transformation that makes the camera optical axes parallel.

<B> Homogeneous Coordinates </B><br>
Homogeneous Coordinates are a system of coordinates used in projective geometry. They make it
possible to represent points at infinity by finite coordinates and simplify formulas when compared
to their Cartesian counterparts, e.g. they have the advantage that affine transformations can be
expressed as linear homogeneous transformations.

One obtains the homogeneous vector \f$P_h\f$ by appending a 1 to an n-dimensional Cartesian
vector \f$P\f$, e.g. for a 3D Cartesian vector the mapping \f$P \rightarrow P_h\f$ is:

\f[\begin{bmatrix}
X \\
Y \\
Z
\end{bmatrix} \rightarrow \begin{bmatrix}
X \\
Y \\
Z \\
1
\end{bmatrix}.\f]

For the inverse mapping \f$P_h \rightarrow P\f$, one divides all elements of the homogeneous vector
by its last element, e.g. for a 3D homogeneous vector one gets its 2D Cartesian counterpart by:

\f[\begin{bmatrix}
X \\
Y \\
W
\end{bmatrix} \rightarrow \begin{bmatrix}
X / W \\
Y / W
\end{bmatrix},\f]

if \f$W \ne 0\f$.

Due to this mapping, all multiples \f$k P_h\f$, for \f$k \ne 0\f$, of a homogeneous point represent
the same point \f$P_h\f$. An intuitive understanding of this property is that under a projective
transformation, all multiples of \f$P_h\f$ are mapped to the same point. This is the physical
observation one makes with pinhole cameras, as all points along a ray through the camera's pinhole
are projected to the same image point, e.g. all points along the red ray in the image of the pinhole
camera model above would be mapped to the same image coordinate. This property is also the source
of the scale ambiguity \f$s\f$ in the equation of the pinhole camera model.

As mentioned, by using homogeneous coordinates we can express any change of basis parameterized by
\f$R\f$ and \f$t\f$ as a linear transformation, e.g. the change of basis from coordinate system
0 to coordinate system 1 becomes:

\f[P_1 = R P_0 + t \rightarrow P_{h_1} = \begin{bmatrix}
R & t \\
0 & 1
\end{bmatrix} P_{h_0}.\f]
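
A small sketch of these mappings using plain vector arithmetic:
@code
    Vec3d P(1.0, 2.0, 4.0);          // Cartesian 3D point
    Vec4d Ph(P[0], P[1], P[2], 1.0); // its homogeneous representation
    Vec4d Ph2 = 2.0 * Ph;            // any non-zero multiple ...
    Vec3d back(Ph2[0] / Ph2[3],      // ... maps back to the same
               Ph2[1] / Ph2[3],      //     Cartesian point (1, 2, 4)
               Ph2[2] / Ph2[3]);
@endcode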

@note
    -   Many functions in this module take a camera intrinsic matrix as an input parameter. Although all
        functions assume the same structure of this parameter, they may name it differently. The
        parameter's description, however, will be clear in that a camera intrinsic matrix with the structure
        shown above is required.
    -   A calibration sample for 3 cameras in a horizontal position can be found at
        opencv_source_code/samples/cpp/3calibration.cpp
    -   A calibration sample based on a sequence of images can be found at
        opencv_source_code/samples/cpp/calibration.cpp
    -   A calibration sample in order to do 3D reconstruction can be found at
        opencv_source_code/samples/cpp/build3dmodel.cpp
    -   A calibration example on stereo calibration can be found at
        opencv_source_code/samples/cpp/stereo_calib.cpp
    -   A calibration example on stereo matching can be found at
        opencv_source_code/samples/cpp/stereo_match.cpp
    -   (Python) A camera calibration sample can be found at
        opencv_source_code/samples/python/calibrate.py

  @{
    @defgroup calib3d_fisheye Fisheye camera model

    Definitions: Let P be a point in 3D of coordinates X in the world reference frame (stored in the
    matrix X). The coordinate vector of P in the camera reference frame is:

    \f[Xc = R X + T\f]

    where R is the rotation matrix corresponding to the rotation vector om: R = rodrigues(om); call x, y
    and z the 3 coordinates of Xc:

    \f[x = Xc_1 \\ y = Xc_2 \\ z = Xc_3\f]

    The pinhole projection coordinates of P are [a; b] where

    \f[a = x / z \ and \ b = y / z \\ r^2 = a^2 + b^2 \\ \theta = atan(r)\f]

    Fisheye distortion:

    \f[\theta_d = \theta (1 + k_1 \theta^2 + k_2 \theta^4 + k_3 \theta^6 + k_4 \theta^8)\f]

    The distorted point coordinates are [x'; y'] where

    \f[x' = (\theta_d / r) a \\ y' = (\theta_d / r) b \f]

    Finally, conversion into pixel coordinates: the final pixel coordinate vector [u; v] is given by:

    \f[u = f_x (x' + \alpha y') + c_x \\
    v = f_y y' + c_y\f]

    Summary:
    Generic camera model @cite Kannala2006 with perspective projection and without distortion correction
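
    A hedged usage sketch of this model via cv::fisheye::projectPoints (the pose, K and D
    values below are illustrative only):
    @code
        std::vector<Point3d> obj = { Point3d(0.1, 0.2, 1.0) };
        Vec3d om(0, 0, 0), T(0, 0, 0);                  // identity pose
        Matx33d K(300, 0, 320,  0, 300, 240,  0, 0, 1); // fisheye intrinsics
        Vec4d D(0.01, -0.02, 0.003, 0.0);               // k1 ... k4
        std::vector<Point2d> img;
        fisheye::projectPoints(obj, img, om, T, K, D);
    @endcode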

    @defgroup calib3d_c C API

  @}
 */

namespace cv {

//! @addtogroup calib
//! @{

enum { CALIB_CB_ADAPTIVE_THRESH = 1,
       CALIB_CB_NORMALIZE_IMAGE = 2,
       CALIB_CB_FILTER_QUADS    = 4,
       CALIB_CB_FAST_CHECK      = 8,
       CALIB_CB_EXHAUSTIVE      = 16,
       CALIB_CB_ACCURACY        = 32,
       CALIB_CB_LARGER          = 64,
       CALIB_CB_MARKER          = 128
     };

enum { CALIB_CB_SYMMETRIC_GRID  = 1,
       CALIB_CB_ASYMMETRIC_GRID = 2,
       CALIB_CB_CLUSTERING      = 4
     };

enum { CALIB_NINTRINSIC          = 18,
       CALIB_USE_INTRINSIC_GUESS = 0x00001,
       CALIB_FIX_ASPECT_RATIO    = 0x00002,
       CALIB_FIX_PRINCIPAL_POINT = 0x00004,
       CALIB_ZERO_TANGENT_DIST   = 0x00008,
       CALIB_FIX_FOCAL_LENGTH    = 0x00010,
       CALIB_FIX_K1              = 0x00020,
       CALIB_FIX_K2              = 0x00040,
       CALIB_FIX_K3              = 0x00080,
       CALIB_FIX_K4              = 0x00800,
       CALIB_FIX_K5              = 0x01000,
       CALIB_FIX_K6              = 0x02000,
       CALIB_RATIONAL_MODEL      = 0x04000,
       CALIB_THIN_PRISM_MODEL    = 0x08000,
       CALIB_FIX_S1_S2_S3_S4     = 0x10000,
       CALIB_TILTED_MODEL        = 0x40000,
       CALIB_FIX_TAUX_TAUY       = 0x80000,
       CALIB_USE_QR              = 0x100000, //!< use QR instead of SVD decomposition for solving. Faster but potentially less precise
       CALIB_FIX_TANGENT_DIST    = 0x200000,
       // only for stereo
       CALIB_FIX_INTRINSIC       = 0x00100,
       CALIB_SAME_FOCAL_LENGTH   = 0x00200,
       // for stereo rectification
       CALIB_ZERO_DISPARITY      = 0x00400,
       CALIB_USE_LU              = (1 << 17), //!< use LU instead of SVD decomposition for solving. much faster but potentially less precise
       CALIB_USE_EXTRINSIC_GUESS = (1 << 22)  //!< for stereoCalibrate
     };

enum HandEyeCalibrationMethod
{
    CALIB_HAND_EYE_TSAI         = 0, //!< A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/Eye Calibration @cite Tsai89
    CALIB_HAND_EYE_PARK         = 1, //!< Robot Sensor Calibration: Solving AX = XB on the Euclidean Group @cite Park94
    CALIB_HAND_EYE_HORAUD       = 2, //!< Hand-eye Calibration @cite Horaud95
    CALIB_HAND_EYE_ANDREFF      = 3, //!< On-line Hand-Eye Calibration @cite Andreff99
    CALIB_HAND_EYE_DANIILIDIS   = 4  //!< Hand-Eye Calibration Using Dual Quaternions @cite Daniilidis98
};

enum RobotWorldHandEyeCalibrationMethod
{
    CALIB_ROBOT_WORLD_HAND_EYE_SHAH = 0, //!< Solving the robot-world/hand-eye calibration problem using the kronecker product @cite Shah2013SolvingTR
    CALIB_ROBOT_WORLD_HAND_EYE_LI   = 1  //!< Simultaneous robot-world and hand-eye calibration using dual-quaternions and kronecker product @cite Li2010SimultaneousRA
};

/** @brief Finds an initial camera intrinsic matrix from 3D-2D point correspondences.

@param objectPoints Vector of vectors of the calibration pattern points in the calibration pattern
coordinate space. In the old interface all the per-view vectors are concatenated. See
#calibrateCamera for details.
@param imagePoints Vector of vectors of the projections of the calibration pattern points. In the
old interface all the per-view vectors are concatenated.
@param imageSize Image size in pixels used to initialize the principal point.
@param aspectRatio If it is zero or negative, both \f$f_x\f$ and \f$f_y\f$ are estimated independently.
Otherwise, \f$f_x = f_y \cdot \texttt{aspectRatio}\f$.

The function estimates and returns an initial camera intrinsic matrix for the camera calibration process.
Currently, the function only supports planar calibration patterns, which are patterns where each
object point has a zero Z-coordinate.
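
A minimal usage sketch (the point vectors are assumed to be filled by a prior pattern
detection step, one entry per view):
@code
    std::vector<std::vector<Point3f>> objectPoints; // planar pattern points (Z = 0)
    std::vector<std::vector<Point2f>> imagePoints;  // detected projections, one vector per view
    // ... fill objectPoints and imagePoints ...
    Mat cameraMatrix = initCameraMatrix2D(objectPoints, imagePoints, Size(640, 480));
@endcode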
 */
CV_EXPORTS_W Mat initCameraMatrix2D( InputArrayOfArrays objectPoints,
                                     InputArrayOfArrays imagePoints,
                                     Size imageSize, double aspectRatio = 1.0 );

/** @brief Finds the positions of internal corners of the chessboard.

@param image Source chessboard view. It must be an 8-bit grayscale or color image.
@param patternSize Number of inner corners per chessboard row and column
( patternSize = cv::Size(points_per_row,points_per_column) = cv::Size(columns,rows) ).
@param corners Output array of detected corners.
@param flags Various operation flags that can be zero or a combination of the following values:
-   @ref CALIB_CB_ADAPTIVE_THRESH Use adaptive thresholding to convert the image to black
and white, rather than a fixed threshold level (computed from the average image brightness).
-   @ref CALIB_CB_NORMALIZE_IMAGE Normalize the image gamma with equalizeHist before
applying fixed or adaptive thresholding.
-   @ref CALIB_CB_FILTER_QUADS Use additional criteria (like contour area, perimeter,
square-like shape) to filter out false quads extracted at the contour retrieval stage.
-   @ref CALIB_CB_FAST_CHECK Run a fast check on the image that looks for chessboard corners,
and shortcut the call if none is found. This can drastically speed up the call in the
degenerate condition when no chessboard is observed.

The function attempts to determine whether the input image is a view of the chessboard pattern and
locate the internal chessboard corners. The function returns a non-zero value if all of the corners
are found and they are placed in a certain order (row by row, left to right in every row).
Otherwise, if the function fails to find all the corners or reorder them, it returns 0. For example,
a regular chessboard has 8 x 8 squares and 7 x 7 internal corners, that is, points where the black
squares touch each other. The detected coordinates are approximate, and to determine their positions
more accurately, the function calls #cornerSubPix. You also may use the function #cornerSubPix with
different parameters if returned coordinates are not accurate enough.

Sample usage of detecting and drawing chessboard corners:
@code
    Size patternsize(8,6); //interior number of corners
    Mat gray = ....; //source image
    vector<Point2f> corners; //this will be filled by the detected corners

    //CALIB_CB_FAST_CHECK saves a lot of time on images
    //that do not contain any chessboard corners
    bool patternfound = findChessboardCorners(gray, patternsize, corners,
            CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE
            + CALIB_CB_FAST_CHECK);

    if(patternfound)
      cornerSubPix(gray, corners, Size(11, 11), Size(-1, -1),
        TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 30, 0.1));

    drawChessboardCorners(img, patternsize, Mat(corners), patternfound);
@endcode
@note The function requires white space (like a square-thick border, the wider the better) around
the board to make the detection more robust in various environments. Otherwise, if there is no
border and the background is dark, the outer black squares cannot be segmented properly and so the
square grouping and ordering algorithm fails.

Use gen_pattern.py (@ref tutorial_camera_calibration_pattern) to create a checkerboard.
 */
CV_EXPORTS_W bool findChessboardCorners( InputArray image, Size patternSize, OutputArray corners,
                                         int flags = CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE );

/*
   Checks whether the image contains a chessboard of the specified size.
   If it does, a nonzero value is returned.
*/
CV_EXPORTS_W bool checkChessboard(InputArray img, Size size);

/** @brief Finds the positions of internal corners of the chessboard using a sector based approach.

@param image Source chessboard view. It must be an 8-bit grayscale or color image.
@param patternSize Number of inner corners per chessboard row and column
( patternSize = cv::Size(points_per_row,points_per_column) = cv::Size(columns,rows) ).
@param corners Output array of detected corners.
@param flags Various operation flags that can be zero or a combination of the following values:
-   @ref CALIB_CB_NORMALIZE_IMAGE Normalize the image gamma with equalizeHist before detection.
-   @ref CALIB_CB_EXHAUSTIVE Run an exhaustive search to improve detection rate.
-   @ref CALIB_CB_ACCURACY Up sample input image to improve sub-pixel accuracy due to aliasing effects.
-   @ref CALIB_CB_LARGER The detected pattern is allowed to be larger than patternSize (see description).
-   @ref CALIB_CB_MARKER The detected pattern must have a marker (see description).
This should be used if an accurate camera calibration is required.
@param meta Optional output array of detected corners (CV_8UC1 and size = cv::Size(columns,rows)).
Each entry stands for one corner of the pattern and can have one of the following values:
-   0 = no meta data attached
-   1 = left-top corner of a black cell
-   2 = left-top corner of a white cell
-   3 = left-top corner of a black cell with a white marker dot
-   4 = left-top corner of a white cell with a black marker dot (pattern origin in case of markers otherwise first corner)

The function is analogous to #findChessboardCorners but uses a localized Radon
transformation approximated by box filters, making it more robust to all sorts of
noise and faster on larger images, and able to directly return the sub-pixel
positions of the internal chessboard corners. The method is based on the paper
@cite duda2018 "Accurate Detection and Localization of Checkerboard Corners for
Calibration", demonstrating that the returned sub-pixel positions are more
accurate than the ones returned by cornerSubPix, allowing a precise camera
calibration for demanding applications.

If the flags @ref CALIB_CB_LARGER or @ref CALIB_CB_MARKER are given,
the result can be recovered from the optional meta array. Both flags are
helpful to use calibration patterns exceeding the field of view of the camera.
These oversized patterns allow more accurate calibrations as corners close to
the image borders can be utilized. For a
consistent coordinate system across all images, the optional marker (see image
below) can be used to move the origin of the board to the location where the
black circle is located.

@note The function requires a white border with roughly the same width as one
of the checkerboard fields around the whole board to improve the detection in
various environments. In addition, because of the localized Radon
transformation it is beneficial to use round corners for the field corners
which are located on the outside of the board. The following figure illustrates
a sample checkerboard optimized for the detection. However, any other checkerboard
can be used as well.

Use gen_pattern.py (@ref tutorial_camera_calibration_pattern) to create a checkerboard.
![Checkerboard](pics/checkerboard_radon.png)
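
Sample usage (a hedged sketch; the image file name is a placeholder):
@code
    Mat gray = imread("board.png", IMREAD_GRAYSCALE);
    Size patternsize(9, 6); // interior corners per row and column
    std::vector<Point2f> corners;
    Mat meta;
    bool found = findChessboardCornersSB(gray, patternsize, corners,
            CALIB_CB_EXHAUSTIVE | CALIB_CB_ACCURACY, meta);
@endcode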
 */
CV_EXPORTS_AS(findChessboardCornersSBWithMeta)
bool findChessboardCornersSB(InputArray image,Size patternSize, OutputArray corners,
                             int flags,OutputArray meta);
/** @overload */
CV_EXPORTS_W inline
bool findChessboardCornersSB(InputArray image, Size patternSize, OutputArray corners,
                             int flags = 0)
{
    return findChessboardCornersSB(image, patternSize, corners, flags, noArray());
}

/** @brief Estimates the sharpness of a detected chessboard.

Image sharpness, as well as brightness, is a critical parameter for accurate
camera calibration. For accessing these parameters for filtering out
problematic calibration images, this method calculates edge profiles by traveling from
black to white chessboard cell centers. Based on this, the number of pixels
required to transition from black to white is calculated. This width of the
transition area is a good indication of how sharp the chessboard is imaged
and should be below ~3.0 pixels.

@param image Gray image used to find chessboard corners
@param patternSize Size of a found chessboard pattern
@param corners Corners found by #findChessboardCornersSB
@param rise_distance Rise distance 0.8 means 10% ... 90% of the final signal strength
@param vertical By default edge responses for horizontal lines are calculated
@param sharpness Optional output array with a sharpness value for calculated edge responses (see description)

The optional sharpness array is of type CV_32FC1 and has for each calculated
profile one row with the following five entries:
* 0 = x coordinate of the underlying edge in the image
* 1 = y coordinate of the underlying edge in the image
* 2 = width of the transition area (sharpness)
* 3 = signal strength in the black cell (min brightness)
* 4 = signal strength in the white cell (max brightness)

@return Scalar(average sharpness, average min brightness, average max brightness,0)
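
A hedged usage sketch (gray, patternsize and corners as in the #findChessboardCornersSB
sample above):
@code
    Scalar stats = estimateChessboardSharpness(gray, patternsize, corners);
    if (stats[0] > 3.0)
    {
        // Transition width above ~3 pixels: the view is likely too blurry
        // for an accurate calibration and may be filtered out.
    }
@endcode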
*/
CV_EXPORTS_W Scalar estimateChessboardSharpness(InputArray image, Size patternSize, InputArray corners,
                                                float rise_distance=0.8F,bool vertical=false,
                                                OutputArray sharpness=noArray());


//! finds subpixel-accurate positions of the chessboard corners
CV_EXPORTS_W bool find4QuadCornerSubpix( InputArray img, InputOutputArray corners, Size region_size );

/** @brief Renders the detected chessboard corners.

@param image Destination image. It must be an 8-bit color image.
@param patternSize Number of inner corners per a chessboard row and column
(patternSize = cv::Size(points_per_row,points_per_column)).
@param corners Array of detected corners, the output of #findChessboardCorners.
@param patternWasFound Parameter indicating whether the complete board was found or not. The
return value of #findChessboardCorners should be passed here.

The function draws individual chessboard corners detected either as red circles if the board was not
found, or as colored corners connected with lines if the board was found.
 */
CV_EXPORTS_W void drawChessboardCorners( InputOutputArray image, Size patternSize,
                                         InputArray corners, bool patternWasFound );

struct CV_EXPORTS_W_SIMPLE CirclesGridFinderParameters
{
    CV_WRAP CirclesGridFinderParameters();
    CV_PROP_RW cv::Size2f densityNeighborhoodSize;
    CV_PROP_RW float minDensity;
    CV_PROP_RW int kmeansAttempts;
    CV_PROP_RW int minDistanceToAddKeypoint;
    CV_PROP_RW int keypointScale;
    CV_PROP_RW float minGraphConfidence;
    CV_PROP_RW float vertexGain;
    CV_PROP_RW float vertexPenalty;
    CV_PROP_RW float existingVertexGain;
    CV_PROP_RW float edgeGain;
    CV_PROP_RW float edgePenalty;
    CV_PROP_RW float convexHullFactor;
    CV_PROP_RW float minRNGEdgeSwitchDist;

    enum GridType
    {
      SYMMETRIC_GRID, ASYMMETRIC_GRID
    };
    GridType gridType;

    CV_PROP_RW float squareSize; //!< Distance between two adjacent points. Used by CALIB_CB_CLUSTERING.
    CV_PROP_RW float maxRectifiedDistance; //!< Max deviation from prediction. Used by CALIB_CB_CLUSTERING.
};

#ifndef DISABLE_OPENCV_3_COMPATIBILITY
typedef CirclesGridFinderParameters CirclesGridFinderParameters2;
#endif

/** @brief Finds centers in the grid of circles.

@param image grid view of input circles; it must be an 8-bit grayscale or color image.
@param patternSize number of circles per row and column
( patternSize = Size(points_per_row, points_per_column) ).
@param centers output array of detected centers.
@param flags various operation flags that can be one of the following values:
-   @ref CALIB_CB_SYMMETRIC_GRID uses symmetric pattern of circles.
-   @ref CALIB_CB_ASYMMETRIC_GRID uses asymmetric pattern of circles.
-   @ref CALIB_CB_CLUSTERING uses a special algorithm for grid detection. It is more robust to
perspective distortions but much more sensitive to background clutter.
@param blobDetector feature detector that finds blobs like dark circles on light background.
                    If `blobDetector` is NULL then `image` represents Point2f array of candidates.
@param parameters struct for finding circles in a grid pattern.

The function attempts to determine whether the input image contains a grid of circles. If it is, the
function locates centers of the circles. The function returns a non-zero value if all of the centers
have been found and they have been placed in a certain order (row by row, left to right in every
row). Otherwise, if the function fails to find all the corners or reorder them, it returns 0.

Sample usage of detecting and drawing the centers of circles: :
@code
    Size patternsize(7,7); //number of centers
    Mat gray = ...; //source image
    vector<Point2f> centers; //this will be filled by the detected centers

    bool patternfound = findCirclesGrid(gray, patternsize, centers);

    drawChessboardCorners(img, patternsize, Mat(centers), patternfound);
@endcode
@note The function requires white space (like a square-thick border, the wider the better) around
the board to make the detection more robust in various environments.
 */
CV_EXPORTS_W bool findCirclesGrid( InputArray image, Size patternSize,
                                   OutputArray centers, int flags,
                                   const Ptr<FeatureDetector> &blobDetector,
                                   const CirclesGridFinderParameters& parameters);

/** @overload */
CV_EXPORTS_W bool findCirclesGrid( InputArray image, Size patternSize,
                                   OutputArray centers, int flags = CALIB_CB_SYMMETRIC_GRID,
                                   const Ptr<FeatureDetector> &blobDetector = SimpleBlobDetector::create());

/** @brief Finds the camera intrinsic and extrinsic parameters from several views of a calibration
pattern.

@param objectPoints In the new interface it is a vector of vectors of calibration pattern points in
the calibration pattern coordinate space (e.g. std::vector<std::vector<cv::Vec3f>>). The outer
vector contains as many elements as the number of pattern views. If the same calibration pattern
is shown in each view and it is fully visible, all the vectors will be the same. It is, however,
possible to use partially occluded patterns or even different patterns in different views; then,
the vectors will differ. Although the points are 3D, they all lie in the calibration pattern's
XY coordinate plane (thus 0 in the Z-coordinate), if the used calibration pattern is a planar rig.
In the old interface all the vectors of object points from different views are concatenated
together.
@param imagePoints In the new interface it is a vector of vectors of the projections of calibration
pattern points (e.g. std::vector<std::vector<cv::Vec2f>>). imagePoints.size() and
objectPoints.size(), and imagePoints[i].size() and objectPoints[i].size() for each i, must be equal,
respectively. In the old interface all the vectors of image points from different views are
concatenated together.
@param imageSize Size of the image used only to initialize the camera intrinsic matrix.
@param cameraMatrix Input/output 3x3 floating-point camera intrinsic matrix
\f$\cameramatrix{A}\f$ . If @ref CALIB_USE_INTRINSIC_GUESS
and/or @ref CALIB_FIX_ASPECT_RATIO, @ref CALIB_FIX_PRINCIPAL_POINT or @ref CALIB_FIX_FOCAL_LENGTH
are specified, some or all of fx, fy, cx, cy must be initialized before calling the function.
@param distCoeffs Input/output vector of distortion coefficients
\f$\distcoeffs\f$.
@param rvecs Output vector of rotation vectors (@ref Rodrigues ) estimated for each pattern view
(e.g. std::vector<cv::Mat>>). That is, each i-th rotation vector together with the corresponding
i-th translation vector (see the next output parameter description) brings the calibration pattern
from the object coordinate space (in which object points are specified) to the camera coordinate
space. In more technical terms, the tuple of the i-th rotation and translation vector performs
a change of basis from object coordinate space to camera coordinate space. Due to its duality, this
tuple is equivalent to the position of the calibration pattern with respect to the camera coordinate
space.
@param tvecs Output vector of translation vectors estimated for each pattern view, see parameter
description above.
@param stdDeviationsIntrinsics Output vector of standard deviations estimated for intrinsic
parameters. Order of deviations values:
\f$(f_x, f_y, c_x, c_y, k_1, k_2, p_1, p_2, k_3, k_4, k_5, k_6, s_1, s_2, s_3,
 s_4, \tau_x, \tau_y)\f$. If one of the parameters is not estimated, its deviation is equal to zero.
@param stdDeviationsExtrinsics Output vector of standard deviations estimated for extrinsic
parameters. Order of deviations values: \f$(R_0, T_0, \dotsc , R_{M - 1}, T_{M - 1})\f$ where M is
the number of pattern views. \f$R_i, T_i\f$ are concatenated 1x3 vectors.
 @param perViewErrors Output vector of the RMS re-projection error estimated for each pattern view.
@param flags Different flags that may be zero or a combination of the following values:
-   @ref CALIB_USE_INTRINSIC_GUESS cameraMatrix contains valid initial values of
fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image
center ( imageSize is used), and focal distances are computed in a least-squares fashion.
Note that if intrinsic parameters are known, there is no need to use this function just to
estimate extrinsic parameters. Use @ref solvePnP instead.
-   @ref CALIB_FIX_PRINCIPAL_POINT The principal point is not changed during the global
optimization. It stays at the center or at a different location specified when
 @ref CALIB_USE_INTRINSIC_GUESS is set too.
-   @ref CALIB_FIX_ASPECT_RATIO The functions consider only fy as a free parameter. The
ratio fx/fy stays the same as in the input cameraMatrix . When
 @ref CALIB_USE_INTRINSIC_GUESS is not set, the actual input values of fx and fy are
ignored, only their ratio is computed and used further.
-   @ref CALIB_ZERO_TANGENT_DIST Tangential distortion coefficients \f$(p_1, p_2)\f$ are set
to zeros and stay zero.
-   @ref CALIB_FIX_FOCAL_LENGTH The focal length is not changed during the global optimization if
 @ref CALIB_USE_INTRINSIC_GUESS is set.
-   @ref CALIB_FIX_K1,..., @ref CALIB_FIX_K6 The corresponding radial distortion
coefficient is not changed during the optimization. If @ref CALIB_USE_INTRINSIC_GUESS is
set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
-   @ref CALIB_RATIONAL_MODEL Coefficients k4, k5, and k6 are enabled. To provide the
backward compatibility, this extra flag should be explicitly specified to make the
calibration function use the rational model and return 8 coefficients or more.
-   @ref CALIB_THIN_PRISM_MODEL Coefficients s1, s2, s3 and s4 are enabled. To provide the
backward compatibility, this extra flag should be explicitly specified to make the
calibration function use the thin prism model and return 12 coefficients or more.
-   @ref CALIB_FIX_S1_S2_S3_S4 The thin prism distortion coefficients are not changed during
the optimization. If @ref CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
-   @ref CALIB_TILTED_MODEL Coefficients tauX and tauY are enabled. To provide the
backward compatibility, this extra flag should be explicitly specified to make the
calibration function use the tilted sensor model and return 14 coefficients.
-   @ref CALIB_FIX_TAUX_TAUY The coefficients of the tilted sensor model are not changed during
the optimization. If @ref CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
@param criteria Termination criteria for the iterative optimization algorithm.

@return the overall RMS re-projection error.

The function estimates the intrinsic camera parameters and extrinsic parameters for each of the
views. The algorithm is based on @cite Zhang2000 and @cite BouguetMCT . The coordinates of 3D object
points and their corresponding 2D projections in each view must be specified. That may be achieved
by using an object with known geometry and easily detectable feature points. Such an object is
called a calibration rig or calibration pattern, and OpenCV has built-in support for a chessboard as
a calibration rig (see @ref findChessboardCorners). Currently, initialization of intrinsic
parameters (when @ref CALIB_USE_INTRINSIC_GUESS is not set) is only implemented for planar calibration
patterns (where Z-coordinates of the object points must be all zeros). 3D calibration rigs can also
be used as long as initial cameraMatrix is provided.

The algorithm performs the following steps:

-   Compute the initial intrinsic parameters (the option only available for planar calibration
    patterns) or read them from the input parameters. The distortion coefficients are all set to
    zeros initially unless some of CALIB_FIX_K? are specified.

-   Estimate the initial camera pose as if the intrinsic parameters have been already known. This is
    done using @ref solvePnP .

-   Run the global Levenberg-Marquardt optimization algorithm to minimize the reprojection error,
    that is, the total sum of squared distances between the observed feature points imagePoints and
    the projected (using the current estimates for camera parameters and the poses) object points
    objectPoints. See @ref projectPoints for details.

@note
    If you use a non-square (i.e. non-N-by-N) grid and @ref findChessboardCorners for calibration,
    and @ref calibrateCamera returns bad values (zero distortion coefficients, \f$c_x\f$ and
    \f$c_y\f$ very far from the image center, and/or large differences between \f$f_x\f$ and
    \f$f_y\f$ (ratios of 10:1 or more)), then you are probably using patternSize=cvSize(rows,cols)
    instead of using patternSize=cvSize(cols,rows) in @ref findChessboardCorners.
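
Sample usage (a hedged sketch; the board geometry, file names and image size are placeholders):
@code
    Size patternsize(9, 6);  // interior corners per row and column
    float squareSize = 25.f; // square side length, e.g. in millimeters
    // One planar board model (Z = 0), reused for every view.
    std::vector<Point3f> board;
    for (int i = 0; i < patternsize.height; i++)
        for (int j = 0; j < patternsize.width; j++)
            board.push_back(Point3f(j * squareSize, i * squareSize, 0.f));

    std::vector<std::vector<Point3f>> objectPoints;
    std::vector<std::vector<Point2f>> imagePoints;
    std::vector<String> names = { "view0.png", "view1.png", "view2.png" };
    for (size_t k = 0; k < names.size(); k++)
    {
        Mat gray = imread(names[k], IMREAD_GRAYSCALE);
        std::vector<Point2f> corners;
        if (findChessboardCorners(gray, patternsize, corners))
        {
            objectPoints.push_back(board);
            imagePoints.push_back(corners);
        }
    }
    Mat cameraMatrix, distCoeffs;
    std::vector<Mat> rvecs, tvecs;
    double rms = calibrateCamera(objectPoints, imagePoints, Size(640, 480),
                                 cameraMatrix, distCoeffs, rvecs, tvecs);
@endcode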

@sa
   calibrateCameraRO, findChessboardCorners, solvePnP, initCameraMatrix2D, stereoCalibrate,
   undistort
 */
CV_EXPORTS_AS(calibrateCameraExtended) double calibrateCamera( InputArrayOfArrays objectPoints,
                                     InputArrayOfArrays imagePoints, Size imageSize,
                                     InputOutputArray cameraMatrix, InputOutputArray distCoeffs,
                                     OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs,
                                     OutputArray stdDeviationsIntrinsics,
                                     OutputArray stdDeviationsExtrinsics,
                                     OutputArray perViewErrors,
                                     int flags = 0, TermCriteria criteria = TermCriteria(
                                        TermCriteria::COUNT + TermCriteria::EPS, 100, DBL_EPSILON) );

/** @overload */
CV_EXPORTS_W double calibrateCamera( InputArrayOfArrays objectPoints,
                                     InputArrayOfArrays imagePoints, Size imageSize,
                                     InputOutputArray cameraMatrix, InputOutputArray distCoeffs,
                                     OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs,
                                     int flags = 0, TermCriteria criteria = TermCriteria(
                                        TermCriteria::COUNT + TermCriteria::EPS, 100, DBL_EPSILON) );

/** @brief Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.

This function is an extension of #calibrateCamera with the method of releasing object which was
proposed in @cite strobl2011iccv. In many common cases with inaccurate, unmeasured, roughly planar
targets (calibration plates), this method can dramatically improve the precision of the estimated
camera parameters. Both the object-releasing method and standard method are supported by this
function. Use the parameter **iFixedPoint** for method selection. In the internal implementation,
#calibrateCamera is a wrapper for this function.

@param objectPoints Vector of vectors of calibration pattern points in the calibration pattern
coordinate space. See #calibrateCamera for details. If the method of releasing object is to be used,
the identical calibration board must be used in each view and it must be fully visible, and all
objectPoints[i] must be the same and all points should be roughly close to a plane. **The calibration
target has to be rigid, or at least static if the camera (rather than the calibration target) is
shifted for grabbing images.**
@param imagePoints Vector of vectors of the projections of calibration pattern points. See
#calibrateCamera for details.
@param imageSize Size of the image used only to initialize the intrinsic camera matrix.
@param iFixedPoint The index of the 3D object point in objectPoints[0] to be fixed. It also acts as
a switch for calibration method selection. If the object-releasing method is to be used, pass the
parameter in the range of [1, objectPoints[0].size()-2]; otherwise a value out of this range will
make the standard calibration method be selected. Usually the top-right corner point of the
calibration board grid is recommended to be fixed when the object-releasing method is utilized.
According to \cite strobl2011iccv, two other points are also fixed. In this implementation,
objectPoints[0].front and objectPoints[0].back.z are used. With the object-releasing method,
accurate rvecs, tvecs and newObjPoints are only possible if coordinates of these three fixed
points are accurate enough.
@param cameraMatrix Output 3x3 floating-point camera matrix. See #calibrateCamera for details.
@param distCoeffs Output vector of distortion coefficients. See #calibrateCamera for details.
@param rvecs Output vector of rotation vectors estimated for each pattern view. See #calibrateCamera
for details.
@param tvecs Output vector of translation vectors estimated for each pattern view.
@param newObjPoints The updated output vector of calibration pattern points. The coordinates might
be scaled based on three fixed points. The returned coordinates are accurate only if the above
mentioned three fixed points are accurate. If not needed, noArray() can be passed in. This parameter
is ignored with standard calibration method.
@param stdDeviationsIntrinsics Output vector of standard deviations estimated for intrinsic parameters.
See #calibrateCamera for details.
@param stdDeviationsExtrinsics Output vector of standard deviations estimated for extrinsic parameters.
See #calibrateCamera for details.
@param stdDeviationsObjPoints Output vector of standard deviations estimated for refined coordinates
of calibration pattern points. It has the same size and order as objectPoints[0] vector. This
parameter is ignored with standard calibration method.
 @param perViewErrors Output vector of the RMS re-projection error estimated for each pattern view.
@param flags Different flags that may be zero or a combination of some predefined values. See
#calibrateCamera for details. If the method of releasing object is used, the calibration time may
be much longer. CALIB_USE_QR or CALIB_USE_LU could be used for faster calibration, which is
potentially less precise and less stable in some rare cases.
@param criteria Termination criteria for the iterative optimization algorithm.

@return the overall RMS re-projection error.

The function estimates the intrinsic camera parameters and extrinsic parameters for each of the
views. The algorithm is based on @cite Zhang2000, @cite BouguetMCT and @cite strobl2011iccv. See
#calibrateCamera for other detailed explanations.
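
A hedged sketch of enabling the object-releasing method (the variables are assumed to be set
up as in the #calibrateCamera sample, with the board built row by row so that its top-right
corner has index patternsize.width - 1):
@code
    int iFixedPoint = patternsize.width - 1; // fix the top-right pattern corner
    Mat newObjPoints;
    double rms = calibrateCameraRO(objectPoints, imagePoints, Size(640, 480), iFixedPoint,
                                   cameraMatrix, distCoeffs, rvecs, tvecs, newObjPoints);
@endcode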
@sa
   calibrateCamera, findChessboardCorners, solvePnP, initCameraMatrix2D, stereoCalibrate, undistort
 */
CV_EXPORTS_AS(calibrateCameraROExtended) double calibrateCameraRO( InputArrayOfArrays objectPoints,
                                     InputArrayOfArrays imagePoints, Size imageSize, int iFixedPoint,
                                     InputOutputArray cameraMatrix, InputOutputArray distCoeffs,
                                     OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs,
                                     OutputArray newObjPoints,
                                     OutputArray stdDeviationsIntrinsics,
                                     OutputArray stdDeviationsExtrinsics,
                                     OutputArray stdDeviationsObjPoints,
                                     OutputArray perViewErrors,
                                     int flags = 0, TermCriteria criteria = TermCriteria(
                                        TermCriteria::COUNT + TermCriteria::EPS, 100, DBL_EPSILON) );

/** @overload */
CV_EXPORTS_W double calibrateCameraRO( InputArrayOfArrays objectPoints,
                                     InputArrayOfArrays imagePoints, Size imageSize, int iFixedPoint,
                                     InputOutputArray cameraMatrix, InputOutputArray distCoeffs,
                                     OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs,
                                     OutputArray newObjPoints,
                                     int flags = 0, TermCriteria criteria = TermCriteria(
                                        TermCriteria::COUNT + TermCriteria::EPS, 100, DBL_EPSILON) );

/** @brief Computes useful camera characteristics from the camera intrinsic matrix.

@param cameraMatrix Input camera intrinsic matrix that can be estimated by #calibrateCamera or
#stereoCalibrate .
@param imageSize Input image size in pixels.
@param apertureWidth Physical width in mm of the sensor.
@param apertureHeight Physical height in mm of the sensor.
@param fovx Output field of view in degrees along the horizontal sensor axis.
@param fovy Output field of view in degrees along the vertical sensor axis.
@param focalLength Focal length of the lens in mm.
@param principalPoint Principal point in mm.
@param aspectRatio \f$f_y/f_x\f$

The function computes various useful camera characteristics from the previously estimated camera
matrix.

@note
   Do keep in mind that the unity measure 'mm' stands for whatever unit of measure one chooses for
    the chessboard pitch (it can thus be any value).
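
A hedged usage sketch (the sensor dimensions are hypothetical):
@code
    double fovx, fovy, focalLength, aspectRatio;
    Point2d principalPoint;
    calibrationMatrixValues(cameraMatrix, Size(640, 480),
                            4.8, 3.6, // sensor width and height in mm (hypothetical)
                            fovx, fovy, focalLength, principalPoint, aspectRatio);
@endcode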
 */
CV_EXPORTS_W void calibrationMatrixValues( InputArray cameraMatrix, Size imageSize,
                                           double apertureWidth, double apertureHeight,
                                           CV_OUT double& fovx, CV_OUT double& fovy,
                                           CV_OUT double& focalLength, CV_OUT Point2d& principalPoint,
                                           CV_OUT double& aspectRatio );

/** @brief Calibrates a stereo camera setup. This function finds the intrinsic parameters
for each of the two cameras and the extrinsic parameters between the two cameras.

@param objectPoints Vector of vectors of the calibration pattern points. The same structure as
in @ref calibrateCamera. For each pattern view, both cameras need to see the same object
points. Therefore, objectPoints.size(), imagePoints1.size(), and imagePoints2.size() need to be
equal as well as objectPoints[i].size(), imagePoints1[i].size(), and imagePoints2[i].size() need to
be equal for each i.
@param imagePoints1 Vector of vectors of the projections of the calibration pattern points,
observed by the first camera. The same structure as in @ref calibrateCamera.
@param imagePoints2 Vector of vectors of the projections of the calibration pattern points,
observed by the second camera. The same structure as in @ref calibrateCamera.
@param cameraMatrix1 Input/output camera intrinsic matrix for the first camera, the same as in
@ref calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.
@param distCoeffs1 Input/output vector of distortion coefficients, the same as in
@ref calibrateCamera.
@param cameraMatrix2 Input/output camera intrinsic matrix for the second camera. See description for
cameraMatrix1.
@param distCoeffs2 Input/output lens distortion coefficients for the second camera. See
description for distCoeffs1.
@param imageSize Size of the image used only to initialize the camera intrinsic matrices.
@param R Output rotation matrix. Together with the translation vector T, this matrix brings
points given in the first camera's coordinate system to points in the second camera's
coordinate system. In more technical terms, the tuple of R and T performs a change of basis
from the first camera's coordinate system to the second camera's coordinate system. Due to its
duality, this tuple is equivalent to the position of the first camera with respect to the
second camera coordinate system.
@param T Output translation vector, see description above.
@param E Output essential matrix.
@param F Output fundamental matrix.
@param rvecs Output vector of rotation vectors ( @ref Rodrigues ) estimated for each pattern view in the
coordinate system of the first camera of the stereo pair (e.g. std::vector<cv::Mat>). More in detail, each
i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter
description) brings the calibration pattern from the object coordinate space (in which object points are
specified) to the camera coordinate space of the first camera of the stereo pair. In more technical terms,
the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space
to camera coordinate space of the first camera of the stereo pair.
@param tvecs Output vector of translation vectors estimated for each pattern view, see parameter description
of previous output parameter ( rvecs ).
@param perViewErrors Output vector of the RMS re-projection error estimated for each pattern view.
@param flags Different flags that may be zero or a combination of the following values:
-   @ref CALIB_FIX_INTRINSIC Fix cameraMatrix? and distCoeffs? so that only R, T, E, and F
matrices are estimated.
-   @ref CALIB_USE_INTRINSIC_GUESS Optimize some or all of the intrinsic parameters
according to the specified flags. Initial values are provided by the user.
-   @ref CALIB_USE_EXTRINSIC_GUESS R and T contain valid initial values that are optimized further.
Otherwise R and T are initialized to the median value of the pattern views (each dimension separately).
-   @ref CALIB_FIX_PRINCIPAL_POINT Fix the principal points during the optimization.
-   @ref CALIB_FIX_FOCAL_LENGTH Fix \f$f^{(j)}_x\f$ and \f$f^{(j)}_y\f$ .
-   @ref CALIB_FIX_ASPECT_RATIO Optimize \f$f^{(j)}_y\f$ . Fix the ratio \f$f^{(j)}_x/f^{(j)}_y\f$.
-   @ref CALIB_SAME_FOCAL_LENGTH Enforce \f$f^{(0)}_x=f^{(1)}_x\f$ and \f$f^{(0)}_y=f^{(1)}_y\f$ .
-   @ref CALIB_ZERO_TANGENT_DIST Set tangential distortion coefficients for each camera to
zeros and keep them fixed.
-   @ref CALIB_FIX_K1,..., @ref CALIB_FIX_K6 Do not change the corresponding radial
distortion coefficient during the optimization. If @ref CALIB_USE_INTRINSIC_GUESS is set,
the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
-   @ref CALIB_RATIONAL_MODEL Enable coefficients k4, k5, and k6. For backward
compatibility, this extra flag should be explicitly specified to make the calibration
function use the rational model and return 8 coefficients. If the flag is not set, the
function computes and returns only 5 distortion coefficients.
-   @ref CALIB_THIN_PRISM_MODEL Coefficients s1, s2, s3 and s4 are enabled. For backward
compatibility, this extra flag should be explicitly specified to make the calibration
function use the thin prism model and return 12 coefficients. If the flag is not set,
the function computes and returns only 5 distortion coefficients.
-   @ref CALIB_FIX_S1_S2_S3_S4 The thin prism distortion coefficients are not changed during
the optimization. If @ref CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
-   @ref CALIB_TILTED_MODEL Coefficients tauX and tauY are enabled. For backward
compatibility, this extra flag should be explicitly specified to make the calibration
function use the tilted sensor model and return 14 coefficients. If the flag is not set,
the function computes and returns only 5 distortion coefficients.
-   @ref CALIB_FIX_TAUX_TAUY The coefficients of the tilted sensor model are not changed during
the optimization. If @ref CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
supplied distCoeffs matrix is used. Otherwise, it is set to 0.
@param criteria Termination criteria for the iterative optimization algorithm.

The function estimates the transformation between two cameras making a stereo pair. If one computes
the poses of an object relative to the first camera and to the second camera,
( \f$R_1\f$,\f$T_1\f$ ) and (\f$R_2\f$,\f$T_2\f$), respectively, for a stereo camera where the
relative position and orientation between the two cameras are fixed, then those poses are related
by a fixed transformation. This means that, given the relative position and orientation
(\f$R\f$,\f$T\f$) of the two cameras, it is possible to compute (\f$R_2\f$,\f$T_2\f$) when
(\f$R_1\f$,\f$T_1\f$) is given. This is what the described function does. It computes
(\f$R\f$,\f$T\f$) such that:

\f[R_2=R R_1\f]
\f[T_2=R T_1 + T.\f]

Therefore, one can compute the coordinate representation of a 3D point for the second camera's
coordinate system when given the point's coordinate representation in the first camera's coordinate
system:

\f[\begin{bmatrix}
X_2 \\
Y_2 \\
Z_2 \\
1
\end{bmatrix} = \begin{bmatrix}
R & T \\
0 & 1
\end{bmatrix} \begin{bmatrix}
X_1 \\
Y_1 \\
Z_1 \\
1
\end{bmatrix}.\f]


Optionally, it computes the essential matrix E:

\f[E= \vecthreethree{0}{-T_2}{T_1}{T_2}{0}{-T_0}{-T_1}{T_0}{0} R\f]

where \f$T_i\f$ are components of the translation vector \f$T\f$ : \f$T=[T_0, T_1, T_2]^T\f$ .
And the function can also compute the fundamental matrix F:

\f[F = cameraMatrix2^{-T}\cdot E \cdot cameraMatrix1^{-1}\f]
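
Given the outputs R and T and the two intrinsic matrices, E and F can also be formed by hand. The
sketch below (not part of the API; it assumes R, T, cameraMatrix1 and cameraMatrix2 are CV_64F
cv::Mat objects returned by this function) mirrors the two formulas above:

@code{.cpp}
// Build the skew-symmetric matrix [T]_x from the translation vector T.
cv::Mat tx = (cv::Mat_<double>(3, 3) <<
                   0, -T.at<double>(2),  T.at<double>(1),
     T.at<double>(2),                0, -T.at<double>(0),
    -T.at<double>(1),  T.at<double>(0),                0);
cv::Mat E = tx * R;                                            // essential matrix
cv::Mat F = cameraMatrix2.inv().t() * E * cameraMatrix1.inv(); // fundamental matrix
@endcode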

Besides the stereo-related information, the function can also perform a full calibration of each of
the two cameras. However, due to the high dimensionality of the parameter space and noise in the
input data, the function can diverge from the correct solution. If the intrinsic parameters can be
estimated with high accuracy for each of the cameras individually (for example, using
#calibrateCamera ), it is recommended to do so and then pass the @ref CALIB_FIX_INTRINSIC flag to
the function along with the computed intrinsic parameters. Otherwise, if all the parameters are
estimated at once, it makes sense to restrict some parameters, for example, pass the
@ref CALIB_SAME_FOCAL_LENGTH and @ref CALIB_ZERO_TANGENT_DIST flags, which is usually a
reasonable assumption.

Similarly to #calibrateCamera, the function minimizes the total re-projection error for all the
points in all the available views from both cameras. The function returns the final value of the
re-projection error.
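
A minimal usage sketch, assuming the pattern/image point lists were collected beforehand and both
cameras were calibrated individually first (variable names are illustrative):

@code{.cpp}
cv::Mat R, T, E, F, perViewErrors;
std::vector<cv::Mat> rvecs, tvecs;
double rms = cv::stereoCalibrate(objectPoints, imagePoints1, imagePoints2,
                                 cameraMatrix1, distCoeffs1,
                                 cameraMatrix2, distCoeffs2,
                                 imageSize, R, T, E, F,
                                 rvecs, tvecs, perViewErrors,
                                 cv::CALIB_FIX_INTRINSIC);
@endcode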
 */
CV_EXPORTS_AS(stereoCalibrateExtended) double stereoCalibrate( InputArrayOfArrays objectPoints,
                                     InputArrayOfArrays imagePoints1, InputArrayOfArrays imagePoints2,
                                     InputOutputArray cameraMatrix1, InputOutputArray distCoeffs1,
                                     InputOutputArray cameraMatrix2, InputOutputArray distCoeffs2,
                                     Size imageSize, InputOutputArray R, InputOutputArray T, OutputArray E, OutputArray F,
                                     OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs,
                                     OutputArray perViewErrors, int flags = CALIB_FIX_INTRINSIC,
                                     TermCriteria criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 100, 1e-6) );

/// @overload
CV_EXPORTS_W double stereoCalibrate( InputArrayOfArrays objectPoints,
                                     InputArrayOfArrays imagePoints1, InputArrayOfArrays imagePoints2,
                                     InputOutputArray cameraMatrix1, InputOutputArray distCoeffs1,
                                     InputOutputArray cameraMatrix2, InputOutputArray distCoeffs2,
                                     Size imageSize, OutputArray R, OutputArray T, OutputArray E, OutputArray F,
                                     int flags = CALIB_FIX_INTRINSIC,
                                     TermCriteria criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 100, 1e-6) );

/// @overload
CV_EXPORTS_W double stereoCalibrate( InputArrayOfArrays objectPoints,
                                     InputArrayOfArrays imagePoints1, InputArrayOfArrays imagePoints2,
                                     InputOutputArray cameraMatrix1, InputOutputArray distCoeffs1,
                                     InputOutputArray cameraMatrix2, InputOutputArray distCoeffs2,
                                     Size imageSize, InputOutputArray R, InputOutputArray T, OutputArray E, OutputArray F,
                                     OutputArray perViewErrors, int flags = CALIB_FIX_INTRINSIC,
                                     TermCriteria criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 1e-6) );

/** @brief Computes Hand-Eye calibration: \f$_{}^{g}\textrm{T}_c\f$

@param[in] R_gripper2base Rotation part extracted from the homogeneous matrix that transforms a point
expressed in the gripper frame to the robot base frame (\f$_{}^{b}\textrm{T}_g\f$).
This is a vector (`vector<Mat>`) that contains the rotation, `(3x3)` rotation matrices or `(3x1)` rotation vectors,
for all the transformations from gripper frame to robot base frame.
@param[in] t_gripper2base Translation part extracted from the homogeneous matrix that transforms a point
expressed in the gripper frame to the robot base frame (\f$_{}^{b}\textrm{T}_g\f$).
This is a vector (`vector<Mat>`) that contains the `(3x1)` translation vectors for all the transformations
from gripper frame to robot base frame.
@param[in] R_target2cam Rotation part extracted from the homogeneous matrix that transforms a point
expressed in the target frame to the camera frame (\f$_{}^{c}\textrm{T}_t\f$).
This is a vector (`vector<Mat>`) that contains the rotation, `(3x3)` rotation matrices or `(3x1)` rotation vectors,
for all the transformations from calibration target frame to camera frame.
@param[in] t_target2cam Translation part extracted from the homogeneous matrix that transforms a point
expressed in the target frame to the camera frame (\f$_{}^{c}\textrm{T}_t\f$).
This is a vector (`vector<Mat>`) that contains the `(3x1)` translation vectors for all the transformations
from calibration target frame to camera frame.
@param[out] R_cam2gripper Estimated `(3x3)` rotation part extracted from the homogeneous matrix that transforms a point
expressed in the camera frame to the gripper frame (\f$_{}^{g}\textrm{T}_c\f$).
@param[out] t_cam2gripper Estimated `(3x1)` translation part extracted from the homogeneous matrix that transforms a point
expressed in the camera frame to the gripper frame (\f$_{}^{g}\textrm{T}_c\f$).
@param[in] method One of the implemented Hand-Eye calibration methods, see cv::HandEyeCalibrationMethod

The function performs the Hand-Eye calibration using various methods. One approach consists in
estimating first the rotation and then the translation (separable solutions); the following methods
are implemented:
  - R. Tsai, R. Lenz A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/EyeCalibration \cite Tsai89
  - F. Park, B. Martin Robot Sensor Calibration: Solving AX = XB on the Euclidean Group \cite Park94
  - R. Horaud, F. Dornaika Hand-Eye Calibration \cite Horaud95

Another approach consists in estimating simultaneously the rotation and the translation (simultaneous solutions),
with the following implemented methods:
  - N. Andreff, R. Horaud, B. Espiau On-line Hand-Eye Calibration \cite Andreff99
  - K. Daniilidis Hand-Eye Calibration Using Dual Quaternions \cite Daniilidis98

The following picture describes the Hand-Eye calibration problem, where the transformation between
a camera ("eye") mounted on a robot gripper ("hand") and the gripper itself has to be estimated.
This configuration is called eye-in-hand.

The eye-to-hand configuration consists in a static camera observing a calibration pattern mounted on the robot
end-effector. The transformation from the camera to the robot base frame can then be estimated by inputting
the suitable transformations to the function, see below.

![](pics/hand-eye_figure.png)

The calibration procedure is the following:
  - a static calibration pattern is used to estimate the transformation between the target frame
  and the camera frame
  - the robot gripper is moved in order to acquire several poses
  - for each pose, the homogeneous transformation between the gripper frame and the robot base frame is recorded using for
  instance the robot kinematics
\f[
    \begin{bmatrix}
    X_b\\
    Y_b\\
    Z_b\\
    1
    \end{bmatrix}
    =
    \begin{bmatrix}
    _{}^{b}\textrm{R}_g & _{}^{b}\textrm{t}_g \\
    0_{1 \times 3} & 1
    \end{bmatrix}
    \begin{bmatrix}
    X_g\\
    Y_g\\
    Z_g\\
    1
    \end{bmatrix}
\f]
  - for each pose, the homogeneous transformation between the calibration target frame and the camera frame is recorded using
  for instance a pose estimation method (PnP) from 2D-3D point correspondences
\f[
    \begin{bmatrix}
    X_c\\
    Y_c\\
    Z_c\\
    1
    \end{bmatrix}
    =
    \begin{bmatrix}
    _{}^{c}\textrm{R}_t & _{}^{c}\textrm{t}_t \\
    0_{1 \times 3} & 1
    \end{bmatrix}
    \begin{bmatrix}
    X_t\\
    Y_t\\
    Z_t\\
    1
    \end{bmatrix}
\f]

The Hand-Eye calibration procedure returns the following homogeneous transformation
\f[
    \begin{bmatrix}
    X_g\\
    Y_g\\
    Z_g\\
    1
    \end{bmatrix}
    =
    \begin{bmatrix}
    _{}^{g}\textrm{R}_c & _{}^{g}\textrm{t}_c \\
    0_{1 \times 3} & 1
    \end{bmatrix}
    \begin{bmatrix}
    X_c\\
    Y_c\\
    Z_c\\
    1
    \end{bmatrix}
\f]

This problem is also known as solving the \f$\mathbf{A}\mathbf{X}=\mathbf{X}\mathbf{B}\f$ equation:
  - for an eye-in-hand configuration
\f[
    \begin{align*}
    ^{b}{\textrm{T}_g}^{(1)} \hspace{0.2em} ^{g}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(1)} &=
    \hspace{0.1em} ^{b}{\textrm{T}_g}^{(2)} \hspace{0.2em} ^{g}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} \\

    (^{b}{\textrm{T}_g}^{(2)})^{-1} \hspace{0.2em} ^{b}{\textrm{T}_g}^{(1)} \hspace{0.2em} ^{g}\textrm{T}_c &=
    \hspace{0.1em} ^{g}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} (^{c}{\textrm{T}_t}^{(1)})^{-1} \\

    \textrm{A}_i \textrm{X} &= \textrm{X} \textrm{B}_i \\
    \end{align*}
\f]

  - for an eye-to-hand configuration
\f[
    \begin{align*}
    ^{g}{\textrm{T}_b}^{(1)} \hspace{0.2em} ^{b}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(1)} &=
    \hspace{0.1em} ^{g}{\textrm{T}_b}^{(2)} \hspace{0.2em} ^{b}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} \\

    (^{g}{\textrm{T}_b}^{(2)})^{-1} \hspace{0.2em} ^{g}{\textrm{T}_b}^{(1)} \hspace{0.2em} ^{b}\textrm{T}_c &=
    \hspace{0.1em} ^{b}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} (^{c}{\textrm{T}_t}^{(1)})^{-1} \\

    \textrm{A}_i \textrm{X} &= \textrm{X} \textrm{B}_i \\
    \end{align*}
\f]

\note
Additional information can be found on this [website](http://campar.in.tum.de/Chair/HandEyeCalibration).
\note
A minimum of 2 motions with non-parallel rotation axes is necessary to determine the hand-eye
transformation. So at least 3 different poses are required, but it is strongly recommended to use many more poses.
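
A minimal sketch for the eye-in-hand case, assuming the four pose lists were recorded beforehand
(gripper-to-base from the robot kinematics, target-to-camera from e.g. a PnP solver); all names are
illustrative:

@code{.cpp}
std::vector<cv::Mat> R_gripper2base, t_gripper2base;  // one (3x3 R, 3x1 t) per robot pose
std::vector<cv::Mat> R_target2cam,   t_target2cam;    // one (3x3 R, 3x1 t) per image
// ... fill the four vectors with the recorded poses ...
cv::Mat R_cam2gripper, t_cam2gripper;
cv::calibrateHandEye(R_gripper2base, t_gripper2base,
                     R_target2cam, t_target2cam,
                     R_cam2gripper, t_cam2gripper,
                     cv::CALIB_HAND_EYE_TSAI);
@endcode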

 */
CV_EXPORTS void calibrateHandEye( InputArrayOfArrays R_gripper2base, InputArrayOfArrays t_gripper2base,
                                    InputArrayOfArrays R_target2cam, InputArrayOfArrays t_target2cam,
                                    OutputArray R_cam2gripper, OutputArray t_cam2gripper,
                                    HandEyeCalibrationMethod method=CALIB_HAND_EYE_TSAI );

/** @brief Computes Robot-World/Hand-Eye calibration: \f$_{}^{w}\textrm{T}_b\f$ and \f$_{}^{c}\textrm{T}_g\f$

@param[in] R_world2cam Rotation part extracted from the homogeneous matrix that transforms a point
expressed in the world frame to the camera frame (\f$_{}^{c}\textrm{T}_w\f$).
This is a vector (`vector<Mat>`) that contains the rotation, `(3x3)` rotation matrices or `(3x1)` rotation vectors,
for all the transformations from world frame to the camera frame.
@param[in] t_world2cam Translation part extracted from the homogeneous matrix that transforms a point
expressed in the world frame to the camera frame (\f$_{}^{c}\textrm{T}_w\f$).
This is a vector (`vector<Mat>`) that contains the `(3x1)` translation vectors for all the transformations
from world frame to the camera frame.
@param[in] R_base2gripper Rotation part extracted from the homogeneous matrix that transforms a point
expressed in the robot base frame to the gripper frame (\f$_{}^{g}\textrm{T}_b\f$).
This is a vector (`vector<Mat>`) that contains the rotation, `(3x3)` rotation matrices or `(3x1)` rotation vectors,
for all the transformations from robot base frame to the gripper frame.
@param[in] t_base2gripper Translation part extracted from the homogeneous matrix that transforms a point
expressed in the robot base frame to the gripper frame (\f$_{}^{g}\textrm{T}_b\f$).
This is a vector (`vector<Mat>`) that contains the `(3x1)` translation vectors for all the transformations
from robot base frame to the gripper frame.
@param[out] R_base2world Estimated `(3x3)` rotation part extracted from the homogeneous matrix that transforms a point
expressed in the robot base frame to the world frame (\f$_{}^{w}\textrm{T}_b\f$).
@param[out] t_base2world Estimated `(3x1)` translation part extracted from the homogeneous matrix that transforms a point
expressed in the robot base frame to the world frame (\f$_{}^{w}\textrm{T}_b\f$).
@param[out] R_gripper2cam Estimated `(3x3)` rotation part extracted from the homogeneous matrix that transforms a point
expressed in the gripper frame to the camera frame (\f$_{}^{c}\textrm{T}_g\f$).
@param[out] t_gripper2cam Estimated `(3x1)` translation part extracted from the homogeneous matrix that transforms a point
expressed in the gripper frame to the camera frame (\f$_{}^{c}\textrm{T}_g\f$).
@param[in] method One of the implemented Robot-World/Hand-Eye calibration methods, see cv::RobotWorldHandEyeCalibrationMethod

The function performs the Robot-World/Hand-Eye calibration using various methods. One approach
consists in estimating first the rotation and then the translation (separable solutions):
  - M. Shah, Solving the robot-world/hand-eye calibration problem using the kronecker product \cite Shah2013SolvingTR

Another approach consists in estimating simultaneously the rotation and the translation (simultaneous solutions),
with the following implemented method:
  - A. Li, L. Wang, and D. Wu, Simultaneous robot-world and hand-eye calibration using dual-quaternions and kronecker product \cite Li2010SimultaneousRA

The following picture describes the Robot-World/Hand-Eye calibration problem where the transformations between a robot and a world frame
and between a robot gripper ("hand") and a camera ("eye") mounted at the robot end-effector have to be estimated.

![](pics/robot-world_hand-eye_figure.png)

The calibration procedure is the following:
  - a static calibration pattern is used to estimate the transformation between the target frame
  and the camera frame
  - the robot gripper is moved in order to acquire several poses
  - for each pose, the homogeneous transformation between the gripper frame and the robot base frame is recorded using for
  instance the robot kinematics
\f[
    \begin{bmatrix}
    X_g\\
    Y_g\\
    Z_g\\
    1
    \end{bmatrix}
    =
    \begin{bmatrix}
    _{}^{g}\textrm{R}_b & _{}^{g}\textrm{t}_b \\
    0_{1 \times 3} & 1
    \end{bmatrix}
    \begin{bmatrix}
    X_b\\
    Y_b\\
    Z_b\\
    1
    \end{bmatrix}
\f]
  - for each pose, the homogeneous transformation between the calibration target frame (the world frame) and the camera frame is recorded using
  for instance a pose estimation method (PnP) from 2D-3D point correspondences
\f[
    \begin{bmatrix}
    X_c\\
    Y_c\\
    Z_c\\
    1
    \end{bmatrix}
    =
    \begin{bmatrix}
    _{}^{c}\textrm{R}_w & _{}^{c}\textrm{t}_w \\
    0_{1 \times 3} & 1
    \end{bmatrix}
    \begin{bmatrix}
    X_w\\
    Y_w\\
    Z_w\\
    1
    \end{bmatrix}
\f]

The Robot-World/Hand-Eye calibration procedure returns the following homogeneous transformations
\f[
    \begin{bmatrix}
    X_w\\
    Y_w\\
    Z_w\\
    1
    \end{bmatrix}
    =
    \begin{bmatrix}
    _{}^{w}\textrm{R}_b & _{}^{w}\textrm{t}_b \\
    0_{1 \times 3} & 1
    \end{bmatrix}
    \begin{bmatrix}
    X_b\\
    Y_b\\
    Z_b\\
    1
    \end{bmatrix}
\f]
\f[
    \begin{bmatrix}
    X_c\\
    Y_c\\
    Z_c\\
    1
    \end{bmatrix}
    =
    \begin{bmatrix}
    _{}^{c}\textrm{R}_g & _{}^{c}\textrm{t}_g \\
    0_{1 \times 3} & 1
    \end{bmatrix}
    \begin{bmatrix}
    X_g\\
    Y_g\\
    Z_g\\
    1
    \end{bmatrix}
\f]

This problem is also known as solving the \f$\mathbf{A}\mathbf{X}=\mathbf{Z}\mathbf{B}\f$ equation, with:
  - \f$\mathbf{A} \Leftrightarrow \hspace{0.1em} _{}^{c}\textrm{T}_w\f$
  - \f$\mathbf{X} \Leftrightarrow \hspace{0.1em} _{}^{w}\textrm{T}_b\f$
  - \f$\mathbf{Z} \Leftrightarrow \hspace{0.1em} _{}^{c}\textrm{T}_g\f$
  - \f$\mathbf{B} \Leftrightarrow \hspace{0.1em} _{}^{g}\textrm{T}_b\f$

\note
At least 3 measurements are required (the input vector sizes must be greater than or equal to 3).
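
A minimal sketch, assuming the world-to-camera and base-to-gripper pose lists (`vector<Mat>`) were
recorded beforehand; all names are illustrative:

@code{.cpp}
cv::Mat R_base2world, t_base2world, R_gripper2cam, t_gripper2cam;
cv::calibrateRobotWorldHandEye(R_world2cam, t_world2cam,
                               R_base2gripper, t_base2gripper,
                               R_base2world, t_base2world,
                               R_gripper2cam, t_gripper2cam,
                               cv::CALIB_ROBOT_WORLD_HAND_EYE_SHAH);
@endcode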

 */
CV_EXPORTS void calibrateRobotWorldHandEye( InputArrayOfArrays R_world2cam, InputArrayOfArrays t_world2cam,
                                              InputArrayOfArrays R_base2gripper, InputArrayOfArrays t_base2gripper,
                                              OutputArray R_base2world, OutputArray t_base2world,
                                              OutputArray R_gripper2cam, OutputArray t_gripper2cam,
                                              RobotWorldHandEyeCalibrationMethod method=CALIB_ROBOT_WORLD_HAND_EYE_SHAH );

/** @brief The methods in this namespace use a so-called fisheye camera model.
  @ingroup calib3d_fisheye
*/
namespace fisheye
{
//! @addtogroup calib3d_fisheye
//! @{

enum{
    CALIB_USE_INTRINSIC_GUESS   = 1 << 0,
    CALIB_RECOMPUTE_EXTRINSIC   = 1 << 1,
    CALIB_CHECK_COND            = 1 << 2,
    CALIB_FIX_SKEW              = 1 << 3,
    CALIB_FIX_K1                = 1 << 4,
    CALIB_FIX_K2                = 1 << 5,
    CALIB_FIX_K3                = 1 << 6,
    CALIB_FIX_K4                = 1 << 7,
    CALIB_FIX_INTRINSIC         = 1 << 8,
    CALIB_FIX_PRINCIPAL_POINT   = 1 << 9,
    CALIB_ZERO_DISPARITY        = 1 << 10,
    CALIB_FIX_FOCAL_LENGTH      = 1 << 11
};

/** @brief Projects points using fisheye model

@param objectPoints Array of object points, 1xN/Nx1 3-channel (or vector\<Point3f\> ), where N is
the number of points in the view.
@param imagePoints Output array of image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, or
vector\<Point2f\>.
@param affine Rigid transformation from object space to camera space, combining the rotation and
translation (an alternative to passing rvec and tvec separately; see the overload below).
@param K Camera intrinsic matrix \f$cameramatrix{K}\f$.
@param D Input vector of distortion coefficients \f$\distcoeffsfisheye\f$.
@param alpha The skew coefficient.
@param jacobian Optional output 2Nx15 jacobian matrix of derivatives of image points with respect
to components of the focal lengths, coordinates of the principal point, distortion coefficients,
rotation vector, translation vector, and the skew. In the old interface different components of
the jacobian are returned via different output parameters.

The function computes projections of 3D points to the image plane given intrinsic and extrinsic
camera parameters. Optionally, the function computes Jacobians - matrices of partial derivatives of
image points coordinates (as functions of all the input parameters) with respect to the particular
parameters, intrinsic and/or extrinsic.
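
A minimal sketch (K and D are assumed to come from a previous fisheye::calibrate run; the pose
here is illustrative):

@code{.cpp}
cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);               // identity rotation
cv::Mat tvec = (cv::Mat_<double>(3, 1) << 0.0, 0.0, 1.0);  // points 1 m ahead
std::vector<cv::Point3f> objectPoints = { {0.0f, 0.0f, 0.0f}, {0.1f, 0.0f, 0.0f} };
std::vector<cv::Point2f> imagePoints;
cv::fisheye::projectPoints(objectPoints, imagePoints, rvec, tvec, K, D);
@endcode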
 */
CV_EXPORTS void projectPoints(InputArray objectPoints, OutputArray imagePoints, const Affine3d& affine,
    InputArray K, InputArray D, double alpha = 0, OutputArray jacobian = noArray());

/** @overload */
CV_EXPORTS_W void projectPoints(InputArray objectPoints, OutputArray imagePoints, InputArray rvec, InputArray tvec,
    InputArray K, InputArray D, double alpha = 0, OutputArray jacobian = noArray());

/** @brief Distorts 2D points using fisheye model.

@param undistorted Array of object points, 1xN/Nx1 2-channel (or vector\<Point2f\> ), where N is
the number of points in the view.
@param K Camera intrinsic matrix \f$cameramatrix{K}\f$.
@param D Input vector of distortion coefficients \f$\distcoeffsfisheye\f$.
@param alpha The skew coefficient.
@param distorted Output array of image points, 1xN/Nx1 2-channel, or vector\<Point2f\> .

Note that the function assumes the camera intrinsic matrix of the undistorted points to be identity.
This means that if you want to distort image points given in pixels, you have to map them through
\f$K^{-1}\f$ first (i.e. convert them to normalized image coordinates).
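
A sketch of that normalization step (it assumes K is a CV_64F cv::Mat; the pixel coordinates are
illustrative):

@code{.cpp}
double fx = K.at<double>(0, 0), fy = K.at<double>(1, 1);
double cx = K.at<double>(0, 2), cy = K.at<double>(1, 2);
std::vector<cv::Point2f> pixels = { {320.f, 240.f} }, normalized, distorted;
for (const cv::Point2f& p : pixels)                      // apply K^{-1}
    normalized.emplace_back(float((p.x - cx) / fx), float((p.y - cy) / fy));
cv::fisheye::distortPoints(normalized, distorted, K, D); // back to pixel coordinates
@endcode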
 */
CV_EXPORTS_W void distortPoints(InputArray undistorted, OutputArray distorted, InputArray K, InputArray D, double alpha = 0);

/** @brief Undistorts 2D points using fisheye model

@param distorted Array of object points, 1xN/Nx1 2-channel (or vector\<Point2f\> ), where N is the
number of points in the view.
@param K Camera intrinsic matrix \f$cameramatrix{K}\f$.
@param D Input vector of distortion coefficients \f$\distcoeffsfisheye\f$.
@param R Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3
1-channel or 1x1 3-channel
@param P New camera intrinsic matrix (3x3) or new projection matrix (3x4)
@param criteria Termination criteria
@param undistorted Output array of image points, 1xN/Nx1 2-channel, or vector\<Point2f\> .
 */
CV_EXPORTS_W void undistortPoints(InputArray distorted, OutputArray undistorted,
    InputArray K, InputArray D, InputArray R = noArray(), InputArray P = noArray(),
    TermCriteria criteria = TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 10, 1e-8));


/** @brief Computes undistortion and rectification maps for the image transform by cv::remap(). If D is
empty, zero distortion is used; if R or P is empty, identity matrices are used.

@param K Camera intrinsic matrix \f$cameramatrix{K}\f$.
@param D Input vector of distortion coefficients \f$\distcoeffsfisheye\f$.
@param R Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3
1-channel or 1x1 3-channel
@param P New camera intrinsic matrix (3x3) or new projection matrix (3x4)
@param size Undistorted image size.
@param m1type Type of the first output map that can be CV_32FC1 or CV_16SC2 . See convertMaps()
for details.
@param map1 The first output map.
@param map2 The second output map.
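
Typical use, as a sketch (cv::remap() is declared in opencv2/imgproc.hpp; K, D, imageSize and the
input frames are assumed to exist): build the maps once, then remap every frame.

@code{.cpp}
cv::Mat map1, map2;
cv::fisheye::initUndistortRectifyMap(K, D, cv::Mat(), K, imageSize,
                                     CV_16SC2, map1, map2);   // empty R -> identity
cv::Mat undistorted;
cv::remap(distortedFrame, undistorted, map1, map2, cv::INTER_LINEAR);
@endcode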
 */
CV_EXPORTS_W void initUndistortRectifyMap(InputArray K, InputArray D, InputArray R, InputArray P,
    const cv::Size& size, int m1type, OutputArray map1, OutputArray map2);

/** @brief Transforms an image to compensate for fisheye lens distortion.

@param distorted image with fisheye lens distortion.
@param undistorted Output image with compensated fisheye lens distortion.
@param K Camera intrinsic matrix \f$cameramatrix{K}\f$.
@param D Input vector of distortion coefficients \f$\distcoeffsfisheye\f$.
@param Knew Camera intrinsic matrix of the undistorted (output) image. By default, it is the identity
matrix, but you may additionally scale and shift the result by using a different matrix.
@param new_size the new size

The function transforms an image to compensate for fisheye lens distortion.

The function is simply a combination of fisheye::initUndistortRectifyMap (with identity R ) and remap
(with bilinear interpolation). See the former function for details of the transformation being
performed.

See below the results of undistortImage.
    -   a\) result of undistort of perspective camera model (all possible coefficients (k_1, k_2, k_3,
        k_4, k_5, k_6) of distortion were optimized under calibration)
    -   b\) result of fisheye::undistortImage of fisheye camera model (all possible coefficients (k_1, k_2,
        k_3, k_4) of fisheye distortion were optimized under calibration)
    -   c\) original image was captured with fisheye lens

Pictures a) and b) are almost the same. But if we consider points located far from the image
center, we can notice that in image a) these points are distorted.

![image](pics/fisheye_undistorted.jpg)
 */
CV_EXPORTS_W void undistortImage(InputArray distorted, OutputArray undistorted,
    InputArray K, InputArray D, InputArray Knew = cv::noArray(), const Size& new_size = Size());

/** @brief Estimates new camera intrinsic matrix for undistortion or rectification.

@param K Camera intrinsic matrix \f$cameramatrix{K}\f$.
@param image_size Size of the image
@param D Input vector of distortion coefficients \f$\distcoeffsfisheye\f$.
@param R Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3
1-channel or 1x1 3-channel
@param P New camera intrinsic matrix (3x3) or new projection matrix (3x4)
@param balance Sets the new focal length in range between the min focal length and the max focal
length. Balance is in range of [0, 1].
@param new_size the new size
@param fov_scale Divisor for new focal length.
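
A sketch combining this function with fisheye::undistortImage (K, D, imageSize and the input image
are assumed to exist; balance = 0.5 is an illustrative trade-off between cropping and black borders):

@code{.cpp}
cv::Mat Knew;
cv::fisheye::estimateNewCameraMatrixForUndistortRectify(K, D, imageSize,
        cv::Mat::eye(3, 3, CV_64F), Knew, 0.5 /* balance */);
cv::Mat undistorted;
cv::fisheye::undistortImage(distorted, undistorted, K, D, Knew);
@endcode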
 */
CV_EXPORTS_W void estimateNewCameraMatrixForUndistortRectify(InputArray K, InputArray D, const Size &image_size, InputArray R,
    OutputArray P, double balance = 0.0, const Size& new_size = Size(), double fov_scale = 1.0);

/** @brief Performs camera calibration

@param objectPoints vector of vectors of calibration pattern points in the calibration pattern
coordinate space.
@param imagePoints vector of vectors of the projections of calibration pattern points.
imagePoints.size() must be equal to objectPoints.size(), and imagePoints[i].size() must be equal
to objectPoints[i].size() for each i.
@param image_size Size of the image used only to initialize the camera intrinsic matrix.
@param K Output 3x3 floating-point camera intrinsic matrix
\f$\cameramatrix{A}\f$ . If
@ref fisheye::CALIB_USE_INTRINSIC_GUESS is specified, some or all of fx, fy, cx, cy must be
initialized before calling the function.
@param D Output vector of distortion coefficients \f$\distcoeffsfisheye\f$.
@param rvecs Output vector of rotation vectors (see Rodrigues ) estimated for each pattern view.
That is, each k-th rotation vector together with the corresponding k-th translation vector (see
the next output parameter description) brings the calibration pattern from the model coordinate
space (in which object points are specified) to the world coordinate space, that is, a real
position of the calibration pattern in the k-th pattern view (k=0.. *M* -1).
@param tvecs Output vector of translation vectors estimated for each pattern view.
@param flags Different flags that may be zero or a combination of the following values:
-   @ref fisheye::CALIB_USE_INTRINSIC_GUESS  K contains valid initial values of
fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image
center ( imageSize is used), and focal distances are computed in a least-squares fashion.
-   @ref fisheye::CALIB_RECOMPUTE_EXTRINSIC  Extrinsic will be recomputed after each iteration
of intrinsic optimization.
-   @ref fisheye::CALIB_CHECK_COND  The functions will check the validity of the condition number.
-   @ref fisheye::CALIB_FIX_SKEW  Skew coefficient (alpha) is set to zero and stays zero.
-   @ref fisheye::CALIB_FIX_K1,..., @ref fisheye::CALIB_FIX_K4 Selected distortion coefficients
are set to zeros and stay zero.
-   @ref fisheye::CALIB_FIX_PRINCIPAL_POINT  The principal point is not changed during the global
optimization. It stays at the center or at a different location specified when @ref fisheye::CALIB_USE_INTRINSIC_GUESS is set too.
-   @ref fisheye::CALIB_FIX_FOCAL_LENGTH The focal length is not changed during the global
optimization. It is the \f$max(width,height)/\pi\f$ or the provided \f$f_x\f$, \f$f_y\f$ when @ref fisheye::CALIB_USE_INTRINSIC_GUESS is set too.
@param criteria Termination criteria for the iterative optimization algorithm.
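
A minimal usage sketch, assuming objectPoints/imagePoints were collected beforehand (e.g. with a
chessboard detector over several views):

@code{.cpp}
cv::Mat K, D;
std::vector<cv::Mat> rvecs, tvecs;
double rms = cv::fisheye::calibrate(objectPoints, imagePoints, imageSize,
        K, D, rvecs, tvecs,
        cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC | cv::fisheye::CALIB_FIX_SKEW);
@endcode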
 */
CV_EXPORTS_W double calibrate(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints, const Size& image_size,
    InputOutputArray K, InputOutputArray D, OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs, int flags = 0,
        TermCriteria criteria = TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 100, DBL_EPSILON));

/** @brief Stereo rectification for fisheye camera model

@param K1 First camera intrinsic matrix.
@param D1 First camera distortion parameters.
@param K2 Second camera intrinsic matrix.
@param D2 Second camera distortion parameters.
@param imageSize Size of the image used for stereo calibration.
@param R Rotation matrix between the coordinate systems of the first and the second
cameras.
@param tvec Translation vector between coordinate systems of the cameras.
@param R1 Output 3x3 rectification transform (rotation matrix) for the first camera.
@param R2 Output 3x3 rectification transform (rotation matrix) for the second camera.
@param P1 Output 3x4 projection matrix in the new (rectified) coordinate systems for the first
camera.
@param P2 Output 3x4 projection matrix in the new (rectified) coordinate systems for the second
camera.
@param Q Output \f$4 \times 4\f$ disparity-to-depth mapping matrix (see reprojectImageTo3D ).
@param flags Operation flags that may be zero or @ref fisheye::CALIB_ZERO_DISPARITY . If the flag is set,
the function makes the principal points of each camera have the same pixel coordinates in the
rectified views. And if the flag is not set, the function may still shift the images in the
horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the
useful image area.
@param newImageSize New image resolution after rectification. The same size should be passed to
#initUndistortRectifyMap (see the stereo_calib.cpp sample in OpenCV samples directory). When (0,0)
is passed (default), it is set to the original imageSize . Setting it to a larger value can help you
preserve details in the original image, especially when there is a big radial distortion.
@param balance Sets the new focal length in range between the min focal length and the max focal
length. Balance is in range of [0, 1].
@param fov_scale Divisor for new focal length.
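
A sketch of rectifying a calibrated fisheye stereo pair (K1/D1/K2/D2, imageSize and the R/tvec
extrinsics are assumed to come from fisheye::stereoCalibrate):

@code{.cpp}
cv::Mat R1, R2, P1, P2, Q;
cv::fisheye::stereoRectify(K1, D1, K2, D2, imageSize, R, tvec,
                           R1, R2, P1, P2, Q,
                           cv::fisheye::CALIB_ZERO_DISPARITY);
// Per-camera remap() maps can then be built with fisheye::initUndistortRectifyMap:
cv::Mat map1x, map1y;
cv::fisheye::initUndistortRectifyMap(K1, D1, R1, P1, imageSize,
                                     CV_32FC1, map1x, map1y);
@endcode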
 */
CV_EXPORTS_W void stereoRectify(InputArray K1, InputArray D1, InputArray K2, InputArray D2, const Size &imageSize, InputArray R, InputArray tvec,
    OutputArray R1, OutputArray R2, OutputArray P1, OutputArray P2, OutputArray Q, int flags, const Size &newImageSize = Size(),
    double balance = 0.0, double fov_scale = 1.0);

/** @brief Performs stereo calibration

@param objectPoints Vector of vectors of the calibration pattern points.
@param imagePoints1 Vector of vectors of the projections of the calibration pattern points,
observed by the first camera.
@param imagePoints2 Vector of vectors of the projections of the calibration pattern points,
observed by the second camera.
@param K1 Input/output first camera intrinsic matrix:
\f$\vecthreethree{f_x^{(j)}}{0}{c_x^{(j)}}{0}{f_y^{(j)}}{c_y^{(j)}}{0}{0}{1}\f$ , \f$j = 0,\, 1\f$ . If
any of @ref fisheye::CALIB_USE_INTRINSIC_GUESS , @ref fisheye::CALIB_FIX_INTRINSIC are specified,
some or all of the matrix components must be initialized.
@param D1 Input/output vector of distortion coefficients \f$\distcoeffsfisheye\f$ of 4 elements.
@param K2 Input/output second camera intrinsic matrix. The parameter is similar to K1 .
@param D2 Input/output lens distortion coefficients for the second camera. The parameter is
similar to D1 .
@param imageSize Size of the image used only to initialize camera intrinsic matrix.
@param R Output rotation matrix between the 1st and the 2nd camera coordinate systems.
@param T Output translation vector between the coordinate systems of the cameras.
@param rvecs Output vector of rotation vectors ( @ref Rodrigues ) estimated for each pattern view in the
coordinate system of the first camera of the stereo pair (e.g. std::vector<cv::Mat>). More in detail, each
i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter
description) brings the calibration pattern from the object coordinate space (in which object points are
specified) to the camera coordinate space of the first camera of the stereo pair. In more technical terms,
the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space
to camera coordinate space of the first camera of the stereo pair.
@param tvecs Output vector of translation vectors estimated for each pattern view, see parameter description
of previous output parameter ( rvecs ).
@param flags Different flags that may be zero or a combination of the following values:
-   @ref fisheye::CALIB_FIX_INTRINSIC  Fix K1, K2? and D1, D2? so that only R, T matrices
are estimated.
-   @ref fisheye::CALIB_USE_INTRINSIC_GUESS  K1, K2 contain valid initial values of
fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image
center (imageSize is used), and focal distances are computed in a least-squares fashion.
-   @ref fisheye::CALIB_RECOMPUTE_EXTRINSIC  Extrinsic will be recomputed after each iteration
of intrinsic optimization.
-   @ref fisheye::CALIB_CHECK_COND  The functions will check the validity of the condition number.
-   @ref fisheye::CALIB_FIX_SKEW  Skew coefficient (alpha) is set to zero and stays zero.
-   @ref fisheye::CALIB_FIX_K1,..., @ref fisheye::CALIB_FIX_K4 Selected distortion coefficients are set to zeros and stay
zero.
@param criteria Termination criteria for the iterative optimization algorithm.
 */
CV_EXPORTS_W double stereoCalibrate(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints1, InputArrayOfArrays imagePoints2,
                                    InputOutputArray K1, InputOutputArray D1, InputOutputArray K2, InputOutputArray D2, Size imageSize,
                                    OutputArray R, OutputArray T, OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs, int flags = fisheye::CALIB_FIX_INTRINSIC,
                                    TermCriteria criteria = TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 100, DBL_EPSILON));

/// @overload
CV_EXPORTS_W double stereoCalibrate(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints1, InputArrayOfArrays imagePoints2,
                                    InputOutputArray K1, InputOutputArray D1, InputOutputArray K2, InputOutputArray D2, Size imageSize,
                                    OutputArray R, OutputArray T, int flags = fisheye::CALIB_FIX_INTRINSIC,
                                    TermCriteria criteria = TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 100, DBL_EPSILON));

//! @} calib3d_fisheye
} //end namespace fisheye
} //end namespace cv

#endif