Commit 5ffb53f2 authored by Martin Ueding

Add example data types for calibrateCamera call

It took me a while to figure out what was meant by

    OpenCV Error: Assertion failed (i < 0) in getMat

While searching for this error message I found [a list of error
messages](https://adventuresandwhathaveyou.wordpress.com/2014/03/14/opencv-error-messages-suck/)
which also explained the problem: the data type for `rvecs` has to be a
`std::vector<cv::Mat>`, not a plain `cv::Mat`.

After I fixed that, I got the next error message:

    OpenCV Error: Assertion failed (ni > 0 && ni == ni1) in
    collectCalibrationData, file
    /build/buildd/opencv-2.4.9+dfsg/modules/calib3d/src/calibration.cpp,
    line 3193

The problem here was that my data type for `objectPoints` was just a
`vector<Vec3f>` instead of a `vector<vector<Vec3f>>`.
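
For reference, here is a minimal sketch of a call with the data types that these two assertions expect. It is not part of the commit: the grid layout, image size, initial camera matrix, and the use of `CV_CALIB_USE_INTRINSIC_GUESS` are made-up example values, and the corner coordinates are synthetic stand-ins for what a detector such as `cv::findChessboardCorners` would return:

    #include <opencv2/opencv.hpp>

    #include <iostream>
    #include <vector>

    int main()
    {
        // One inner vector per pattern view. Here: a single fronto-parallel
        // view of a 3x3 planar grid, with Z = 0 for every object point.
        std::vector<std::vector<cv::Vec3f> > objectPoints(1);
        std::vector<std::vector<cv::Vec2f> > imagePoints(1);
        for (int y = 0; y < 3; ++y) {
            for (int x = 0; x < 3; ++x) {
                objectPoints[0].push_back(cv::Vec3f(x, y, 0.0f));
                // Synthetic "detected corners"; real code would take these
                // from a detector such as cv::findChessboardCorners.
                imagePoints[0].push_back(cv::Vec2f(100.0f + 50.0f * x,
                                                   100.0f + 50.0f * y));
            }
        }

        // Made-up initial guess for the intrinsics, so that
        // CV_CALIB_USE_INTRINSIC_GUESS lets this single-view sketch run.
        cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
            500.0,   0.0, 100.0,
              0.0, 500.0, 100.0,
              0.0,   0.0,   1.0);
        cv::Mat distCoeffs;

        // rvecs and tvecs must be std::vector<cv::Mat>, one entry per view;
        // passing a single cv::Mat triggers the "(i < 0) in getMat" assertion.
        std::vector<cv::Mat> rvecs, tvecs;

        double rms = cv::calibrateCamera(objectPoints, imagePoints,
                                         cv::Size(640, 480), cameraMatrix,
                                         distCoeffs, rvecs, tvecs,
                                         CV_CALIB_USE_INTRINSIC_GUESS);
        std::cout << "RMS reprojection error: " << rms << std::endl;
        return 0;
    }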

To save other people the time spent looking for this, I added explicit
examples of the required data types to the documentation of the function.
I had to re-read the current version a couple of times before I could
work out the required nesting levels of `vector<>`. Having these examples
would have really helped me there.
Parent dcf177e3
@@ -697,19 +697,19 @@ CV_EXPORTS_W bool findCirclesGrid( InputArray image, Size patternSize,
 /** @brief Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.
-@param objectPoints In the new interface it is a vector of vectors of calibration pattern points
-in the calibration pattern coordinate space. The outer vector contains as many elements as the
-number of the pattern views. If the same calibration pattern is shown in each view and it is fully
-visible, all the vectors will be the same. Although, it is possible to use partially occluded
-patterns, or even different patterns in different views. Then, the vectors will be different. The
-points are 3D, but since they are in a pattern coordinate system, then, if the rig is planar, it
-may make sense to put the model to a XY coordinate plane so that Z-coordinate of each input object
-point is 0.
+@param objectPoints In the new interface it is a vector of vectors of calibration pattern points in
+the calibration pattern coordinate space (e.g. std::vector<std::vector<cv::Vec3f>>). The outer
+vector contains as many elements as the number of the pattern views. If the same calibration pattern
+is shown in each view and it is fully visible, all the vectors will be the same. Although, it is
+possible to use partially occluded patterns, or even different patterns in different views. Then,
+the vectors will be different. The points are 3D, but since they are in a pattern coordinate system,
+then, if the rig is planar, it may make sense to put the model to a XY coordinate plane so that
+Z-coordinate of each input object point is 0.
 In the old interface all the vectors of object points from different views are concatenated
 together.
-@param imagePoints In the new interface it is a vector of vectors of the projections of
-calibration pattern points. imagePoints.size() and objectPoints.size() and imagePoints[i].size()
-must be equal to objectPoints[i].size() for each i.
+@param imagePoints In the new interface it is a vector of vectors of the projections of calibration
+pattern points (e.g. std::vector<std::vector<cv::Vec2f>>). imagePoints.size() and
+objectPoints.size() and imagePoints[i].size() must be equal to objectPoints[i].size() for each i.
 In the old interface all the vectors of object points from different views are concatenated
 together.
 @param imageSize Size of the image used only to initialize the intrinsic camera matrix.
@@ -719,11 +719,11 @@ and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy mu
 initialized before calling the function.
 @param distCoeffs Output vector of distortion coefficients
 \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6],[s_1, s_2, s_3, s_4]])\f$ of 4, 5, 8 or 12 elements.
-@param rvecs Output vector of rotation vectors (see Rodrigues ) estimated for each pattern view.
-That is, each k-th rotation vector together with the corresponding k-th translation vector (see
-the next output parameter description) brings the calibration pattern from the model coordinate
-space (in which object points are specified) to the world coordinate space, that is, a real
-position of the calibration pattern in the k-th pattern view (k=0.. *M* -1).
+@param rvecs Output vector of rotation vectors (see Rodrigues ) estimated for each pattern view
+(e.g. std::vector<cv::Mat>). That is, each k-th rotation vector together with the corresponding
+k-th translation vector (see the next output parameter description) brings the calibration pattern
+from the model coordinate space (in which object points are specified) to the world coordinate
+space, that is, a real position of the calibration pattern in the k-th pattern view (k=0.. *M* -1).
 @param tvecs Output vector of translation vectors estimated for each pattern view.
 @param flags Different flags that may be zero or a combination of the following values:
 - **CV_CALIB_USE_INTRINSIC_GUESS** cameraMatrix contains valid initial values of