Commit 69731e3c authored by gineshidalgo99

Camera parameters read at runtime

Parent 56833873
......@@ -102,6 +102,9 @@ distribute/
*leveldb
*lmdb
# Camera parameters
*.xml
######################### Compilation (build, distribute & bins) Files / Folders #########################
*.bin
*.testbin
......
......@@ -52,11 +52,12 @@ This demo assumes n arbitrary stereo cameras from the FLIR company (formerly Poi
## Camera Calibration
The user must manually get the intrinsic and extrinsic parameters of the cameras, introduce them in `src/experimental/3d/cameraParameters.cpp`, and recompile OpenPose.
The user must manually get the intrinsic and extrinsic parameters of the FLIR cameras:
The program uses 3 cameras by default, but cameras can be added or removed from `cameraParameters.hpp` by adding or removing elements from `INTRINSICS`, `DISTORTIONS` and `M_EACH_CAMERA`. `INTRINSICS` corresponds to the intrinsic parameters, `DISTORTIONS` to the distortion coefficients, and `M_EACH_CAMERA` to the extrinsic parameters of each camera with respect to camera 1, i.e., camera 1 is taken as the coordinate origin. However, this can be changed by the user by setting these parameters differently.
3D OpenPose uses the 8-distortion-parameter version of OpenCV by default. Internally, the distortion parameters are used by the OpenCV function [undistort()](http://docs.opencv.org/3.2.0/da/d54/group__imgproc__transform.html#ga69f2545a8b62a6b0fc2ee060dc30559d) to rectify the images. This function can take either 4-, 5- or 8-parameter distortion coefficients (OpenCV 3.X also adds 12- and 14-parameter alternatives). Therefore, any of these versions (4, 5, 8, 12 or 14) will work in 3D OpenPose; the user just needs to fill the `DISTORTION_i` variables in `cameraParameters.hpp` with the desired number of distortion parameters for each camera.
1. Create an XML file for each camera, named `models/cameraParameters/flir/{camera_serial_number}.xml`.
2. The elements inside each xml file are the extrinsic parameters of the camera, the intrinsic parameters, and the distortion coefficients. Copy the format from `models/cameraParameters/flir/17012332.xml.example`.
3. The program can use any arbitrary number of cameras. Even if many camera files are placed in `models/cameraParameters/flir/`, the program checks at runtime which FLIR cameras are detected and reads only those cameras' parameters. If the file corresponding to any camera detected at runtime is not found, OpenPose will return an error.
4. In the example XML, OpenPose uses the 8-distortion-parameter version of OpenCV. The distortion parameters are internally used by the OpenCV function [undistort()](http://docs.opencv.org/3.2.0/da/d54/group__imgproc__transform.html#ga69f2545a8b62a6b0fc2ee060dc30559d) to rectify the images. This function can take either 4-, 5- or 8-parameter distortion coefficients (OpenCV 3.X also adds 12- and 14-parameter alternatives). Therefore, any of these versions (4, 5, 8, 12 or 14) will work in 3D OpenPose.
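The example parameter file is a plain OpenCV `FileStorage` XML document, so its structure can be inspected without OpenPose at all. As a hedged illustration (a minimal sketch using only the Python standard library, not OpenCV's own reader; `load_camera_parameters` and the truncated matrix values are hypothetical, not part of any OpenPose API):

```python
import xml.etree.ElementTree as ET

def load_opencv_matrix(node):
    """Parse one <Name type_id="opencv-matrix"> node into (rows, cols, values)."""
    rows = int(node.find("rows").text)
    cols = int(node.find("cols").text)
    values = [float(v) for v in node.find("data").text.split()]
    assert len(values) == rows * cols  # sanity check on the file format
    return rows, cols, values

def load_camera_parameters(xml_text):
    """Return all matrices of one camera parameter file, keyed by tag name."""
    root = ET.fromstring(xml_text)
    return {child.tag: load_opencv_matrix(child) for child in root}

# Trimmed version of the example file's Intrinsics block (values shortened)
example = """<?xml version="1.0"?>
<opencv_storage>
<Intrinsics type_id="opencv-matrix">
<rows>3</rows><cols>3</cols><dt>d</dt>
<data>817.93 0. 600.70 0. 816.51 517.84 0. 0. 1.</data></Intrinsics>
</opencv_storage>"""

params = load_camera_parameters(example)
rows, cols, values = params["Intrinsics"]
```

In practice OpenPose reads these files through `cv::FileStorage`; this sketch only shows that the on-disk format is ordinary XML the user can edit by hand.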
......
......@@ -148,6 +148,7 @@ Each flag is divided into flag name, default value, and description.
- DEFINE_int32(frame_rotate, 0, "Rotate each frame, 4 possible values: 0, 90, 180, 270.");
- DEFINE_bool(frames_repeat, false, "Repeat frames when finished.");
- DEFINE_bool(process_real_time, false, "Enable to keep the original source frame rate (e.g. for video). If the processing time is too long, it will skip frames. If it is too fast, it will slow it down.");
- DEFINE_string(camera_parameter_folder, "models/cameraParameters/", "String with the folder where the camera parameters are located.");
3. OpenPose
- DEFINE_string(model_folder, "models/", "Folder path (absolute or relative) where the models (pose, face, ...) are located.");
......
......@@ -227,7 +227,7 @@ You can include the 3D reconstruction module by:
1. Right-click on `Solution 'OpenPose'` of the `Solution Explorer` window, usually placed at the top-right part of the VS screen.
2. Click on `Properties`. Go to `Configuration Properties` -> `Configuration` and check `Build` for the `OpenPose3DReconstruction` project.
After installation, check the [doc/openpose_3d_reconstruction_demo.md](./openpose_3d_reconstruction_demo.md) instructions.
After installation, check the [doc/3d_reconstruction_demo.md](./3d_reconstruction_demo.md) instructions.
......
......@@ -184,6 +184,7 @@ OpenPose Library - Release Notes
12. Added flag: number_people_max to optionally select the maximum number of people to be detected.
13. 3-D reconstruction module forces the user to set `number_people_max 1` to avoid errors (as it assumes only 1 person per image).
14. Removed old `windows/` version. CMake is the only Windows version available.
15. Camera parameters (FLIR cameras) are read from disk at runtime rather than being hard-coded at compile time.
2. Functions or parameters renamed:
1. Flag `no_display` renamed as `display`, able to select between `NoDisplay`, `Display2D`, `Display3D`, and `DisplayAll`.
2. 3-D reconstruction demo is now inside the OpenPose demo binary.
......
......@@ -62,6 +62,7 @@ DEFINE_int32(frame_rotate, 0, "Rotate each frame, 4 po
DEFINE_bool(frames_repeat, false, "Repeat frames when finished.");
DEFINE_bool(process_real_time, false, "Enable to keep the original source frame rate (e.g. for video). If the processing time is"
" too long, it will skip frames. If it is too fast, it will slow it down.");
DEFINE_string(camera_parameter_folder, "models/cameraParameters/", "String with the folder where the camera parameters are located.");
// OpenPose
DEFINE_string(model_folder, "models/", "Folder path (absolute or relative) where the models (pose, face, ...) are located.");
DEFINE_string(output_resolution, "-1x-1", "The image resolution (display and output). Use \"-1x-1\" to force the program to use the"
......@@ -234,7 +235,8 @@ int openPoseDemo()
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// producerType
const auto producerSharedPtr = op::flagsToProducer(FLAGS_image_dir, FLAGS_video, FLAGS_ip_camera, FLAGS_camera,
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps);
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps,
FLAGS_camera_parameter_folder);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
......
......@@ -71,6 +71,7 @@ DEFINE_int32(frame_rotate, 0, "Rotate each frame, 4 po
DEFINE_bool(frames_repeat, false, "Repeat frames when finished.");
DEFINE_bool(process_real_time, false, "Enable to keep the original source frame rate (e.g. for video). If the processing time is"
" too long, it will skip frames. If it is too fast, it will slow it down.");
DEFINE_string(camera_parameter_folder, "models/cameraParameters/", "String with the folder where the camera parameters are located.");
// OpenPose
DEFINE_string(model_folder, "models/", "Folder path (absolute or relative) where the models (pose, face, ...) are located.");
DEFINE_string(output_resolution, "-1x-1", "The image resolution (display and output). Use \"-1x-1\" to force the program to use the"
......@@ -236,7 +237,8 @@ int openPoseTutorialWrapper4()
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// producerType
const auto producerSharedPtr = op::flagsToProducer(FLAGS_image_dir, FLAGS_video, FLAGS_ip_camera, FLAGS_camera,
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps);
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps,
FLAGS_camera_parameter_folder);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
......
......@@ -42,6 +42,7 @@ DEFINE_bool(flir_camera, false, "Whether to use FLIR (Po
DEFINE_string(ip_camera, "", "String with the IP camera URL. It supports protocols like RTSP and HTTP.");
DEFINE_bool(process_real_time, false, "Enable to keep the original source frame rate (e.g. for video). If the processing time is"
" too long, it will skip frames. If it is too fast, it will slow it down.");
DEFINE_string(camera_parameter_folder, "models/cameraParameters/", "String with the folder where the camera parameters are located.");
// OpenPose
DEFINE_string(output_resolution, "-1x-1", "The image resolution (display and output). Use \"-1x-1\" to force the program to use the"
" input image resolution.");
......@@ -63,7 +64,8 @@ int openPoseTutorialThread1()
const auto outputSize = op::flagsToPoint(FLAGS_output_resolution, "-1x-1");
// producerType
const auto producerSharedPtr = op::flagsToProducer(FLAGS_image_dir, FLAGS_video, FLAGS_ip_camera, FLAGS_camera,
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps);
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps,
FLAGS_camera_parameter_folder);
const auto displayProducerFpsMode = (FLAGS_process_real_time
? op::ProducerFpsMode::OriginalFps : op::ProducerFpsMode::RetrievalFps);
producerSharedPtr->setProducerFpsMode(displayProducerFpsMode);
......
......@@ -43,6 +43,7 @@ DEFINE_bool(flir_camera, false, "Whether to use FLIR (Po
DEFINE_string(ip_camera, "", "String with the IP camera URL. It supports protocols like RTSP and HTTP.");
DEFINE_bool(process_real_time, false, "Enable to keep the original source frame rate (e.g. for video). If the processing time is"
" too long, it will skip frames. If it is too fast, it will slow it down.");
DEFINE_string(camera_parameter_folder, "models/cameraParameters/", "String with the folder where the camera parameters are located.");
// OpenPose
DEFINE_string(output_resolution, "-1x-1", "The image resolution (display and output). Use \"-1x-1\" to force the program to use the"
" input image resolution.");
......@@ -97,7 +98,8 @@ int openPoseTutorialThread2()
const auto outputSize = op::flagsToPoint(FLAGS_output_resolution, "-1x-1");
// producerType
const auto producerSharedPtr = op::flagsToProducer(FLAGS_image_dir, FLAGS_video, FLAGS_ip_camera, FLAGS_camera,
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps);
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps,
FLAGS_camera_parameter_folder);
const auto displayProducerFpsMode = (FLAGS_process_real_time
? op::ProducerFpsMode::OriginalFps : op::ProducerFpsMode::RetrievalFps);
producerSharedPtr->setProducerFpsMode(displayProducerFpsMode);
......
......@@ -61,6 +61,7 @@ DEFINE_int32(frame_rotate, 0, "Rotate each frame, 4 po
DEFINE_bool(frames_repeat, false, "Repeat frames when finished.");
DEFINE_bool(process_real_time, false, "Enable to keep the original source frame rate (e.g. for video). If the processing time is"
" too long, it will skip frames. If it is too fast, it will slow it down.");
DEFINE_string(camera_parameter_folder, "models/cameraParameters/", "String with the folder where the camera parameters are located.");
// OpenPose
DEFINE_string(model_folder, "models/", "Folder path (absolute or relative) where the models (pose, face, ...) are located.");
DEFINE_string(output_resolution, "-1x-1", "The image resolution (display and output). Use \"-1x-1\" to force the program to use the"
......@@ -311,7 +312,8 @@ int openPoseTutorialWrapper1()
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// producerType
const auto producerSharedPtr = op::flagsToProducer(FLAGS_image_dir, FLAGS_video, FLAGS_ip_camera, FLAGS_camera,
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps);
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps,
FLAGS_camera_parameter_folder);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
......
......@@ -6,13 +6,31 @@
namespace op
{
OP_API const cv::Mat getIntrinsics(const int cameraIndex);
class OP_API CameraParameterReader
{
public:
explicit CameraParameterReader();
OP_API const cv::Mat getDistorsion(const int cameraIndex);
void readParameters(const std::string& cameraParameterPath,
const std::vector<std::string>& serialNumbers);
OP_API cv::Mat getM(const int cameraIndex);
unsigned long long getNumberCameras() const;
OP_API unsigned long long getNumberCameras();
const std::vector<cv::Mat>& getCameraMatrices() const;
const std::vector<cv::Mat>& getCameraIntrinsics() const;
const std::vector<cv::Mat>& getCameraDistortions() const;
private:
std::vector<std::string> mSerialNumbers;
unsigned long long mNumberCameras;
std::vector<cv::Mat> mCameraMatrices;
std::vector<cv::Mat> mCameraIntrinsics;
std::vector<cv::Mat> mCameraDistortions;
DELETE_COPY(CameraParameterReader);
};
}
#endif // OPENPOSE_EXPERIMENTAL_3D_CAMERA_PARAMETER_READER_HPP
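The reader's contract can be summarized as: load one parameter file per detected serial number, fail loudly if any of the three matrices is missing or empty, and expose the loaded vectors through the const getters. A hedged sketch of that flow (plain Python; `load_file` and `read_parameters` are hypothetical stand-ins, not the OpenPose API):

```python
def read_parameters(load_file, serial_numbers):
    """Sketch of CameraParameterReader::readParameters: one parameter file per
    camera serial number, with an explicit error when any of the three matrices
    (CameraMatrix, Intrinsics, Distortion) is missing or empty. `load_file` is a
    hypothetical callback returning a dict of those three matrices."""
    camera_extrinsics, camera_intrinsics, camera_distortions = [], [], []
    for serial in serial_numbers:
        parameters = load_file(serial)
        for name in ("CameraMatrix", "Intrinsics", "Distortion"):
            if not parameters.get(name):
                raise ValueError("Error reading " + name + " of camera " + serial
                                 + ". Is its format valid? Check the example XML file.")
        camera_extrinsics.append(parameters["CameraMatrix"])
        camera_intrinsics.append(parameters["Intrinsics"])
        camera_distortions.append(parameters["Distortion"])
    return camera_extrinsics, camera_intrinsics, camera_distortions
```

This mirrors the per-matrix error messages of the C++ implementation: an incomplete file for any detected camera aborts loading instead of silently continuing.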
......@@ -8,6 +8,8 @@
namespace op
{
OP_API std::string dataFormatToString(const DataFormat dataFormat);
OP_API DataFormat stringToDataFormat(const std::string& dataFormat);
// Save custom float format
......@@ -24,16 +26,16 @@ namespace op
// Save/load json, xml, yaml, yml
OP_API void saveData(const std::vector<cv::Mat>& cvMats, const std::vector<std::string>& cvMatNames,
const std::string& fileNameNoExtension, const DataFormat format);
const std::string& fileNameNoExtension, const DataFormat dataFormat);
OP_API void saveData(const cv::Mat& cvMat, const std::string cvMatName, const std::string& fileNameNoExtension,
const DataFormat format);
const DataFormat dataFormat);
OP_API std::vector<cv::Mat> loadData(const std::vector<std::string>& cvMatNames,
const std::string& fileNameNoExtension, const DataFormat format);
const std::string& fileNameNoExtension, const DataFormat dataFormat);
OP_API cv::Mat loadData(const std::string& cvMatName, const std::string& fileNameNoExtension,
const DataFormat format);
const DataFormat dataFormat);
// Json - Saving as *.json not available in OpenCV versions < 3.0, this function is a quick fix
OP_API void savePeopleJson(const Array<float>& keypoints,
......
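The `DataFormat` argument ultimately just selects a file extension for `cv::FileStorage`. A hedged sketch of the mapping (plain Python; the enum is modeled as strings, and `get_full_name` is a hypothetical stand-in for the internal `getFullName` helper):

```python
# Sketch of the DataFormat <-> extension mapping behind dataFormatToString,
# stringToDataFormat, and getFullName (names modeled as plain strings here)
FORMAT_TO_EXTENSION = {"Json": "json", "Xml": "xml", "Yaml": "yaml", "Yml": "yml"}
EXTENSION_TO_FORMAT = {ext: fmt for fmt, ext in FORMAT_TO_EXTENSION.items()}

def get_full_name(file_name_no_extension, data_format):
    # Unknown formats raise, mirroring the "Undefined DataFormat" error path
    if data_format not in FORMAT_TO_EXTENSION:
        raise ValueError("Undefined DataFormat.")
    return file_name_no_extension + "." + FORMAT_TO_EXTENSION[data_format]
```

For the camera parameter files, the reader uses the XML format, so `17012332` becomes `17012332.xml` under the parameter folder.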
......@@ -3,11 +3,12 @@
#include <openpose/core/common.hpp>
#include <openpose/producer/producer.hpp>
#include <openpose/producer/spinnakerWrapper.hpp>
namespace op
{
/**
* FlirReader is an abstract class to extract frames from an image directory. Its interface imitates the
* FlirReader is an abstract class to extract frames from a FLIR stereo-camera system. Its interface imitates the
* cv::VideoCapture class, so it can be used in much the same way. In that sense, it is
* quite similar to VideoReader and WebcamReader.
*/
......@@ -15,11 +16,9 @@ namespace op
{
public:
/**
* Constructor of FlirReader. It sets the image directory path from which the images will be loaded
* and generates a std::vector<std::string> with the list of images on that directory.
* @param imageDirectoryPath const std::string parameter with the folder path containing the images.
* Constructor of FlirReader. It opens all the available FLIR cameras
*/
explicit FlirReader();
explicit FlirReader(const std::string& cameraParametersPath);
~FlirReader();
......@@ -27,10 +26,7 @@ namespace op
std::string getFrameName();
inline bool isOpened() const
{
return true;
}
bool isOpened() const;
void release();
......@@ -39,11 +35,7 @@ namespace op
void set(const int capProperty, const double value);
private:
// PIMPL idiom
// http://www.cppsamples.com/common-tasks/pimpl.html
struct ImplFlirReader;
std::shared_ptr<ImplFlirReader> upImpl;
SpinnakerWrapper mSpinnakerWrapper;
Point<int> mResolution;
long long mFrameNameCounter;
......
......@@ -8,6 +8,7 @@
#include <openpose/producer/imageDirectoryReader.hpp>
#include <openpose/producer/ipCameraReader.hpp>
#include <openpose/producer/producer.hpp>
#include <openpose/producer/spinnakerWrapper.hpp>
#include <openpose/producer/videoCaptureReader.hpp>
#include <openpose/producer/videoReader.hpp>
#include <openpose/producer/webcamReader.hpp>
......
#ifndef OPENPOSE_PRODUCER_SPINNAKER_WRAPPER_HPP
#define OPENPOSE_PRODUCER_SPINNAKER_WRAPPER_HPP
#include <openpose/core/common.hpp>
namespace op
{
/**
* SpinnakerWrapper encapsulates the Spinnaker SDK. It decouples the final interface (meant to imitate
* cv::VideoCapture) from the Spinnaker SDK itself.
*/
class OP_API SpinnakerWrapper
{
public:
/**
* Constructor of SpinnakerWrapper. It opens all the available FLIR cameras
*/
explicit SpinnakerWrapper(const std::string& cameraParameterPath);
~SpinnakerWrapper();
std::vector<cv::Mat> getRawFrames();
std::vector<cv::Mat> getCameraMatrices() const;
Point<int> getResolution() const;
bool isOpened() const;
void release();
private:
// PIMPL idiom
// http://www.cppsamples.com/common-tasks/pimpl.html
struct ImplSpinnakerWrapper;
std::shared_ptr<ImplSpinnakerWrapper> upImpl;
DELETE_COPY(SpinnakerWrapper);
};
}
#endif // OPENPOSE_PRODUCER_SPINNAKER_WRAPPER_HPP
......@@ -24,7 +24,8 @@ namespace op
const std::string& ipCameraPath, const int webcamIndex,
const bool flirCamera = false,
const std::string& webcamResolution = "1280x720",
const double webcamFps = 30.);
const double webcamFps = 30.,
const std::string& cameraParameterPath = "models/cameraParameters/");
OP_API std::vector<HeatMapType> flagsToHeatMaps(const bool heatMapsAddParts = false,
const bool heatMapsAddBkg = false,
......
<?xml version="1.0"?>
<opencv_storage>
<CameraMatrix type_id="opencv-matrix">
<rows>3</rows>
<cols>4</cols>
<dt>d</dt>
<data>
1. 0. 0. 0. 0. 1. 0. 0. 0. 0. 1. 0.</data></CameraMatrix>
<Intrinsics type_id="opencv-matrix">
<rows>3</rows>
<cols>3</cols>
<dt>d</dt>
<data>
8.1793481631740565e+02 0. 6.0070689997785121e+02 0.
8.1651774059837908e+02 5.1784529566329593e+02 0. 0. 1.</data></Intrinsics>
<Distortion type_id="opencv-matrix">
<rows>8</rows>
<cols>1</cols>
<dt>d</dt>
<data>
-1.8102158829399091e+00 9.1966147162623262e+00
-4.4293900343777355e-04 1.3638377686816653e-03
1.3303863414979364e+00 -1.4189051636354870e+00
8.4725535468475819e+00 4.7911023525901033e+00</data></Distortion>
</opencv_storage>
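In this example file, `CameraMatrix` holds the 3×4 extrinsics (identity here, since this camera is taken as the coordinate origin) and `Intrinsics` the 3×3 calibration matrix; the reader multiplies them to obtain the full 3×4 projection matrix. A minimal sketch of that composition in plain Python (no NumPy; the numeric values are truncated copies of the ones above):

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

intrinsics = [[817.934, 0.0, 600.707],
              [0.0, 816.518, 517.845],
              [0.0, 0.0, 1.0]]
# Identity extrinsics, as in the example file (camera 1 is the coordinate origin)
extrinsics = [[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]]
projection = matmul(intrinsics, extrinsics)  # 3x4 full camera matrix
```

With identity extrinsics, the projection matrix is simply the intrinsics with a zero fourth column; for the other cameras, `CameraMatrix` carries the rotation and translation relative to camera 1.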
#include <vector>
#include <openpose/utilities/errorAndLog.hpp>
#include <openpose/filestream/fileStream.hpp>
#include <openpose/3d/cameraParameterReader.hpp>
namespace op
{
// User configurable code
// Intrinsic and distortion parameters
// Camera 1 parameters
const cv::Mat INTRINSIC_1 = (cv::Mat_<double>(3, 3) << 817.93481631740565, 0, 600.70689997785121,
0, 816.51774059837908, 517.84529566329593,
0, 0, 1);
const cv::Mat DISTORTION_1 = (cv::Mat_<double>(8, 1) <<
-1.8102158829399091, 9.1966147162623262, -0.00044293900343777355, 0.0013638377686816653,
1.3303863414979364, -1.418905163635487, 8.4725535468475819, 4.7911023525901033);
// Camera 2 parameters
const cv::Mat INTRINSIC_2 = (cv::Mat_<double>(3, 3) << 816.20921132436638, 0, 612.67087968681585,
0, 816.18292222910486, 530.47901782670431,
0, 0, 1);
const cv::Mat DISTORTION_2 = (cv::Mat_<double>(8, 1) <<
-5.1088507540294881, 133.63995617304997, -0.0010048069080912836, 0.00018825291386406282,
20.688286893903879, -4.7604289550474768, 132.42412342224557, 70.01195364029752);
const cv::Mat INTRINSIC_3 = (cv::Mat_<double>(3, 3) << 798.42980806905666, 0, 646.48130011561727,
0, 798.46535448393979, 523.91590563194586,
0, 0, 1);
// Camera 3
const cv::Mat DISTORTION_3 = (cv::Mat_<double>(8, 1) <<
-0.57530495294002304, -0.54721992620722582, -0.00037614702677289967, -0.00081995658363481598,
-0.020321660897680775, -0.18040544059116842, -0.87724444571603022, -0.13136636671099691);
// // User configurable parameters
// // Camera matrices (extrinsic parameters) - rotation and pose orientation between cameras
// // Camera 1
// const cv::Mat M_1_1 = (cv::Mat_<double>(3, 4) << 1, 0, 0, 0,
// 0, 1, 0, 0,
// 0, 0, 1, 0);
// // Not working on Windows
// // const cv::Mat M_1_1 = cv::Mat::eye(3, 4, CV_64F);
// // From camera 1 to 2
// const cv::Mat M_1_2 = (cv::Mat_<double>(3, 4)
// << 0.999962504862692, -0.00165862051503619, 0.00849928507093793, -238.301309354482,
// 0.00176155163779584, 0.999925029704659, -0.0121174215889211, 4.75863886121558,
// -0.00847854967298925, 0.0121319391740716, 0.999890459124058, 15.9219925821916);
// // From camera 1 to 3
// const cv::Mat M_1_3 = (cv::Mat_<double>(3, 4)
// << 0.995809442124071, -0.000473104796892308, 0.0914512501193800, -461.301274485705,
// 0.00165046455210419, 0.999916727562850, -0.0127989806923977, 6.22648121362088,
// -0.0914375794917412, 0.0128962828696210, 0.995727299487585, 63.4911132860733);
// // From camera 2 to 3
// const cv::Mat M_2_3 = (cv::Mat_<double>(3, 4)
// << 0.999644115423621, -0.00194501088674130, -0.0266056278177532, -235.236375502202,
// 0.00201646110733780, 0.999994431880356, 0.00265896462686206, 9.52238656728889,
// 0.0266003079592876, -0.00271166755609303, 0.999642471324391, -4.23534963077479);
// Extrinsic parameters - rotation and pose orientation between cameras
// Camera 1
const cv::Mat M_1_1 = (cv::Mat_<double>(3, 4) << 1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0);
// Not working on Windows
// const cv::Mat M_1_1 = cv::Mat::eye(3, 4, CV_64F);
// From camera 1 to 2
const cv::Mat M_1_2 = (cv::Mat_<double>(3, 4)
<< 0.999962504862692, -0.00165862051503619, 0.00849928507093793, -238.301309354482,
0.00176155163779584, 0.999925029704659, -0.0121174215889211, 4.75863886121558,
-0.00847854967298925, 0.0121319391740716, 0.999890459124058, 15.9219925821916);
// From camera 1 to 3
const cv::Mat M_1_3 = (cv::Mat_<double>(3, 4)
<< 0.995809442124071, -0.000473104796892308, 0.0914512501193800, -461.301274485705,
0.00165046455210419, 0.999916727562850, -0.0127989806923977, 6.22648121362088,
-0.0914375794917412, 0.0128962828696210, 0.995727299487585, 63.4911132860733);
// From camera 2 to 3
const cv::Mat M_2_3 = (cv::Mat_<double>(3, 4)
<< 0.999644115423621, -0.00194501088674130, -0.0266056278177532, -235.236375502202,
0.00201646110733780, 0.999994431880356, 0.00265896462686206, 9.52238656728889,
0.0266003079592876, -0.00271166755609303, 0.999642471324391, -4.23534963077479);
// // Intrinsic and distortion parameters
// // Camera 1 parameters
// const cv::Mat INTRINSIC_1 = (cv::Mat_<double>(3, 3) << 817.93481631740565, 0, 600.70689997785121,
// 0, 816.51774059837908, 517.84529566329593,
// 0, 0, 1);
// const cv::Mat DISTORTION_1 = (cv::Mat_<double>(8, 1) <<
// -1.8102158829399091, 9.1966147162623262, -0.00044293900343777355, 0.0013638377686816653,
// 1.3303863414979364, -1.418905163635487, 8.4725535468475819, 4.7911023525901033);
// // Camera 2 parameters
// const cv::Mat INTRINSIC_2 = (cv::Mat_<double>(3, 3) << 816.20921132436638, 0, 612.67087968681585,
// 0, 816.18292222910486, 530.47901782670431,
// 0, 0, 1);
// const cv::Mat DISTORTION_2 = (cv::Mat_<double>(8, 1) <<
// -5.1088507540294881, 133.63995617304997, -0.0010048069080912836, 0.00018825291386406282,
// 20.688286893903879, -4.7604289550474768, 132.42412342224557, 70.01195364029752);
// const cv::Mat INTRINSIC_3 = (cv::Mat_<double>(3, 3) << 798.42980806905666, 0, 646.48130011561727,
// 0, 798.46535448393979, 523.91590563194586,
// 0, 0, 1);
// // Camera 3
// const cv::Mat DISTORTION_3 = (cv::Mat_<double>(8, 1) <<
// -0.57530495294002304, -0.54721992620722582, -0.00037614702677289967, -0.00081995658363481598,
// -0.020321660897680775, -0.18040544059116842, -0.87724444571603022, -0.13136636671099691);
// // Do not modify this code
// const std::vector<cv::Mat> INTRINSICS{ INTRINSIC_1, INTRINSIC_2, INTRINSIC_3 };
// const std::vector<cv::Mat> DISTORTIONS{ DISTORTION_1, DISTORTION_2, DISTORTION_3 };
// const std::vector<cv::Mat> M{ M_1_1, M_1_2, M_1_3 };
// // Not working on Windows
// // const std::vector<cv::Mat> M_EACH_CAMERA{
// // INTRINSIC_1 * M_1_1,
// // INTRINSIC_2 * M_1_2,
// // INTRINSIC_3 * M_1_3
// // };
CameraParameterReader::CameraParameterReader()
{
}
void CameraParameterReader::readParameters(const std::string& cameraParameterPath,
const std::vector<std::string>& serialNumbers)
{
try
{
// Serial numbers
mSerialNumbers = serialNumbers;
// Common saving/loading
const std::string fileNameNoExtension{cameraParameterPath};
const auto dataFormat = DataFormat::Xml;
const std::vector<std::string> cvMatNames {
"CameraMatrix", "Intrinsics", "Distortion"
};
// // Temporary code to save the parameters in disk
// // #cameras
// mNumberCameras = INTRINSICS.size();
// // Camera matrices
// mCameraMatrices.resize(mNumberCameras);
// for (auto i = 0ull ; i < mNumberCameras ; i++)
// mCameraMatrices[i] = INTRINSICS.at(i) * M.at(i);
// // Not working on Windows
// // mCameraMatrices[i] = M_EACH_CAMERA.at(i);
// // Camera intrinsics
// mCameraIntrinsics.resize(mNumberCameras);
// for (auto i = 0ull ; i < mNumberCameras ; i++)
// mCameraIntrinsics[i] = INTRINSICS.at(i);
// // Camera extrinsics
// mCameraDistortions.resize(mNumberCameras);
// for (auto i = 0ull ; i < mNumberCameras ; i++)
// mCameraDistortions[i] = DISTORTIONS.at(i);
// Do not modify this code
const std::vector<cv::Mat> INTRINSICS{ INTRINSIC_1, INTRINSIC_2, INTRINSIC_3 };
const std::vector<cv::Mat> DISTORTIONS{ DISTORTION_1, DISTORTION_2, DISTORTION_3 };
const std::vector<cv::Mat> M{ M_1_1, M_1_2, M_1_3 };
// Not working on Windows
// const std::vector<cv::Mat> M_EACH_CAMERA{
// INTRINSIC_1 * M_1_1,
// INTRINSIC_2 * M_1_2,
// INTRINSIC_3 * M_1_3
// };
// // Saving
// for (auto i = 0ull ; i < mNumberCameras ; i++)
// {
// std::vector<cv::Mat> cameraParameters;
// cameraParameters.emplace_back(M.at(i));
// cameraParameters.emplace_back(mCameraIntrinsics.at(i));
// cameraParameters.emplace_back(mCameraDistortions.at(i));
// saveData(cameraParameters, cvMatNames, fileNameNoExtension + mSerialNumbers.at(i), dataFormat);
// }
// Load parameters
mNumberCameras = serialNumbers.size();
mCameraMatrices.clear();
mCameraIntrinsics.clear();
mCameraDistortions.clear();
// log("Camera matrices:");
for (auto i = 0ull ; i < mNumberCameras ; i++)
{
const auto parameterPath = fileNameNoExtension + mSerialNumbers.at(i);
const auto cameraParameters = loadData(cvMatNames, parameterPath, dataFormat);
const auto& intrinsics = cameraParameters.at(1);
// Error if empty element
if (cameraParameters.empty() || cameraParameters.at(0).empty()
|| cameraParameters.at(1).empty() || cameraParameters.at(2).empty())
{
const std::string errorMessage = " of the camera with serial number `" + mSerialNumbers[i]
+ "` (file: " + parameterPath + "." + dataFormatToString(dataFormat)
+ "). Is its format valid? You might want to check the example xml"
+ " file.";
if (cameraParameters.empty())
error("Error at reading the camera parameters" + errorMessage,
__LINE__, __FUNCTION__, __FILE__);
if (cameraParameters.at(0).empty())
error("Error at reading the camera matrix parameters" + errorMessage,
__LINE__, __FUNCTION__, __FILE__);
if (cameraParameters.at(1).empty())
error("Error at reading the camera intrinsics parameters" + errorMessage,
__LINE__, __FUNCTION__, __FILE__);
if (cameraParameters.at(2).empty())
error("Error at reading the camera distortion parameters" + errorMessage,
__LINE__, __FUNCTION__, __FILE__);
}
mCameraMatrices.emplace_back(intrinsics * cameraParameters.at(0));
mCameraIntrinsics.emplace_back(intrinsics);
mCameraDistortions.emplace_back(cameraParameters.at(2));
// log(cameraParameters.at(0));
}
// // mCameraMatrices
// log("\nFull camera matrices:");
// for (const auto& cvMat : mCameraMatrices)
// log(cvMat);
// // mCameraIntrinsics
// log("\nCamera intrinsic parameters:");
// for (const auto& cvMat : mCameraIntrinsics)
// log(cvMat);
// // mCameraDistortions
// log("\nCamera distortion parameters:");
// for (const auto& cvMat : mCameraDistortions)
// log(cvMat);
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
}
const cv::Mat getIntrinsics(const int cameraIndex)
unsigned long long CameraParameterReader::getNumberCameras() const
{
try
{
return INTRINSICS[cameraIndex];
return mNumberCameras;
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return cv::Mat{};
return 0ull;
}
}
const cv::Mat getDistorsion(const int cameraIndex)
const std::vector<cv::Mat>& CameraParameterReader::getCameraMatrices() const
{
try
{
return DISTORTIONS[cameraIndex];
return mCameraMatrices;
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return cv::Mat{};
return mCameraMatrices;
}
}
cv::Mat getM(const int cameraIndex)
const std::vector<cv::Mat>& CameraParameterReader::getCameraIntrinsics() const
{
try
{
return INTRINSICS.at(cameraIndex) * M.at(cameraIndex);
// Not working on Windows
// return M_EACH_CAMERA[cameraIndex];
return mCameraIntrinsics;
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return cv::Mat{};
return mCameraIntrinsics;
}
}
unsigned long long getNumberCameras()
const std::vector<cv::Mat>& CameraParameterReader::getCameraDistortions() const
{
try
{
return INTRINSICS.size();
return mCameraDistortions;
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return 0;
return mCameraDistortions;
}
}
}
......@@ -73,7 +73,7 @@ namespace op
// Access pointers
const T* sourcePtr = sourcePtrs[n];
T* tempTargetPtr;
if(n != 0)
if (n != 0)
tempTargetPtr = tempTargetPtrs[n-1].get();
else
tempTargetPtr = targetPtr;
......
......@@ -12,34 +12,9 @@ namespace op
const auto errorMessage = "Json format only implemented in OpenCV for versions >= 3.0. Check savePeopleJson"
" instead.";
std::string dataFormatToString(const DataFormat format)
std::string getFullName(const std::string& fileNameNoExtension, const DataFormat dataFormat)
{
try
{
if (format == DataFormat::Json)
return "json";
else if (format == DataFormat::Xml)
return "xml";
else if (format == DataFormat::Yaml)
return "yaml";
else if (format == DataFormat::Yml)
return "yml";
else
{
error("Undefined DataFormat.", __LINE__, __FUNCTION__, __FILE__);
return "";
}
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return "";
}
}
std::string getFullName(const std::string& fileNameNoExtension, const DataFormat format)
{
return fileNameNoExtension + "." + dataFormatToString(format);
return fileNameNoExtension + "." + dataFormatToString(dataFormat);
}
void addKeypointsToJson(JsonOfstream& jsonOfstream,
......@@ -150,6 +125,31 @@ namespace op
// Public classes (on *.hpp)
std::string dataFormatToString(const DataFormat dataFormat)
{
try
{
if (dataFormat == DataFormat::Json)
return "json";
else if (dataFormat == DataFormat::Xml)
return "xml";
else if (dataFormat == DataFormat::Yaml)
return "yaml";
else if (dataFormat == DataFormat::Yml)
return "yml";
else
{
error("Undefined DataFormat.", __LINE__, __FUNCTION__, __FILE__);
return "";
}
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return "";
}
}
DataFormat stringToDataFormat(const std::string& dataFormat)
{
try
......@@ -164,7 +164,7 @@ namespace op
return DataFormat::Yml;
else
{
error("String does not correspond to any format (json, xml, yaml, yml)",
error("String does not correspond to any known format (json, xml, yaml, yml)",
__LINE__, __FUNCTION__, __FILE__);
return DataFormat::Json;
}
......@@ -204,17 +204,18 @@ namespace op
}
void saveData(const std::vector<cv::Mat>& cvMats, const std::vector<std::string>& cvMatNames,
const std::string& fileNameNoExtension, const DataFormat format)
const std::string& fileNameNoExtension, const DataFormat dataFormat)
{
try
{
// Security checks
if (format == DataFormat::Json && CV_MAJOR_VERSION < 3)
if (dataFormat == DataFormat::Json && CV_MAJOR_VERSION < 3)
error(errorMessage, __LINE__, __FUNCTION__, __FILE__);
if (cvMats.size() != cvMatNames.size())
error("cvMats.size() != cvMatNames.size()", __LINE__, __FUNCTION__, __FILE__);
error("cvMats.size() != cvMatNames.size() (" + std::to_string(cvMats.size())
+ " vs. " + std::to_string(cvMatNames.size()) + ")", __LINE__, __FUNCTION__, __FILE__);
// Save cv::Mat data
cv::FileStorage fileStorage{getFullName(fileNameNoExtension, format), cv::FileStorage::WRITE};
cv::FileStorage fileStorage{getFullName(fileNameNoExtension, dataFormat), cv::FileStorage::WRITE};
for (auto i = 0u ; i < cvMats.size() ; i++)
fileStorage << cvMatNames[i] << (cvMats[i].empty() ? cv::Mat() : cvMats[i]);
// Release file
......@@ -227,11 +228,12 @@ namespace op
}
void saveData(const cv::Mat& cvMat, const std::string cvMatName, const std::string& fileNameNoExtension,
const DataFormat format)
const DataFormat dataFormat)
{
try
{
saveData(std::vector<cv::Mat>{cvMat}, std::vector<std::string>{cvMatName}, fileNameNoExtension, format);
saveData(std::vector<cv::Mat>{cvMat}, std::vector<std::string>{cvMatName}, fileNameNoExtension,
dataFormat);
}
catch (const std::exception& e)
{
......@@ -240,14 +242,19 @@ namespace op
}
std::vector<cv::Mat> loadData(const std::vector<std::string>& cvMatNames, const std::string& fileNameNoExtension,
const DataFormat format)
const DataFormat dataFormat)
{
try
{
if (format == DataFormat::Json && CV_MAJOR_VERSION < 3)
// Security checks
if (dataFormat == DataFormat::Json && CV_MAJOR_VERSION < 3)
error(errorMessage, __LINE__, __FUNCTION__, __FILE__);
cv::FileStorage fileStorage{getFullName(fileNameNoExtension, format), cv::FileStorage::READ};
const auto fileName = getFullName(fileNameNoExtension, dataFormat);
// Security checks
if (!existFile(fileName))
error("File to be read does not exist: " + fileName + ".", __LINE__, __FUNCTION__, __FILE__);
// Read file
cv::FileStorage fileStorage{fileName, cv::FileStorage::READ};
std::vector<cv::Mat> cvMats(cvMatNames.size());
for (auto i = 0u ; i < cvMats.size() ; i++)
fileStorage[cvMatNames[i]] >> cvMats[i];
......@@ -261,11 +268,11 @@ namespace op
}
}
cv::Mat loadData(const std::string& cvMatName, const std::string& fileNameNoExtension, const DataFormat format)
cv::Mat loadData(const std::string& cvMatName, const std::string& fileNameNoExtension, const DataFormat dataFormat)
{
try
{
return loadData(std::vector<std::string>{cvMatName}, fileNameNoExtension, format)[0];
return loadData(std::vector<std::string>{cvMatName}, fileNameNoExtension, dataFormat)[0];
}
catch (const std::exception& e)
{
......
......@@ -4,6 +4,7 @@ set(SOURCES_OP_PRODUCER
imageDirectoryReader.cpp
ipCameraReader.cpp
producer.cpp
spinnakerWrapper.cpp
videoCaptureReader.cpp
videoReader.cpp
webcamReader.cpp)
......
......@@ -53,7 +53,15 @@ namespace op
std::string ImageDirectoryReader::getFrameName()
{
return getFileNameNoExtension(mFilePaths.at(mFrameNameCounter));
try
{
return getFileNameNoExtension(mFilePaths.at(mFrameNameCounter));
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return "";
}
}
cv::Mat ImageDirectoryReader::getRawFrame()
......
This diff is collapsed.
......@@ -137,7 +137,7 @@ namespace op
std::shared_ptr<Producer> flagsToProducer(const std::string& imageDirectory, const std::string& videoPath,
const std::string& ipCameraPath, const int webcamIndex,
const bool flirCamera, const std::string& webcamResolution,
const double webcamFps)
const double webcamFps, const std::string& cameraParameterPath)
{
try
{
......@@ -151,7 +151,7 @@ namespace op
else if (type == ProducerType::IPCamera)
return std::make_shared<IpCameraReader>(ipCameraPath);
else if (type == ProducerType::FlirCamera)
return std::make_shared<FlirReader>();
return std::make_shared<FlirReader>(cameraParameterPath);
else if (type == ProducerType::Webcam)
{
// cameraFrameSize
......
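`flagsToProducer` now threads the new `camera_parameter_folder` flag through to `FlirReader`. A hedged sketch of the selection logic (plain Python returning labels instead of reader objects; names and argument defaults are illustrative, not the exact OpenPose dispatch):

```python
def flags_to_producer(image_dir="", video="", ip_camera="", webcam_index=0,
                      flir_camera=False,
                      camera_parameter_path="models/cameraParameters/"):
    """Sketch of producer selection: first non-empty source wins; the FLIR
    branch now forwards the camera parameter folder to FlirReader."""
    if image_dir:
        return ("ImageDirectoryReader", image_dir)
    if video:
        return ("VideoReader", video)
    if ip_camera:
        return ("IpCameraReader", ip_camera)
    if flir_camera:
        # New in this commit: parameters are read from disk at runtime
        return ("FlirReader", camera_parameter_path)
    return ("WebcamReader", webcam_index)
```

The key point of the diff is simply the extra argument at the end of every `flagsToProducer` call site, defaulting to `models/cameraParameters/`.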