Commit 69731e3c authored by G gineshidalgo99

Camera parameters read at runtime

Parent 56833873
......@@ -102,6 +102,9 @@ distribute/
*leveldb
*lmdb
# Camera parameters
*.xml
######################### Compilation (build, distribute & bins) Files / Folders #########################
*.bin
*.testbin
......
......@@ -52,11 +52,12 @@ This demo assumes n arbitrary stereo cameras from the FLIR company (formerly Poi
## Camera Calibration
The user must manually get the intrinsic and extrinsic parameters of the cameras, enter them in `src/experimental/3d/cameraParameters.cpp`, and recompile OpenPose.
The user must manually get the intrinsic and extrinsic parameters of the FLIR cameras:
The program uses 3 cameras by default, but cameras can be added or removed from `cameraParameters.hpp` by adding or removing elements in `INTRINSICS`, `DISTORTIONS` and `M_EACH_CAMERA`. `INTRINSICS` corresponds to the intrinsic parameters, `DISTORTIONS` to the distortion coefficients, and `M` to the extrinsic parameters of the cameras with respect to camera 1, i.e., camera 1 is considered the coordinate origin. However, the user can change this by setting these parameters differently.
3D OpenPose uses the 8-distortion-parameter version of OpenCV by default. Internally, the distortion parameters are used by the OpenCV function [undistort()](http://docs.opencv.org/3.2.0/da/d54/group__imgproc__transform.html#ga69f2545a8b62a6b0fc2ee060dc30559d) to rectify the images. This function can take 4-, 5- or 8-parameter distortion coefficients (OpenCV 3.X also adds 12- and 14-parameter alternatives), so any of these versions (4, 5, 8, 12 or 14) will work in 3D OpenPose; the user just needs to fill the `DISTORTION_i` variables in `cameraParameters.hpp` with the desired number of distortion parameters for each camera.
1. Create an XML file for each camera, named `models/cameraParameters/flir/{camera_serial_number}.xml`.
2. Each XML file contains the extrinsic parameters, the intrinsic parameters, and the distortion coefficients of that camera. Copy the format from `models/cameraParameters/flir/17012332.xml.example`.
3. The program can use any arbitrary number of cameras. Even if many cameras are added to `models/cameraParameters/flir/`, the program checks at runtime which FLIR cameras are detected and reads only those cameras' parameters. If the file corresponding to any camera detected at runtime is missing, OpenPose returns an error. See the usage sketch after this list.
4. In the example XML, OpenPose uses the 8-distortion-parameter version of OpenCV. The distortion parameters are internally used by the OpenCV function [undistort()](http://docs.opencv.org/3.2.0/da/d54/group__imgproc__transform.html#ga69f2545a8b62a6b0fc2ee060dc30559d) to rectify the images. This function can take 4-, 5- or 8-parameter distortion coefficients (OpenCV 3.X also adds 12- and 14-parameter alternatives), so any of these versions (4, 5, 8, 12 or 14) will work in 3D OpenPose.
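For reference, a minimal sketch (not part of this commit) of how these XML files are read at runtime through the new `op::CameraParameterReader` class introduced below; the serial numbers are hypothetical placeholders:

```cpp
#include <vector>
#include <openpose/3d/cameraParameterReader.hpp>

int main()
{
    // Reads models/cameraParameters/flir/{serial}.xml for each listed serial number
    op::CameraParameterReader cameraParameterReader;
    cameraParameterReader.readParameters("models/cameraParameters/flir/",
                                         std::vector<std::string>{"17012332", "17012333"});
    // Full 3x4 camera matrices (intrinsics * extrinsics), intrinsics, and distortions
    const auto& cameraMatrices = cameraParameterReader.getCameraMatrices();
    const auto& cameraIntrinsics = cameraParameterReader.getCameraIntrinsics();
    const auto& cameraDistortions = cameraParameterReader.getCameraDistortions();
    return 0;
}
```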
......
......@@ -148,6 +148,7 @@ Each flag is divided into flag name, default value, and description.
- DEFINE_int32(frame_rotate, 0, "Rotate each frame, 4 possible values: 0, 90, 180, 270.");
- DEFINE_bool(frames_repeat, false, "Repeat frames when finished.");
- DEFINE_bool(process_real_time, false, "Enable to keep the original source frame rate (e.g. for video). If the processing time is too long, it will skip frames. If it is too fast, it will slow it down.");
- DEFINE_string(camera_parameter_folder, "models/cameraParameters/", "String with the folder where the camera parameters are located.");
3. OpenPose
- DEFINE_string(model_folder, "models/", "Folder path (absolute or relative) where the models (pose, face, ...) are located.");
......
......@@ -227,7 +227,7 @@ You can include the 3D reconstruction module by:
1. Right-click on `Solution 'OpenPose'` of the `Solution Explorer` window, usually placed at the top-right part of the VS screen.
2. Click on `Properties`. Go to `Configuration Properties` -> `Configuration` and check `Build` for the `OpenPose3DReconstruction` project.
After installation, check the [doc/openpose_3d_reconstruction_demo.md](./openpose_3d_reconstruction_demo.md) instructions.
After installation, check the [doc/3d_reconstruction_demo.md](./3d_reconstruction_demo.md) instructions.
......
......@@ -184,6 +184,7 @@ OpenPose Library - Release Notes
12. Added flag: number_people_max to optionally select the maximum number of people to be detected.
13. 3-D reconstruction module forces the user to set `number_people_max 1` to avoid errors (as it assumes only 1 person per image).
14. Removed old `windows/` version. CMake is the only Windows version available.
15. Camera parameters (FLIR cameras) are read from disk at runtime rather than being hard-coded at compile time.
2. Functions or parameters renamed:
1. Flag `no_display` renamed as `display`, able to select between `NoDisplay`, `Display2D`, `Display3D`, and `DisplayAll`.
2. 3-D reconstruction demo is now inside the OpenPose demo binary.
......
......@@ -62,6 +62,7 @@ DEFINE_int32(frame_rotate, 0, "Rotate each frame, 4 po
DEFINE_bool(frames_repeat, false, "Repeat frames when finished.");
DEFINE_bool(process_real_time, false, "Enable to keep the original source frame rate (e.g. for video). If the processing time is"
" too long, it will skip frames. If it is too fast, it will slow it down.");
DEFINE_string(camera_parameter_folder, "models/cameraParameters/", "String with the folder where the camera parameters are located.");
// OpenPose
DEFINE_string(model_folder, "models/", "Folder path (absolute or relative) where the models (pose, face, ...) are located.");
DEFINE_string(output_resolution, "-1x-1", "The image resolution (display and output). Use \"-1x-1\" to force the program to use the"
......@@ -234,7 +235,8 @@ int openPoseDemo()
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// producerType
const auto producerSharedPtr = op::flagsToProducer(FLAGS_image_dir, FLAGS_video, FLAGS_ip_camera, FLAGS_camera,
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps);
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps,
FLAGS_camera_parameter_folder);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
......
......@@ -71,6 +71,7 @@ DEFINE_int32(frame_rotate, 0, "Rotate each frame, 4 po
DEFINE_bool(frames_repeat, false, "Repeat frames when finished.");
DEFINE_bool(process_real_time, false, "Enable to keep the original source frame rate (e.g. for video). If the processing time is"
" too long, it will skip frames. If it is too fast, it will slow it down.");
DEFINE_string(camera_parameter_folder, "models/cameraParameters/", "String with the folder where the camera parameters are located.");
// OpenPose
DEFINE_string(model_folder, "models/", "Folder path (absolute or relative) where the models (pose, face, ...) are located.");
DEFINE_string(output_resolution, "-1x-1", "The image resolution (display and output). Use \"-1x-1\" to force the program to use the"
......@@ -236,7 +237,8 @@ int openPoseTutorialWrapper4()
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// producerType
const auto producerSharedPtr = op::flagsToProducer(FLAGS_image_dir, FLAGS_video, FLAGS_ip_camera, FLAGS_camera,
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps);
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps,
FLAGS_camera_parameter_folder);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
......
......@@ -42,6 +42,7 @@ DEFINE_bool(flir_camera, false, "Whether to use FLIR (Po
DEFINE_string(ip_camera, "", "String with the IP camera URL. It supports protocols like RTSP and HTTP.");
DEFINE_bool(process_real_time, false, "Enable to keep the original source frame rate (e.g. for video). If the processing time is"
" too long, it will skip frames. If it is too fast, it will slow it down.");
DEFINE_string(camera_parameter_folder, "models/cameraParameters/", "String with the folder where the camera parameters are located.");
// OpenPose
DEFINE_string(output_resolution, "-1x-1", "The image resolution (display and output). Use \"-1x-1\" to force the program to use the"
" input image resolution.");
......@@ -63,7 +64,8 @@ int openPoseTutorialThread1()
const auto outputSize = op::flagsToPoint(FLAGS_output_resolution, "-1x-1");
// producerType
const auto producerSharedPtr = op::flagsToProducer(FLAGS_image_dir, FLAGS_video, FLAGS_ip_camera, FLAGS_camera,
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps);
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps,
FLAGS_camera_parameter_folder);
const auto displayProducerFpsMode = (FLAGS_process_real_time
? op::ProducerFpsMode::OriginalFps : op::ProducerFpsMode::RetrievalFps);
producerSharedPtr->setProducerFpsMode(displayProducerFpsMode);
......
......@@ -43,6 +43,7 @@ DEFINE_bool(flir_camera, false, "Whether to use FLIR (Po
DEFINE_string(ip_camera, "", "String with the IP camera URL. It supports protocols like RTSP and HTTP.");
DEFINE_bool(process_real_time, false, "Enable to keep the original source frame rate (e.g. for video). If the processing time is"
" too long, it will skip frames. If it is too fast, it will slow it down.");
DEFINE_string(camera_parameter_folder, "models/cameraParameters/", "String with the folder where the camera parameters are located.");
// OpenPose
DEFINE_string(output_resolution, "-1x-1", "The image resolution (display and output). Use \"-1x-1\" to force the program to use the"
" input image resolution.");
......@@ -97,7 +98,8 @@ int openPoseTutorialThread2()
const auto outputSize = op::flagsToPoint(FLAGS_output_resolution, "-1x-1");
// producerType
const auto producerSharedPtr = op::flagsToProducer(FLAGS_image_dir, FLAGS_video, FLAGS_ip_camera, FLAGS_camera,
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps);
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps,
FLAGS_camera_parameter_folder);
const auto displayProducerFpsMode = (FLAGS_process_real_time
? op::ProducerFpsMode::OriginalFps : op::ProducerFpsMode::RetrievalFps);
producerSharedPtr->setProducerFpsMode(displayProducerFpsMode);
......
......@@ -61,6 +61,7 @@ DEFINE_int32(frame_rotate, 0, "Rotate each frame, 4 po
DEFINE_bool(frames_repeat, false, "Repeat frames when finished.");
DEFINE_bool(process_real_time, false, "Enable to keep the original source frame rate (e.g. for video). If the processing time is"
" too long, it will skip frames. If it is too fast, it will slow it down.");
DEFINE_string(camera_parameter_folder, "models/cameraParameters/", "String with the folder where the camera parameters are located.");
// OpenPose
DEFINE_string(model_folder, "models/", "Folder path (absolute or relative) where the models (pose, face, ...) are located.");
DEFINE_string(output_resolution, "-1x-1", "The image resolution (display and output). Use \"-1x-1\" to force the program to use the"
......@@ -311,7 +312,8 @@ int openPoseTutorialWrapper1()
const auto handNetInputSize = op::flagsToPoint(FLAGS_hand_net_resolution, "368x368 (multiples of 16)");
// producerType
const auto producerSharedPtr = op::flagsToProducer(FLAGS_image_dir, FLAGS_video, FLAGS_ip_camera, FLAGS_camera,
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps);
FLAGS_flir_camera, FLAGS_camera_resolution, FLAGS_camera_fps,
FLAGS_camera_parameter_folder);
// poseModel
const auto poseModel = op::flagsToPoseModel(FLAGS_model_pose);
// JSON saving
......
......@@ -6,13 +6,31 @@
namespace op
{
OP_API const cv::Mat getIntrinsics(const int cameraIndex);
class OP_API CameraParameterReader
{
public:
explicit CameraParameterReader();
OP_API const cv::Mat getDistorsion(const int cameraIndex);
void readParameters(const std::string& cameraParameterPath,
const std::vector<std::string>& serialNumbers);
OP_API cv::Mat getM(const int cameraIndex);
unsigned long long getNumberCameras() const;
OP_API unsigned long long getNumberCameras();
const std::vector<cv::Mat>& getCameraMatrices() const;
const std::vector<cv::Mat>& getCameraIntrinsics() const;
const std::vector<cv::Mat>& getCameraDistortions() const;
private:
std::vector<std::string> mSerialNumbers;
unsigned long long mNumberCameras;
std::vector<cv::Mat> mCameraMatrices;
std::vector<cv::Mat> mCameraIntrinsics;
std::vector<cv::Mat> mCameraDistortions;
DELETE_COPY(CameraParameterReader);
};
}
#endif // OPENPOSE_EXPERIMENTAL_3D_CAMERA_PARAMETER_READER_HPP
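As the implementation further down shows (`intrinsics * cameraParameters.at(0)`), the matrices returned by `getCameraMatrices()` are the full 3x4 projection matrices, i.e. intrinsics times extrinsics. A small illustrative sketch of that composition (values are placeholders):

```cpp
#include <opencv2/core/core.hpp>

// P = K * [R|t]: 3x3 intrinsics times 3x4 extrinsics yields the 3x4
// projection matrix later used for 3-D triangulation
const cv::Mat K = (cv::Mat_<double>(3, 3) << 817.9,   0.0, 600.7,
                                               0.0, 816.5, 517.8,
                                               0.0,   0.0,   1.0);
const cv::Mat Rt = (cv::Mat_<double>(3, 4) << 1.0, 0.0, 0.0, 0.0,
                                              0.0, 1.0, 0.0, 0.0,
                                              0.0, 0.0, 1.0, 0.0);
const cv::Mat P = K * Rt; // for camera 1, this is just K padded with a zero column
```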
......@@ -8,6 +8,8 @@
namespace op
{
OP_API std::string dataFormatToString(const DataFormat dataFormat);
OP_API DataFormat stringToDataFormat(const std::string& dataFormat);
// Save custom float format
......@@ -24,16 +26,16 @@ namespace op
// Save/load json, xml, yaml, yml
OP_API void saveData(const std::vector<cv::Mat>& cvMats, const std::vector<std::string>& cvMatNames,
const std::string& fileNameNoExtension, const DataFormat format);
const std::string& fileNameNoExtension, const DataFormat dataFormat);
OP_API void saveData(const cv::Mat& cvMat, const std::string cvMatName, const std::string& fileNameNoExtension,
const DataFormat format);
const DataFormat dataFormat);
OP_API std::vector<cv::Mat> loadData(const std::vector<std::string>& cvMatNames,
const std::string& fileNameNoExtension, const DataFormat format);
const std::string& fileNameNoExtension, const DataFormat dataFormat);
OP_API cv::Mat loadData(const std::string& cvMatName, const std::string& fileNameNoExtension,
const DataFormat format);
const DataFormat dataFormat);
// Json - Saving as *.json is not available in OpenCV versions < 3.0; this function is a quick fix
OP_API void savePeopleJson(const Array<float>& keypoints,
......
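The `format` parameter has been renamed to `dataFormat` throughout. A quick round-trip sketch of these helpers (the output path is hypothetical):

```cpp
#include <opencv2/core/core.hpp>
#include <openpose/filestream/fileStream.hpp>

// Save a matrix under the key "CameraMatrix" as XML, then load it back
const cv::Mat cameraMatrix = cv::Mat::eye(3, 4, CV_64F);
op::saveData(cameraMatrix, "CameraMatrix", "output/testParameters", op::DataFormat::Xml);
const cv::Mat loaded = op::loadData("CameraMatrix", "output/testParameters", op::DataFormat::Xml);
```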
......@@ -3,11 +3,12 @@
#include <openpose/core/common.hpp>
#include <openpose/producer/producer.hpp>
#include <openpose/producer/spinnakerWrapper.hpp>
namespace op
{
/**
* FlirReader is an abstract class to extract frames from an image directory. Its interface imitates the
* FlirReader is an abstract class to extract frames from a FLIR stereo-camera system. Its interface imitates the
* cv::VideoCapture class, so it can be used quite similarly to the cv::VideoCapture class. Thus,
* it is quite similar to VideoReader and WebcamReader.
*/
......@@ -15,11 +16,9 @@ namespace op
{
public:
/**
* Constructor of FlirReader. It sets the image directory path from which the images will be loaded
* and generates a std::vector<std::string> with the list of images on that directory.
* @param imageDirectoryPath const std::string parameter with the folder path containing the images.
* Constructor of FlirReader. It opens all the available FLIR cameras.
*/
explicit FlirReader();
explicit FlirReader(const std::string& cameraParametersPath);
~FlirReader();
......@@ -27,10 +26,7 @@ namespace op
std::string getFrameName();
inline bool isOpened() const
{
return true;
}
bool isOpened() const;
void release();
......@@ -39,11 +35,7 @@ namespace op
void set(const int capProperty, const double value);
private:
// PIMPL idiom
// http://www.cppsamples.com/common-tasks/pimpl.html
struct ImplFlirReader;
std::shared_ptr<ImplFlirReader> upImpl;
SpinnakerWrapper mSpinnakerWrapper;
Point<int> mResolution;
long long mFrameNameCounter;
......
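A short sketch of the new constructor in use (assumes a `WITH_FLIR_CAMERA` build with cameras connected; note that FlirReader now simply delegates to SpinnakerWrapper):

```cpp
#include <openpose/producer/flirReader.hpp>

int main()
{
    op::FlirReader flirReader{"models/cameraParameters/"}; // "flir/" is appended internally
    if (flirReader.isOpened())
    {
        const auto name = flirReader.getFrameName(); // zero-padded counter, e.g. "000000000000"
        flirReader.release();
    }
    return 0;
}
```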
......@@ -8,6 +8,7 @@
#include <openpose/producer/imageDirectoryReader.hpp>
#include <openpose/producer/ipCameraReader.hpp>
#include <openpose/producer/producer.hpp>
#include <openpose/producer/spinnakerWrapper.hpp>
#include <openpose/producer/videoCaptureReader.hpp>
#include <openpose/producer/videoReader.hpp>
#include <openpose/producer/webcamReader.hpp>
......
#ifndef OPENPOSE_PRODUCER_SPINNAKER_WRAPPER_HPP
#define OPENPOSE_PRODUCER_SPINNAKER_WRAPPER_HPP
#include <openpose/core/common.hpp>
namespace op
{
/**
* SpinnakerWrapper is a wrapper around the Spinnaker SDK. It decouples the final interface (meant to imitate
* cv::VideoCapture) from the Spinnaker SDK.
*/
class OP_API SpinnakerWrapper
{
public:
/**
* Constructor of SpinnakerWrapper. It opens all the available FLIR cameras.
*/
explicit SpinnakerWrapper(const std::string& cameraParameterPath);
~SpinnakerWrapper();
std::vector<cv::Mat> getRawFrames();
std::vector<cv::Mat> getCameraMatrices() const;
Point<int> getResolution() const;
bool isOpened() const;
void release();
private:
// PIMPL idiom
// http://www.cppsamples.com/common-tasks/pimpl.html
struct ImplSpinnakerWrapper;
std::shared_ptr<ImplSpinnakerWrapper> upImpl;
DELETE_COPY(SpinnakerWrapper);
};
}
#endif // OPENPOSE_PRODUCER_SPINNAKER_WRAPPER_HPP
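A minimal usage sketch of the new wrapper (assuming OpenPose was compiled with `WITH_FLIR_CAMERA`, FLIR cameras are connected, and their XML parameter files exist under the given folder):

```cpp
#include <openpose/producer/spinnakerWrapper.hpp>

int main()
{
    op::SpinnakerWrapper spinnakerWrapper{"models/cameraParameters/flir/"};
    if (spinnakerWrapper.isOpened())
    {
        const auto frames = spinnakerWrapper.getRawFrames();              // one cv::Mat per camera
        const auto cameraMatrices = spinnakerWrapper.getCameraMatrices(); // 3x4 projection matrices
        const auto resolution = spinnakerWrapper.getResolution();
        spinnakerWrapper.release();
    }
    return 0;
}
```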
......@@ -24,7 +24,8 @@ namespace op
const std::string& ipCameraPath, const int webcamIndex,
const bool flirCamera = false,
const std::string& webcamResolution = "1280x720",
const double webcamFps = 30.);
const double webcamFps = 30.,
const std::string& cameraParameterPath = "models/cameraParameters/");
OP_API std::vector<HeatMapType> flagsToHeatMaps(const bool heatMapsAddParts = false,
const bool heatMapsAddBkg = false,
......
<?xml version="1.0"?>
<opencv_storage>
<CameraMatrix type_id="opencv-matrix">
<rows>3</rows>
<cols>4</cols>
<dt>d</dt>
<data>
1. 0. 0. 0. 0. 1. 0. 0. 0. 0. 1. 0.</data></CameraMatrix>
<Intrinsics type_id="opencv-matrix">
<rows>3</rows>
<cols>3</cols>
<dt>d</dt>
<data>
8.1793481631740565e+02 0. 6.0070689997785121e+02 0.
8.1651774059837908e+02 5.1784529566329593e+02 0. 0. 1.</data></Intrinsics>
<Distortion type_id="opencv-matrix">
<rows>8</rows>
<cols>1</cols>
<dt>d</dt>
<data>
-1.8102158829399091e+00 9.1966147162623262e+00
-4.4293900343777355e-04 1.3638377686816653e-03
1.3303863414979364e+00 -1.4189051636354870e+00
8.4725535468475819e+00 4.7911023525901033e+00</data></Distortion>
</opencv_storage>
#include <vector>
#include <openpose/utilities/errorAndLog.hpp>
#include <openpose/filestream/fileStream.hpp>
#include <openpose/3d/cameraParameterReader.hpp>
namespace op
{
// User configurable code
// Intrinsic and distortion parameters
// Camera 1 parameters
const cv::Mat INTRINSIC_1 = (cv::Mat_<double>(3, 3) << 817.93481631740565, 0, 600.70689997785121,
0, 816.51774059837908, 517.84529566329593,
0, 0, 1);
const cv::Mat DISTORTION_1 = (cv::Mat_<double>(8, 1) <<
-1.8102158829399091, 9.1966147162623262, -0.00044293900343777355, 0.0013638377686816653,
1.3303863414979364, -1.418905163635487, 8.4725535468475819, 4.7911023525901033);
// Camera 2 parameters
const cv::Mat INTRINSIC_2 = (cv::Mat_<double>(3, 3) << 816.20921132436638, 0, 612.67087968681585,
0, 816.18292222910486, 530.47901782670431,
0, 0, 1);
const cv::Mat DISTORTION_2 = (cv::Mat_<double>(8, 1) <<
-5.1088507540294881, 133.63995617304997, -0.0010048069080912836, 0.00018825291386406282,
20.688286893903879, -4.7604289550474768, 132.42412342224557, 70.01195364029752);
const cv::Mat INTRINSIC_3 = (cv::Mat_<double>(3, 3) << 798.42980806905666, 0, 646.48130011561727,
0, 798.46535448393979, 523.91590563194586,
0, 0, 1);
// Camera 3
const cv::Mat DISTORTION_3 = (cv::Mat_<double>(8, 1) <<
-0.57530495294002304, -0.54721992620722582, -0.00037614702677289967, -0.00081995658363481598,
-0.020321660897680775, -0.18040544059116842, -0.87724444571603022, -0.13136636671099691);
// // User configurable parameters
// // Camera matrices (extrinsic parameters) - rotation and pose orientation between cameras
// // Camera 1
// const cv::Mat M_1_1 = (cv::Mat_<double>(3, 4) << 1, 0, 0, 0,
// 0, 1, 0, 0,
// 0, 0, 1, 0);
// // Not working on Windows
// // const cv::Mat M_1_1 = cv::Mat::eye(3, 4, CV_64F);
// // From camera 1 to 2
// const cv::Mat M_1_2 = (cv::Mat_<double>(3, 4)
// << 0.999962504862692, -0.00165862051503619, 0.00849928507093793, -238.301309354482,
// 0.00176155163779584, 0.999925029704659, -0.0121174215889211, 4.75863886121558,
// -0.00847854967298925, 0.0121319391740716, 0.999890459124058, 15.9219925821916);
// // From camera 1 to 3
// const cv::Mat M_1_3 = (cv::Mat_<double>(3, 4)
// << 0.995809442124071, -0.000473104796892308, 0.0914512501193800, -461.301274485705,
// 0.00165046455210419, 0.999916727562850, -0.0127989806923977, 6.22648121362088,
// -0.0914375794917412, 0.0128962828696210, 0.995727299487585, 63.4911132860733);
// // From camera 2 to 3
// const cv::Mat M_2_3 = (cv::Mat_<double>(3, 4)
// << 0.999644115423621, -0.00194501088674130, -0.0266056278177532, -235.236375502202,
// 0.00201646110733780, 0.999994431880356, 0.00265896462686206, 9.52238656728889,
// 0.0266003079592876, -0.00271166755609303, 0.999642471324391, -4.23534963077479);
// Extrinsic parameters - rotation and pose orientation between cameras
// Camera 1
const cv::Mat M_1_1 = (cv::Mat_<double>(3, 4) << 1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0);
// Not working on Windows
// const cv::Mat M_1_1 = cv::Mat::eye(3, 4, CV_64F);
// From camera 1 to 2
const cv::Mat M_1_2 = (cv::Mat_<double>(3, 4)
<< 0.999962504862692, -0.00165862051503619, 0.00849928507093793, -238.301309354482,
0.00176155163779584, 0.999925029704659, -0.0121174215889211, 4.75863886121558,
-0.00847854967298925, 0.0121319391740716, 0.999890459124058, 15.9219925821916);
// From camera 1 to 3
const cv::Mat M_1_3 = (cv::Mat_<double>(3, 4)
<< 0.995809442124071, -0.000473104796892308, 0.0914512501193800, -461.301274485705,
0.00165046455210419, 0.999916727562850, -0.0127989806923977, 6.22648121362088,
-0.0914375794917412, 0.0128962828696210, 0.995727299487585, 63.4911132860733);
// From camera 2 to 3
const cv::Mat M_2_3 = (cv::Mat_<double>(3, 4)
<< 0.999644115423621, -0.00194501088674130, -0.0266056278177532, -235.236375502202,
0.00201646110733780, 0.999994431880356, 0.00265896462686206, 9.52238656728889,
0.0266003079592876, -0.00271166755609303, 0.999642471324391, -4.23534963077479);
// // Intrinsic and distortion parameters
// // Camera 1 parameters
// const cv::Mat INTRINSIC_1 = (cv::Mat_<double>(3, 3) << 817.93481631740565, 0, 600.70689997785121,
// 0, 816.51774059837908, 517.84529566329593,
// 0, 0, 1);
// const cv::Mat DISTORTION_1 = (cv::Mat_<double>(8, 1) <<
// -1.8102158829399091, 9.1966147162623262, -0.00044293900343777355, 0.0013638377686816653,
// 1.3303863414979364, -1.418905163635487, 8.4725535468475819, 4.7911023525901033);
// // Camera 2 parameters
// const cv::Mat INTRINSIC_2 = (cv::Mat_<double>(3, 3) << 816.20921132436638, 0, 612.67087968681585,
// 0, 816.18292222910486, 530.47901782670431,
// 0, 0, 1);
// const cv::Mat DISTORTION_2 = (cv::Mat_<double>(8, 1) <<
// -5.1088507540294881, 133.63995617304997, -0.0010048069080912836, 0.00018825291386406282,
// 20.688286893903879, -4.7604289550474768, 132.42412342224557, 70.01195364029752);
// const cv::Mat INTRINSIC_3 = (cv::Mat_<double>(3, 3) << 798.42980806905666, 0, 646.48130011561727,
// 0, 798.46535448393979, 523.91590563194586,
// 0, 0, 1);
// // Camera 3
// const cv::Mat DISTORTION_3 = (cv::Mat_<double>(8, 1) <<
// -0.57530495294002304, -0.54721992620722582, -0.00037614702677289967, -0.00081995658363481598,
// -0.020321660897680775, -0.18040544059116842, -0.87724444571603022, -0.13136636671099691);
// // Do not modify this code
// const std::vector<cv::Mat> INTRINSICS{ INTRINSIC_1, INTRINSIC_2, INTRINSIC_3 };
// const std::vector<cv::Mat> DISTORTIONS{ DISTORTION_1, DISTORTION_2, DISTORTION_3 };
// const std::vector<cv::Mat> M{ M_1_1, M_1_2, M_1_3 };
// // Not working on Windows
// // const std::vector<cv::Mat> M_EACH_CAMERA{
// // INTRINSIC_1 * M_1_1,
// // INTRINSIC_2 * M_1_2,
// // INTRINSIC_3 * M_1_3
// // };
CameraParameterReader::CameraParameterReader()
{
}
void CameraParameterReader::readParameters(const std::string& cameraParameterPath,
const std::vector<std::string>& serialNumbers)
{
try
{
// Serial numbers
mSerialNumbers = serialNumbers;
// Common saving/loading
const std::string fileNameNoExtension{cameraParameterPath};
const auto dataFormat = DataFormat::Xml;
const std::vector<std::string> cvMatNames {
"CameraMatrix", "Intrinsics", "Distortion"
};
// // Temporary code to save the parameters in disk
// // #cameras
// mNumberCameras = INTRINSICS.size();
// // Camera matrices
// mCameraMatrices.resize(mNumberCameras);
// for (auto i = 0ull ; i < mNumberCameras ; i++)
// mCameraMatrices[i] = INTRINSICS.at(i) * M.at(i);
// // Not working on Windows
// // mCameraMatrices[i] = M_EACH_CAMERA.at(i);
// // Camera intrinsics
// mCameraIntrinsics.resize(mNumberCameras);
// for (auto i = 0ull ; i < mNumberCameras ; i++)
// mCameraIntrinsics[i] = INTRINSICS.at(i);
// // Camera extrinsics
// mCameraDistortions.resize(mNumberCameras);
// for (auto i = 0ull ; i < mNumberCameras ; i++)
// mCameraDistortions[i] = DISTORTIONS.at(i);
// Do not modify this code
const std::vector<cv::Mat> INTRINSICS{ INTRINSIC_1, INTRINSIC_2, INTRINSIC_3 };
const std::vector<cv::Mat> DISTORTIONS{ DISTORTION_1, DISTORTION_2, DISTORTION_3 };
const std::vector<cv::Mat> M{ M_1_1, M_1_2, M_1_3 };
// Not working on Windows
// const std::vector<cv::Mat> M_EACH_CAMERA{
// INTRINSIC_1 * M_1_1,
// INTRINSIC_2 * M_1_2,
// INTRINSIC_3 * M_1_3
// };
// // Saving
// for (auto i = 0ull ; i < mNumberCameras ; i++)
// {
// std::vector<cv::Mat> cameraParameters;
// cameraParameters.emplace_back(M.at(i));
// cameraParameters.emplace_back(mCameraIntrinsics.at(i));
// cameraParameters.emplace_back(mCameraDistortions.at(i));
// saveData(cameraParameters, cvMatNames, fileNameNoExtension + mSerialNumbers.at(i), dataFormat);
// }
// Load parameters
mNumberCameras = serialNumbers.size();
mCameraMatrices.clear();
mCameraIntrinsics.clear();
mCameraDistortions.clear();
// log("Camera matrices:");
for (auto i = 0ull ; i < mNumberCameras ; i++)
{
const auto parameterPath = fileNameNoExtension + mSerialNumbers.at(i);
const auto cameraParameters = loadData(cvMatNames, parameterPath, dataFormat);
const auto& intrinsics = cameraParameters.at(1);
// Error if empty element
if (cameraParameters.empty() || cameraParameters.at(0).empty()
|| cameraParameters.at(1).empty() || cameraParameters.at(2).empty())
{
const std::string errorMessage = " of the camera with serial number `" + mSerialNumbers[i]
+ "` (file: " + parameterPath + "." + dataFormatToString(dataFormat)
+ "). Is its format valid? You might want to check the example xml"
+ " file.";
if (cameraParameters.empty())
error("Error at reading the camera parameters" + errorMessage,
__LINE__, __FUNCTION__, __FILE__);
if (cameraParameters.at(0).empty())
error("Error at reading the camera matrix parameters" + errorMessage,
__LINE__, __FUNCTION__, __FILE__);
if (cameraParameters.at(1).empty())
error("Error at reading the camera intrinsics parameters" + errorMessage,
__LINE__, __FUNCTION__, __FILE__);
if (cameraParameters.at(2).empty())
error("Error at reading the camera distortion parameters" + errorMessage,
__LINE__, __FUNCTION__, __FILE__);
}
mCameraMatrices.emplace_back(intrinsics * cameraParameters.at(0));
mCameraIntrinsics.emplace_back(intrinsics);
mCameraDistortions.emplace_back(cameraParameters.at(2));
// log(cameraParameters.at(0));
}
// // mCameraMatrices
// log("\nFull camera matrices:");
// for (const auto& cvMat : mCameraMatrices)
// log(cvMat);
// // mCameraIntrinsics
// log("\nCamera intrinsic parameters:");
// for (const auto& cvMat : mCameraIntrinsics)
// log(cvMat);
// // mCameraDistortions
// log("\nCamera distortion parameters:");
// for (const auto& cvMat : mCameraDistortions)
// log(cvMat);
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
}
const cv::Mat getIntrinsics(const int cameraIndex)
unsigned long long CameraParameterReader::getNumberCameras() const
{
try
{
return INTRINSICS[cameraIndex];
return mNumberCameras;
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return cv::Mat{};
return 0ull;
}
}
const cv::Mat getDistorsion(const int cameraIndex)
const std::vector<cv::Mat>& CameraParameterReader::getCameraMatrices() const
{
try
{
return DISTORTIONS[cameraIndex];
return mCameraMatrices;
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return cv::Mat{};
return mCameraMatrices;
}
}
cv::Mat getM(const int cameraIndex)
const std::vector<cv::Mat>& CameraParameterReader::getCameraIntrinsics() const
{
try
{
return INTRINSICS.at(cameraIndex) * M.at(cameraIndex);
// Not working on Windows
// return M_EACH_CAMERA[cameraIndex];
return mCameraIntrinsics;
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return cv::Mat{};
return mCameraIntrinsics;
}
}
unsigned long long getNumberCameras()
const std::vector<cv::Mat>& CameraParameterReader::getCameraDistortions() const
{
try
{
return INTRINSICS.size();
return mCameraDistortions;
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return 0;
return mCameraDistortions;
}
}
}
......@@ -73,7 +73,7 @@ namespace op
// Access pointers
const T* sourcePtr = sourcePtrs[n];
T* tempTargetPtr;
if(n != 0)
if (n != 0)
tempTargetPtr = tempTargetPtrs[n-1].get();
else
tempTargetPtr = targetPtr;
......
......@@ -12,34 +12,9 @@ namespace op
const auto errorMessage = "Json format only implemented in OpenCV for versions >= 3.0. Check savePoseJson"
" instead.";
std::string dataFormatToString(const DataFormat format)
std::string getFullName(const std::string& fileNameNoExtension, const DataFormat dataFormat)
{
try
{
if (format == DataFormat::Json)
return "json";
else if (format == DataFormat::Xml)
return "xml";
else if (format == DataFormat::Yaml)
return "yaml";
else if (format == DataFormat::Yml)
return "yml";
else
{
error("Undefined DataFormat.", __LINE__, __FUNCTION__, __FILE__);
return "";
}
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return "";
}
}
std::string getFullName(const std::string& fileNameNoExtension, const DataFormat format)
{
return fileNameNoExtension + "." + dataFormatToString(format);
return fileNameNoExtension + "." + dataFormatToString(dataFormat);
}
void addKeypointsToJson(JsonOfstream& jsonOfstream,
......@@ -150,6 +125,31 @@ namespace op
// Public classes (on *.hpp)
std::string dataFormatToString(const DataFormat dataFormat)
{
try
{
if (dataFormat == DataFormat::Json)
return "json";
else if (dataFormat == DataFormat::Xml)
return "xml";
else if (dataFormat == DataFormat::Yaml)
return "yaml";
else if (dataFormat == DataFormat::Yml)
return "yml";
else
{
error("Undefined DataFormat.", __LINE__, __FUNCTION__, __FILE__);
return "";
}
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return "";
}
}
DataFormat stringToDataFormat(const std::string& dataFormat)
{
try
......@@ -164,7 +164,7 @@ namespace op
return DataFormat::Yml;
else
{
error("String does not correspond to any format (json, xml, yaml, yml)",
error("String does not correspond to any known format (json, xml, yaml, yml)",
__LINE__, __FUNCTION__, __FILE__);
return DataFormat::Json;
}
......@@ -204,17 +204,18 @@ namespace op
}
void saveData(const std::vector<cv::Mat>& cvMats, const std::vector<std::string>& cvMatNames,
const std::string& fileNameNoExtension, const DataFormat format)
const std::string& fileNameNoExtension, const DataFormat dataFormat)
{
try
{
// Security checks
if (format == DataFormat::Json && CV_MAJOR_VERSION < 3)
if (dataFormat == DataFormat::Json && CV_MAJOR_VERSION < 3)
error(errorMessage, __LINE__, __FUNCTION__, __FILE__);
if (cvMats.size() != cvMatNames.size())
error("cvMats.size() != cvMatNames.size()", __LINE__, __FUNCTION__, __FILE__);
error("cvMats.size() != cvMatNames.size() (" + std::to_string(cvMats.size())
+ " vs. " + std::to_string(cvMatNames.size()) + ")", __LINE__, __FUNCTION__, __FILE__);
// Save cv::Mat data
cv::FileStorage fileStorage{getFullName(fileNameNoExtension, format), cv::FileStorage::WRITE};
cv::FileStorage fileStorage{getFullName(fileNameNoExtension, dataFormat), cv::FileStorage::WRITE};
for (auto i = 0u ; i < cvMats.size() ; i++)
fileStorage << cvMatNames[i] << (cvMats[i].empty() ? cv::Mat() : cvMats[i]);
// Release file
......@@ -227,11 +228,12 @@ namespace op
}
void saveData(const cv::Mat& cvMat, const std::string cvMatName, const std::string& fileNameNoExtension,
const DataFormat format)
const DataFormat dataFormat)
{
try
{
saveData(std::vector<cv::Mat>{cvMat}, std::vector<std::string>{cvMatName}, fileNameNoExtension, format);
saveData(std::vector<cv::Mat>{cvMat}, std::vector<std::string>{cvMatName}, fileNameNoExtension,
dataFormat);
}
catch (const std::exception& e)
{
......@@ -240,14 +242,19 @@ namespace op
}
std::vector<cv::Mat> loadData(const std::vector<std::string>& cvMatNames, const std::string& fileNameNoExtension,
const DataFormat format)
const DataFormat dataFormat)
{
try
{
if (format == DataFormat::Json && CV_MAJOR_VERSION < 3)
// Security checks
if (dataFormat == DataFormat::Json && CV_MAJOR_VERSION < 3)
error(errorMessage, __LINE__, __FUNCTION__, __FILE__);
cv::FileStorage fileStorage{getFullName(fileNameNoExtension, format), cv::FileStorage::READ};
const auto fileName = getFullName(fileNameNoExtension, dataFormat);
// Security checks
if (!existFile(fileName))
error("File to be read does not exist: " + fileName + ".", __LINE__, __FUNCTION__, __FILE__);
// Read file
cv::FileStorage fileStorage{fileName, cv::FileStorage::READ};
std::vector<cv::Mat> cvMats(cvMatNames.size());
for (auto i = 0u ; i < cvMats.size() ; i++)
fileStorage[cvMatNames[i]] >> cvMats[i];
......@@ -261,11 +268,11 @@ namespace op
}
}
cv::Mat loadData(const std::string& cvMatName, const std::string& fileNameNoExtension, const DataFormat format)
cv::Mat loadData(const std::string& cvMatName, const std::string& fileNameNoExtension, const DataFormat dataFormat)
{
try
{
return loadData(std::vector<std::string>{cvMatName}, fileNameNoExtension, format)[0];
return loadData(std::vector<std::string>{cvMatName}, fileNameNoExtension, dataFormat)[0];
}
catch (const std::exception& e)
{
......
......@@ -4,6 +4,7 @@ set(SOURCES_OP_PRODUCER
imageDirectoryReader.cpp
ipCameraReader.cpp
producer.cpp
spinnakerWrapper.cpp
videoCaptureReader.cpp
videoReader.cpp
webcamReader.cpp)
......
#include <opencv2/imgproc/imgproc.hpp> // cv::undistort
#ifdef WITH_FLIR_CAMERA
#include <Spinnaker.h>
#endif
#include <openpose/3d/cameraParameterReader.hpp>
#include <openpose/filestream/fileStream.hpp>
#include <openpose/utilities/fastMath.hpp>
#include <openpose/utilities/fileSystem.hpp>
#include <openpose/utilities/string.hpp>
#include <openpose/producer/flirReader.hpp>
namespace op
{
#ifdef WITH_FLIR_CAMERA
/*
* This function converts from a Spinnaker::ImagePtr container to the cv::Mat container used in OpenCV.
*/
cv::Mat flirReaderToCvMat(const Spinnaker::ImagePtr &imagePtr)
{
try
{
const auto XPadding = imagePtr->GetXPadding();
const auto YPadding = imagePtr->GetYPadding();
const auto rowsize = imagePtr->GetWidth();
const auto colsize = imagePtr->GetHeight();
// Image data contains padding. When allocating cv::Mat container size, you need to account for the X,Y
// image data padding.
return cv::Mat((int)(colsize + YPadding), (int)(rowsize + XPadding), CV_8UC3, imagePtr->GetData(),
imagePtr->GetStride());
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return cv::Mat();
}
}
// This function configures the camera to use a trigger. First, trigger mode is
// set to off in order to select the trigger source. Once the trigger source
// has been selected, trigger mode is then enabled, which has the camera
// capture only a single image upon the execution of the chosen trigger.
int configureTrigger(Spinnaker::GenApi::INodeMap &iNodeMap)
{
try
{
int result = 0;
log("*** CONFIGURING TRIGGER ***", Priority::High);
log("Configuring hardware trigger...", Priority::High);
// Ensure trigger mode off
// *** NOTES ***
// The trigger must be disabled in order to configure whether the source
// is software or hardware.
Spinnaker::GenApi::CEnumerationPtr ptrTriggerMode = iNodeMap.GetNode("TriggerMode");
if (!Spinnaker::GenApi::IsAvailable(ptrTriggerMode) || !Spinnaker::GenApi::IsReadable(ptrTriggerMode))
error("Unable to disable trigger mode (node retrieval). Aborting...",
__LINE__, __FUNCTION__, __FILE__);
Spinnaker::GenApi::CEnumEntryPtr ptrTriggerModeOff = ptrTriggerMode->GetEntryByName("Off");
if (!Spinnaker::GenApi::IsAvailable(ptrTriggerModeOff)
|| !Spinnaker::GenApi::IsReadable(ptrTriggerModeOff))
error("Unable to disable trigger mode (enum entry retrieval). Aborting...",
__LINE__, __FUNCTION__, __FILE__);
ptrTriggerMode->SetIntValue(ptrTriggerModeOff->GetValue());
log("Trigger mode disabled...", Priority::High);
// Select trigger source
// *** NOTES ***
// The trigger source must be set to hardware or software while trigger
// mode is off.
Spinnaker::GenApi::CEnumerationPtr ptrTriggerSource = iNodeMap.GetNode("TriggerSource");
if (!Spinnaker::GenApi::IsAvailable(ptrTriggerSource)
|| !Spinnaker::GenApi::IsWritable(ptrTriggerSource))
error("Unable to set trigger mode (node retrieval). Aborting...",
__LINE__, __FUNCTION__, __FILE__);
// Set trigger mode to hardware ('Line0')
Spinnaker::GenApi::CEnumEntryPtr ptrTriggerSourceHardware = ptrTriggerSource->GetEntryByName("Line0");
if (!Spinnaker::GenApi::IsAvailable(ptrTriggerSourceHardware)
|| !Spinnaker::GenApi::IsReadable(ptrTriggerSourceHardware))
error("Unable to set trigger mode (enum entry retrieval). Aborting...",
__LINE__, __FUNCTION__, __FILE__);
ptrTriggerSource->SetIntValue(ptrTriggerSourceHardware->GetValue());
log("Trigger source set to hardware...", Priority::High);
// Turn trigger mode on
// *** LATER ***
// Once the appropriate trigger source has been set, turn trigger mode
// on in order to retrieve images using the trigger.
Spinnaker::GenApi::CEnumEntryPtr ptrTriggerModeOn = ptrTriggerMode->GetEntryByName("On");
if (!Spinnaker::GenApi::IsAvailable(ptrTriggerModeOn)
|| !Spinnaker::GenApi::IsReadable(ptrTriggerModeOn))
{
error("Unable to enable trigger mode (enum entry retrieval). Aborting...",
__LINE__, __FUNCTION__, __FILE__);
return -1;
}
ptrTriggerMode->SetIntValue(ptrTriggerModeOn->GetValue());
log("Trigger mode turned back on...", Priority::High);
return result;
}
catch (const Spinnaker::Exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return -1;
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return -1;
}
}
// This function returns the camera to a normal state by turning off trigger
// mode.
int resetTrigger(Spinnaker::GenApi::INodeMap &iNodeMap)
{
try
{
int result = 0;
//
// Turn trigger mode back off
//
// *** NOTES ***
// Once all images have been captured, turn trigger mode back off to
// restore the camera to a clean state.
//
Spinnaker::GenApi::CEnumerationPtr ptrTriggerMode = iNodeMap.GetNode("TriggerMode");
if (!Spinnaker::GenApi::IsAvailable(ptrTriggerMode)
|| !Spinnaker::GenApi::IsReadable(ptrTriggerMode))
error("Unable to disable trigger mode (node retrieval). Non-fatal error...",
__LINE__, __FUNCTION__, __FILE__);
Spinnaker::GenApi::CEnumEntryPtr ptrTriggerModeOff = ptrTriggerMode->GetEntryByName("Off");
if (!Spinnaker::GenApi::IsAvailable(ptrTriggerModeOff)
|| !Spinnaker::GenApi::IsReadable(ptrTriggerModeOff))
error("Unable to disable trigger mode (enum entry retrieval). Non-fatal error...",
__LINE__, __FUNCTION__, __FILE__);
ptrTriggerMode->SetIntValue(ptrTriggerModeOff->GetValue());
// log("Trigger mode disabled...", Priority::High);
return result;
}
catch (Spinnaker::Exception &e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return -1;
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return -1;
}
}
// This function acquires and displays images from each device.
std::vector<cv::Mat> acquireImages(Spinnaker::CameraList &cameraList)
FlirReader::FlirReader(const std::string& cameraParametersPath) :
Producer{ProducerType::FlirCamera},
mSpinnakerWrapper{cameraParametersPath + "flir/"},
mFrameNameCounter{0}
{
try
{
try
{
// Security checks
if ((unsigned long long) cameraList.GetSize() != getNumberCameras())
error("The number of cameras must be the same as the INTRINSICS vector size.",
__LINE__, __FUNCTION__, __FILE__);
std::vector<cv::Mat> cvMats;
// Retrieve, convert, and return an image for each camera
// In order to work with simultaneous camera streams, nested loops are
// needed. It is important that the inner loop be the one iterating
// through the cameras; otherwise, all images will be grabbed from a
// single camera before grabbing any images from another.
// Get cameras
std::vector<Spinnaker::CameraPtr> cameraPtrs(cameraList.GetSize());
for (auto i = 0u; i < cameraPtrs.size(); i++)
cameraPtrs.at(i) = cameraList.GetByIndex(i);
std::vector<Spinnaker::ImagePtr> imagePtrs(cameraPtrs.size());
// Getting frames
// Retrieve next received image and ensure image completion
// Spinnaker::ImagePtr imagePtr = cameraPtrs.at(i)->GetNextImage();
// Clean buffer + retrieve next received image + ensure image completion
auto durationMs = 0.;
// for (auto counter = 0 ; counter < 10 ; counter++)
while (durationMs < 1.)
{
const auto begin = std::chrono::high_resolution_clock::now();
for (auto i = 0u; i < cameraPtrs.size(); i++)
imagePtrs.at(i) = cameraPtrs.at(i)->GetNextImage();
durationMs = std::chrono::duration_cast<std::chrono::nanoseconds>(
std::chrono::high_resolution_clock::now()-begin
).count() * 1e-6;
// log("Time extraction (ms): " + std::to_string(durationMs), Priority::High);
}
// Original format -> RGB8
bool imagesExtracted = true;
for (auto& imagePtr : imagePtrs)
{
if (imagePtr->IsIncomplete())
{
log("Image incomplete with image status " + std::to_string(imagePtr->GetImageStatus()) + "...",
Priority::High, __LINE__, __FUNCTION__, __FILE__);
imagesExtracted = false;
break;
}
else
{
// Print image information
// Convert image to RGB
// Interpolation methods
// http://softwareservices.ptgrey.com/Spinnaker/latest/group___spinnaker_defs.html
// DEFAULT Default method.
// NO_COLOR_PROCESSING No color processing.
// NEAREST_NEIGHBOR Fastest but lowest quality. Equivalent to FLYCAPTURE_NEAREST_NEIGHBOR_FAST
// in FlyCapture.
// EDGE_SENSING Weights surrounding pixels based on localized edge orientation.
// HQ_LINEAR Well-balanced speed and quality.
// RIGOROUS Slowest but produces good results.
// IPP Multi-threaded with similar results to edge sensing.
// DIRECTIONAL_FILTER Best quality but much faster than rigorous.
// Colors
// http://softwareservices.ptgrey.com/Spinnaker/latest/group___camera_defs__h.html#ggabd5af55aaa20bcb0644c46241c2cbad1a33a1c8a1f6dbcb4a4eaaaf6d4d7ff1d1
// PixelFormat_BGR8
// Time tests
// const auto reps = 1e3;
// // const auto reps = 1e2; // for RIGOROUS & DIRECTIONAL_FILTER
// const auto begin = std::chrono::high_resolution_clock::now();
// for (auto asdf = 0 ; asdf < reps ; asdf++){
// ~ 1.5 ms but pixeled
// imagePtr = imagePtr->Convert(Spinnaker::PixelFormat_BGR8, Spinnaker::DEFAULT);
// ~0.5 ms but BW
// imagePtr = imagePtr->Convert(Spinnaker::PixelFormat_BGR8, Spinnaker::NO_COLOR_PROCESSING);
// ~6 ms, looks as good as best
imagePtr = imagePtr->Convert(Spinnaker::PixelFormat_BGR8, Spinnaker::HQ_LINEAR);
// ~2 ms default << edge << best
// imagePtr = imagePtr->Convert(Spinnaker::PixelFormat_BGR8, Spinnaker::EDGE_SENSING);
// ~115, too slow
// imagePtr = imagePtr->Convert(Spinnaker::PixelFormat_BGR8, Spinnaker::RIGOROUS);
// ~2 ms, slightly worse than HQ_LINEAR
// imagePtr = imagePtr->Convert(Spinnaker::PixelFormat_BGR8, Spinnaker::IPP);
// ~30 ms, ideally best quality?
// imagePtr = imagePtr->Convert(Spinnaker::PixelFormat_BGR8, Spinnaker::DIRECTIONAL_FILTER);
// imagePtr = imagePtr;
// }
// durationMs = std::chrono::duration_cast<std::chrono::nanoseconds>(
// std::chrono::high_resolution_clock::now()-begin
// ).count() * 1e-6;
// log("Time conversion (ms): " + std::to_string(durationMs / reps), Priority::High);
}
}
// Convert to cv::Mat
if (imagesExtracted)
{
for (auto i = 0u; i < imagePtrs.size(); i++)
{
// Baseline
// cvMats.emplace_back(flirReaderToCvMat(imagePtrs.at(i)).clone());
// Undistort
// http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#undistort
auto auxCvMat = flirReaderToCvMat(imagePtrs.at(i));
cvMats.emplace_back();
cv::undistort(auxCvMat, cvMats[i], getIntrinsics(i), getDistorsion(i));
}
}
return cvMats;
}
catch (Spinnaker::Exception &e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return {};
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return {};
}
const auto resolution = mSpinnakerWrapper.getResolution();
set(CV_CAP_PROP_FRAME_WIDTH, resolution.x);
set(CV_CAP_PROP_FRAME_HEIGHT, resolution.y);
}
// This function prints the device information of the camera from the transport
// layer; please see NodeMapInfo example for more in-depth comments on printing
// device information from the nodemap.
int printDeviceInfo(Spinnaker::GenApi::INodeMap &iNodeMap, const unsigned int camNum)
catch (const std::exception& e)
{
int result = 0;
log("Printing device information for camera " + std::to_string(camNum) + "...\n", Priority::High);
Spinnaker::GenApi::FeatureList_t features;
Spinnaker::GenApi::CCategoryPtr cCategoryPtr = iNodeMap.GetNode("DeviceInformation");
if (Spinnaker::GenApi::IsAvailable(cCategoryPtr) && Spinnaker::GenApi::IsReadable(cCategoryPtr))
{
cCategoryPtr->GetFeatures(features);
Spinnaker::GenApi::FeatureList_t::const_iterator it;
for (it = features.begin(); it != features.end(); ++it)
{
Spinnaker::GenApi::CNodePtr pfeatureNode = *it;
const auto cValuePtr = (Spinnaker::GenApi::CValuePtr)pfeatureNode;
log(pfeatureNode->GetName() + " : " +
(IsReadable(cValuePtr) ? cValuePtr->ToString() : "Node not readable"), Priority::High);
}
}
else
log("Device control information not available.", Priority::High);
log(" ", Priority::High);
return result;
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
#else
const std::string WITH_FLIR_CAMERA_ERROR{"OpenPose CMake must be compiled with the `WITH_FLIR_CAMERA` flag in"
" order to use the FLIR camera. Alternatively, disable `--flir_camera`."};
#endif
struct FlirReader::ImplFlirReader
{
#ifdef WITH_FLIR_CAMERA
bool mInitialized;
Spinnaker::CameraList mCameraList;
Spinnaker::SystemPtr mSystemPtr;
ImplFlirReader() :
mInitialized{false}
{
}
#endif
};
FlirReader::FlirReader() :
Producer{ProducerType::FlirCamera},
#ifdef WITH_FLIR_CAMERA
upImpl{new ImplFlirReader{}},
#endif
mFrameNameCounter{0}
{
#ifdef WITH_FLIR_CAMERA
try
{
upImpl->mInitialized = true;
// Print application build information
log(std::string{ "Application build date: " } + __DATE__ + " " + __TIME__, Priority::High);
// Retrieve singleton reference to upImpl->mSystemPtr object
upImpl->mSystemPtr = Spinnaker::System::GetInstance();
// Retrieve list of cameras from the upImpl->mSystemPtr
upImpl->mCameraList = upImpl->mSystemPtr->GetCameras();
unsigned int numCameras = upImpl->mCameraList.GetSize();
log("Number of cameras detected: " + std::to_string(numCameras), Priority::High);
// Finish if there are no cameras
if (numCameras == 0)
{
// Clear camera list before releasing upImpl->mSystemPtr
upImpl->mCameraList.Clear();
// Release upImpl->mSystemPtr
upImpl->mSystemPtr->ReleaseInstance();
log("Not enough cameras!\nPress Enter to exit...", Priority::High);
getchar();
error("No cameras detected.", __LINE__, __FUNCTION__, __FILE__);
}
log("Camera system initialized...", Priority::High);
//
// Retrieve transport layer nodemaps and print device information for
// each camera
//
// *** NOTES ***
// This example retrieves information from the transport layer nodemap
// twice: once to print device information and once to grab the device
// serial number. Rather than caching the nodemap, each nodemap is
// retrieved both times as needed.
//
log("\n*** DEVICE INFORMATION ***\n", Priority::High);
for (int i = 0; i < upImpl->mCameraList.GetSize(); i++)
{
// Select camera
auto cameraPtr = upImpl->mCameraList.GetByIndex(i);
// Retrieve TL device nodemap
auto& iNodeMapTLDevice = cameraPtr->GetTLDeviceNodeMap();
// Print device information
auto result = printDeviceInfo(iNodeMapTLDevice, i);
if (result < 0)
error("Result > 0, error " + std::to_string(result) + " occurred...",
__LINE__, __FUNCTION__, __FILE__);
}
for (auto i = 0; i < upImpl->mCameraList.GetSize(); i++)
{
// Select camera
auto cameraPtr = upImpl->mCameraList.GetByIndex(i);
// Initialize each camera
// You may notice that the steps in this function have more loops with
// fewer steps per loop; this contrasts with the acquireImages() function,
// which has fewer loops but more steps per loop. This is done for
// demonstrative purposes as both work equally well.
// Later: Each camera needs to be deinitialized once all images have been
// acquired.
cameraPtr->Init();
// Retrieve GenICam nodemap
// auto& iNodeMap = cameraPtr->GetNodeMap();
// // Configure trigger
// result = configureTrigger(iNodeMap);
// if (result < 0)
// error("Result > 0, error " + std::to_string(result) + " occurred...",
// __LINE__, __FUNCTION__, __FILE__);
// // Configure chunk data
// result = configureChunkData(iNodeMap);
// if (result < 0)
// return result;
// Remove buffer --> Always get newest frame
Spinnaker::GenApi::INodeMap& snodeMap = cameraPtr->GetTLStreamNodeMap();
Spinnaker::GenApi::CEnumerationPtr ptrBufferHandlingMode = snodeMap.GetNode(
"StreamBufferHandlingMode");
if (!Spinnaker::GenApi::IsAvailable(ptrBufferHandlingMode)
|| !Spinnaker::GenApi::IsWritable(ptrBufferHandlingMode))
error("Unable to change buffer handling mode", __LINE__, __FUNCTION__, __FILE__);
Spinnaker::GenApi::CEnumEntryPtr ptrBufferHandlingModeNewest = ptrBufferHandlingMode->GetEntryByName(
"NewestFirstOverwrite");
if (!Spinnaker::GenApi::IsAvailable(ptrBufferHandlingModeNewest)
|| !IsReadable(ptrBufferHandlingModeNewest))
error("Unable to set buffer handling mode to newest (entry 'NewestFirstOverwrite' retrieval)."
" Aborting...", __LINE__, __FUNCTION__, __FILE__);
int64_t bufferHandlingModeNewest = ptrBufferHandlingModeNewest->GetValue();
ptrBufferHandlingMode->SetIntValue(bufferHandlingModeNewest);
}
// Prepare each camera to acquire images
//
// *** NOTES ***
// For pseudo-simultaneous streaming, each camera is prepared as if it
// were just one, but in a loop. Notice that cameras are selected with
// an index. We demonstrate pseudo-simultaneous streaming because true
// simultaneous streaming would require multiple process or threads,
// which is too complex for an example.
//
// Serial numbers are the only persistent objects we gather in this
// example, which is why a std::vector is created.
std::vector<Spinnaker::GenICam::gcstring> strSerialNumbers(upImpl->mCameraList.GetSize());
for (auto i = 0u; i < strSerialNumbers.size(); i++)
{
// Select camera
auto cameraPtr = upImpl->mCameraList.GetByIndex(i);
// Set acquisition mode to continuous
Spinnaker::GenApi::CEnumerationPtr ptrAcquisitionMode = cameraPtr->GetNodeMap().GetNode(
"AcquisitionMode");
if (!Spinnaker::GenApi::IsAvailable(ptrAcquisitionMode)
|| !Spinnaker::GenApi::IsWritable(ptrAcquisitionMode))
error("Unable to set acquisition mode to continuous (node retrieval; camera "
+ std::to_string(i) + "). Aborting...", __LINE__, __FUNCTION__, __FILE__);
Spinnaker::GenApi::CEnumEntryPtr ptrAcquisitionModeContinuous = ptrAcquisitionMode->GetEntryByName(
"Continuous");
if (!Spinnaker::GenApi::IsAvailable(ptrAcquisitionModeContinuous)
|| !Spinnaker::GenApi::IsReadable(ptrAcquisitionModeContinuous))
error("Unable to set acquisition mode to continuous (entry 'continuous' retrieval "
+ std::to_string(i) + "). Aborting...", __LINE__, __FUNCTION__, __FILE__);
int64_t acquisitionModeContinuous = ptrAcquisitionModeContinuous->GetValue();
ptrAcquisitionMode->SetIntValue(acquisitionModeContinuous);
log("Camera " + std::to_string(i) + " acquisition mode set to continuous...", Priority::High);
// Begin acquiring images
cameraPtr->BeginAcquisition();
log("Camera " + std::to_string(i) + " started acquiring images...", Priority::High);
// Retrieve device serial number for filename
strSerialNumbers[i] = "";
Spinnaker::GenApi::CStringPtr ptrStringSerial = cameraPtr->GetTLDeviceNodeMap().GetNode(
"DeviceSerialNumber"
);
if (Spinnaker::GenApi::IsAvailable(ptrStringSerial)
&& Spinnaker::GenApi::IsReadable(ptrStringSerial))
{
strSerialNumbers[i] = ptrStringSerial->GetValue();
log("Camera " + std::to_string(i) + " serial number set to "
+ strSerialNumbers[i].c_str() + "...", Priority::High);
}
log(" ", Priority::High);
}
const auto cvMats = getRawFrames();
// Security checks
if (cvMats.empty())
error("Cameras could not be opened.", __LINE__, __FUNCTION__, __FILE__);
// Get resolution
else
{
set(CV_CAP_PROP_FRAME_WIDTH, cvMats[0].cols);
set(CV_CAP_PROP_FRAME_HEIGHT, cvMats[0].rows);
}
log("\nRunning for all cameras...\n\n*** IMAGE ACQUISITION ***\n", Priority::High);
}
catch (const Spinnaker::Exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
#else
error(WITH_FLIR_CAMERA_ERROR, __LINE__, __FUNCTION__, __FILE__);
#endif
}
FlirReader::~FlirReader()
......@@ -556,10 +37,7 @@ namespace op
{
try
{
std::vector<cv::Mat> cameraMatrices(getNumberCameras());
for (auto i = 0ull ; i < cameraMatrices.size() ; i++)
cameraMatrices[i] = getM(i);
return cameraMatrices;
return mSpinnakerWrapper.getCameraMatrices();
}
catch (const std::exception& e)
{
......@@ -570,92 +48,48 @@ namespace op
std::string FlirReader::getFrameName()
{
const auto stringLength = 12u;
return toFixedLengthString( fastMax(0ll, longLongRound(mFrameNameCounter)), stringLength);
try
{
const auto stringLength = 12u;
return toFixedLengthString( fastMax(0ll, longLongRound(mFrameNameCounter)), stringLength);
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return "";
}
}
void FlirReader::release()
bool FlirReader::isOpened() const
{
#ifdef WITH_FLIR_CAMERA
try
{
if (upImpl->mInitialized)
{
// End acquisition for each camera
// Notice that what is usually a one-step process is now two steps
// because of the additional step of selecting the camera. It is worth
// repeating that camera selection needs to be done once per loop.
// It is possible to interact with cameras through the camera list with
// GetByIndex(); this is an alternative to retrieving cameras as
// Spinnaker::CameraPtr objects that can be quick and easy for small tasks.
//
for (auto i = 0; i < upImpl->mCameraList.GetSize(); i++)
upImpl->mCameraList.GetByIndex(i)->EndAcquisition();
for (auto i = 0; i < upImpl->mCameraList.GetSize(); i++)
{
// Select camera
auto cameraPtr = upImpl->mCameraList.GetByIndex(i);
// Retrieve GenICam nodemap
auto& iNodeMap = cameraPtr->GetNodeMap();
// // Disable chunk data
// result = disableChunkData(iNodeMap);
// // if (result < 0)
// // return result;
// Reset trigger
auto result = resetTrigger(iNodeMap);
if (result < 0)
error("Error happened..." + std::to_string(result), __LINE__, __FUNCTION__, __FILE__);
// Deinitialize each camera
// Each camera must be deinitialized separately by first
// selecting the camera and then deinitializing it.
cameraPtr->DeInit();
}
log("FLIR (Point-grey) capture completed. Releasing cameras...", Priority::High);
// Clear camera list before releasing upImpl->mSystemPtr
upImpl->mCameraList.Clear();
// Release upImpl->mSystemPtr
upImpl->mSystemPtr->ReleaseInstance();
}
try
{
return mSpinnakerWrapper.isOpened();
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return false;
}
}
log("Cameras released! Exiting program.", Priority::High);
}
catch (const Spinnaker::Exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
#endif
void FlirReader::release()
{
try
{
mSpinnakerWrapper.release();
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
}
cv::Mat FlirReader::getRawFrame()
{
try
{
#ifdef WITH_FLIR_CAMERA
try
{
return acquireImages(upImpl->mCameraList).at(0);
}
catch (const Spinnaker::Exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return cv::Mat();
}
#else
error(WITH_FLIR_CAMERA_ERROR, __LINE__, __FUNCTION__, __FILE__);
return cv::Mat();
#endif
return mSpinnakerWrapper.getRawFrames().at(0);
}
catch (const std::exception& e)
{
......@@ -668,20 +102,7 @@ namespace op
{
try
{
#ifdef WITH_FLIR_CAMERA
try
{
return acquireImages(upImpl->mCameraList);
}
catch (const Spinnaker::Exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return {};
}
#else
error(WITH_FLIR_CAMERA_ERROR, __LINE__, __FUNCTION__, __FILE__);
return {};
#endif
return mSpinnakerWrapper.getRawFrames();
}
catch (const std::exception& e)
{
......
......@@ -53,7 +53,15 @@ namespace op
std::string ImageDirectoryReader::getFrameName()
{
return getFileNameNoExtension(mFilePaths.at(mFrameNameCounter));
try
{
return getFileNameNoExtension(mFilePaths.at(mFrameNameCounter));
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return "";
}
}
cv::Mat ImageDirectoryReader::getRawFrame()
......
#include <opencv2/imgproc/imgproc.hpp> // cv::undistort
#ifdef WITH_FLIR_CAMERA
#include <Spinnaker.h>
#endif
#include <openpose/3d/cameraParameterReader.hpp>
#include <openpose/producer/spinnakerWrapper.hpp>
namespace op
{
#ifdef WITH_FLIR_CAMERA
    /*
     * This function converts a Spinnaker::ImagePtr into the cv::Mat container used by OpenCV.
     */
cv::Mat spinnakerWrapperToCvMat(const Spinnaker::ImagePtr &imagePtr)
{
try
{
            const auto XPadding = imagePtr->GetXPadding();
            const auto YPadding = imagePtr->GetYPadding();
            const auto rowsize = imagePtr->GetWidth();   // image width, i.e. number of cv::Mat columns
            const auto colsize = imagePtr->GetHeight();  // image height, i.e. number of cv::Mat rows
// Image data contains padding. When allocating cv::Mat container size, you need to account for the X,Y
// image data padding.
return cv::Mat((int)(colsize + YPadding), (int)(rowsize + XPadding), CV_8UC3, imagePtr->GetData(),
imagePtr->GetStride());
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return cv::Mat();
}
}
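    // Note (usage sketch, not code from this commit): the cv::Mat returned above wraps
    // pixel memory still owned by the Spinnaker::ImagePtr, so it must be deep-copied
    // before that ImagePtr is released or reused, e.g.:
    //     const cv::Mat shallowView = spinnakerWrapperToCvMat(imagePtr);
    //     const cv::Mat ownedCopy = shallowView.clone(); // still valid after imagePtr->Release()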
// This function configures the camera to use a trigger. First, trigger mode is
// set to off in order to select the trigger source. Once the trigger source
// has been selected, trigger mode is then enabled, which has the camera
// capture only a single image upon the execution of the chosen trigger.
int configureTrigger(Spinnaker::GenApi::INodeMap &iNodeMap)
{
try
{
int result = 0;
log("*** CONFIGURING TRIGGER ***", Priority::High);
log("Configuring hardware trigger...", Priority::High);
// Ensure trigger mode off
// *** NOTES ***
// The trigger must be disabled in order to configure whether the source
// is software or hardware.
Spinnaker::GenApi::CEnumerationPtr ptrTriggerMode = iNodeMap.GetNode("TriggerMode");
if (!Spinnaker::GenApi::IsAvailable(ptrTriggerMode) || !Spinnaker::GenApi::IsReadable(ptrTriggerMode))
error("Unable to disable trigger mode (node retrieval). Aborting...",
__LINE__, __FUNCTION__, __FILE__);
Spinnaker::GenApi::CEnumEntryPtr ptrTriggerModeOff = ptrTriggerMode->GetEntryByName("Off");
if (!Spinnaker::GenApi::IsAvailable(ptrTriggerModeOff)
|| !Spinnaker::GenApi::IsReadable(ptrTriggerModeOff))
error("Unable to disable trigger mode (enum entry retrieval). Aborting...",
__LINE__, __FUNCTION__, __FILE__);
ptrTriggerMode->SetIntValue(ptrTriggerModeOff->GetValue());
log("Trigger mode disabled...", Priority::High);
// Select trigger source
// *** NOTES ***
// The trigger source must be set to hardware or software while trigger
// mode is off.
Spinnaker::GenApi::CEnumerationPtr ptrTriggerSource = iNodeMap.GetNode("TriggerSource");
if (!Spinnaker::GenApi::IsAvailable(ptrTriggerSource)
|| !Spinnaker::GenApi::IsWritable(ptrTriggerSource))
error("Unable to set trigger mode (node retrieval). Aborting...",
__LINE__, __FUNCTION__, __FILE__);
            // Set trigger source to hardware ('Line0')
Spinnaker::GenApi::CEnumEntryPtr ptrTriggerSourceHardware = ptrTriggerSource->GetEntryByName("Line0");
if (!Spinnaker::GenApi::IsAvailable(ptrTriggerSourceHardware)
|| !Spinnaker::GenApi::IsReadable(ptrTriggerSourceHardware))
error("Unable to set trigger mode (enum entry retrieval). Aborting...",
__LINE__, __FUNCTION__, __FILE__);
ptrTriggerSource->SetIntValue(ptrTriggerSourceHardware->GetValue());
log("Trigger source set to hardware...", Priority::High);
// Turn trigger mode on
// *** LATER ***
// Once the appropriate trigger source has been set, turn trigger mode
// on in order to retrieve images using the trigger.
Spinnaker::GenApi::CEnumEntryPtr ptrTriggerModeOn = ptrTriggerMode->GetEntryByName("On");
if (!Spinnaker::GenApi::IsAvailable(ptrTriggerModeOn)
|| !Spinnaker::GenApi::IsReadable(ptrTriggerModeOn))
{
error("Unable to enable trigger mode (enum entry retrieval). Aborting...",
__LINE__, __FUNCTION__, __FILE__);
return -1;
}
ptrTriggerMode->SetIntValue(ptrTriggerModeOn->GetValue());
log("Trigger mode turned back on...", Priority::High);
return result;
}
catch (const Spinnaker::Exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return -1;
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return -1;
}
}
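    // A minimal sketch of how configureTrigger() could be applied to every detected
    // camera before acquisition starts (assumes each camera has already been Init()'d;
    // in this file the equivalent call remains commented out in the constructor):
    //     for (auto i = 0; i < cameraList.GetSize(); i++)
    //     {
    //         auto cameraPtr = cameraList.GetByIndex(i);
    //         if (configureTrigger(cameraPtr->GetNodeMap()) < 0)
    //             error("Trigger configuration failed.", __LINE__, __FUNCTION__, __FILE__);
    //     }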
// This function returns the camera to a normal state by turning off trigger
// mode.
int resetTrigger(Spinnaker::GenApi::INodeMap &iNodeMap)
{
try
{
int result = 0;
//
// Turn trigger mode back off
//
// *** NOTES ***
// Once all images have been captured, turn trigger mode back off to
// restore the camera to a clean state.
//
Spinnaker::GenApi::CEnumerationPtr ptrTriggerMode = iNodeMap.GetNode("TriggerMode");
if (!Spinnaker::GenApi::IsAvailable(ptrTriggerMode)
|| !Spinnaker::GenApi::IsReadable(ptrTriggerMode))
error("Unable to disable trigger mode (node retrieval). Non-fatal error...",
__LINE__, __FUNCTION__, __FILE__);
Spinnaker::GenApi::CEnumEntryPtr ptrTriggerModeOff = ptrTriggerMode->GetEntryByName("Off");
if (!Spinnaker::GenApi::IsAvailable(ptrTriggerModeOff)
|| !Spinnaker::GenApi::IsReadable(ptrTriggerModeOff))
error("Unable to disable trigger mode (enum entry retrieval). Non-fatal error...",
__LINE__, __FUNCTION__, __FILE__);
ptrTriggerMode->SetIntValue(ptrTriggerModeOff->GetValue());
// log("Trigger mode disabled...", Priority::High);
return result;
}
catch (Spinnaker::Exception &e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return -1;
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return -1;
}
}
    // This function acquires, converts to BGR8, and undistorts images from each device.
std::vector<cv::Mat> acquireImages(Spinnaker::CameraList &cameraList,
const std::vector<cv::Mat>& cameraIntrinsics,
const std::vector<cv::Mat>& cameraDistorsions)
{
try
{
std::vector<cv::Mat> cvMats;
// Retrieve, convert, and return an image for each camera
// In order to work with simultaneous camera streams, nested loops are
// needed. It is important that the inner loop be the one iterating
// through the cameras; otherwise, all images will be grabbed from a
// single camera before grabbing any images from another.
// Get cameras
std::vector<Spinnaker::CameraPtr> cameraPtrs(cameraList.GetSize());
for (auto i = 0u; i < cameraPtrs.size(); i++)
cameraPtrs.at(i) = cameraList.GetByIndex(i);
std::vector<Spinnaker::ImagePtr> imagePtrs(cameraPtrs.size());
// Getting frames
// Retrieve next received image and ensure image completion
// Spinnaker::ImagePtr imagePtr = cameraPtrs.at(i)->GetNextImage();
// Clean buffer + retrieve next received image + ensure image completion
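            // Note: stale frames already buffered by the driver return almost immediately,
            // while a fresh frame has to wait for the sensor; looping until one full round
            // of GetNextImage() calls takes at least ~1 ms is a heuristic for "buffer
            // drained, these frames are current".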
auto durationMs = 0.;
// for (auto counter = 0 ; counter < 10 ; counter++)
while (durationMs < 1.)
{
const auto begin = std::chrono::high_resolution_clock::now();
for (auto i = 0u; i < cameraPtrs.size(); i++)
imagePtrs.at(i) = cameraPtrs.at(i)->GetNextImage();
durationMs = std::chrono::duration_cast<std::chrono::nanoseconds>(
std::chrono::high_resolution_clock::now()-begin
).count() * 1e-6;
// log("Time extraction (ms): " + std::to_string(durationMs), Priority::High);
}
            // Original format -> BGR8
bool imagesExtracted = true;
for (auto& imagePtr : imagePtrs)
{
if (imagePtr->IsIncomplete())
{
log("Image incomplete with image status " + std::to_string(imagePtr->GetImageStatus()) + "...",
Priority::High, __LINE__, __FUNCTION__, __FILE__);
imagesExtracted = false;
break;
}
else
{
                    // Convert image to BGR8
// Interpolation methods
// http://softwareservices.ptgrey.com/Spinnaker/latest/group___spinnaker_defs.html
// DEFAULT Default method.
// NO_COLOR_PROCESSING No color processing.
// NEAREST_NEIGHBOR Fastest but lowest quality. Equivalent to
// FLYCAPTURE_NEAREST_NEIGHBOR_FAST in FlyCapture.
// EDGE_SENSING Weights surrounding pixels based on localized edge orientation.
// HQ_LINEAR Well-balanced speed and quality.
// RIGOROUS Slowest but produces good results.
// IPP Multi-threaded with similar results to edge sensing.
// DIRECTIONAL_FILTER Best quality but much faster than rigorous.
// Colors
// http://softwareservices.ptgrey.com/Spinnaker/latest/group___camera_defs__h.html#ggabd5af55aaa20bcb0644c46241c2cbad1a33a1c8a1f6dbcb4a4eaaaf6d4d7ff1d1
// PixelFormat_BGR8
// Time tests
// const auto reps = 1e3;
// // const auto reps = 1e2; // for RIGOROUS & DIRECTIONAL_FILTER
// const auto begin = std::chrono::high_resolution_clock::now();
// for (auto asdf = 0 ; asdf < reps ; asdf++){
// ~ 1.5 ms but pixeled
// imagePtr = imagePtr->Convert(Spinnaker::PixelFormat_BGR8, Spinnaker::DEFAULT);
// ~0.5 ms but BW
// imagePtr = imagePtr->Convert(Spinnaker::PixelFormat_BGR8, Spinnaker::NO_COLOR_PROCESSING);
// ~6 ms, looks as good as best
imagePtr = imagePtr->Convert(Spinnaker::PixelFormat_BGR8, Spinnaker::HQ_LINEAR);
// ~2 ms default << edge << best
// imagePtr = imagePtr->Convert(Spinnaker::PixelFormat_BGR8, Spinnaker::EDGE_SENSING);
// ~115, too slow
// imagePtr = imagePtr->Convert(Spinnaker::PixelFormat_BGR8, Spinnaker::RIGOROUS);
// ~2 ms, slightly worse than HQ_LINEAR
// imagePtr = imagePtr->Convert(Spinnaker::PixelFormat_BGR8, Spinnaker::IPP);
// ~30 ms, ideally best quality?
// imagePtr = imagePtr->Convert(Spinnaker::PixelFormat_BGR8, Spinnaker::DIRECTIONAL_FILTER);
// imagePtr = imagePtr;
// }
// durationMs = std::chrono::duration_cast<std::chrono::nanoseconds>(
// std::chrono::high_resolution_clock::now()-begin
// ).count() * 1e-6;
// log("Time conversion (ms): " + std::to_string(durationMs / reps), Priority::High);
}
}
// Convert to cv::Mat
if (imagesExtracted)
{
for (auto i = 0u; i < imagePtrs.size(); i++)
{
// Baseline
// cvMats.emplace_back(spinnakerWrapperToCvMat(imagePtrs.at(i)).clone());
// Undistort
// http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#undistort
auto auxCvMat = spinnakerWrapperToCvMat(imagePtrs.at(i));
cvMats.emplace_back();
cv::undistort(auxCvMat, cvMats[i], cameraIntrinsics[i], cameraDistorsions[i]);
}
}
return cvMats;
}
catch (Spinnaker::Exception &e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return {};
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return {};
}
}
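    // The cameraIntrinsics/cameraDistorsions arguments above are read at runtime from
    // models/cameraParameters/flir/{serial_number}.xml. A minimal sketch of such a read
    // with cv::FileStorage (the XML tag names below are an assumption; copy the exact
    // format from the shipped 17012332.xml.example):
    //     cv::FileStorage fs{"models/cameraParameters/flir/17012332.xml", cv::FileStorage::READ};
    //     cv::Mat extrinsics, intrinsics, distortion;
    //     fs["CameraMatrix"] >> extrinsics; // 3x4 pose w.r.t. the reference camera
    //     fs["Intrinsics"] >> intrinsics;   // 3x3 camera matrix
    //     fs["Distortion"] >> distortion;   // 4-, 5- or 8-parameter coefficients
    //     fs.release();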
// This function prints the device information of the camera from the transport
// layer; please see NodeMapInfo example for more in-depth comments on printing
// device information from the nodemap.
int printDeviceInfo(Spinnaker::GenApi::INodeMap &iNodeMap, const unsigned int camNum)
{
int result = 0;
log("Printing device information for camera " + std::to_string(camNum) + "...\n", Priority::High);
Spinnaker::GenApi::FeatureList_t features;
Spinnaker::GenApi::CCategoryPtr cCategoryPtr = iNodeMap.GetNode("DeviceInformation");
if (Spinnaker::GenApi::IsAvailable(cCategoryPtr) && Spinnaker::GenApi::IsReadable(cCategoryPtr))
{
cCategoryPtr->GetFeatures(features);
Spinnaker::GenApi::FeatureList_t::const_iterator it;
for (it = features.begin(); it != features.end(); ++it)
{
Spinnaker::GenApi::CNodePtr pfeatureNode = *it;
const auto cValuePtr = (Spinnaker::GenApi::CValuePtr)pfeatureNode;
log(pfeatureNode->GetName() + " : " +
(IsReadable(cValuePtr) ? cValuePtr->ToString() : "Node not readable"), Priority::High);
}
}
else
log("Device control information not available.", Priority::High);
log(" ", Priority::High);
return result;
}
#else
const std::string WITH_FLIR_CAMERA_ERROR{"OpenPose CMake must be compiled with the `WITH_FLIR_CAMERA` flag in"
" order to use the FLIR camera. Alternatively, disable `--flir_camera`."};
#endif
struct SpinnakerWrapper::ImplSpinnakerWrapper
{
#ifdef WITH_FLIR_CAMERA
bool mInitialized;
CameraParameterReader mCameraParameterReader;
Point<int> mResolution;
Spinnaker::CameraList mCameraList;
Spinnaker::SystemPtr mSystemPtr;
ImplSpinnakerWrapper() :
mInitialized{false}
{
}
#endif
};
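    // Note: this PIMPL ("upImpl") idiom keeps all Spinnaker types out of the public
    // spinnakerWrapper.hpp header, so code that includes it still compiles when
    // WITH_FLIR_CAMERA is disabled and the Spinnaker SDK is not installed.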
SpinnakerWrapper::SpinnakerWrapper(const std::string& cameraParameterPath) :
#ifdef WITH_FLIR_CAMERA
upImpl{new ImplSpinnakerWrapper{}}
#endif
{
#ifdef WITH_FLIR_CAMERA
try
{
upImpl->mInitialized = true;
// Print application build information
log(std::string{ "Application build date: " } + __DATE__ + " " + __TIME__, Priority::High);
// Retrieve singleton reference to upImpl->mSystemPtr object
upImpl->mSystemPtr = Spinnaker::System::GetInstance();
// Retrieve list of cameras from the upImpl->mSystemPtr
upImpl->mCameraList = upImpl->mSystemPtr->GetCameras();
unsigned int numCameras = upImpl->mCameraList.GetSize();
log("Number of cameras detected: " + std::to_string(numCameras), Priority::High);
// Finish if there are no cameras
if (numCameras == 0)
{
// Clear camera list before releasing upImpl->mSystemPtr
upImpl->mCameraList.Clear();
// Release upImpl->mSystemPtr
upImpl->mSystemPtr->ReleaseInstance();
log("Not enough cameras!\nPress Enter to exit...", Priority::High);
getchar();
error("No cameras detected.", __LINE__, __FUNCTION__, __FILE__);
}
log("Camera system initialized...", Priority::High);
//
// Retrieve transport layer nodemaps and print device information for
// each camera
//
// *** NOTES ***
// This example retrieves information from the transport layer nodemap
// twice: once to print device information and once to grab the device
// serial number. Rather than caching the nodemap, each nodemap is
// retrieved both times as needed.
//
log("\n*** DEVICE INFORMATION ***\n", Priority::High);
for (int i = 0; i < upImpl->mCameraList.GetSize(); i++)
{
// Select camera
auto cameraPtr = upImpl->mCameraList.GetByIndex(i);
// Retrieve TL device nodemap
auto& iNodeMapTLDevice = cameraPtr->GetTLDeviceNodeMap();
// Print device information
auto result = printDeviceInfo(iNodeMapTLDevice, i);
if (result < 0)
error("Result > 0, error " + std::to_string(result) + " occurred...",
__LINE__, __FUNCTION__, __FILE__);
}
for (auto i = 0; i < upImpl->mCameraList.GetSize(); i++)
{
// Select camera
auto cameraPtr = upImpl->mCameraList.GetByIndex(i);
// Initialize each camera
                // You may notice that the steps in this function have more loops with
                // fewer steps per loop; this contrasts with the acquireImages() function,
                // which has fewer loops but more steps per loop. This is done for
                // demonstrative purposes, as both work equally well.
// Later: Each camera needs to be deinitialized once all images have been
// acquired.
cameraPtr->Init();
// Retrieve GenICam nodemap
// auto& iNodeMap = cameraPtr->GetNodeMap();
// // Configure trigger
// result = configureTrigger(iNodeMap);
// if (result < 0)
// error("Result > 0, error " + std::to_string(result) + " occurred...",
// __LINE__, __FUNCTION__, __FILE__);
// // Configure chunk data
// result = configureChunkData(iNodeMap);
// if (result < 0)
// return result;
// Remove buffer --> Always get newest frame
Spinnaker::GenApi::INodeMap& snodeMap = cameraPtr->GetTLStreamNodeMap();
Spinnaker::GenApi::CEnumerationPtr ptrBufferHandlingMode = snodeMap.GetNode(
"StreamBufferHandlingMode");
if (!Spinnaker::GenApi::IsAvailable(ptrBufferHandlingMode)
|| !Spinnaker::GenApi::IsWritable(ptrBufferHandlingMode))
error("Unable to change buffer handling mode", __LINE__, __FUNCTION__, __FILE__);
Spinnaker::GenApi::CEnumEntryPtr ptrBufferHandlingModeNewest = ptrBufferHandlingMode->GetEntryByName(
"NewestFirstOverwrite");
if (!Spinnaker::GenApi::IsAvailable(ptrBufferHandlingModeNewest)
|| !IsReadable(ptrBufferHandlingModeNewest))
error("Unable to set buffer handling mode to newest (entry 'NewestFirstOverwrite' retrieval)."
" Aborting...", __LINE__, __FUNCTION__, __FILE__);
int64_t bufferHandlingModeNewest = ptrBufferHandlingModeNewest->GetValue();
ptrBufferHandlingMode->SetIntValue(bufferHandlingModeNewest);
}
// Prepare each camera to acquire images
//
// *** NOTES ***
// For pseudo-simultaneous streaming, each camera is prepared as if it
// were just one, but in a loop. Notice that cameras are selected with
            // an index. We demonstrate pseudo-simultaneous streaming because true
            // simultaneous streaming would require multiple processes or threads,
            // which is too complex for an example.
//
// Serial numbers are the only persistent objects we gather in this
// example, which is why a std::vector is created.
std::vector<Spinnaker::GenICam::gcstring> strSerialNumbers(upImpl->mCameraList.GetSize());
for (auto i = 0u; i < strSerialNumbers.size(); i++)
{
// Select camera
auto cameraPtr = upImpl->mCameraList.GetByIndex(i);
// Set acquisition mode to continuous
Spinnaker::GenApi::CEnumerationPtr ptrAcquisitionMode = cameraPtr->GetNodeMap().GetNode(
"AcquisitionMode");
if (!Spinnaker::GenApi::IsAvailable(ptrAcquisitionMode)
|| !Spinnaker::GenApi::IsWritable(ptrAcquisitionMode))
error("Unable to set acquisition mode to continuous (node retrieval; camera "
+ std::to_string(i) + "). Aborting...", __LINE__, __FUNCTION__, __FILE__);
Spinnaker::GenApi::CEnumEntryPtr ptrAcquisitionModeContinuous = ptrAcquisitionMode->GetEntryByName(
"Continuous");
if (!Spinnaker::GenApi::IsAvailable(ptrAcquisitionModeContinuous)
|| !Spinnaker::GenApi::IsReadable(ptrAcquisitionModeContinuous))
error("Unable to set acquisition mode to continuous (entry 'continuous' retrieval "
+ std::to_string(i) + "). Aborting...", __LINE__, __FUNCTION__, __FILE__);
int64_t acquisitionModeContinuous = ptrAcquisitionModeContinuous->GetValue();
ptrAcquisitionMode->SetIntValue(acquisitionModeContinuous);
log("Camera " + std::to_string(i) + " acquisition mode set to continuous...", Priority::High);
// Begin acquiring images
cameraPtr->BeginAcquisition();
log("Camera " + std::to_string(i) + " started acquiring images...", Priority::High);
// Retrieve device serial number for filename
strSerialNumbers[i] = "";
Spinnaker::GenApi::CStringPtr ptrStringSerial = cameraPtr->GetTLDeviceNodeMap().GetNode(
"DeviceSerialNumber"
);
if (Spinnaker::GenApi::IsAvailable(ptrStringSerial)
&& Spinnaker::GenApi::IsReadable(ptrStringSerial))
{
strSerialNumbers[i] = ptrStringSerial->GetValue();
log("Camera " + std::to_string(i) + " serial number set to "
+ strSerialNumbers[i].c_str() + "...", Priority::High);
}
log(" ", Priority::High);
}
// Read camera parameters from SN
std::vector<std::string> serialNumbers(strSerialNumbers.size());
for (auto i = 0u ; i < serialNumbers.size() ; i++)
serialNumbers[i] = strSerialNumbers[i];
upImpl->mCameraParameterReader.readParameters(cameraParameterPath, serialNumbers);
// Get resolution + security checks
const auto cvMats = getRawFrames();
// Security checks
if (cvMats.empty())
error("Cameras could not be opened.", __LINE__, __FUNCTION__, __FILE__);
// Get resolution
else
upImpl->mResolution = Point<int>{cvMats[0].cols, cvMats[0].rows};
log("\nRunning for all cameras...\n\n*** IMAGE ACQUISITION ***\n", Priority::High);
}
catch (const Spinnaker::Exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
#else
error(WITH_FLIR_CAMERA_ERROR, __LINE__, __FUNCTION__, __FILE__);
#endif
}
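    // A minimal usage sketch for SpinnakerWrapper (the folder argument is an assumption
    // based on the default --camera_parameter_folder flag plus the flir/ subfolder):
    //     op::SpinnakerWrapper spinnakerWrapper{"models/cameraParameters/flir/"};
    //     if (spinnakerWrapper.isOpened())
    //     {
    //         // One undistorted cv::Mat per detected camera
    //         const auto frames = spinnakerWrapper.getRawFrames();
    //         const auto cameraMatrices = spinnakerWrapper.getCameraMatrices();
    //     }
    //     spinnakerWrapper.release(); // also triggered by the destructor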
SpinnakerWrapper::~SpinnakerWrapper()
{
try
{
release();
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
}
std::vector<cv::Mat> SpinnakerWrapper::getRawFrames()
{
try
{
#ifdef WITH_FLIR_CAMERA
try
{
// Security checks
if ((unsigned long long) upImpl->mCameraList.GetSize()
!= upImpl->mCameraParameterReader.getNumberCameras())
error("The number of cameras must be the same as the INTRINSICS vector size.",
__LINE__, __FUNCTION__, __FILE__);
return acquireImages(upImpl->mCameraList,
upImpl->mCameraParameterReader.getCameraIntrinsics(),
upImpl->mCameraParameterReader.getCameraDistortions());
}
catch (const Spinnaker::Exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return {};
}
#else
error(WITH_FLIR_CAMERA_ERROR, __LINE__, __FUNCTION__, __FILE__);
return {};
#endif
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return {};
}
}
std::vector<cv::Mat> SpinnakerWrapper::getCameraMatrices() const
{
try
{
#ifdef WITH_FLIR_CAMERA
return upImpl->mCameraParameterReader.getCameraMatrices();
#else
return {};
#endif
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return {};
}
}
Point<int> SpinnakerWrapper::getResolution() const
{
try
{
#ifdef WITH_FLIR_CAMERA
return upImpl->mResolution;
#else
return Point<int>{};
#endif
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return Point<int>{};
}
}
bool SpinnakerWrapper::isOpened() const
{
try
{
#ifdef WITH_FLIR_CAMERA
return upImpl->mInitialized;
#else
return false;
#endif
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
return false;
}
}
void SpinnakerWrapper::release()
{
#ifdef WITH_FLIR_CAMERA
try
{
if (upImpl->mInitialized)
{
// End acquisition for each camera
// Notice that what is usually a one-step process is now two steps
// because of the additional step of selecting the camera. It is worth
// repeating that camera selection needs to be done once per loop.
// It is possible to interact with cameras through the camera list with
// GetByIndex(); this is an alternative to retrieving cameras as
// Spinnaker::CameraPtr objects that can be quick and easy for small tasks.
//
for (auto i = 0; i < upImpl->mCameraList.GetSize(); i++)
upImpl->mCameraList.GetByIndex(i)->EndAcquisition();
for (auto i = 0; i < upImpl->mCameraList.GetSize(); i++)
{
// Select camera
auto cameraPtr = upImpl->mCameraList.GetByIndex(i);
// Retrieve GenICam nodemap
auto& iNodeMap = cameraPtr->GetNodeMap();
// // Disable chunk data
// result = disableChunkData(iNodeMap);
// // if (result < 0)
// // return result;
// Reset trigger
auto result = resetTrigger(iNodeMap);
if (result < 0)
error("Error happened..." + std::to_string(result), __LINE__, __FUNCTION__, __FILE__);
// Deinitialize each camera
// Each camera must be deinitialized separately by first
// selecting the camera and then deinitializing it.
cameraPtr->DeInit();
}
log("FLIR (Point-grey) capture completed. Releasing cameras...", Priority::High);
// Clear camera list before releasing upImpl->mSystemPtr
upImpl->mCameraList.Clear();
// Release upImpl->mSystemPtr
upImpl->mSystemPtr->ReleaseInstance();
// Setting the class as released
upImpl->mInitialized = false;
log("Cameras released! Exiting program.", Priority::High);
}
}
catch (const Spinnaker::Exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
catch (const std::exception& e)
{
error(e.what(), __LINE__, __FUNCTION__, __FILE__);
}
#endif
}
}
......@@ -137,7 +137,7 @@ namespace op
std::shared_ptr<Producer> flagsToProducer(const std::string& imageDirectory, const std::string& videoPath,
const std::string& ipCameraPath, const int webcamIndex,
const bool flirCamera, const std::string& webcamResolution,
const double webcamFps)
const double webcamFps, const std::string& cameraParameterPath)
{
try
{
......@@ -151,7 +151,7 @@ namespace op
else if (type == ProducerType::IPCamera)
return std::make_shared<IpCameraReader>(ipCameraPath);
else if (type == ProducerType::FlirCamera)
return std::make_shared<FlirReader>();
return std::make_shared<FlirReader>(cameraParameterPath);
else if (type == ProducerType::Webcam)
{
// cameraFrameSize
......
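A call-site sketch for the extended signature, assuming the usual OpenPose demo gflags names (`FLAGS_camera_parameter_folder` is the flag added by this commit; the other flag names are assumptions):

    const auto producerSharedPtr = op::flagsToProducer(
        FLAGS_image_dir, FLAGS_video, FLAGS_ip_camera, FLAGS_camera, FLAGS_flir_camera,
        FLAGS_camera_resolution, FLAGS_camera_fps, FLAGS_camera_parameter_folder);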