Commit f1e8f69d authored by Vadim Pisarevsky, committed by OpenCV Buildbot

Merge pull request #861 from albenoit:master

......@@ -21,7 +21,7 @@ In this tutorial you will learn how to:
General overview
================
The proposed model originates from Jeanny Herault's research at `Gipsa <http://www.gipsa-lab.inpg.fr>`_. It is used in image processing applications developed with the `Listic <http://www.listic.univ-savoie.fr>`_ (code maintainer) lab. This is not a complete model, but it already presents interesting properties that can be used for an enhanced image processing experience. The model allows the following human retina properties to be used:
The proposed model originates from Jeanny Herault's research [herault2010]_ at `Gipsa <http://www.gipsa-lab.inpg.fr>`_. It is used in image processing applications developed with the `Listic <http://www.listic.univ-savoie.fr>`_ (code maintainer and user) lab. This is not a complete model, but it already presents interesting properties that can be used for an enhanced image processing experience. The model allows the following human retina properties to be used:
* spectral whitening that has 3 important effects: cancellation of high spatio-temporal frequency signals (noise), enhancement of mid-frequency details and reduction of low-frequency luminance energy. This *all in one* property directly allows visual signals to be cleaned of the classical undesired distortions introduced by image sensors and the input luminance range.
......@@ -37,7 +37,7 @@ In the figure below, the OpenEXR image sample *CrissyField.exr*, a High Dynamic
:alt: A High dynamic range image linearly rescaled within range [0-255].
:align: center
In the following image, as your retina does, local luminance adaptation, spatial noise removal and spectral whitening work together and transmit accurate information on lower range 8bit data channels. In this picture, noise is significantly removed and local details hidden by strong luminance contrasts are enhanced. The output image keeps its naturalness and its visual content is enhanced.
In the following image, applying the ideas proposed in [benoit2010]_, as your retina does, local luminance adaptation, spatial noise removal and spectral whitening work together and transmit accurate information on lower range 8bit data channels. In this picture, noise is significantly removed and local details hidden by strong luminance contrasts are enhanced. The output image keeps its naturalness and its visual content is enhanced. Color processing is based on the color multiplexing/demultiplexing method proposed in [chaix2007]_.
.. image:: images/retina_TreeHdr_retina.jpg
:alt: A High dynamic range image compressed within range [0-255] using the retina.
......@@ -86,19 +86,23 @@ This model can be used basically for spatio-temporal video effects but also in t
* performing motion analysis that also benefits from the previously cited properties.
Literature
==========
For more information, refer to the following papers :
* Benoit A., Caplier A., Durette B., Herault, J., "Using Human Visual System Modeling For Bio-Inspired Low Level Image Processing", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773. DOI <http://dx.doi.org/10.1016/j.cviu.2010.01.011>
.. [benoit2010] Benoit A., Caplier A., Durette B., Herault, J., "Using Human Visual System Modeling For Bio-Inspired Low Level Image Processing", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773. DOI <http://dx.doi.org/10.1016/j.cviu.2010.01.011>
* Please also have a look at the reference work of Jeanny Herault, which you can read in his book:
Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing), by Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
.. [herault2010] Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing), by Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
This retina filter code includes the research contributions of PhD/research colleagues whose code has been rewritten by the author:
* take a look at the *retinacolor.hpp* module to discover Brice Chaix de Lavarene's PhD work on color mosaicing/demosaicing and his reference paper: B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
* take a look at the *retinacolor.hpp* module to discover Brice Chaix de Lavarene's PhD work on color mosaicing/demosaicing and his reference paper:
* take a look at *imagelogpolprojection.hpp* to discover the retina spatial log sampling which originates from Barthelemy Durette's PhD work with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions. ====> more information in the above cited Jeanny Herault's book.
.. [chaix2007] B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
* take a look at *imagelogpolprojection.hpp* to discover the retina spatial log sampling which originates from Barthelemy Durette's PhD work with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions. More information can be found in the above cited Jeanny Herault's book.
Code tutorial
=============
......@@ -229,68 +233,67 @@ Now, everything is ready to run the retina model. I propose here to allocate a r
.. code-block:: cpp
// pointer to a retina object
cv::Ptr<cv::Retina> myRetina;
// if the last parameter is 'log', then activate log sampling (favours foveal vision and subsamples peripheral vision)
if (useLogSampling)
{
    myRetina = new cv::Retina(inputFrame.size(), true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
}
else // -> else allocate a "classical" retina :
    myRetina = new cv::Retina(inputFrame.size());
// pointer to a retina object
cv::Ptr<cv::Retina> myRetina;
// if the last parameter is 'log', then activate log sampling (favours foveal vision and subsamples peripheral vision)
if (useLogSampling)
{
    myRetina = cv::createRetina(inputFrame.size(), true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
}
else // -> else allocate a "classical" retina :
    myRetina = cv::createRetina(inputFrame.size());
Once done, the proposed code writes a default xml file that contains the default parameters of the retina. This is useful to make your own configuration using this template. The generated template xml file is called *RetinaDefaultParameters.xml*.
.. code-block:: cpp
// save default retina parameters file in order to let you see this and maybe modify it and reload using method "setup"
myRetina->write("RetinaDefaultParameters.xml");
// save default retina parameters file in order to let you see this and maybe modify it and reload using method "setup"
myRetina->write("RetinaDefaultParameters.xml");
In the following line, the retina attempts to load another xml file called *RetinaSpecificParameters.xml*. If you created it and introduced your own setup, it will be loaded; otherwise, the default retina parameters are used.
.. code-block:: cpp
// load parameters if file exists
myRetina->setup("RetinaSpecificParameters.xml");
// load parameters if file exists
myRetina->setup("RetinaSpecificParameters.xml");
It is not required here, but just to show that it is possible, you can reset the retina buffers to zero to force the model to forget past events.
.. code-block:: cpp
// reset all retina buffers (imagine you close your eyes for a long time)
myRetina->clearBuffers();
// reset all retina buffers (imagine you close your eyes for a long time)
myRetina->clearBuffers();
Now, it is time to run the retina! First, create some output buffers ready to receive the two retina channel outputs.
.. code-block:: cpp
// declare retina output buffers
cv::Mat retinaOutput_parvo;
cv::Mat retinaOutput_magno;
// declare retina output buffers
cv::Mat retinaOutput_parvo;
cv::Mat retinaOutput_magno;
Then, run the retina in a loop, loading new frames from the video sequence if necessary, and get the retina outputs back into dedicated buffers.
.. code-block:: cpp
// processing loop with no stop condition
while(true)
{
    // if using a video stream, grab a new frame, otherwise the input remains the same
    if (videoCapture.isOpened())
        videoCapture>>inputFrame;
    // run retina filter on the loaded input frame
    myRetina->run(inputFrame);
    // retrieve and display retina outputs
    myRetina->getParvo(retinaOutput_parvo);
    myRetina->getMagno(retinaOutput_magno);
    cv::imshow("retina input", inputFrame);
    cv::imshow("Retina Parvo", retinaOutput_parvo);
    cv::imshow("Retina Magno", retinaOutput_magno);
    cv::waitKey(10);
}
// processing loop with no stop condition
while(true)
{
    // if using a video stream, grab a new frame, otherwise the input remains the same
    if (videoCapture.isOpened())
        videoCapture>>inputFrame;
    // run retina filter on the loaded input frame
    myRetina->run(inputFrame);
    // retrieve and display retina outputs
    myRetina->getParvo(retinaOutput_parvo);
    myRetina->getMagno(retinaOutput_magno);
    cv::imshow("retina input", inputFrame);
    cv::imshow("Retina Parvo", retinaOutput_parvo);
    cv::imshow("Retina Magno", retinaOutput_magno);
    cv::waitKey(10);
}
That's done! But if you want to secure the system, take care to manage exceptions. The retina can throw some when it is given irrelevant data (no input frame, wrong setup, etc.).
Then, I recommend surrounding all the retina code with a try/catch block, as sketched below:
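The exact try/catch sample is collapsed in this diff view, so here is only a minimal sketch of such a guard. It reuses the variable names from the snippets above; catching the exception by reference is an editorial choice, not necessarily the sample's exact code:

.. code-block:: cpp

try
{
    // allocation and processing code shown above goes here
    cv::Ptr<cv::Retina> myRetina = cv::createRetina(inputFrame.size());
    while(true)
    {
        if (videoCapture.isOpened())
            videoCapture>>inputFrame;
        myRetina->run(inputFrame);
        // ... retrieve and display the Parvo/Magno outputs as shown above ...
    }
}
catch(cv::Exception &e)
{
    std::cerr<<"Error using Retina : "<<e.what()<<std::endl;
}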
......@@ -323,29 +326,28 @@ Once done open the configuration file *RetinaDefaultParameters.xml* generated by
.. code-block:: cpp
<?xml version="1.0"?>
<opencv_storage>
<OPLandIPLparvo>
<colorMode>1</colorMode>
<normaliseOutput>1</normaliseOutput>
<photoreceptorsLocalAdaptationSensitivity>7.0e-01</photoreceptorsLocalAdaptationSensitivity>
<photoreceptorsTemporalConstant>5.0e-01</photoreceptorsTemporalConstant>
<photoreceptorsSpatialConstant>5.3e-01</photoreceptorsSpatialConstant>
<horizontalCellsGain>0.</horizontalCellsGain>
<hcellsTemporalConstant>1.</hcellsTemporalConstant>
<hcellsSpatialConstant>7.</hcellsSpatialConstant>
<ganglionCellsSensitivity>7.0e-01</ganglionCellsSensitivity></OPLandIPLparvo>
<IPLmagno>
<normaliseOutput>1</normaliseOutput>
<parasolCells_beta>0.</parasolCells_beta>
<parasolCells_tau>0.</parasolCells_tau>
<parasolCells_k>7.</parasolCells_k>
<amacrinCellsTemporalCutFrequency>1.2e+00</amacrinCellsTemporalCutFrequency>
<V0CompressionParameter>9.5e-01</V0CompressionParameter>
<localAdaptintegration_tau>0.</localAdaptintegration_tau>
<localAdaptintegration_k>7.</localAdaptintegration_k></IPLmagno>
</opencv_storage>
<?xml version="1.0"?>
<opencv_storage>
<OPLandIPLparvo>
<colorMode>1</colorMode>
<normaliseOutput>1</normaliseOutput>
<photoreceptorsLocalAdaptationSensitivity>7.5e-01</photoreceptorsLocalAdaptationSensitivity>
<photoreceptorsTemporalConstant>9.0e-01</photoreceptorsTemporalConstant>
<photoreceptorsSpatialConstant>5.7e-01</photoreceptorsSpatialConstant>
<horizontalCellsGain>0.01</horizontalCellsGain>
<hcellsTemporalConstant>0.5</hcellsTemporalConstant>
<hcellsSpatialConstant>7.</hcellsSpatialConstant>
<ganglionCellsSensitivity>7.5e-01</ganglionCellsSensitivity></OPLandIPLparvo>
<IPLmagno>
<normaliseOutput>1</normaliseOutput>
<parasolCells_beta>0.</parasolCells_beta>
<parasolCells_tau>0.</parasolCells_tau>
<parasolCells_k>7.</parasolCells_k>
<amacrinCellsTemporalCutFrequency>2.0e+00</amacrinCellsTemporalCutFrequency>
<V0CompressionParameter>9.5e-01</V0CompressionParameter>
<localAdaptintegration_tau>0.</localAdaptintegration_tau>
<localAdaptintegration_k>7.</localAdaptintegration_k></IPLmagno>
</opencv_storage>
Here are some hints, but the best parameter setup actually depends more on what you want to do with the retina than on the input images that you give to it. Apart from the more specific case of High Dynamic Range (HDR) images, which require a more specific setup for the targeted luminance compression, the retina behavior should be rather stable from content to content. Note that OpenCV is able to manage such HDR formats thanks to its OpenEXR image compatibility.
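For the HDR case specifically, a minimal tone mapping sketch could look as follows. This is an illustrative sketch only: *memorial.exr* is a hypothetical file name, OpenCV must have been built with OpenEXR support, and rescaling the floating point input to [0, 255] before calling run follows the approach of the dedicated HDR samples rather than a strict requirement of the API:

.. code-block:: cpp

// load the HDR image as-is (floating point values are preserved); "memorial.exr" is a hypothetical file name
cv::Mat hdrImage = cv::imread("memorial.exr", -1);
// bring the input into a [0, 255] range, as the HDR retina samples do
cv::normalize(hdrImage, hdrImage, 0.0, 255.0, cv::NORM_MINMAX);
// allocate a default color retina matching the input size
cv::Ptr<cv::Retina> hdrRetina = cv::createRetina(hdrImage.size());
// run several iterations so that the temporal filters converge on the still image
for (int i = 0; i < 20; ++i)
    hdrRetina->run(hdrImage);
// retrieve the 8-bit tone mapped details channel and save it
cv::Mat ldrOutput;
hdrRetina->getParvo(ldrOutput);
cv::imwrite("memorial_retina.png", ldrOutput);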
......@@ -381,7 +383,7 @@ This parameter set tunes the neural network connected to the photo-receptors, th
* **horizontalCellsGain** is a critical parameter! If you are not interested in the mean luminance and want to focus on detail enhancement, then set it to zero. But if you want to keep some environment luminance data, let some low spatial frequencies pass into the system by setting a higher value (<1).
* **hcellsTemporalConstant** similar to the photo-receptors, this acts on the temporal constant of a low pass temporal filter that smooths the input data. Here, a high value generates a strong retina after-effect while a lower value makes the retina more reactive.
* **hcellsTemporalConstant** similar to the photo-receptors, this acts on the temporal constant of a low pass temporal filter that smooths the input data. Here, a high value generates a strong retina after-effect while a lower value makes the retina more reactive. This value should be lower than **photoreceptorsTemporalConstant** to limit strong retina after-effects.
* **hcellsSpatialConstant** is the spatial constant of these cells' low pass filter. It specifies the lowest spatial frequency allowed in the following stages. Visually, a high value leads to very low spatial frequency processing and to salient halo effects. Lower values reduce this effect, but do not go lower than the value of **photoreceptorsSpatialConstant**. These 2 parameters actually specify the spatial band-pass of the retina; a call sketch using them is given right after this list.
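As an illustration of these constraints, the parvo channel can also be configured directly through the dedicated setup method declared in the retina header. The values below are illustrative only (they simply mirror the default parameter file shown earlier, with **horizontalCellsGain** set to zero to focus on details), not recommended settings:

.. code-block:: cpp

myRetina->setupOPLandIPLParvoChannel(
    true,   // colorMode
    true,   // normaliseOutput
    0.75f,  // photoreceptorsLocalAdaptationSensitivity
    0.9f,   // photoreceptorsTemporalConstant
    0.57f,  // photoreceptorsSpatialConstant
    0.0f,   // horizontalCellsGain : 0 => drop mean luminance, focus on details
    0.5f,   // HcellsTemporalConstant, kept below photoreceptorsTemporalConstant
    7.0f,   // HcellsSpatialConstant, kept above photoreceptorsSpatialConstant
    0.75f); // ganglionCellsSensitivity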
......
This diff is suppressed by .gitattributes.
......@@ -85,10 +85,8 @@ enum RETINA_COLORSAMPLINGMETHOD
RETINA_COLOR_BAYER//!< standard bayer sampling
};
class RetinaFilter;
/**
* @class Retina a wrapper class which allows the Gipsa/Listic Labs model to be used.
* @class Retina a wrapper class which allows the Gipsa/Listic Labs model to be used with OpenCV.
* This retina model allows spatio-temporal image processing (applied on still images, video sequences).
* As a summary, these are the retina model properties:
* => It applies a spectral whitening (mid-frequency details enhancement)
......@@ -110,7 +108,7 @@ class RetinaFilter;
* _take a look at imagelogpolprojection.hpp to discover the retina spatial log sampling which originates from Barthelemy Durette's PhD work with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
* ====> more information in the above cited Jeanny Herault's book.
*/
class CV_EXPORTS Retina {
class CV_EXPORTS Retina : public Algorithm {
public:
......@@ -119,13 +117,13 @@ public:
struct OPLandIplParvoParameters{ // Outer Plexiform Layer (OPL) and Inner Plexiform Layer Parvocellular (IplParvo) parameters
OPLandIplParvoParameters():colorMode(true),
normaliseOutput(true),
photoreceptorsLocalAdaptationSensitivity(0.7f),
photoreceptorsTemporalConstant(0.5f),
photoreceptorsLocalAdaptationSensitivity(0.75f),
photoreceptorsTemporalConstant(0.9f),
photoreceptorsSpatialConstant(0.53f),
horizontalCellsGain(0.0f),
hcellsTemporalConstant(1.f),
horizontalCellsGain(0.01f),
hcellsTemporalConstant(0.5f),
hcellsSpatialConstant(7.f),
ganglionCellsSensitivity(0.7f){};// default setup
ganglionCellsSensitivity(0.75f){};// default setup
bool colorMode, normaliseOutput;
float photoreceptorsLocalAdaptationSensitivity, photoreceptorsTemporalConstant, photoreceptorsSpatialConstant, horizontalCellsGain, hcellsTemporalConstant, hcellsSpatialConstant, ganglionCellsSensitivity;
};
......@@ -135,7 +133,7 @@ public:
parasolCells_beta(0.f),
parasolCells_tau(0.f),
parasolCells_k(7.f),
amacrinCellsTemporalCutFrequency(1.2f),
amacrinCellsTemporalCutFrequency(2.0f),
V0CompressionParameter(0.95f),
localAdaptintegration_tau(0.f),
localAdaptintegration_k(7.f){};// default setup
......@@ -146,34 +144,15 @@ public:
struct IplMagnoParameters IplMagno;
};
/**
* Main constructor with the most common use setup : creates an instance of a color ready retina model
* @param inputSize : the input frame size
*/
Retina(Size inputSize);
/**
* Complete Retina filter constructor which allows all basic structural parameters definition
* @param inputSize : the input frame size
* @param colorMode : the chosen processing mode : with or without color processing
* @param colorSamplingMethod: specifies which kind of color sampling will be used
* @param useRetinaLogSampling: activate retina log sampling, if true, the 2 following parameters can be used
* @param reductionFactor: only useful if param useRetinaLogSampling=true, specifies the reduction factor of the output frame (as the center (fovea) is high resolution and corners can be underscaled, a reduction of the output is allowed without precision loss)
* @param samplingStrenght: only useful if param useRetinaLogSampling=true, specifies the strength of the log scale that is applied
*/
Retina(Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
virtual ~Retina();
/**
* retrieve the retina input buffer size
*/
Size inputSize();
virtual Size getInputSize()=0;
/**
* retrieve the retina output buffer size
*/
Size outputSize();
virtual Size getOutputSize()=0;
/**
* try to open an XML retina parameters file to adjust current retina instance setup
......@@ -182,8 +161,7 @@ public:
* @param retinaParameterFile : the parameters filename
* @param applyDefaultSetupOnFailure : set to true if an error must be thrown on error
*/
void setup(String retinaParameterFile="", const bool applyDefaultSetupOnFailure=true);
virtual void setup(String retinaParameterFile="", const bool applyDefaultSetupOnFailure=true)=0;
/**
* try to open an XML retina parameters file to adjust current retina instance setup
......@@ -192,7 +170,7 @@ public:
* @param fs : the open Filestorage which contains retina parameters
* @param applyDefaultSetupOnFailure : set to true if an error must be thrown on error
*/
void setup(cv::FileStorage &fs, const bool applyDefaultSetupOnFailure=true);
virtual void setup(cv::FileStorage &fs, const bool applyDefaultSetupOnFailure=true)=0;
/**
* try to open an XML retina parameters file to adjust current retina instance setup
......@@ -201,31 +179,30 @@ public:
* @param newParameters : a parameters structures updated with the new target configuration
* @param applyDefaultSetupOnFailure : set to true if an error must be thrown on error
*/
void setup(RetinaParameters newParameters);
virtual void setup(RetinaParameters newParameters)=0;
/**
* @return the current parameters setup
*/
Retina::RetinaParameters getParameters();
* @return the current parameters setup
*/
virtual struct Retina::RetinaParameters getParameters()=0;
/**
* parameters setup display method
* @return a string which contains formatted parameters information
*/
const String printSetup();
virtual const String printSetup()=0;
/**
* write xml/yml formated parameters information
* @param fs : the filename of the xml file that will be opened and written with formatted parameters information
*/
virtual void write( String fs ) const;
virtual void write( String fs ) const=0;
/**
* write xml/yml formated parameters information
* @param fs : a cv::Filestorage object ready to be filled
*/
virtual void write( FileStorage& fs ) const;
virtual void write( FileStorage& fs ) const=0;
/**
* setup the OPL and IPL parvo channels (see biological model)
......@@ -242,7 +219,7 @@ public:
* @param HcellsSpatialConstant: the spatial constant of the first order low pass filter of the horizontal cells, use it to cut low spatial frequencies (local luminance), unit is pixels, typical value is 5 pixels, this value is also used for local contrast computing when computing the local contrast adaptation at the ganglion cells level (Inner Plexiform Layer parvocellular channel model)
* @param ganglionCellsSensitivity: the compression strength of the ganglion cells' local adaptation output, set a value between 160 and 250 for best results, a high value increases the low value sensitivity more... and the output saturates faster, recommended value: 230
*/
void setupOPLandIPLParvoChannel(const bool colorMode=true, const bool normaliseOutput = true, const float photoreceptorsLocalAdaptationSensitivity=0.7, const float photoreceptorsTemporalConstant=0.5, const float photoreceptorsSpatialConstant=0.53, const float horizontalCellsGain=0, const float HcellsTemporalConstant=1, const float HcellsSpatialConstant=7, const float ganglionCellsSensitivity=0.7);
virtual void setupOPLandIPLParvoChannel(const bool colorMode=true, const bool normaliseOutput = true, const float photoreceptorsLocalAdaptationSensitivity=0.7, const float photoreceptorsTemporalConstant=0.5, const float photoreceptorsSpatialConstant=0.53, const float horizontalCellsGain=0, const float HcellsTemporalConstant=1, const float HcellsSpatialConstant=7, const float ganglionCellsSensitivity=0.7)=0;
/**
* set parameters values for the Inner Plexiform Layer (IPL) magnocellular channel
......@@ -256,41 +233,41 @@ public:
* @param localAdaptintegration_tau: specifies the temporal constant of the low pass filter involved in the computation of the local "motion mean" for the local adaptation computation
* @param localAdaptintegration_k: specifies the spatial constant of the low pass filter involved in the computation of the local "motion mean" for the local adaptation computation
*/
void setupIPLMagnoChannel(const bool normaliseOutput = true, const float parasolCells_beta=0, const float parasolCells_tau=0, const float parasolCells_k=7, const float amacrinCellsTemporalCutFrequency=1.2, const float V0CompressionParameter=0.95, const float localAdaptintegration_tau=0, const float localAdaptintegration_k=7);
virtual void setupIPLMagnoChannel(const bool normaliseOutput = true, const float parasolCells_beta=0, const float parasolCells_tau=0, const float parasolCells_k=7, const float amacrinCellsTemporalCutFrequency=1.2, const float V0CompressionParameter=0.95, const float localAdaptintegration_tau=0, const float localAdaptintegration_k=7)=0;
/**
* method which allows the retina to be applied to an input image; after run, the encapsulated retina module is ready to deliver its outputs using dedicated accessors, see getParvo and getMagno methods
* @param inputImage : the input cv::Mat image to be processed, can be gray level or BGR coded in any format (from 8bit to 16bits)
*/
void run(const Mat &inputImage);
virtual void run(InputArray inputImage)=0;
/**
* accessor of the details channel of the retina (models foveal vision)
* @param retinaOutput_parvo : the output buffer (reallocated if necessary), this output is rescaled for standard 8bits image processing use in OpenCV
*/
void getParvo(Mat &retinaOutput_parvo);
virtual void getParvo(OutputArray retinaOutput_parvo)=0;
/**
* accessor of the details channel of the retina (models foveal vision)
* @param retinaOutput_parvo : the output buffer (reallocated if necessary), this output is the original retina filter model output, without any quantification or rescaling
* @param retinaOutput_parvo : a cv::Mat header filled with the internal parvo buffer of the retina module. This output is the original retina filter model output, without any quantification or rescaling
*/
void getParvo(std::valarray<float> &retinaOutput_parvo);
virtual void getParvoRAW(OutputArray retinaOutput_parvo)=0;
/**
* accessor of the motion channel of the retina (models peripheral vision)
* @param retinaOutput_magno : the output buffer (reallocated if necessary), this output is rescaled for standard 8bits image processing use in OpenCV
*/
void getMagno(Mat &retinaOutput_magno);
virtual void getMagno(OutputArray retinaOutput_magno)=0;
/**
* accessor of the motion channel of the retina (models peripheral vision)
* @param retinaOutput_magno : the output buffer (reallocated if necessary), this output is the original retina filter model output, without any quantification or rescaling
* @param retinaOutput_magno : a cv::Mat header filled with the internal retina magno buffer of the retina module. This output is the original retina filter model output, without any quantification or rescaling
*/
void getMagno(std::valarray<float> &retinaOutput_magno);
virtual void getMagnoRAW(OutputArray retinaOutput_magno)=0;
// original API level data accessors : get buffers addresses...
const std::valarray<float> & getMagno() const;
const std::valarray<float> & getParvo() const;
// original API level data accessors : get buffers addresses from a Mat header, similar to getParvoRAW and getMagnoRAW...
virtual const Mat getMagnoRAW() const=0;
virtual const Mat getParvoRAW() const=0;
/**
* activate color saturation as the final step of the color demultiplexing process
......@@ -298,58 +275,27 @@ public:
* @param saturateColors: boolean that activates color saturation (if true) or deactivates it (if false)
* @param colorSaturationValue: the saturation factor
*/
void setColorSaturation(const bool saturateColors=true, const float colorSaturationValue=4.0);
virtual void setColorSaturation(const bool saturateColors=true, const float colorSaturationValue=4.0)=0;
/**
* clear all retina buffers (equivalent to opening the eyes after a long period of eye closure ;o)
*/
void clearBuffers();
virtual void clearBuffers()=0;
/**
* Activate/deactivate the Magnocellular pathway processing (motion information extraction); by default, it is activated
* @param activate: true if Magnocellular output should be activated, false if not
*/
void activateMovingContoursProcessing(const bool activate);
virtual void activateMovingContoursProcessing(const bool activate)=0;
/**
* Activate/deactivate the Parvocellular pathway processing (contours information extraction); by default, it is activated
* @param activate: true if Parvocellular (contours information extraction) output should be activated, false if not
*/
void activateContoursProcessing(const bool activate);
protected:
// Parameteres setup members
RetinaParameters _retinaParameters; // structure of parameters
// Retina model related modules
std::valarray<float> _inputBuffer; //!< buffer used to convert input cv::Mat to internal retina buffers format (valarrays)
// pointer to retina model
RetinaFilter* _retinaFilter; //!< the pointer to the retina module, allocated with instance construction
/**
* exports a valarray buffer coming from HVStools objects to a cv::Mat in CV_8UC1 (gray level picture) or CV_8UC3 (color) format
* @param grayMatrixToConvert the valarray to export to OpenCV
* @param nbRows : the number of rows of the flattened valarray matrix
* @param nbColumns : the number of columns of the flattened valarray matrix
* @param colorMode : a flag which mentions if matrix is color (true) or graylevel (false)
* @param outBuffer : the output matrix which is reallocated to satisfy Retina output buffer dimensions
*/
void _convertValarrayBuffer2cvMat(const std::valarray<float> &grayMatrixToConvert, const unsigned int nbRows, const unsigned int nbColumns, const bool colorMode, Mat &outBuffer);
/**
*
* @param inputMatToConvert : the OpenCV cv::Mat that has to be converted to gray or RGB valarray buffer that will be processed by the retina model
* @param outputValarrayMatrix : the output valarray
* @return the input image color mode (color=true, gray levels=false)
*/
bool _convertCvMat2ValarrayBuffer(const cv::Mat inputMatToConvert, std::valarray<float> &outputValarrayMatrix);
//! private method called by constructors, gathers their parameters and use them in a unified way
void _init(const Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
virtual void activateContoursProcessing(const bool activate)=0;
};
CV_EXPORTS Ptr<Retina> createRetina(Size inputSize);
CV_EXPORTS Ptr<Retina> createRetina(Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
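// Minimal usage sketch of the factory based API above (illustrative only, it simply
// mirrors the tutorial snippets; the variable 'frame' is a placeholder for any input cv::Mat):
//   cv::Ptr<cv::Retina> retina = cv::createRetina(frame.size());
//   retina->run(frame);               // feed a new frame to the model
//   cv::Mat parvo, magno;
//   retina->getParvo(parvo);          // details channel (foveal vision)
//   retina->getMagno(magno);          // motion channel (peripheral vision)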
}
#endif /* __OPENCV_CONTRIB_RETINA_HPP__ */
......
This diff is collapsed.
......@@ -338,8 +338,11 @@ void RetinaColor::runColorDemultiplexing(const std::valarray<float> &multiplexed
}
// compute the gradient of the luminance
#ifdef MAKE_PARALLEL // compute the luminance gradient in parallel with the dedicated Parallel_computeGradient body
cv::parallel_for_(cv::Range(2,_filterOutput.getNBrows()-2), Parallel_computeGradient(_filterOutput.getNBcolumns(), _filterOutput.getNBrows(), &(*_luminance)[0], &_imageGradient[0]));
#else
_computeGradient(&(*_luminance)[0]);
#endif
// adaptively filter the submosaics to get the adaptive densities, here the buffer _chrominance is used as a temp buffer
_adaptiveSpatialLPfilter(&_RGBmosaic[0], &_chrominance[0]);
_adaptiveSpatialLPfilter(&_RGBmosaic[0]+_filterOutput.getNBpixels(), &_chrominance[0]+_filterOutput.getNBpixels());
......
......@@ -333,6 +333,53 @@ namespace cv
}
}
};
class Parallel_computeGradient: public cv::ParallelLoopBody
{
private:
    float *imageGradient;
    const float *luminance;
    unsigned int nbColumns, doubleNbColumns, nbRows, nbPixels;
public:
    Parallel_computeGradient(const unsigned int nbCols, const unsigned int nbRws, const float *lum, float *imageGrad)
        :imageGradient(imageGrad), luminance(lum), nbColumns(nbCols), doubleNbColumns(2*nbCols), nbRows(nbRws), nbPixels(nbRws*nbCols){};
    virtual void operator()( const Range& r ) const {
        for (int idLine=r.start;idLine!=r.end;++idLine)
        {
            for (unsigned int idColumn=2;idColumn<nbColumns-2;++idColumn)
            {
                const unsigned int pixelIndex=idColumn+nbColumns*idLine;
                // horizontal and vertical local gradients
                const float verticalGrad=fabs(luminance[pixelIndex+nbColumns]-luminance[pixelIndex-nbColumns]);
                const float horizontalGrad=fabs(luminance[pixelIndex+1]-luminance[pixelIndex-1]);
                // neighborhood horizontal and vertical gradients
                const float verticalGrad_p=fabs(luminance[pixelIndex]-luminance[pixelIndex-doubleNbColumns]);
                const float horizontalGrad_p=fabs(luminance[pixelIndex]-luminance[pixelIndex-2]);
                const float verticalGrad_n=fabs(luminance[pixelIndex+doubleNbColumns]-luminance[pixelIndex]);
                const float horizontalGrad_n=fabs(luminance[pixelIndex+2]-luminance[pixelIndex]);
                const float horizontalGradient=0.5f*horizontalGrad+0.25f*(horizontalGrad_p+horizontalGrad_n);
                const float verticalGradient=0.5f*verticalGrad+0.25f*(verticalGrad_p+verticalGrad_n);
                // compare local gradient means and fill the appropriate filtering coefficient value that will be used in adaptive filters
                if (horizontalGradient<verticalGradient)
                {
                    imageGradient[pixelIndex+nbPixels]=0.06f;
                    imageGradient[pixelIndex]=0.57f;
                }
                else
                {
                    imageGradient[pixelIndex+nbPixels]=0.57f;
                    imageGradient[pixelIndex]=0.06f;
                }
            }
        }
    }
};
#endif
};
}
......
......@@ -211,10 +211,10 @@ static void drawPlot(const cv::Mat curve, const std::string figureTitle, const i
*/
if (useLogSampling)
{
retina = new cv::Retina(inputImage.size(),true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
retina = cv::createRetina(inputImage.size(),true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
}
else// -> else allocate "classical" retina :
retina = new cv::Retina(inputImage.size());
retina = cv::createRetina(inputImage.size());
// save default retina parameters file in order to let you see this and maybe modify it and reload using method "setup"
retina->write("RetinaDefaultParameters.xml");
......
......@@ -280,10 +280,10 @@ static void loadNewFrame(const std::string filenamePrototype, const int currentF
*/
if (useLogSampling)
{
retina = new cv::Retina(inputImage.size(),true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
retina = cv::createRetina(inputImage.size(),true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
}
else// -> else allocate "classical" retina :
retina = new cv::Retina(inputImage.size());
retina = cv::createRetina(inputImage.size());
// save default retina parameters file in order to let you see this and maybe modify it and reload using method "setup"
retina->write("RetinaDefaultParameters.xml");
......
......@@ -111,10 +111,10 @@ int main(int argc, char* argv[]) {
// if the last parameter is 'log', then activate log sampling (favour foveal vision and subsamples peripheral vision)
if (useLogSampling)
{
myRetina = new cv::Retina(inputFrame.size(), true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
myRetina = cv::createRetina(inputFrame.size(), true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
}
else// -> else allocate "classical" retina :
myRetina = new cv::Retina(inputFrame.size());
myRetina = cv::createRetina(inputFrame.size());
// save default retina parameters file in order to let you see this and maybe modify it and reload using method "setup"
myRetina->write("RetinaDefaultParameters.xml");
......
......@@ -39,7 +39,7 @@ int main(int argc, char* argv[]) {
// welcome message
std::cout<<"****************************************************"<<std::endl;
std::cout<<"* Retina demonstration : demonstrates the use of is a wrapper class of the Gipsa/Listic Labs retina model."<<std::endl;
std::cout<<"* This demo will try to load the file 'RetinaSpecificParameters.xml' (if exists).\nTo create it, copy the autogenerated template 'RetinaDefaultParameters.xml'.\nThen twaek it with your own retina parameters."<<std::endl;
std::cout<<"* This demo will try to load the file 'RetinaSpecificParameters.xml' (if exists).\nTo create it, copy the autogenerated template 'RetinaDefaultParameters.xml'.\nThen tweak it with your own retina parameters."<<std::endl;
// basic input arguments checking
if (argc<2)
{
......@@ -100,10 +100,12 @@ int main(int argc, char* argv[]) {
// if the last parameter is 'log', then activate log sampling (favour foveal vision and subsamples peripheral vision)
if (useLogSampling)
{
myRetina = new cv::Retina(inputFrame.size(), true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
myRetina = cv::createRetina(inputFrame.size(), true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
}
else// -> else allocate "classical" retina :
myRetina = new cv::Retina(inputFrame.size());
{
myRetina = cv::createRetina(inputFrame.size());
}
// save default retina parameters file in order to let you see this and maybe modify it and reload using method "setup"
myRetina->write("RetinaDefaultParameters.xml");
......@@ -137,7 +139,7 @@ int main(int argc, char* argv[]) {
}
}catch(cv::Exception e)
{
std::cerr<<"Error using Retina : "<<e.what()<<std::endl;
std::cerr<<"Error using Retina or end of video sequence reached : "<<e.what()<<std::endl;
}
// Program end message
......