Commit 2842d346 authored by: V Vadim Pisarevsky

added retina model, by Alexander Benoit

Parent 04ebef1b
......
@@ -607,6 +607,7 @@ namespace cv
CV_EXPORTS void polyfit(const Mat& srcx, const Mat& srcy, Mat& dst, int order);
}
#include "opencv2/contrib/retina.hpp"
#endif
......
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** HVStools : interfaces allowing OpenCV users to integrate Human Vision System models. The presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract features from still images and image sequences, from contour details to spatio-temporal motion features, etc., for high level visual scene analysis. The models also contribute to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of PhD/research colleagues from whom code has been redrawn by the author:
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette's PhD with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
** ====> more information in the above-cited Jeanny Herault's book.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (hvstools)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#ifndef __OPENCV_CONTRIB_RETINA_HPP__
#define __OPENCV_CONTRIB_RETINA_HPP__
/*
* Retina.hpp
*
* Created on: Jul 19, 2011
* Author: Alexandre Benoit
*/
#include "opencv2/core/core.hpp" // for all OpenCV core functionalities access, including cv::Exception support
#include <valarray>
namespace cv
{
enum RETINA_COLORSAMPLINGMETHOD
{
RETINA_COLOR_RANDOM, /// each pixel position is either R, G or B in a random choice
RETINA_COLOR_DIAGONAL,/// color sampling is RGBRGBRGB..., line 2 BRGBRGBRG..., line 3, GBRGBRGBR...
RETINA_COLOR_BAYER/// standard bayer sampling
};
class RetinaFilter;
/**
* @class Retina
* @brief a wrapper class which allows the Gipsa/Listic Labs retina model to be used.
* This retina model allows spatio-temporal image processing (applied on still images as well as video sequences).
* As a summary, these are the retina model properties:
* => It applies a spectral whitening (mid-frequency details enhancement)
* => high frequency spatio-temporal noise reduction
* => low frequency luminance reduction (luminance range compression)
* => local logarithmic luminance compression allows details to be enhanced in low light conditions
*
* for more information, refer to the following papers :
* Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
* Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
*/
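/*
 * TYPICAL USE (illustrative sketch only; "inputFrame" denotes the current cv::Mat frame and
 * the XML file name is an assumption, adapt both to the target application):
 *
 *   // create a color-ready retina instance sized for the incoming frames
 *   cv::Retina myRetina("retinaParams.xml", inputFrame.size());
 *   // optionally reload a parameters file (the default setup is applied if it cannot be read)
 *   myRetina.setup("retinaParams.xml");
 *   // feed each new frame, then grab the two outputs
 *   myRetina.run(inputFrame);
 *   cv::Mat parvoOutput, magnoOutput;
 *   myRetina.getParvo(parvoOutput);   // details / foveal vision channel
 *   myRetina.getMagno(magnoOutput);   // motion / peripheral vision channel
 */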
class CV_EXPORTS Retina {
public:
/**
* Main constructor with the most common use setup : creates an instance of a color-ready retina model
* @param parametersSaveFile : name of the XML file in which the retina parameters are stored (written when the instance is deleted)
* @param inputSize : the input frame size
*/
Retina(const std::string parametersSaveFile, Size inputSize);
/**
* Complete Retina filter constructor which allows all basic structural parameters definition
* @param inputSize : the input frame size
* @param colorMode : the chosen processing mode : with or without color processing
* @param samplingMethod: specifies which kind of color sampling will be used
* @param useRetinaLogSampling: activate retina log sampling, if true, the 2 following parameters can be used
* @param reductionFactor: only useful if useRetinaLogSampling=true; specifies the reduction factor of the output frame (since the center (fovea) is at high resolution and the periphery can be downscaled, the output can be reduced without losing precision)
* @param samplingStrenght: only useful if useRetinaLogSampling=true; specifies the strength of the log scale that is applied
*/
Retina(const std::string parametersSaveFile, Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
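// Illustrative example (the file name and values are assumptions to be tuned): a gray level retina with log sampling
// enabled could be built with Retina("retinaParams.xml", inputSize, false, RETINA_COLOR_BAYER, true, 2.0, 10.0),
// i.e. the output frame is reduced by a factor of 2 while the log sampling strength keeps its default value of 10.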
virtual ~Retina();
/**
* try to open an XML retina parameters file to adjust current retina instance setup
* => if the xml file does not exist, then default setup is applied
* => warning: exceptions are thrown if the XML file that is read is not valid
* @param retinaParameterFile : the parameters filename
* @param applyDefaultSetupOnFailure : if true, the default setup is applied when loading the file fails
*/
void setup(std::string retinaParameterFile="", const bool applyDefaultSetupOnFailure=true);
/**
* parameters setup display method
* @return a string which contains formatted parameters information
*/
const std::string printSetup();
/**
* setup the OPL and IPL parvo channels (see biological model)
* OPL refers to the Outer Plexiform Layer of the retina; it performs the spatio-temporal filtering which whitens the spectrum and reduces spatio-temporal noise while attenuating global luminance (low frequency energy)
* IPL parvo is the next processing stage after the OPL; it refers to the parvocellular pathway of the Inner Plexiform Layer of the retina and provides high contour sensitivity in foveal vision.
* for more information, please have a look at the paper Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
* @param colorMode : specifies whether color is processed (true) or not (false); in the latter case, gray level images are processed
* @param normaliseOutput : specifies whether the output is rescaled between 0 and 255 (true) or not (false)
* @param photoreceptorsLocalAdaptationSensitivity: the photoreceptors sensitivity, range is 0-1 (more log compression effect when the value increases)
* @param photoreceptorsTemporalConstant: the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame
* @param photoreceptorsSpatialConstant: the spatial constant of the first order low pass filter of the photoreceptors, use it to cut high spatial frequencies (noise or thick contours), unit is pixels, typical value is 1 pixel
* @param horizontalCellsGain: gain of the horizontal cells network; if 0, the mean value of the output is zero; if the parameter is near 1, the luminance is not filtered and is still reachable at the output, typical value is 0
* @param HcellsTemporalConstant: the time constant of the first order low pass filter of the horizontal cells, use it to cut low temporal frequencies (local luminance variations), unit is frames, typical value is 1 frame, as for the photoreceptors
* @param HcellsSpatialConstant: the spatial constant of the first order low pass filter of the horizontal cells, use it to cut low spatial frequencies (local luminance), unit is pixels, typical value is 5 pixels; this value is also used for local contrast computing when computing the local contrast adaptation at the ganglion cells level (Inner Plexiform Layer parvocellular channel model)
* @param ganglionCellsSensitivity: the compression strength of the ganglion cells local adaptation output; set a value between 160 and 250 for best results; a high value increases the sensitivity to low values... and the output saturates faster, recommended value: 230
*/
void setupOPLandIPLParvoChannel(const bool colorMode=true, const bool normaliseOutput = true, const double photoreceptorsLocalAdaptationSensitivity=0.7, const double photoreceptorsTemporalConstant=0.5, const double photoreceptorsSpatialConstant=0.53, const double horizontalCellsGain=0, const double HcellsTemporalConstant=1, const double HcellsSpatialConstant=7, const double ganglionCellsSensitivity=0.7);
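// Illustrative example call ("myRetina" denotes any Retina instance; the values simply restate the documented defaults):
//   myRetina.setupOPLandIPLParvoChannel(true, true, 0.7, 0.5, 0.53, 0, 1, 7, 0.7);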
/**
* set parameters values for the Inner Plexiform Layer (IPL) magnocellular channel
* this channel processes signals output from the OPL processing stage in peripheral vision; it allows motion information enhancement. It is decorrelated from the details channel. See the reference paper for more details.
* @param normaliseOutput : specifies whether the output is rescaled between 0 and 255 (true) or not (false)
* @param parasolCells_beta: the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0
* @param parasolCells_tau: the low pass filter time constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is frame, typical value is 0 (immediate response)
* @param parasolCells_k: the low pass filter spatial constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is pixels, typical value is 5
* @param amacrinCellsTemporalCutFrequency: the time constant of the first order high pass filter of the magnocellular way (motion information channel), unit is frames, typical value is 5
* @param V0CompressionParameter: the compression strength of the ganglion cells local adaptation output; set a value between 160 and 250 for best results; a high value increases the sensitivity to low values... and the output saturates faster, recommended value: 200
* @param localAdaptintegration_tau: specifies the temporal constant of the low pass filter involved in the computation of the local "motion mean" for the local adaptation computation
* @param localAdaptintegration_k: specifies the spatial constant of the low pass filter involved in the computation of the local "motion mean" for the local adaptation computation
*/
void setupIPLMagnoChannel(const bool normaliseOutput = true, const double parasolCells_beta=0, const double parasolCells_tau=0, const double parasolCells_k=7, const double amacrinCellsTemporalCutFrequency=1.2, const double V0CompressionParameter=0.95, const double localAdaptintegration_tau=0, const double localAdaptintegration_k=7);
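// Illustrative example call ("myRetina" denotes any Retina instance; the values simply restate the documented defaults):
//   myRetina.setupIPLMagnoChannel(true, 0, 0, 7, 1.2, 0.95, 0, 7);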
/**
* method which applies the retina model on an input image
* @param inputImage : the input image to be processed
* once run, the encapsulated retina module is ready to deliver its outputs using the dedicated accessors; see the getParvo and getMagno methods
*/
void run(const Mat &inputImage);
/**
* accessor of the details channel of the retina (models foveal vision)
* @param retinaOutput_parvo : the output buffer (reallocated if necessary)
*/
void getParvo(Mat &retinaOutput_parvo);
/**
* accessor of the motion channel of the retina (models peripheral vision)
* @param retinaOutput_magno : the output buffer (reallocated if necessary)
*/
void getMagno(Mat &retinaOutput_magno);
void clearBuffers();
protected:
//// Parameters setup members
// parameters file ... saved on instance delete
FileStorage _parametersSaveFile;
std::string _parametersSaveFileName;
//// Retina model related modules
// buffer that ensures library cross-compatibility
std::valarray<double> _inputBuffer;
// pointer to retina model
RetinaFilter* _retinaFilter;
/**
* exports a valarray buffer coming from the HVStools objects to a cv::Mat in CV_8UC1 (gray level picture) or CV_8UC3 (color) format
* @param grayMatrixToConvert the valarray to export to OpenCV
* @param nbRows : the number of rows of the flattened valarray matrix
* @param nbColumns : the number of columns of the flattened valarray matrix
* @param colorMode : a flag which mentions if matrix is color (true) or graylevel (false)
* @param outBuffer : the output matrix which is reallocated to satisfy Retina output buffer dimensions
*/
void _convertValarrayGrayBuffer2cvMat(const std::valarray<double> &grayMatrixToConvert, const unsigned int nbRows, const unsigned int nbColumns, const bool colorMode, Mat &outBuffer);
// private method called by the constructors
void _init(const std::string parametersSaveFile, Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
};
}
#endif /* __OPENCV_CONTRIB_RETINA_HPP__ */
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** HVStools : interfaces allowing OpenCV users to integrate Human Vision System models. The presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract features from still images and image sequences, from contour details to spatio-temporal motion features, etc., for high level visual scene analysis. The models also contribute to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of PhD/research colleagues from whom code has been redrawn by the author:
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette's PhD with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
** ====> more information in the above-cited Jeanny Herault's book.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (hvstools)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#ifndef IMAGELOGPOLPROJECTION_H_
#define IMAGELOGPOLPROJECTION_H_
/**
* @class ImageLogPolProjection
* @brief class able to perform a log sampling of an image input (models the log sampling of the photoreceptors of the retina)
* or a log polar projection which models the retina information projection on the primary visual cortex: a linear projection in the center for detail analysis and a log projection of the borders (low spatial frequency motion information in general)
*
* collaboration: Barthelemy DURETTE who experimented with the retina log projection
-> "Traitement visuels Bio mimétiques pour la suppléance perceptive", internal technical report, May 2005, Gipsa-lab/DIS, Grenoble, FRANCE
*
* TYPICAL USE:
*
* // create object, here for a log sampling (keyword:RETINALOGPROJECTION): (dynamic object allocation sample)
* ImageLogPolProjection *imageSamplingTool;
* imageSamplingTool = new ImageLogPolProjection(frameSizeRows, frameSizeColumns, RETINALOGPROJECTION);
*
* // init log projection:
* imageSamplingTool->initProjection(1.0, 15.0);
*
* // during program execution, call the log transform applied to a frame called "FrameBuffer" :
* imageSamplingTool->runProjection(FrameBuffer);
* // get output frame and its size:
* const unsigned int logSampledFrame_nbRows=imageSamplingTool->getOutputNBrows();
* const unsigned int logSampledFrame_nbColumns=imageSamplingTool->getOutputNBcolumns();
* const double *logSampledFrame=imageSamplingTool->getSampledFrame();
*
* // at the end of the program, destroy object:
* delete imageSamplingTool;
*
* @author Alexandre BENOIT, benoit.alexandre.vision@gmail.com, LISTIC : www.listic.univ-savoie.fr, Gipsa-Lab, France: www.gipsa-lab.inpg.fr/
* Creation date 2007
*/
//#define __IMAGELOGPOLPROJECTION_DEBUG // used for std output debug information
#include "basicretinafilter.hpp"
namespace cv
{
class ImageLogPolProjection:public BasicRetinaFilter
{
public:
enum PROJECTIONTYPE{RETINALOGPROJECTION, CORTEXLOGPOLARPROJECTION};
/**
* constructor, just specifies the image input size and the projection type, no projection initialisation is done
* -> use initLogRetinaSampling() or initLogPolarCortexSampling() for that
* @param nbRows: number of rows of the input image
* @param nbColumns: number of columns of the input image
* @param projection: the type of projection, RETINALOGPROJECTION or CORTEXLOGPOLARPROJECTION
* @param colorMode: specifies if the projection is applied on a grayscale image (false) or color images (3 layers) (true)
*/
ImageLogPolProjection(const unsigned int nbRows, const unsigned int nbColumns, const PROJECTIONTYPE projection, const bool colorMode=false);
/**
* standard destructor
*/
virtual ~ImageLogPolProjection();
/**
* function that clears all buffers of the object
*/
void clearAllBuffers();
/**
* resize the image log-polar projection object (resize all allocated buffers)
* @param NBrows: the new height size
* @param NBcolumns: the new width size
*/
void resize(const unsigned int NBrows, const unsigned int NBcolumns);
/**
* init function depending on the projection type
* @param reductionFactor: the size reduction factor of the output image with respect to the input image size, must be greater than 1
* @param samplingStrenght: specifies the strength of the log compression effect (magnifying coefficient)
* @return true if the init was performed without any errors
*/
bool initProjection(const double reductionFactor, const double samplingStrenght);
/**
* main function of the class: run projection function
* @param inputFrame: the input frame to be processed
* @return the output frame
*/
std::valarray<double> &runProjection(const std::valarray<double> &inputFrame, const double colorMode=false);
/**
* @return the numbers of rows (height) of the images OUTPUTS of the object
*/
inline const unsigned int getOutputNBrows(){return _outputNBrows;};
/**
* @return the numbers of columns (width) of the images OUTPUTS of the object
*/
inline const unsigned int getOutputNBcolumns(){return _outputNBcolumns;};
/**
* utility function which predicts the output dimension for a given input dimension and reduction factor
* @param size: one of the initial input frame dimensions
* @param reductionFactor: the reduction factor to be applied
* @return the corresponding output frame dimension
*/
inline static const unsigned int predictOutputSize(const unsigned int size, const double reductionFactor){return (unsigned int)((double)size/reductionFactor);};
/**
* @return the output of the filter which applies an irregular low pass spatial filter to the image input (see BasicRetinaFilter::runProgressiveFilter())
*/
inline const std::valarray<double> &getIrregularLPfilteredInputFrame() const {return _irregularLPfilteredFrame;};
/**
* function which retrieves the output frame that was updated by the runProjection(...) function (see BasicRetinaFilter::runProgressiveFilter(...))
* @return the projection result
*/
inline const std::valarray<double> &getSampledFrame() const {return _sampledFrame;};
/**
* function which gives the transformation table; its size is (getNBrows()*getNBcolumns()*2)
* @return the transformation matrix [outputPixIndex_i, inputPixIndex_i, outputPixIndex_i+1, inputPixIndex_i+1....]
*/
inline const std::valarray<unsigned int> &getSamplingMap() const {return _transformTable;};
inline const double getOriginalRadiusLength(const double projectedRadiusLength){return _azero/(_alim-projectedRadiusLength*2.0/_minDimension);};
// unsigned int getInputPixelIndex(const unsigned int ){ return _transformTable[index*2+1]};
private:
PROJECTIONTYPE _selectedProjection;
// size of the image output
unsigned int _outputNBrows;
unsigned int _outputNBcolumns;
unsigned int _outputNBpixels;
unsigned int _outputDoubleNBpixels;
unsigned int _inputDoubleNBpixels;
// is the object able to manage color flag
bool _colorModeCapable;
// sampling strength factor
double _samplingStrenght;
// sampling reduction factor
double _reductionFactor;
// log sampling parameters
double _azero;
double _alim;
double _minDimension;
// template buffers
std::valarray<double>_sampledFrame;
std::valarray<double>&_tempBuffer;
std::valarray<unsigned int>_transformTable;
std::valarray<double> &_irregularLPfilteredFrame; // just a reference for easier understanding
unsigned int _usefullpixelIndex;
// init transformation tables
bool _computeLogProjection();
bool _computeLogPolarProjection();
// specifies if init was done correctly
bool _initOK;
// private init projections functions called by "initProjection(...)" function
bool _initLogRetinaSampling(const double reductionFactor, const double samplingStrenght);
bool _initLogPolarCortexSampling(const double reductionFactor, const double samplingStrenght);
};
}
#endif /*IMAGELOGPOLPROJECTION_H_*/
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** HVStools : interfaces allowing OpenCV users to integrate Human Vision System models. The presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract features from still images and image sequences, from contour details to spatio-temporal motion features, etc., for high level visual scene analysis. The models also contribute to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of PhD/research colleagues from whom code has been redrawn by the author:
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette's PhD with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
** ====> more information in the above-cited Jeanny Herault's book.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (hvstools)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#include "precomp.hpp"
#include <iostream>
#include "magnoretinafilter.hpp"
#include <cmath>
namespace cv
{
// Constructor and destructor of the IPL magno retina filter
MagnoRetinaFilter::MagnoRetinaFilter(const unsigned int NBrows, const unsigned int NBcolumns)
:BasicRetinaFilter(NBrows, NBcolumns, 2),
_previousInput_ON(NBrows*NBcolumns),
_previousInput_OFF(NBrows*NBcolumns),
_amacrinCellsTempOutput_ON(NBrows*NBcolumns),
_amacrinCellsTempOutput_OFF(NBrows*NBcolumns),
_magnoXOutputON(NBrows*NBcolumns),
_magnoXOutputOFF(NBrows*NBcolumns),
_localProcessBufferON(NBrows*NBcolumns),
_localProcessBufferOFF(NBrows*NBcolumns)
{
_magnoYOutput=&_filterOutput;
_magnoYsaturated=&_localBuffer;
clearAllBuffers();
#ifdef IPL_RETINA_ELEMENT_DEBUG
std::cout<<"MagnoRetinaFilter::Init IPL retina filter at specified frame size OK"<<std::endl;
#endif
}
MagnoRetinaFilter::~MagnoRetinaFilter()
{
#ifdef IPL_RETINA_ELEMENT_DEBUG
std::cout<<"MagnoRetinaFilter::Delete IPL retina filter OK"<<std::endl;
#endif
}
// function that clears all buffers of the object
void MagnoRetinaFilter::clearAllBuffers()
{
BasicRetinaFilter::clearAllBuffers();
_previousInput_ON=0;
_previousInput_OFF=0;
_amacrinCellsTempOutput_ON=0;
_amacrinCellsTempOutput_OFF=0;
_magnoXOutputON=0;
_magnoXOutputOFF=0;
_localProcessBufferON=0;
_localProcessBufferOFF=0;
}
/**
* resize retina magno filter object (resize all allocated buffers)
* @param NBrows: the new height size
* @param NBcolumns: the new width size
*/
void MagnoRetinaFilter::resize(const unsigned int NBrows, const unsigned int NBcolumns)
{
BasicRetinaFilter::resize(NBrows, NBcolumns);
_previousInput_ON.resize(NBrows*NBcolumns);
_previousInput_OFF.resize(NBrows*NBcolumns);
_amacrinCellsTempOutput_ON.resize(NBrows*NBcolumns);
_amacrinCellsTempOutput_OFF.resize(NBrows*NBcolumns);
_magnoXOutputON.resize(NBrows*NBcolumns);
_magnoXOutputOFF.resize(NBrows*NBcolumns);
_localProcessBufferON.resize(NBrows*NBcolumns);
_localProcessBufferOFF.resize(NBrows*NBcolumns);
// to be sure, relink buffers
_magnoYOutput=&_filterOutput;
_magnoYsaturated=&_localBuffer;
// reset all buffers
clearAllBuffers();
}
void MagnoRetinaFilter::setCoefficientsTable(const double parasolCells_beta, const double parasolCells_tau, const double parasolCells_k, const double amacrinCellsTemporalCutFrequency, const double localAdaptIntegration_tau, const double localAdaptIntegration_k )
{
_temporalCoefficient=exp(-1.0/amacrinCellsTemporalCutFrequency);
// the first set of parameters is dedicated to the low pass filtering property of the ganglion cells
BasicRetinaFilter::setLPfilterParameters(parasolCells_beta, parasolCells_tau, parasolCells_k, 0);
// the second set of parameters is dedicated to the ganglion cells output integration for their local adaptation property
BasicRetinaFilter::setLPfilterParameters(0, localAdaptIntegration_tau, localAdaptIntegration_k, 1);
}
void MagnoRetinaFilter::_amacrineCellsComputing(const double *OPL_ON, const double *OPL_OFF)
{
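// Descriptive summary of the loop below: the amacrine cells stage is a first order temporal high pass filter
// followed by a half-wave rectification, i.e. for each pixel
//    out[t] = max(0, _temporalCoefficient * (out[t-1] + in[t] - in[t-1]))
// where _temporalCoefficient = exp(-1/amacrinCellsTemporalCutFrequency) is set in setCoefficientsTable().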
register const double *OPL_ON_PTR=OPL_ON;
register const double *OPL_OFF_PTR=OPL_OFF;
register double *previousInput_ON_PTR= &_previousInput_ON[0];
register double *previousInput_OFF_PTR= &_previousInput_OFF[0];
register double *amacrinCellsTempOutput_ON_PTR= &_amacrinCellsTempOutput_ON[0];
register double *amacrinCellsTempOutput_OFF_PTR= &_amacrinCellsTempOutput_OFF[0];
for (unsigned int IDpixel=0 ; IDpixel<this->getNBpixels(); ++IDpixel)
{
/* Compute ON and OFF amacrine cells high pass temporal filter */
double magnoXonPixelResult = _temporalCoefficient*(*amacrinCellsTempOutput_ON_PTR+ *OPL_ON_PTR-*previousInput_ON_PTR);
*(amacrinCellsTempOutput_ON_PTR++)=((double)(magnoXonPixelResult>0))*magnoXonPixelResult;
double magnoXoffPixelResult = _temporalCoefficient*(*amacrinCellsTempOutput_OFF_PTR+ *OPL_OFF_PTR-*previousInput_OFF_PTR);
*(amacrinCellsTempOutput_OFF_PTR++)=((double)(magnoXoffPixelResult>0))*magnoXoffPixelResult;
/* prepare next loop */
*(previousInput_ON_PTR++)=*(OPL_ON_PTR++);
*(previousInput_OFF_PTR++)=*(OPL_OFF_PTR++);
}
}
// launch filter that runs all the IPL filter
const std::valarray<double> &MagnoRetinaFilter::runFilter(const std::valarray<double> &OPL_ON, const std::valarray<double> &OPL_OFF)
{
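// Processing pipeline summary: amacrine cells temporal high pass filtering of the ON and OFF ways,
// then parasol cells spatial low pass filtering, then ganglion cells local contrast adaptation,
// and finally the magno Y output is obtained as the sum of the adapted ON and OFF ways.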
// Compute the high pass temporal filter
_amacrineCellsComputing(&OPL_ON[0], &OPL_OFF[0]);
// apply low pass filtering on ON and OFF ways after temporal high pass filtering
_spatiotemporalLPfilter(&_amacrinCellsTempOutput_ON[0], &_magnoXOutputON[0], 0);
_spatiotemporalLPfilter(&_amacrinCellsTempOutput_OFF[0], &_magnoXOutputOFF[0], 0);
// local adaptation of the ganglion cells to the local contrast of the moving contours
_spatiotemporalLPfilter(&_magnoXOutputON[0], &_localProcessBufferON[0], 1);
_localLuminanceAdaptation(&_magnoXOutputON[0], &_localProcessBufferON[0]);
_spatiotemporalLPfilter(&_magnoXOutputOFF[0], &_localProcessBufferOFF[0], 1);
_localLuminanceAdaptation(&_magnoXOutputOFF[0], &_localProcessBufferOFF[0]);
/* Compute MagnoY */
register double *magnoYOutput= &(*_magnoYOutput)[0];
register double *magnoXOutputON_PTR= &_magnoXOutputON[0];
register double *magnoXOutputOFF_PTR= &_magnoXOutputOFF[0];
for (register unsigned int IDpixel=0 ; IDpixel<_filterOutput.getNBpixels() ; ++IDpixel)
*(magnoYOutput++)=*(magnoXOutputON_PTR++)+*(magnoXOutputOFF_PTR++);
return (*_magnoYOutput);
}
}
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** HVStools : interfaces allowing OpenCV users to integrate Human Vision System models. The presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract features from still images and image sequences, from contour details to spatio-temporal motion features, etc., for high level visual scene analysis. The models also contribute to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of PhD/research colleagues from whom code has been redrawn by the author:
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette's PhD with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
** ====> more information in the above-cited Jeanny Herault's book.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (hvstools)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#ifndef MagnoRetinaFilter_H_
#define MagnoRetinaFilter_H_
/**
* @class MagnoRetinaFilter
* @brief class which describes the magnocellular channel of the retina:
* -> performs moving contours extraction with powerful local data enhancement
*
* TYPICAL USE:
*
* // create object at a specified picture size
* MagnoRetinaFilter *movingContoursExtractor;
* movingContoursExtractor =new MagnoRetinaFilter(frameSizeRows, frameSizeColumns);
*
* // init gain, spatial and temporal parameters:
* movingContoursExtractor->setCoefficientsTable(0, 0.7, 5, 3, 0, 7); // the last two arguments are the local adaptation integration tau and k (typical values)
*
* // during program execution, call the filter for moving contours extraction, feeding it the OPL ON and OFF bipolar cells outputs (see the runFilter() documentation below):
* movingContoursExtractor->runFilter(OPL_ON, OPL_OFF);
*
* // get the output frame, check in the class description below for more outputs:
* const double *movingContours=movingContoursExtractor->getMagnoYsaturated();
*
* // at the end of the program, destroy object:
* delete movingContoursExtractor;
* @author Alexandre BENOIT, benoit.alexandre.vision@gmail.com, LISTIC : www.listic.univ-savoie.fr, Gipsa-Lab, France: www.gipsa-lab.inpg.fr/
* Creation date 2007
* Based on Alexandre BENOIT thesis: "Le système visuel humain au secours de la vision par ordinateur"
*/
#include "basicretinafilter.hpp"
//#define _IPL_RETINA_ELEMENT_DEBUG
namespace cv
{
class MagnoRetinaFilter: public BasicRetinaFilter
{
public:
/**
* constructor parameters are only linked to image input size
* @param NBrows: number of rows of the input image
* @param NBcolumns: number of columns of the input image
*/
MagnoRetinaFilter(const unsigned int NBrows, const unsigned int NBcolumns);
/**
* destructor
*/
virtual ~MagnoRetinaFilter();
/**
* function that clears all buffers of the object
*/
void clearAllBuffers();
/**
* resize retina magno filter object (resize all allocated buffers)
* @param NBrows: the new height size
* @param NBcolumns: the new width size
*/
void resize(const unsigned int NBrows, const unsigned int NBcolumns);
/**
* set parameters values
* @param parasolCells_beta: the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0
* @param parasolCells_tau: the low pass filter time constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is frame, typical value is 0 (immediate response)
* @param parasolCells_k: the low pass filter spatial constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is pixels, typical value is 5
* @param amacrinCellsTemporalCutFrequency: the time constant of the first order high pass filter of the magnocellular way (motion information channel), unit is frames, typical value is 5
* @param localAdaptIntegration_tau: specifies the temporal constant of the low pass filter involved in the computation of the local "motion mean" for the local adaptation computation
* @param localAdaptIntegration_k: specifies the spatial constant of the low pass filter involved in the computation of the local "motion mean" for the local adaptation computation
*/
void setCoefficientsTable(const double parasolCells_beta, const double parasolCells_tau, const double parasolCells_k, const double amacrinCellsTemporalCutFrequency, const double localAdaptIntegration_tau, const double localAdaptIntegration_k);
/**
* launch filter that runs all the IPL magno filter (model of the magnocellular channel of the Inner Plexiform Layer of the retina)
* @param OPL_ON: the output of the bipolar ON cells of the retina (available from the ParvoRetinaFilter class (getBipolarCellsON() function)
* @param OPL_OFF: the output of the bipolar OFF cells of the retina (available from the ParvoRetinaFilter class (getBipolarCellsOFF() function)
* @return the processed result without post-processing
*/
const std::valarray<double> &runFilter(const std::valarray<double> &OPL_ON, const std::valarray<double> &OPL_OFF);
/**
* @return the Magnocellular ON channel filtering output
*/
inline const std::valarray<double> &getMagnoON() const {return _magnoXOutputON;};
/**
* @return the Magnocellular OFF channel filtering output
*/
inline const std::valarray<double> &getMagnoOFF() const {return _magnoXOutputOFF;};
/**
* @return the Magnocellular Y (sum of the ON and OFF magno channels) filtering output
*/
inline const std::valarray<double> &getMagnoYsaturated() const {return *_magnoYsaturated;};
/**
* applies an image normalization which saturates the high output values by the use of an asymmetric sigmoid
*/
inline void normalizeGrayOutputNearZeroCentreredSigmoide(){_filterOutput.normalizeGrayOutputNearZeroCentreredSigmoide(&(*_magnoYOutput)[0], &(*_magnoYsaturated)[0]);};
/**
* @return the horizontal cells' temporal constant
*/
inline const double getTemporalConstant(){return this->_filteringCoeficientsTable[2];};
private:
// related pointers to these buffers
std::valarray<double> _previousInput_ON;
std::valarray<double> _previousInput_OFF;
std::valarray<double> _amacrinCellsTempOutput_ON;
std::valarray<double> _amacrinCellsTempOutput_OFF;
std::valarray<double> _magnoXOutputON;
std::valarray<double> _magnoXOutputOFF;
std::valarray<double> _localProcessBufferON;
std::valarray<double> _localProcessBufferOFF;
// references to parent buffers (for better readability)
TemplateBuffer<double> *_magnoYOutput;
std::valarray<double> *_magnoYsaturated;
// variables
double _temporalCoefficient;
// amacrine cells filter : high pass temporal filter
void _amacrineCellsComputing(const double *ONinput, const double *OFFinput);
};
}
#endif /*MagnoRetinaFilter_H_*/
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** HVStools : interfaces allowing OpenCV users to integrate Human Vision System models. The presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract features from still images and image sequences, from contour details to spatio-temporal motion features, etc., for high level visual scene analysis. The models also contribute to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of PhD/research colleagues from whom code has been redrawn by the author:
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette's PhD with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
** ====> more information in the above-cited Jeanny Herault's book.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (hvstools)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#include "precomp.hpp"
#include "parvoretinafilter.hpp"
// @author Alexandre BENOIT, benoit.alexandre.vision@gmail.com, LISTIC : www.listic.univ-savoie.fr, Gipsa-Lab, France: www.gipsa-lab.inpg.fr/
#include <iostream>
#include <cmath>
namespace cv
{
//////////////////////////////////////////////////////////
// OPL RETINA FILTER
//////////////////////////////////////////////////////////
// Constructor and destructor of the OPL retina filter
ParvoRetinaFilter::ParvoRetinaFilter(const unsigned int NBrows, const unsigned int NBcolumns)
:BasicRetinaFilter(NBrows, NBcolumns, 3),
_photoreceptorsOutput(NBrows*NBcolumns),
_horizontalCellsOutput(NBrows*NBcolumns),
_parvocellularOutputON(NBrows*NBcolumns),
_parvocellularOutputOFF(NBrows*NBcolumns),
_bipolarCellsOutputON(NBrows*NBcolumns),
_bipolarCellsOutputOFF(NBrows*NBcolumns),
_localAdaptationOFF(NBrows*NBcolumns)
{
// link to the required local parent adaptation buffers
_localAdaptationON=&_localBuffer;
_parvocellularOutputONminusOFF=&_filterOutput;
// (*_localAdaptationON)=&_localBuffer;
// (*_parvocellularOutputONminusOFF)=&(BasicRetinaFilter::TemplateBuffer);
// init: set all the values to 0
clearAllBuffers();
#ifdef OPL_RETINA_ELEMENT_DEBUG
std::cout<<"ParvoRetinaFilter::Init OPL retina filter at specified frame size OK\n"<<std::endl;
#endif
}
ParvoRetinaFilter::~ParvoRetinaFilter()
{
#ifdef OPL_RETINA_ELEMENT_DEBUG
std::cout<<"ParvoRetinaFilter::Delete OPL retina filter OK"<<std::endl;
#endif
}
////////////////////////////////////
// functions of the PARVO filter
////////////////////////////////////
// function that clears all buffers of the object
void ParvoRetinaFilter::clearAllBuffers()
{
BasicRetinaFilter::clearAllBuffers();
_photoreceptorsOutput=0;
_horizontalCellsOutput=0;
_parvocellularOutputON=0;
_parvocellularOutputOFF=0;
_bipolarCellsOutputON=0;
_bipolarCellsOutputOFF=0;
_localAdaptationOFF=0;
}
/**
* resize parvo retina filter object (resize all allocated buffers)
* @param NBrows: the new height size
* @param NBcolumns: the new width size
*/
void ParvoRetinaFilter::resize(const unsigned int NBrows, const unsigned int NBcolumns)
{
BasicRetinaFilter::resize(NBrows, NBcolumns);
_photoreceptorsOutput.resize(NBrows*NBcolumns);
_horizontalCellsOutput.resize(NBrows*NBcolumns);
_parvocellularOutputON.resize(NBrows*NBcolumns);
_parvocellularOutputOFF.resize(NBrows*NBcolumns);
_bipolarCellsOutputON.resize(NBrows*NBcolumns);
_bipolarCellsOutputOFF.resize(NBrows*NBcolumns);
_localAdaptationOFF.resize(NBrows*NBcolumns);
// link to the required local parent adaptation buffers
_localAdaptationON=&_localBuffer;
_parvocellularOutputONminusOFF=&_filterOutput;
// clean buffers
clearAllBuffers();
}
// change the parameters of the filter
void ParvoRetinaFilter::setOPLandParvoFiltersParameters(const double beta1, const double tau1, const double k1, const double beta2, const double tau2, const double k2)
{
// init photoreceptors low pass filter
setLPfilterParameters(beta1, tau1, k1);
// init horizontal cells low pass filter
setLPfilterParameters(beta2, tau2, k2, 1);
// init parasol ganglion cells low pass filter (default parameters)
setLPfilterParameters(0, tau1, k1, 2);
}
// update/set size of the frames
// run filter for a new frame input
// output return is (*_parvocellularOutputONminusOFF)
const std::valarray<double> &ParvoRetinaFilter::runFilter(const std::valarray<double> &inputFrame, const bool useParvoOutput)
{
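// OPL cascade summary: the input is low pass filtered to model the photoreceptors, the photoreceptors output
// is low pass filtered again to model the horizontal cells network, then the ON/OFF bipolar and parvocellular
// ways are computed from their difference (_OPL_OnOffWaysComputing). If useParvoOutput is set, each way
// undergoes the ganglion cells local luminance adaptation and the final parvo output is the ON minus OFF difference.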
_spatiotemporalLPfilter(&inputFrame[0], &_photoreceptorsOutput[0]);
_spatiotemporalLPfilter(&_photoreceptorsOutput[0], &_horizontalCellsOutput[0], 1);
_OPL_OnOffWaysComputing();
if (useParvoOutput)
{
// local adaptation processes on ON and OFF ways
_spatiotemporalLPfilter(&_bipolarCellsOutputON[0], &(*_localAdaptationON)[0], 2);
_localLuminanceAdaptation(&_parvocellularOutputON[0], &(*_localAdaptationON)[0]);
_spatiotemporalLPfilter(&_bipolarCellsOutputOFF[0], &_localAdaptationOFF[0], 2);
_localLuminanceAdaptation(&_parvocellularOutputOFF[0], &_localAdaptationOFF[0]);
//// Final loop that computes the main output of this filter
//
//// loop that computes the difference between the ON and OFF parvocellular outputs
//// (the positive part of the photoreceptors minus horizontal cells response went to the ON way, the negative part to the OFF way)
register double *parvocellularOutputONminusOFF_PTR=&(*_parvocellularOutputONminusOFF)[0];
register double *parvocellularOutputON_PTR=&_parvocellularOutputON[0];
register double *parvocellularOutputOFF_PTR=&_parvocellularOutputOFF[0];
for (register unsigned int IDpixel=0 ; IDpixel<_filterOutput.getNBpixels() ; ++IDpixel)
*(parvocellularOutputONminusOFF_PTR++)= (*(parvocellularOutputON_PTR++)-*(parvocellularOutputOFF_PTR++));
}
return (*_parvocellularOutputONminusOFF);
}
void ParvoRetinaFilter::_OPL_OnOffWaysComputing()
{
// loop that makes the difference between photoreceptor cells output and horizontal cells
// positive part goes on the ON way, negative part goes on the OFF way
register double *photoreceptorsOutput_PTR= &_photoreceptorsOutput[0];
register double *horizontalCellsOutput_PTR= &_horizontalCellsOutput[0];
register double *bipolarCellsON_PTR = &_bipolarCellsOutputON[0];
register double *bipolarCellsOFF_PTR = &_bipolarCellsOutputOFF[0];
register double *parvocellularOutputON_PTR= &_parvocellularOutputON[0];
register double *parvocellularOutputOFF_PTR= &_parvocellularOutputOFF[0];
// compute bipolar cells response equal to photoreceptors minus horizontal cells response
// and copy the result to the parvocellular outputs... keeping a copy of them before their local contrast adaptation for the final result
for (register unsigned int IDpixel=0 ; IDpixel<_filterOutput.getNBpixels() ; ++IDpixel)
{
double pixelDifference = *(photoreceptorsOutput_PTR++) -*(horizontalCellsOutput_PTR++);
// test condition deciding whether pixelDifference is written to the ON or the OFF buffer (0 is written to the other)
double isPositive=(double) (pixelDifference>0);
// ON and OFF channels writing step
*(parvocellularOutputON_PTR++)=*(bipolarCellsON_PTR++) = isPositive*pixelDifference;
*(parvocellularOutputOFF_PTR++)=*(bipolarCellsOFF_PTR++)= (isPositive-1.0)*pixelDifference;
}
}
}
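For reference, the ON/OFF rectification rule implemented by _OPL_OnOffWaysComputing() above can be summarized by the following minimal, self-contained sketch (illustration only, not part of the library; the function and buffer names are made up for the example): the photoreceptors minus horizontal cells difference feeds the ON way when it is positive and, sign inverted, the OFF way otherwise.

#include <cstddef>
#include <cstdio>
#include <valarray>

// same rectification rule as _OPL_OnOffWaysComputing(): positive part -> ON way,
// rectified negative part -> OFF way (hypothetical standalone helper)
static void splitOnOffWays(const std::valarray<double> &photoreceptors,
                           const std::valarray<double> &horizontalCells,
                           std::valarray<double> &onWay,
                           std::valarray<double> &offWay)
{
    for (std::size_t i = 0; i < photoreceptors.size(); ++i)
    {
        const double difference = photoreceptors[i] - horizontalCells[i];
        onWay[i]  = difference > 0.0 ? difference : 0.0;
        offWay[i] = difference > 0.0 ? 0.0 : -difference;
    }
}

int main()
{
    const double photoValues[3] = {10.0, 4.0, 7.0};
    const double horizValues[3] = { 6.0, 5.0, 9.0};
    std::valarray<double> photo(photoValues, 3), horiz(horizValues, 3), on(3), off(3);
    splitOnOffWays(photo, horiz, on, off);
    for (std::size_t i = 0; i < 3; ++i)
        std::printf("pixel %u : ON=%1.1f OFF=%1.1f\n", (unsigned int)i, on[i], off[i]);
    return 0;
}

Compiled alone, this prints ON = {4, 0, 0} and OFF = {0, 1, 2} for the three sample pixels, matching the sign split performed on the bipolar cells buffers.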
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** HVStools : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract still image & image sequence features, from contour details to spatio-temporal motion features, etc., for high level visual scene analysis. Also contributes to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of PhD/research colleagues from which code has been redrawn by the author:
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette phd with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
** ====> more information in the above cited Jeanny Herault's book.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (hvstools)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
#ifndef ParvoRetinaFilter_H_
#define ParvoRetinaFilter_H_
/**
* @class ParvoRetinaFilter
* @brief class which describes the OPL retina model and the Inner Plexiform Layer parvocellular channel of the retina:
* -> performs a contour extraction with powerful local data enhancement, as done at the retina level
* -> spectrum whitening occurs at the OPL (Outer Plexiform Layer) of the retina: corrects the 1/f spectrum tendency of natural images
* ---> enhances details with mid spatial frequencies, attenuates low spatial frequencies (luminance), attenuates high temporal frequencies and high spatial frequencies, etc.
*
* TYPICAL USE:
*
* // create object at a specified picture size
* ParvoRetinaFilter *contoursExtractor;
* contoursExtractor =new ParvoRetinaFilter(frameSizeRows, frameSizeColumns);
*
* // init gain, spatial and temporal parameters:
* contoursExtractor->setOPLandParvoFiltersParameters(0, 0.7, 1, 0, 7, 1);
*
* // during program execution, call the filter for contours extraction for an input picture called "FrameBuffer":
* contoursExtractor->runFilter(FrameBuffer);
*
* // get the output frame, check in the class description below for more outputs:
* const std::valarray<double> &contours=contoursExtractor->getOutput();
*
* // at the end of the program, destroy object:
* delete contoursExtractor;
* @author Alexandre BENOIT, benoit.alexandre.vision@gmail.com, LISTIC : www.listic.univ-savoie.fr, Gipsa-Lab, France: www.gipsa-lab.inpg.fr/
* Creation date 2007
* Based on Alexandre BENOIT's thesis: "Le système visuel humain au secours de la vision par ordinateur"
*
*/
#include "basicretinafilter.hpp"
//#define OPL_RETINA_ELEMENT_DEBUG // define OPL_RETINA_ELEMENT_DEBUG in order to display debug data
namespace cv
{
//retina classes that derive from the BasicRetinaFilter class
class ParvoRetinaFilter: public BasicRetinaFilter
{
public:
/**
* constructor parameters are only linked to image input size
* @param NBrows: number of rows of the input image
* @param NBcolumns: number of columns of the input image
*/
ParvoRetinaFilter(const unsigned int NBrows=480, const unsigned int NBcolumns=640);
/**
* standard destructor
*/
virtual ~ParvoRetinaFilter();
/**
* resize method, keeps initial parameters, all buffers are flushed
* @param NBrows: number of rows of the input image
* @param NBcolumns: number of columns of the input image
*/
void resize(const unsigned int NBrows, const unsigned int NBcolumns);
/**
* function that clears all buffers of the object
*/
void clearAllBuffers();
/**
* setup the OPL and IPL parvo channels
* @param beta1: gain of the horizontal cells network, if 0, then the mean value of the output is zero, if the parameter is near 1, the amplitude is boosted but it should only be used for values rescaling... if needed
* @param tau1: the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame
* @param k1: the spatial constant of the first order low pass filter of the photoreceptors, use it to cut high spatial frequencies (noise or thick contours), unit is pixels, typical value is 1 pixel
* @param beta2: gain of the horizontal cells network, if 0, then the mean value of the output is zero, if the parameter is near 1, then the luminance is not filtered and is still reachable at the output, typical value is 0
* @param tau2: the time constant of the first order low pass filter of the horizontal cells, use it to cut low temporal frequencies (local luminance variations), unit is frames, typical value is 1 frame, as the photoreceptors
* @param k2: the spatial constant of the first order low pass filter of the horizontal cells, use it to cut low spatial frequencies (local luminance), unit is pixels, typical value is 5 pixels, this value is also used for local contrast computing when computing the local contrast adaptation at the ganglion cells level (Inner Plexiform Layer parvocellular channel model)
*/
void setOPLandParvoFiltersParameters(const double beta1, const double tau1, const double k1, const double beta2, const double tau2, const double k2);
/**
* setup more precisely the low pass filter used for the ganglion cells low pass filtering (used for local luminance adaptation)
* @param tau: time constant of the filter (unit is frame for video processing)
* @param k: spatial constant of the filter (unit is pixels)
*/
void setGanglionCellsLocalAdaptationLPfilterParameters(const double tau, const double k){BasicRetinaFilter::setLPfilterParameters(0, tau, k, 2);}; // change the parameters of the filter
/**
* launch filter that runs the OPL spatiotemporal filtering and optionally finalizes the IPL Parvo filter (model of the Parvocellular channel of the Inner Plexiform Layer of the retina)
* @param inputFrame: the input image to be processed, this can be the direct gray level input frame, but better efficacy is expected if the input has been preliminarily processed by the photoreceptors local adaptation, which can be achieved with the help of a BasicRetinaFilter object
* @param useParvoOutput: set true if the final IPL filtering step has to be computed (local contrast enhancement)
* @return the processed Parvocellular channel output (updated only if useParvoOutput is true)
* @details: in any case, after this function call, the photoreceptors and horizontal cells outputs are updated, use getPhotoreceptorsLPfilteringOutput() and getHorizontalCellsOutput() to get them
* also, the bipolar cells outputs are accessible (difference between photoreceptors and horizontal cells, the ON output has positive values, the OFF output has negative values), use the following access methods: getBipolarCellsON() and getBipolarCellsOFF()
* if useParvoOutput is true, the complete Parvocellular channel is computed, more outputs are updated and can be accessed through: getParvoON(), getParvoOFF() and their difference with getOutput()
*/
const std::valarray<double> &runFilter(const std::valarray<double> &inputFrame, const bool useParvoOutput=true); // output return is _parvocellularOutputONminusOFF
/**
* @return the output of the photoreceptors filtering step (high cut frequency spatio-temporal low pass filter)
*/
inline const std::valarray<double> &getPhotoreceptorsLPfilteringOutput() const {return _photoreceptorsOutput;};
/**
* @return the output of the horizontal cells filtering step (low cut frequency spatio-temporal low pass filter applied to the photoreceptors output)
*/
inline const std::valarray<double> &getHorizontalCellsOutput() const { return _horizontalCellsOutput;};
/**
* @return the output Parvocellular ON channel of the retina model
*/
inline const std::valarray<double> &getParvoON() const {return _parvocellularOutputON;};
/**
* @return the output Parvocellular OFF channel of the retina model
*/
inline const std::valarray<double> &getParvoOFF() const {return _parvocellularOutputOFF;};
/**
* @return the output of the Bipolar cells of the ON channel of the retina model, same as getParvoON() but without luminance local adaptation
*/
inline const std::valarray<double> &getBipolarCellsON() const {return _bipolarCellsOutputON;};
/**
* @return the output of the Bipolar cells of the OFF channel of the retina model, same as getParvoOFF() but without luminance local adaptation
*/
inline const std::valarray<double> &getBipolarCellsOFF() const {return _bipolarCellsOutputOFF;};
/**
* @return the photoreceptors' temporal constant
*/
inline const double getPhotoreceptorsTemporalConstant(){return this->_filteringCoeficientsTable[2];};
/**
* @return the horizontal cells' temporal constant
*/
inline const double getHcellsTemporalConstant(){return this->_filteringCoeficientsTable[5];};
private:
// template buffers
std::valarray <double>_photoreceptorsOutput;
std::valarray <double>_horizontalCellsOutput;
std::valarray <double>_parvocellularOutputON;
std::valarray <double>_parvocellularOutputOFF;
std::valarray <double>_bipolarCellsOutputON;
std::valarray <double>_bipolarCellsOutputOFF;
std::valarray <double>_localAdaptationOFF;
std::valarray <double> *_localAdaptationON;
TemplateBuffer<double> *_parvocellularOutputONminusOFF;
// private functions
void _OPL_OnOffWaysComputing();
};
}
#endif
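A compile-level usage sketch of the class declared above (illustration only, not part of the library): it follows the TYPICAL USE comment but with the method names actually declared in this header. The getOutput() accessor is assumed to be inherited from BasicRetinaFilter, as mentioned in the runFilter() documentation, and inputFrame is assumed to hold frameSizeRows*frameSizeColumns gray level values.

#include <valarray>
#include "parvoretinafilter.hpp"

// hypothetical helper showing how the OPL/IPL parvo contours extractor is driven
void processOneFrame(const std::valarray<double> &inputFrame,
                     const unsigned int frameSizeRows, const unsigned int frameSizeColumns)
{
    cv::ParvoRetinaFilter contoursExtractor(frameSizeRows, frameSizeColumns);
    // beta1, tau1, k1 (photoreceptors) then beta2, tau2, k2 (horizontal cells),
    // values taken from the TYPICAL USE comment of this header
    contoursExtractor.setOPLandParvoFiltersParameters(0, 0.7, 1, 0, 7, 1);
    // run the OPL filtering and the IPL parvo local adaptation,
    // the returned reference is the ON minus OFF contours output
    const std::valarray<double> &contours = contoursExtractor.runFilter(inputFrame, true);
    (void)contours; // display or further process the contours output here
}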
/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** HVStools : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
** Use: extract still image & image sequence features, from contour details to spatio-temporal motion features, etc., for high level visual scene analysis. Also contributes to image enhancement/compression such as tone mapping.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2011
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
** The retina filter includes the research contributions of PhD/research colleagues from which code has been redrawn by the author:
** _take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene color mosaicing/demosaicing and the reference paper:
** ====> B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
** _take a look at imagelogpolprojection.hpp to discover retina spatial log sampling which originates from Barthelemy Durette phd with Jeanny Herault. A Retina / V1 cortex projection is also proposed and originates from Jeanny's discussions.
** ====> more information in the above cited Jeanny Herault's book.
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (hvstools)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
/**
* @class RetinaColor a color multiplexing/demultiplexing (demosaicing) model based on a human vision inspiration. Different mosaicing strategies can be used, including random sampling!
* => please take a look at the nice and efficient demosaicing strategy introduced by B. Chaix de Lavarene, and at the cited paper for more mathematical details
* @brief Retina color sampling model which allows classical Bayer sampling, random sampling and potentially several other methods! Low color errors on corners!
* -> Based on the research of:
* .Brice Chaix Lavarene (chaix@lis.inpg.fr)
* .Jeanny Herault (herault@lis.inpg.fr)
* .David Alleyson (david.alleyson@upmf-grenoble.fr)
* .collaboration: alexandre benoit (benoit.alexandre.vision@gmail.com or benoit@lis.inpg.fr)
* Please cite: B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
* @author Alexandre BENOIT, benoit.alexandre.vision@gmail.com, LISTIC / Gipsa-Lab, France: www.gipsa-lab.inpg.fr/
* Creation date 2007
*/
#ifndef RETINACOLOR_HPP_
#define RETINACOLOR_HPP_
#include "basicretinafilter.hpp"
//#define __RETINACOLORDEBUG //define RETINACOLORDEBUG in order to display debug data
namespace cv
{
class RetinaColor: public BasicRetinaFilter
{
public:
/**
* @typedef which allows selecting the type of photoreceptors color sampling
*/
/**
* constructor of the retina color processing model
* @param NBrows: number of rows of the input image
* @param NBcolumns: number of columns of the input image
* @param samplingMethod: the chosen color sampling method
*/
RetinaColor(const unsigned int NBrows, const unsigned int NBcolumns, const RETINA_COLORSAMPLINGMETHOD samplingMethod=RETINA_COLOR_DIAGONAL);
/**
* standard destructor
*/
virtual ~RetinaColor();
/**
* function that clears all buffers of the object
*/
void clearAllBuffers();
/**
* resize retina color filter object (resize all allocated buffers)
* @param NBrows: the new height size
* @param NBcolumns: the new width size
*/
void resize(const unsigned int NBrows, const unsigned int NBcolumns);
/**
* color multiplexing function: a demultiplexed RGB frame of size M*N*3 is transformed into a multiplexed M*N*1 pixels frame where each pixel is either Red, or Green or Blue
* @param inputRGBFrame: the input RGB frame to be processed
* @return nothing, but the multiplexed frame is available through the getMultiplexedFrame() function
*/
inline void runColorMultiplexing(const std::valarray<double> &inputRGBFrame){runColorMultiplexing(inputRGBFrame, *_multiplexedFrame);};
/**
* color multiplexing function: a demultiplexed RGB frame of size M*N*3 is transformed into a multiplexed M*N*1 pixels frame where each pixel is either Red, or Green or Blue if using RGB images
* @param demultiplexedInputFrame: the demultiplexed input frame to be processed of size M*N*3
* @param multiplexedFrame: the resulting multiplexed frame
*/
void runColorMultiplexing(const std::valarray<double> &demultiplexedInputFrame, std::valarray<double> &multiplexedFrame);
/**
* color demultiplexing function: a multiplexed frame of size M*N*1 pixels is transformed into a RGB demultiplexed M*N*3 pixels frame
* @param multiplexedColorFrame: the input multiplexed frame to be processed
* @param adaptiveFiltering: specifies if an adaptive filtering has to be performed rather than standard filtering (adaptive filtering allows a better rendering)
* @param maxInputValue: the maximum input data value (should be 255 for 8 bit images but it can change in the case of High Dynamic Range Images (HDRI))
* @return nothing, but the output demultiplexed frame is available through the getDemultiplexedColorFrame() function; also use getLuminance() and getChrominance() in order to retrieve either luminance or chrominance
*/
void runColorDemultiplexing(const std::valarray<double> &multiplexedColorFrame, const bool adaptiveFiltering=false, const double maxInputValue=255.0);
/**
* activate color saturation as the final step of the color demultiplexing process
* -> this saturation is a sigmoid function applied to each channel of the demultiplexed image.
* @param saturateColors: boolean that activates color saturation (if true) or deactivates it (if false)
* @param colorSaturationValue: the saturation factor
* */
void setColorSaturation(const bool saturateColors=true, const double colorSaturationValue=4.0){_saturateColors=saturateColors; _colorSaturationValue=colorSaturationValue;};
/**
* set parameters of the low pass spatio-temporal filter used to retrieve the low chrominance
* @param beta: gain of the filter (generally set to zero)
* @param tau: time constant of the filter (unit is frame for video processing), typically 0 when considering static processing, 1 or more if a temporal smoothing effect is required
* @param k: spatial constant of the filter (unit is pixels), typical value is 2.5
*/
void setChrominanceLPfilterParameters(const double beta, const double tau, const double k){setLPfilterParameters(beta, tau, k);};
/**
* apply to the retina color output the Krauskopf transformation which leads to an opponent color system: the output colorspace is Acr1cr2 if the input of the retina was the LMS color space
* @param result: the buffer to fill with the transformed colorspace retina output
* @return true if process ended successfully
*/
const bool applyKrauskopfLMS2Acr1cr2Transform(std::valarray<double> &result);
/**
* apply to the retina color output the CIE Lab color transformation
* @param result: the buffer to fill with the transformed colorspace retina output
* @return true if process ended successfully
*/
const bool applyLMS2LabTransform(std::valarray<double> &result);
/**
* @return the multiplexed frame result (use this after function runColorMultiplexing)
*/
inline const std::valarray<double> &getMultiplexedFrame() const {return *_multiplexedFrame;};
/**
* @return the demultiplexed frame result (use this after function runColorDemultiplexing)
*/
inline const std::valarray<double> &getDemultiplexedColorFrame() const {return _demultiplexedColorFrame;};
/**
* @return the luminance of the processed frame (use this after function runColorDemultiplexing)
*/
inline const std::valarray<double> &getLuminance() const {return *_luminance;};
/**
* @return the chrominance of the processed frame (use this after function runColorDemultiplexing)
*/
inline const std::valarray<double> &getChrominance() const {return _chrominance;};
/**
* standard 0 to 255 image clipping function applied to RGB images (of size M*N*3 pixels)
* @param inputOutputBuffer: the image to be normalized (rewrites the input), if no parameter, then, the built in buffer reachable by getOutput() function is normalized
* @param maxOutputValue: the maximum value allowed at the output (values greater than it are clipped)
*/
void clipRGBOutput_0_maxInputValue(double *inputOutputBuffer, const double maxOutputValue=255.0);
/**
* standard 0 to 255 image normalization function applied to RGB images (of size M*N*3 pixels)
* @param maxOutputValue: the maximum value allowed at the output (values greater than it are clipped)
*/
void normalizeRGBOutput_0_maxOutputValue(const double maxOutputValue=255.0);
/**
* return the color sampling map: an Nrows*Mcolumns image in which each pixel value is the offset address which gives the address of the sampled pixel on an Nrows*Mcolumns*3 color image ordered by layers: layer1, layer2, layer3
*/
inline const std::valarray<unsigned int> &getSamplingMap() const {return _colorSampling;};
/**
* function used (to bypass processing) to manually set the color output
* @param demultiplexedImage: the color image (luminance+chrominance) which has to be written in the object buffer
*/
inline void setDemultiplexedColorFrame(const std::valarray<double> &demultiplexedImage){_demultiplexedColorFrame=demultiplexedImage;};
protected:
// private functions
RETINA_COLORSAMPLINGMETHOD _samplingMethod;
bool _saturateColors;
double _colorSaturationValue;
// links to parent buffers (more convenient names)
TemplateBuffer<double> *_luminance;
std::valarray<double> *_multiplexedFrame;
// instance buffers
std::valarray<unsigned int> _colorSampling; // table (size _nbRows*_nbColumns) which specifies the color of each pixel
std::valarray<double> _RGBmosaic;
std::valarray<double> _tempMultiplexedFrame;
std::valarray<double> _demultiplexedTempBuffer;
std::valarray<double> _demultiplexedColorFrame;
std::valarray<double> _chrominance;
std::valarray<double> _colorLocalDensity;// buffer which contains the local density of the R, G and B photoreceptors for a normalization use
std::valarray<double> _imageGradient;
// variables
double _pR, _pG, _pB; // probabilities of color R, G and B
bool _objectInit;
// protected functions
void _initColorSampling();
void _interpolateImageDemultiplexedImage(double *inputOutputBuffer);
void _interpolateSingleChannelImage111(double *inputOutputBuffer);
void _interpolateBayerRGBchannels(double *inputOutputBuffer);
void _applyRIFfilter(const double *sourceBuffer, double *destinationBuffer);
void _getNormalizedContoursImage(const double *inputFrame, double *outputFrame);
// -> special adaptive filters dedicated to low pass filtering on the chrominance (skips filtering on the edges)
void _adaptiveSpatialLPfilter(const double *inputFrame, double *outputFrame);
void _adaptiveHorizontalCausalFilter_addInput(const double *inputFrame, double *outputFrame, const unsigned int IDrowStart, const unsigned int IDrowEnd);
void _adaptiveHorizontalAnticausalFilter(double *outputFrame, const unsigned int IDrowStart, const unsigned int IDrowEnd);
void _adaptiveVerticalCausalFilter(double *outputFrame, const unsigned int IDcolumnStart, const unsigned int IDcolumnEnd);
void _adaptiveVerticalAnticausalFilter_multGain(double *outputFrame, const unsigned int IDcolumnStart, const unsigned int IDcolumnEnd);
void _computeGradient(const double *luminance);
void _normalizeOutputs_0_maxOutputValue(void);
// color space transform
void _applyImageColorSpaceConversion(const std::valarray<double> &inputFrame, std::valarray<double> &outputFrame, const double *transformTable);
};
}
#endif /*RETINACOLOR_HPP_*/
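A usage sketch of the color engine declared above (illustration only, not part of the library): it chains runColorMultiplexing() and runColorDemultiplexing() for a full mosaicing/demosaicing round trip. inputRGBFrame is assumed to be an NBrows*NBcolumns*3 demultiplexed RGB buffer, an 8 bit input range is assumed (maxInputValue = 255), and adaptive filtering is enabled since the documentation above notes it gives a better rendering.

#include <valarray>
#include "retinacolor.hpp"

// hypothetical helper showing a multiplexing/demultiplexing round trip
void colorRoundTrip(const std::valarray<double> &inputRGBFrame,
                    const unsigned int NBrows, const unsigned int NBcolumns)
{
    cv::RetinaColor colorEngine(NBrows, NBcolumns, cv::RETINA_COLOR_DIAGONAL);
    // demultiplexed RGB frame (M*N*3) -> single multiplexed plane (M*N), one color sample per pixel
    colorEngine.runColorMultiplexing(inputRGBFrame);
    // multiplexed plane -> demultiplexed RGB frame (adaptive filtering, 8 bit input range)
    colorEngine.runColorDemultiplexing(colorEngine.getMultiplexedFrame(), true, 255.0);
    const std::valarray<double> &rgbOutput = colorEngine.getDemultiplexedColorFrame();
    const std::valarray<double> &luminance = colorEngine.getLuminance();
    (void)rgbOutput; (void)luminance; // outputs ready for display or further processing
}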
//============================================================================
// Name : retinademo.cpp
// Author : Alexandre Benoit, benoit.alexandre.vision@gmail.com
// Version : 0.1
// Copyright : LISTIC/GIPSA French Labs, july 2011
// Description : Gipsa/LISTIC Labs retina demo in C++, Ansi-style
//============================================================================
#include <iostream>
#include <cstring>
#include "HVStools/retina.hpp" // for retina processing
#include <opencv/highgui.h> // image IO
void help(std::string errorMessage)
{
std::cout<<"Program init error : "<<errorMessage<<std::endl;
std::cout<<"\nProgram call procedure : retinaDemo [processing mode] [Optional : media target] [Optional LAST parameter: \"log\" to activate retina log sampling]"<<std::endl;
std::cout<<"\t[processing mode] :"<<std::endl;
std::cout<<"\t -image : for still image processing"<<std::endl;
std::cout<<"\t -video : for video stream processing"<<std::endl;
std::cout<<"\t[Optional : media target] :"<<std::endl;
std::cout<<"\t if processing an image or video file, then, specify the path and filename of the target to process"<<std::endl;
std::cout<<"\t leave empty if processing video stream coming from a connected video device"<<std::endl;
std::cout<<"\t[Optional : activate retina log sampling] : an optional last parameter can be specified for retina spatial log sampling"<<std::endl;
std::cout<<"\t set \"log\" without quotes to activate this sampling, output frame size will be divided by 4"<<std::endl;
std::cout<<"\nExamples:"<<std::endl;
std::cout<<"\t-Image processing : ./retinaDemo -image lena.jpg"<<std::endl;
std::cout<<"\t-Image processing with log sampling : ./retinaDemo -image lena.jpg log"<<std::endl;
std::cout<<"\t-Video processing : ./retinaDemo -video myMovie.mp4"<<std::endl;
std::cout<<"\t-Live video processing : ./retinaDemo -video"<<std::endl;
std::cout<<"\nPlease start again with new parameters"<<std::endl;
}
int main(int argc, char* argv[]) {
// welcome message
std::cout<<"****************************************************"<<std::endl;
std::cout<<"* Retina demonstration : demonstrates the use of cv::Retina, a wrapper class of the Gipsa/Listic Labs retina model."<<std::endl;
std::cout<<"* This retina model allows spatio-temporal image processing (applied on still images, video sequences)."<<std::endl;
std::cout<<"* As a summary, these are the retina model properties:"<<std::endl;
std::cout<<"* => It applies a spectral whitening (mid-frequency details enhancement)"<<std::endl;
std::cout<<"* => high frequency spatio-temporal noise reduction"<<std::endl;
std::cout<<"* => low frequency luminance is reduced (luminance range compression)"<<std::endl;
std::cout<<"* => local logarithmic luminance compression allows details to be enhanced in low light conditions\n"<<std::endl;
std::cout<<"* for more information, refer to the following papers :"<<std::endl;
std::cout<<"* Benoit A., Caplier A., Durette B., Herault, J., \"USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING\", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011"<<std::endl;
std::cout<<"* Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891."<<std::endl;
std::cout<<"* => report comments/remarks to benoit.alexandre.vision@gmail.com"<<std::endl;
std::cout<<"****************************************************"<<std::endl;
// basic input arguments checking
if (argc<2)
{
help("bad number of parameters");
return -1;
}
bool useLogSampling = !strcmp(argv[argc-1], "log"); // check if user wants retina log sampling processing
std::string inputMediaType=argv[1];
// declare the retina input buffer... that will be fed differently depending on the input media
cv::Mat inputFrame;
cv::VideoCapture videoCapture; // in case a video media is used, its manager is declared here
//////////////////////////////////////////////////////////////////////////////
// checking input media type (still image, video file, live video acquisition)
if (!strcmp(inputMediaType.c_str(), "-image") && argc >= 3)
{
std::cout<<"RetinaDemo: processing image "<<argv[2]<<std::endl;
// image processing case
inputFrame = cv::imread(std::string(argv[2]), 1); // force loading as a 3 channel color image
}else
if (!strcmp(inputMediaType.c_str(), "-video"))
{
if (argc == 2 || (argc == 3 && useLogSampling)) // attempt to grab images from a video capture device
{
videoCapture.open(0);
}else// attempt to grab images from a video filestream
{
std::cout<<"RetinaDemo: processing video stream "<<argv[2]<<std::endl;
videoCapture.open(argv[2]);
}
// grab a first frame to check if everything is ok
videoCapture>>inputFrame;
}else
{
// bad command parameter
help("bad command parameter");
return -1;
}
if (inputFrame.empty())
{
help("Input media could not be loaded, aborting");
return -1;
}
//////////////////////////////////////////////////////////////////////////////
// Program start in a try/catch safety context (Retina may throw errors)
try
{
// create a retina instance with default parameters setup, uncomment the initialisation you want to test
cv::Ptr<cv::Retina> myRetina;
// if the last parameter is 'log', then activate log sampling (favours foveal vision and subsamples peripheral vision)
if (useLogSampling)
myRetina = new cv::Retina("params.xml", inputFrame.size(), true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
else// -> else allocate "classical" retina :
myRetina = new cv::Retina("params.xml", inputFrame.size());
// declare retina output buffers
cv::Mat retinaOutput_parvo;
cv::Mat retinaOutput_magno;
// processing loop with stop condition
bool continueProcessing=true; // FIXME : not yet managed during process...
while(continueProcessing)
{
// if using a video stream, grab a new frame, otherwise the input remains the same
if (videoCapture.isOpened())
videoCapture>>inputFrame;
// run retina filter
myRetina->run(inputFrame);
// Retrieve and display retina output
myRetina->getParvo(retinaOutput_parvo);
myRetina->getMagno(retinaOutput_magno);
cv::imshow("retina input", inputFrame);
cv::imshow("Retina Parvo", retinaOutput_parvo);
cv::imshow("Retina Magno", retinaOutput_magno);
cv::waitKey(10);
}
}catch(const cv::Exception &e)
{
std::cerr<<"Error using Retina : "<<e.what()<<std::endl;
}
// Program end message
std::cout<<"Retina demo end"<<std::endl;
return 0;
}