Commit 3d4ca2d7 authored by Vadim Pisarevsky

added ios tutorials by Charu Hans and Eduard Feicho

Parent b2acf50c
......@@ -6,5 +6,5 @@
.. |Author_VictorE| unicode:: Victor U+0020 Eruhimov
.. |Author_ArtemM| unicode:: Artem U+0020 Myagkov
.. |Author_FernandoI| unicode:: Fernando U+0020 Iglesias U+0020 Garc U+00ED a
.. |Author_EduardF| unicode:: Eduard U+0020 Feicho
This diff is suppressed by .gitattributes.
......@@ -7,19 +7,19 @@ Required packages
==================
* CMake 2.8.8 or higher
* Xcode 4.3 or higher
* Xcode 4.2 or higher
Getting the cutting-edge OpenCV from SourceForge SVN repository
-----------------------------------------------------------------
Getting the cutting-edge OpenCV from GIT repository
---------------------------------------------------
Launch SVN client and checkout the current OpenCV snapshot from here: http://code.opencv.org/svn/opencv/trunk/opencv
Launch GIT client and clone OpenCV repository from here: http://github.com/itseez/opencv
In MacOS it can be done using the following command in Terminal:
.. code-block:: bash
cd ~/<my_working _directory>
svn co http://code.opencv.org/svn/opencv/trunk/opencv
git clone https://github.com/Itseez/opencv.git
Building OpenCV from source using CMake, using the command line
......@@ -39,4 +39,9 @@ Building OpenCV from source using CMake, using the command line
cd ~/<my_working_directory>
python opencv/ios/build_framework.py ios
If everything's fine, after a few minutes you will get ~/<my_working_directory>/ios/opencv2.framework. You can add this framework to your Xcode projects.
If everything's fine, a few minutes later you will get ~/<my_working_directory>/ios/opencv2.framework. You can add this framework to your Xcode projects.
Further Reading
=====================
You can find several OpenCV+iOS tutorials here :ref:`Table-Of-Content-iOS`
......@@ -155,18 +155,18 @@ Here you can read tutorials about how to set up your computer to work with the O
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
=========== ======================================================
|Install_2| **Title:** :ref:`iOS-Installation`
============= ======================================================
|Install_iOS| **Title:** :ref:`iOS-Installation`
*Compatibility:* > OpenCV 2.3.1
*Compatibility:* > OpenCV 2.4.2
*Author:* |Author_ArtemM|
*Author:* |Author_ArtemM|, |Author_EduardF|
We will learn how to setup OpenCV for using it in iOS!
=========== ======================================================
============= ======================================================
.. |Install_2| image:: images/ios4_logo.jpg
.. |Install_iOS| image:: images/opencv_ios.png
:width: 90pt
.. tabularcolumns:: m{100pt} m{300pt}
......
.. _OpenCViOSHelloWorld:
OpenCV iOS Hello
*******************************
Goal
====
In this tutorial we will learn how to:
.. container:: enumeratevisibleitemswithsquare
* Link OpenCV framework with Xcode
* Write a simple Hello World application using OpenCV and Xcode.
*Linking OpenCV iOS*
======================
Follow this step by step guide to link OpenCV to iOS.
1. Create a new Xcode project.
2. Now we need to link *opencv2.framework* with Xcode. Select the Project Navigator in the left-hand panel and click on the project name.
3. Under TARGETS, click on Build Phases and expand the Link Binary With Libraries option.
4. Click on Add Others, go to the directory where *opencv2.framework* is located, and click Open.
5. Now you can start writing your application.
.. image:: images/linking_opencv_ios.png
:alt: OpenCV iOS in Xcode
:align: center
*Hello OpenCV iOS Application*
===============================
Now we will learn how to write a simple Hello World Application in Xcode using OpenCV.
.. container:: enumeratevisibleitemswithsquare
* Link your project with OpenCV as shown in previous section.
* Open the file named *NameOfProject-Prefix.pch* (replace NameOfProject with the name of your project) and add the following lines of code.
.. code-block:: cpp
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
.. image:: images/header_directive.png
:alt: header
:align: center
.. container:: enumeratevisibleitemswithsquare
* Add the following lines of code to the viewDidLoad method in ViewController.m.
.. code-block:: cpp
UIAlertView * alert = [[UIAlertView alloc] initWithTitle:@"Hello!" message:@"Welcome to OpenCV" delegate:self cancelButtonTitle:@"Continue" otherButtonTitles:nil];
[alert show];
.. image:: images/view_did_load.png
:alt: view did load
:align: center
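Putting it together, the resulting method could look like the minimal sketch below (assuming the stock Single View Application template; only the two alert lines are additions to the generated code):

.. code-block:: cpp

   - (void)viewDidLoad
   {
       [super viewDidLoad];
       // Show a simple alert to verify that the project builds and links against OpenCV.
       UIAlertView * alert = [[UIAlertView alloc] initWithTitle:@"Hello!" message:@"Welcome to OpenCV" delegate:self cancelButtonTitle:@"Continue" otherButtonTitles:nil];
       [alert show];
   }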
.. container:: enumeratevisibleitemswithsquare
* You are now ready to run the project.
*Output*
=========
.. image:: images/output.png
:alt: output
:align: center
.. _OpenCViOSImageManipulation:
OpenCV iOS - Image Processing
*******************************
Goal
====
In this tutorial we will learn how to do basic image processing using OpenCV in iOS.
*Introduction*
==============
In *OpenCV* all image processing operations are performed on the *Mat* object, whereas iOS uses the UIImage class to display images. The first step is therefore to convert a UIImage into a Mat. Below is the code to convert a UIImage to a Mat.
.. code-block:: cpp
- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
// Note: colorSpace comes from CGImageGetColorSpace (a "Get" function), so it must not be released here.
return cvMat;
}
The following variant produces a single-channel (grayscale) Mat instead. Note that it uses a device-gray colorspace so that the bitmap context matches the single-channel buffer:
.. code-block:: cpp
- (cv::Mat)cvMatGrayFromUIImage:(UIImage *)image
{
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray(); // single-channel target colorspace
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;
cv::Mat cvMat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to data
cols, // Width of bitmap
rows, // Height of bitmap
8, // Bits per component
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNone |
kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace); // released because it was created above
return cvMat;
}
Once we obtain the Mat object, we can do all our processing on it, just as we would in C++. For example, to convert the image to grayscale, we can use the code below.
.. code-block:: cpp
cv::Mat greyMat;
cv::cvtColor(inputMat, greyMat, CV_BGR2GRAY);
After the processing we need to convert the Mat back to a UIImage.
.. code-block:: cpp
-(UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];
CGColorSpaceRef colorSpace;
if (cvMat.elemSize() == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(cvMat.cols, //width
cvMat.rows, //height
8, //bits per component
8 * cvMat.elemSize(), //bits per pixel
cvMat.step[0], //bytesPerRow
colorSpace, //colorspace
kCGImageAlphaNone|kCGBitmapByteOrderDefault,// bitmap info
provider, //CGDataProviderRef
NULL, //decode
false, //should interpolate
kCGRenderingIntentDefault //intent
);
// Getting UIImage from CGImage
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return finalImage;
}
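As a small usage sketch (the ``imageView`` outlet is an assumption for illustration, and the helper methods above are assumed to live in the same class), the three helpers can be chained together like this:

.. code-block:: cpp

   UIImage * image = imageView.image;                     // hypothetical source image
   cv::Mat inputMat = [self cvMatFromUIImage:image];      // UIImage -> cv::Mat

   cv::Mat greyMat;
   cv::cvtColor(inputMat, greyMat, CV_BGR2GRAY);          // any OpenCV processing

   imageView.image = [self UIImageFromCVMat:greyMat];     // cv::Mat -> UIImage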
*Output*
==================================
.. image:: images/output.jpg
:alt: header
:align: center
Check out an example of the running code with more image effects on `YouTube <http://www.youtube.com/watch?v=Ko3K_xdhJ1I>`_ .
.. raw:: html
<div align="center">
<iframe width="560" height="350" src="http://www.youtube.com/embed/Ko3K_xdhJ1I" frameborder="0" allowfullscreen></iframe>
</div>
\ No newline at end of file
.. _Table-Of-Content-iOS:
**OpenCV iOS**
-----------------------------------------------------------
.. include:: ../../definitions/tocDefinitions.rst
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
============ ===============================================================================
|iOSOpenCV| **Title:** :ref:`OpenCViOSHelloWorld`
*Compatibility:* > OpenCV 2.4.3
*Author:* Charu Hans
You will learn how to link OpenCV with iOS and write a basic application.
============ ===============================================================================
.. |iOSOpenCV| image:: images/intro.png
:height: 120pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
================ ============================================================================
|iOSOpenCVImg| **Title:** :ref:`OpenCViOSImageManipulation`
*Compatibility:* > OpenCV 2.4.3
*Author:* Charu Hans
You will learn how to do simple image manipulation using OpenCV in iOS.
================ ============================================================================
.. |iOSOpenCVImg| image:: images/image_effects.png
:height: 90pt
:width: 90pt
+
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
================= ============================================================================
|iOSOpenCVVideo| **Title:** :ref:`OpenCViOSVideoProcessing`
*Compatibility:* > OpenCV 2.4.3
*Author:* Eduard Feicho
You will learn how to capture and process video from camera using OpenCV in iOS.
================= ============================================================================
.. |iOSOpenCVVideo| image:: images/facedetect.jpg
:height: 120pt
:width: 90pt
.. raw:: latex
\pagebreak
.. toctree::
:hidden:
../hello/hello
../image_manipulation/image_manipulation
../video_processing/video_processing
.. _OpenCViOSVideoProcessing:
OpenCV iOS - Video Processing
*******************************
This tutorial explains how to process video frames using the iPhone's camera and OpenCV.
Prerequisites:
==================
* Xcode 4.3 or higher
* Basic knowledge of iOS programming (Objective-C, Interface Builder)
Including OpenCV library in your iOS project
================================================
The OpenCV library comes as a so-called framework, which you can directly drag and drop into your Xcode project. Download the latest binary from <http://sourceforge.net/projects/opencvlibrary/files/opencv-ios/>. Alternatively, follow this guide :ref:`iOS-Installation` to compile the framework manually. Once you have the framework, just drag and drop it into Xcode:
.. image:: images/xcode_hello_ios_framework_drag_and_drop.png
You also have to locate the prefix header that is used for all header files in the project. The file is typically located at "ProjectName/Supporting Files/ProjectName-Prefix.pch". There, you have to add an include statement to import the OpenCV library. Make sure you include OpenCV before you include UIKit and Foundation, otherwise you will get some weird compile errors because macros like min and max are defined multiple times. For example, the prefix header could look like the following:
.. code-block:: objc
:linenos:
//
// Prefix header for all source files of the 'VideoFilters' target in the 'VideoFilters' project
//
#import <Availability.h>
#ifndef __IPHONE_4_0
#warning "This project uses features only available in iOS SDK 4.0 and later."
#endif
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
#ifdef __OBJC__
#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>
#endif
Example video frame processing project
--------------------------------------
User Interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
First, we create a simple iOS project, for example a Single View Application. Then, we create and add a UIImageView and a UIButton to start the camera and display the video frames. The storyboard could look like this:
.. image:: images/xcode_hello_ios_viewcontroller_layout.png
Make sure to add and connect the IBOutlets and IBActions to the corresponding ViewController:
.. code-block:: objc
:linenos:
@interface ViewController : UIViewController
{
IBOutlet UIImageView* imageView;
IBOutlet UIButton* button;
}
- (IBAction)actionStart:(id)sender;
@end
Adding the Camera
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We add a camera controller to the view controller and initialize it when the view has loaded:
.. code-block:: objc
:linenos:
#import <opencv2/highgui/cap_ios.h>
using namespace cv;
@interface ViewController : UIViewController
{
...
CvVideoCamera* videoCamera;
}
...
@property (nonatomic, retain) CvVideoCamera* videoCamera;
@end
.. code-block:: objc
:linenos:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
self.videoCamera = [[CvVideoCamera alloc] initWithParentView:imageView];
self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionFront;
self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset352x288;
self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
self.videoCamera.defaultFPS = 30;
self.videoCamera.grayscale = NO;
}
In this case, we initialize the camera and provide the imageView as a target for rendering each frame. CvVideoCamera is basically a wrapper around AVFoundation, so we provide some of the AVFoundation camera options as properties. For example, we want to use the front camera, set the video size to 352x288 and a video orientation (the video camera normally outputs in landscape mode, which results in transposed data when you design a portrait application).
The property defaultFPS sets the FPS of the camera. If the processing is slower than the desired FPS, frames are automatically dropped.
The property grayscale=YES results in a different colorspace, namely "YUV (YpCbCr 4:2:0)", while grayscale=NO will output 32 bit BGRA.
Additionally, we have to manually add the frameworks that the opencv framework depends on. Finally, you should have at least the following frameworks in your project:
* opencv2
* Accelerate
* AssetsLibrary
* AVFoundation
* CoreGraphics
* CoreImage
* CoreMedia
* CoreVideo
* CoreAnimation
* QuartzCore
* UIKit
* Foundation
.. image:: images/xcode_hello_ios_frameworks_add_dependencies.png
Processing frames
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We follow the delegation pattern, which is very common in iOS, to provide access to each camera frame. Basically, the View Controller has to implement the CvVideoCameraDelegate protocol and has to be set as the delegate of the video camera:
.. code-block:: objc
:linenos:
@interface ViewController : UIViewController<CvVideoCameraDelegate>
.. code-block:: objc
:linenos:
- (void)viewDidLoad
{
...
self.videoCamera = [[CvVideoCamera alloc] initWithParentView:imageView];
self.videoCamera.delegate = self;
...
}
.. code-block:: objc
:linenos:
#pragma mark - Protocol CvVideoCameraDelegate
#ifdef __cplusplus
- (void)processImage:(Mat&)image
{
// Do some OpenCV stuff with the image
}
#endif
Note that we are using C++ here (cv::Mat).
Important: You have to change the extension of the view controller's file from .m to .mm, so that the compiler compiles it as Objective-C++ (Objective-C and C++ mixed). Then, __cplusplus is defined when the compiler is processing the file for C++ code, which is why we put our code inside a block where __cplusplus is defined.
Basic video processing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From here you can start processing video frames. For example the following snippet color-inverts the image:
.. code-block:: objc
:linenos:
- (void)processImage:(Mat&)image
{
// Do some OpenCV stuff with the image
Mat image_copy;
cvtColor(image, image_copy, CV_BGRA2BGR);
// invert image
bitwise_not(image_copy, image_copy);
cvtColor(image_copy, image, CV_BGR2BGRA);
}
Start!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Finally, we have to tell the camera to actually start/stop working. The following code will start the camera when you press the button, assuming you connected the UI properly:
.. code-block:: objc
:linenos:
#pragma mark - UI Actions
- (IBAction)actionStart:(id)sender
{
[self.videoCamera start];
}
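Symmetrically, the camera can be stopped again via the camera controller's stop method, for example when the view is about to disappear (a minimal sketch; overriding viewWillDisappear: for this purpose is an assumption, not part of the original project):

.. code-block:: objc
   :linenos:

   - (void)viewWillDisappear:(BOOL)animated
   {
       [super viewWillDisappear:animated];
       // stop grabbing and processing frames when the view goes away
       [self.videoCamera stop];
   }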
Hints
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Try to avoid costly matrix copy operations as much as you can, especially if you are aiming for real-time performance. As the image data is passed by reference, work in place if possible.
When you are working on grayscale data, set grayscale = YES, as the YUV colorspace gives you direct access to the luminance plane.
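For example (a sketch under the assumption that with grayscale = YES the delegate receives a single-channel CV_8UC1 image), you can then operate on the luminance plane directly and in place:

.. code-block:: objc
   :linenos:

   - (void)processImage:(Mat&)image
   {
       // image is assumed to be single-channel (CV_8UC1) when grayscale = YES;
       // equalize the histogram of the luminance plane in place
       cv::equalizeHist(image, image);
   }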
The Accelerate framework provides some CPU-accelerated DSP filters, which may come in handy in your case.
......@@ -156,6 +156,21 @@ As always, we would be happy to hear your comments and receive your contribution
:width: 80pt
:alt: gpu icon
* :ref:`Table-Of-Content-iOS`
.. tabularcolumns:: m{100pt} m{300pt}
.. cssclass:: toctableopencv
=========== =======================================================
|iOS| Run OpenCV and your vision apps on an iDevice
=========== =======================================================
.. |iOS| image:: images/opencv_ios.png
:height: 80pt
:width: 80pt
:alt: ios icon
* :ref:`Table-Of-Content-General`
.. tabularcolumns:: m{100pt} m{300pt}
......@@ -189,4 +204,5 @@ As always, we would be happy to hear your comments and receive your contribution
objdetect/table_of_content_objdetect/table_of_content_objdetect
ml/table_of_content_ml/table_of_content_ml
gpu/table_of_content_gpu/table_of_content_gpu
ios/table_of_content_ios/table_of_content_ios
general/table_of_content_general/table_of_content_general