Commit 323e24e3 authored by Alexander Alekhin

change links from samples/python2 to samples/python

Parent 2ecb4892
......@@ -16,8 +16,7 @@ intensity value of the pixel. But in two-dimensional histograms, you consider tw
it is used for finding color histograms where two features are Hue & Saturation values of every
pixel.
-There is a [python sample in the official
-samples](https://github.com/Itseez/opencv/blob/master/samples/python2/color_histogram.py) already
+There is a python sample (samples/python/color_histogram.py) already
for finding color histograms. We will try to understand how to create such a color histogram, and it
will be useful in understanding further topics like Histogram Back-Projection.
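For orientation, here is a minimal sketch of computing such a two-feature (Hue, Saturation) histogram with cv2.calcHist; the image file name is a placeholder:

```python
import cv2

img = cv2.imread('home.jpg')                      # placeholder image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# channels [0, 1] = Hue and Saturation; 180 hue bins and 256 saturation bins
hist = cv2.calcHist([hsv], [0, 1], None, [180, 256], [0, 180, 0, 256])
print(hist.shape)                                 # (180, 256)
```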
......@@ -106,10 +105,11 @@ You can verify it with any image editing tools like GIMP.
### Method 3 : OpenCV sample style !!
-There is a [sample code for color-histogram in OpenCV-Python2
-samples](https://github.com/Itseez/opencv/blob/master/samples/python2/color_histogram.py). If you
-run the code, you can see the histogram shows the corresponding color also. Or simply it outputs a
-color coded histogram. Its result is very good (although you need to add extra bunch of lines).
+There is a sample code for color-histogram in OpenCV-Python2 samples
+(samples/python/color_histogram.py).
+If you run the code, you can see the histogram shows the corresponding color also.
+Or simply it outputs a color coded histogram.
+Its result is very good (although you need to add extra bunch of lines).
In that code, the author created a color map in HSV. Then converted it into BGR. The resulting
histogram image is multiplied with this color map. He also uses some preprocessing steps to remove
......
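As a rough sketch (not the sample's exact code) of the colour-map idea described above: build an HSV colour map, convert it to BGR, and modulate it by the scaled 2D histogram so each bin is drawn in its own colour. The image name and the scaling factor are placeholders.

```python
import cv2
import numpy as np

hsv_map = np.zeros((180, 256, 3), np.uint8)
h, s = np.indices(hsv_map.shape[:2])
hsv_map[:, :, 0] = h                               # hue varies along rows
hsv_map[:, :, 1] = s                               # saturation varies along columns
hsv_map[:, :, 2] = 255                             # full value
color_map = cv2.cvtColor(hsv_map, cv2.COLOR_HSV2BGR)

img = cv2.imread('home.jpg')                       # placeholder image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv], [0, 1], None, [180, 256], [0, 180, 0, 256])
hist = np.clip(hist * 0.005, 0, 1)                 # crude scaling for display
vis = (color_map * hist[:, :, np.newaxis]).astype(np.uint8)
cv2.imshow('2D histogram', vis)
cv2.waitKey(0)
```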
......@@ -155,8 +155,8 @@ should be due to the sky)
Well, here you adjust the values of histograms along with its bin values to look like x,y
coordinates so that you can draw it using cv2.line() or cv2.polyline() function to generate same
-image as above. This is already available with OpenCV-Python2 official samples. [Check the
-Code](https://github.com/Itseez/opencv/raw/master/samples/python2/hist.py)
+image as above. This is already available with OpenCV-Python2 official samples. Check the
+code at samples/python/hist.py.
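A minimal sketch of that idea (not the official hist.py code): scale the 256 bin values to image coordinates and draw them with cv2.polylines. The image name and canvas size are placeholders.

```python
import cv2
import numpy as np

img = cv2.imread('home.jpg', cv2.IMREAD_GRAYSCALE)      # placeholder image
hist = cv2.calcHist([img], [0], None, [256], [0, 256])

h, w = 300, 256
canvas = np.zeros((h, w, 3), np.uint8)
cv2.normalize(hist, hist, 0, h - 1, cv2.NORM_MINMAX)    # fit values into the canvas
pts = np.int32(np.column_stack((np.arange(256), h - 1 - hist[:, 0])))
cv2.polylines(canvas, [pts], False, (255, 255, 255))
cv2.imshow('histogram', canvas)
cv2.waitKey(0)
```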
Application of Mask
-------------------
......
......@@ -13,7 +13,7 @@ OCR of Hand-written Digits
Our goal is to build an application which can read the handwritten digits. For this we need some
train_data and test_data. OpenCV comes with an image digits.png (in the folder
-opencv/samples/python2/data/) which has 5000 handwritten digits (500 for each digit). Each digit is
+opencv/samples/data/) which has 5000 handwritten digits (500 for each digit). Each digit is
a 20x20 image. So our first step is to split this image into 5000 different digits. For each digit,
we flatten it into a single row with 400 pixels. That is our feature set, ie intensity values of all
pixels. It is the simplest feature set we can create. We use first 250 samples of each digit as
......
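A minimal sketch of the splitting and flattening step described above, assuming digits.png has been copied to the current directory (the image holds 50 rows x 100 columns of 20x20 cells):

```python
import cv2
import numpy as np

gray = cv2.imread('digits.png', cv2.IMREAD_GRAYSCALE)

# 50 rows x 100 columns of 20x20 digit cells -> 5000 cells
cells = [np.hsplit(row, 100) for row in np.vsplit(gray, 50)]
x = np.array(cells)                                     # shape (50, 100, 20, 20)

train = x[:, :50].reshape(-1, 400).astype(np.float32)   # first 250 of each digit
test = x[:, 50:].reshape(-1, 400).astype(np.float32)    # remaining 250 of each digit
labels = np.repeat(np.arange(10), 250)[:, np.newaxis].astype(np.float32)
print(train.shape, test.shape, labels.shape)            # (2500, 400) (2500, 400) (2500, 1)
```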
......@@ -81,7 +81,7 @@ Additional Resources
Exercises
---------
--# OpenCV comes with an interactive sample on inpainting, samples/python2/inpaint.py, try it.
+-# OpenCV comes with an interactive sample on inpainting, samples/python/inpaint.py, try it.
2. A few months ago, I watched a video on [Content-Aware
Fill](http://www.youtube.com/watch?v=ZtoUiplKa2A), an advanced inpainting technique used in
Adobe Photoshop. On further search, I was able to find that same technique is already there in
......
......@@ -156,7 +156,7 @@ in image, there is a chance that optical flow finds the next point which may loo
actually for a robust tracking, corner points should be detected in particular intervals. OpenCV
samples comes up with such a sample which finds the feature points at every 5 frames. It also run a
backward-check of the optical flow points got to select only good ones. Check
-samples/python2/lk_track.py).
+samples/python/lk_track.py).
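A hedged sketch of that forward-backward check (the same idea lk_track.py uses, not the sample verbatim); prev_gray/gray are consecutive grayscale frames and p0 is an Nx1x2 float32 point array:

```python
import cv2
import numpy as np

def filter_tracks(prev_gray, gray, p0, fb_threshold=1.0):
    """Keep only points whose backward flow returns close to where they started."""
    lk_params = dict(winSize=(15, 15), maxLevel=2,
                     criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))
    p1, _, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None, **lk_params)
    p0r, _, _ = cv2.calcOpticalFlowPyrLK(gray, prev_gray, p1, None, **lk_params)
    fb_error = np.abs(p0 - p0r).reshape(-1, 2).max(axis=-1)   # forward-backward error
    good = fb_error < fb_threshold
    return p1[good].reshape(-1, 1, 2)
```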
See the results we got:
......@@ -213,7 +213,7 @@ See the result below:
![image](images/opticalfb.jpg)
OpenCV comes with a more advanced sample on dense optical flow, please see
-samples/python2/opt_flow.py.
+samples/python/opt_flow.py.
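For reference, a minimal dense optical flow sketch using cv2.calcOpticalFlowFarneback; the frame file names and parameter values are placeholders (they follow commonly documented defaults):

```python
import cv2

prev = cv2.imread('frame1.jpg', cv2.IMREAD_GRAYSCALE)   # placeholder frames
curr = cv2.imread('frame2.jpg', cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # per-pixel magnitude and angle
print(flow.shape)                                       # (H, W, 2): (dx, dy) per pixel
```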
Additional Resources
--------------------
......@@ -221,5 +221,5 @@ Additional Resources
Exercises
---------
--# Check the code in samples/python2/lk_track.py. Try to understand the code.
-2. Check the code in samples/python2/opt_flow.py. Try to understand the code.
+-# Check the code in samples/python/lk_track.py. Try to understand the code.
+2. Check the code in samples/python/opt_flow.py. Try to understand the code.
......@@ -175,7 +175,7 @@ pattern (every view is described by several 3D-2D point correspondences).
- A calibration example on stereo matching can be found at
opencv_source_code/samples/cpp/stereo_match.cpp
- (Python) A camera calibration sample can be found at
-opencv_source_code/samples/python2/calibrate.py
+opencv_source_code/samples/python/calibrate.py
@{
@defgroup calib3d_fisheye Fisheye camera model
......@@ -553,7 +553,7 @@ projections, as well as the camera matrix and the distortion coefficients.
@note
- An example of how to use solvePnP for planar augmented reality can be found at
-opencv_source_code/samples/python2/plane_ar.py
+opencv_source_code/samples/python/plane_ar.py
- If you are using Python:
- Numpy array slices won't work as input because solvePnP requires contiguous
arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of
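A tiny illustration of that contiguity note, using dummy data (the point is the np.ascontiguousarray call on the sliced arrays before cv2.solvePnP):

```python
import cv2
import numpy as np

object_points = np.random.rand(10, 4).astype(np.float32)       # pretend extra column to slice away
image_points = np.random.rand(10, 3).astype(np.float32) * 100
K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1]], np.float64)

obj = np.ascontiguousarray(object_points[:, :3]).reshape((-1, 1, 3))   # make slices contiguous
img = np.ascontiguousarray(image_points[:, :2]).reshape((-1, 1, 2))
ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)
```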
......@@ -1674,7 +1674,7 @@ check, quadratic interpolation and speckle filtering).
@note
- (Python) An example illustrating the use of the StereoSGBM matching algorithm can be found
-at opencv_source_code/samples/python2/stereo_match.py
+at opencv_source_code/samples/python/stereo_match.py
*/
class CV_EXPORTS_W StereoSGBM : public StereoMatcher
{
......
......@@ -2035,9 +2035,9 @@ so you need to "flip" the second convolution operand B vertically and horizontal
- An example using the discrete fourier transform can be found at
opencv_source_code/samples/cpp/dft.cpp
- (Python) An example using the dft functionality to perform Wiener deconvolution can be found
-at opencv_source/samples/python2/deconvolution.py
+at opencv_source/samples/python/deconvolution.py
- (Python) An example rearranging the quadrants of a Fourier image can be found at
-opencv_source/samples/python2/dft.py
+opencv_source/samples/python/dft.py
@param src input array that could be real or complex.
@param dst output array whose size and type depends on the flags .
@param flags transformation flags, representing a combination of the cv::DftFlags
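As a short illustration of the quadrant rearrangement mentioned above (moving the zero-frequency term to the centre of a DFT result), in the spirit of dft.py; the input here is synthetic:

```python
import cv2
import numpy as np

img = np.random.rand(256, 256).astype(np.float32)         # synthetic stand-in image
dft = cv2.dft(img, flags=cv2.DFT_COMPLEX_OUTPUT)           # (256, 256, 2): real/imag planes
shifted = np.fft.fftshift(dft, axes=(0, 1))                # swap quadrants
magnitude = cv2.magnitude(shifted[..., 0], shifted[..., 1])
```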
......@@ -2848,7 +2848,7 @@ and groups the input samples around the clusters. As an output, \f$\texttt{label
@note
- (Python) An example on K-means clustering can be found at
-opencv_source_code/samples/python2/kmeans.py
+opencv_source_code/samples/python/kmeans.py
@param data Data for clustering. An array of N-Dimensional points with float coordinates is needed.
Examples of this array can be:
- Mat points(count, 2, CV_32F);
......
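A short cv2.kmeans usage sketch matching that description, with random 2-D float points standing in for the data array:

```python
import cv2
import numpy as np

pts = np.random.rand(100, 2).astype(np.float32)           # N x 2 float32 data
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
compactness, labels, centers = cv2.kmeans(pts, 3, None, criteria,
                                          10, cv2.KMEANS_RANDOM_CENTERS)
print(centers)                                            # the 3 cluster centres
```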
......@@ -73,7 +73,7 @@ namespace cv { namespace cuda {
- A CUDA example applying the HOG descriptor for people detection can be found at
opencv_source_code/samples/gpu/hog.cpp
- (Python) An example applying the HOG descriptor for people detection can be found at
-opencv_source_code/samples/python2/peopledetect.py
+opencv_source_code/samples/python/peopledetect.py
*/
class CV_EXPORTS HOG : public Algorithm
{
......
......@@ -74,7 +74,7 @@ This section describes approaches based on local 2D features and used to categor
- A complete Bag-Of-Words sample can be found at
opencv_source_code/samples/cpp/bagofwords_classification.cpp
- (Python) An example using the features2D framework to perform object categorization can be
-found at opencv_source_code/samples/python2/find_obj.py
+found at opencv_source_code/samples/python/find_obj.py
@}
*/
......@@ -331,7 +331,7 @@ than union-find method; it actually get 1.5~2m/s on my centrino L7200 1.2GHz lap
than grey image method ( 3~4 times ); the chi_table.h file is taken directly from paper's source
code which is distributed under GPL.
-- (Python) A complete example showing the use of the %MSER detector can be found at samples/python2/mser.py
+- (Python) A complete example showing the use of the %MSER detector can be found at samples/python/mser.py
*/
class CV_EXPORTS_W MSER : public Feature2D
{
......
......@@ -261,7 +261,7 @@ public:
@note
- (Python) A face detection example using cascade classifiers can be found at
-opencv_source_code/samples/python2/facedetect.py
+opencv_source_code/samples/python/facedetect.py
*/
CV_WRAP void detectMultiScale( InputArray image,
CV_OUT std::vector<Rect>& objects,
......
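A short Python counterpart to that note, using detectMultiScale; the cascade file and image name are placeholders (a frontal-face cascade ships in opencv/data/haarcascades):

```python
import cv2

cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')  # placeholder path
img = cv2.imread('people.jpg')                                          # placeholder image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=4,
                                 minSize=(30, 30))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
```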
......@@ -107,7 +107,7 @@ objects from still images or video. See <http://en.wikipedia.org/wiki/Inpainting
- An example using the inpainting technique can be found at
opencv_source_code/samples/cpp/inpaint.cpp
- (Python) An example using the inpainting technique can be found at
-opencv_source_code/samples/python2/inpaint.py
+opencv_source_code/samples/python/inpaint.py
*/
CV_EXPORTS_W void inpaint( InputArray src, InputArray inpaintMask,
OutputArray dst, double inpaintRadius, int flags );
......
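A small Python usage sketch matching the signature above; the file names are placeholders, and the mask must be non-zero exactly where pixels need to be repaired:

```python
import cv2

img = cv2.imread('damaged.jpg')                           # placeholder image
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)       # placeholder mask
restored = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)   # inpaintRadius = 3
cv2.imwrite('restored.jpg', restored)
```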
......@@ -74,7 +74,7 @@ See the OpenCV sample camshiftdemo.c that tracks colored objects.
@note
- (Python) A sample explaining the camshift tracking algorithm can be found at
-opencv_source_code/samples/python2/camshift.py
+opencv_source_code/samples/python/camshift.py
*/
CV_EXPORTS_W RotatedRect CamShift( InputArray probImage, CV_IN_OUT Rect& window,
TermCriteria criteria );
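A condensed Python sketch of the usual CamShift loop (in the spirit of camshift.py, not the sample verbatim): back-project the hue histogram of an initial window and feed it to cv2.CamShift each frame. The video name and initial window are placeholders.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture('input.mp4')                       # placeholder video
ok, frame = cap.read()
track_window = (200, 150, 100, 100)                       # x, y, w, h of the initial ROI
x, y, w, h = track_window
roi_hsv = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([roi_hsv], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    pts = cv2.boxPoints(rot_rect).astype(np.int32)        # corners of the rotated rect
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow('camshift', frame)
    if cv2.waitKey(30) & 0xFF == 27:                      # Esc to quit
        break
```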
......@@ -166,9 +166,9 @@ The function implements a sparse iterative version of the Lucas-Kanade optical f
- An example using the Lucas-Kanade optical flow algorithm can be found at
opencv_source_code/samples/cpp/lkdemo.cpp
- (Python) An example using the Lucas-Kanade optical flow algorithm can be found at
-opencv_source_code/samples/python2/lk_track.py
+opencv_source_code/samples/python/lk_track.py
- (Python) An example using the Lucas-Kanade tracker for homography matching can be found at
-opencv_source_code/samples/python2/lk_homography.py
+opencv_source_code/samples/python/lk_homography.py
*/
CV_EXPORTS_W void calcOpticalFlowPyrLK( InputArray prevImg, InputArray nextImg,
InputArray prevPts, InputOutputArray nextPts,
......@@ -213,7 +213,7 @@ The function finds an optical flow for each prev pixel using the @cite Farneback
- An example using the optical flow algorithm described by Gunnar Farneback can be found at
opencv_source_code/samples/cpp/fback.cpp
- (Python) An example using the optical flow algorithm described by Gunnar Farneback can be
-found at opencv_source_code/samples/python2/opt_flow.py
+found at opencv_source_code/samples/python/opt_flow.py
*/
CV_EXPORTS_W void calcOpticalFlowFarneback( InputArray prev, InputArray next, InputOutputArray flow,
double pyr_scale, int levels, int winsize,
......
......@@ -380,11 +380,11 @@ class can be used: :
- Another basic video processing sample can be found at
opencv_source_code/samples/cpp/video_dmtx.cpp
- (Python) A basic sample on using the VideoCapture interface can be found at
-opencv_source_code/samples/python2/video.py
+opencv_source_code/samples/python/video.py
- (Python) Another basic video processing sample can be found at
-opencv_source_code/samples/python2/video_dmtx.py
+opencv_source_code/samples/python/video_dmtx.py
- (Python) A multi threaded video processing sample can be found at
-opencv_source_code/samples/python2/video_threaded.py
+opencv_source_code/samples/python/video_threaded.py
*/
class CV_EXPORTS_W VideoCapture
{
......
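A basic VideoCapture read loop in Python, in the spirit of samples/python/video.py (device index 0 is an assumption; a file name works too):

```python
import cv2

cap = cv2.VideoCapture(0)                 # 0 = default camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == 27:       # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```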
......@@ -31,7 +31,7 @@ if(ANDROID AND BUILD_ANDROID_EXAMPLES)
endif()
if(INSTALL_PYTHON_EXAMPLES)
-add_subdirectory(python2)
+add_subdirectory(python)
endif()
#
......
if(INSTALL_PYTHON_EXAMPLES)
file(GLOB install_list *.py )
install(FILES ${install_list}
-DESTINATION ${OPENCV_SAMPLES_SRC_INSTALL_PATH}/python2
+DESTINATION ${OPENCV_SAMPLES_SRC_INSTALL_PATH}/python
PERMISSIONS OWNER_READ GROUP_READ WORLD_READ COMPONENT samples)
endif()