张重言 / Opencv Zoo
Commit 7e062e54

Authored on Jun 06, 2023 by Wanli; committed on Jun 06, 2023 via GitHub.

update and modify documentation (#161)

Parent: 2121e570

Showing 18 changed files with 38 additions and 32 deletions (+38 −32)
- README.md (+4 −0)
- benchmark/README.md (+1 −1)
- models/face_detection_yunet/README.md (+3 −3)
- models/facial_expression_recognition/README.md (+1 −1)
- models/handpose_estimation_mediapipe/README.md (+5 −4)
- models/human_segmentation_pphumanseg/README.md (+1 −1)
- models/license_plate_detection_yunet/README.md (+1 −1)
- models/object_detection_nanodet/README.md (+1 −1)
- models/object_detection_yolox/README.md (+1 −1)
- models/object_tracking_dasiamrpn/README.md (+1 −1)
- models/palm_detection_mediapipe/README.md (+6 −5)
- models/person_detection_mediapipe/README.md (+3 −3)
- models/person_reid_youtureid/README.md (+1 −1)
- models/person_reid_youtureid/demo.py (+4 −4)
- models/qrcode_wechatqrcode/README.md (+1 −1)
- models/text_detection_db/README.md (+1 −1)
- models/text_recognition_crnn/README.md (+1 −1)
- tools/eval/README.md (+2 −2)
README.md

@@ -82,6 +82,10 @@ Some examples are listed below. You can find more in the directory of each model
 ![handpose estimation](models/handpose_estimation_mediapipe/examples/mphandpose_demo.webp)
 
+### Person Detection with [MP-PersonDet](./models/person_detection_mediapipe)
+
+![person det](./models/person_detection_mediapipe/examples/mppersondet_demo.webp)
+
 ### QR Code Detection and Parsing with [WeChatQRCode](./models/qrcode_wechatqrcode/)
 ![qrcode](./models/qrcode_wechatqrcode/examples/wechat_qrcode_demo.gif)
benchmark/README.md

@@ -79,7 +79,7 @@ Benchmark is done with latest `opencv-python==4.7.0.72` and `opencv-contrib-pyth
 | [YoutuReID](../models/person_reid_youtureid) | Person Re-Identification | 128x256 | 30.39 | 625.56 | 11117.07 | 195.67 | 898.23 | 14886.02 | 90.07 | 44.61 | 5.58 | --- |
 | [MP-PalmDet](../models/palm_detection_mediapipe) | Palm Detection | 192x192 | 6.29 | 86.83 | 872.09 | 38.03 | 142.23 | 1191.81 | 83.20 | 33.81 | 5.17 | --- |
 | [MP-HandPose](../models/handpose_estimation_mediapipe) | Hand Pose Estimation | 224x224 | 4.68 | 43.57 | 460.56 | 20.27 | 80.67 | 636.22 | 40.10 | 19.47 | 6.27 | --- |
-| [MP-PersonDet](./models/person_detection_mediapipe) | Person Detection | 224x224 | 13.88 | 98.52 | 1326.56 | 46.07 | 191.41 | 1835.97 | 56.69 | --- | 16.45 | --- |
+| [MP-PersonDet](../models/person_detection_mediapipe) | Person Detection | 224x224 | 13.88 | 98.52 | 1326.56 | 46.07 | 191.41 | 1835.97 | 56.69 | --- | 16.45 | --- |
 
 \*: Models are quantized in per-channel mode, which run slower than per-tensor quantized models on NPU.
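The one-character fix above changes the MP-PersonDet link from `./models/...` to `../models/...`: since `benchmark/README.md` lives one level below the repository root, relative links resolve against the `benchmark/` directory. A minimal sketch of that resolution using Python's `posixpath` (the `resolve` helper is illustrative, not from the repo):

```python
import posixpath

# benchmark/README.md lives in the "benchmark/" directory, so its
# relative links are resolved against that directory.
base_dir = "benchmark"

def resolve(link):
    # Join the link to the README's directory and normalize "." and "..".
    return posixpath.normpath(posixpath.join(base_dir, link))

# The old link pointed inside benchmark/, where no models/ directory exists:
print(resolve("./models/person_detection_mediapipe"))
# -> benchmark/models/person_detection_mediapipe

# The corrected link climbs back up to the repository root:
print(resolve("../models/person_detection_mediapipe"))
# -> models/person_detection_mediapipe
```

The other table rows were already correct, which is why only the MP-PersonDet line changed.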
models/face_detection_yunet/README.md

@@ -28,7 +28,7 @@ Run the following command to try the demo:
 # detect on camera input
 python demo.py
 # detect on an image
-python demo.py --input /path/to/image
+python demo.py --input /path/to/image -v
 # get help regarding various parameters
 python demo.py --help

@@ -40,13 +40,13 @@ Install latest OpenCV and CMake >= 3.24.0 to get started with:
 ```shell
 # A typical and default installation path of OpenCV is /usr/local
-cmake -B build -D OPENCV_INSTALLATION_PATH /path/to/opencv/installation .
+cmake -B build -D OPENCV_INSTALLATION_PATH=/path/to/opencv/installation .
 cmake --build build
 # detect on camera input
 ./build/demo
 # detect on an image
-./build/demo -i=/path/to/image
+./build/demo -i=/path/to/image -v
 # get help messages
 ./build/demo -h
 ```
models/facial_expression_recognition/README.md

@@ -22,7 +22,7 @@ Results of accuracy evaluation on [RAF-DB](http://whdeng.cn/RAF/model1.html).
 Run the following command to try the demo:
 ```shell
 # recognize the facial expression on images
-python demo.py --input /path/to/image
+python demo.py --input /path/to/image -v
 ```
 ### Example outputs
models/handpose_estimation_mediapipe/README.md

@@ -10,7 +10,7 @@ This model is converted from TFlite to ONNX using following tools:
 **Note**:
 - The int8-quantized model may produce invalid results due to a significant drop of accuracy.
-- Visit https://google.github.io/mediapipe/solutions/models.html#hands for models of larger scale.
+- Visit https://github.com/google/mediapipe/blob/master/docs/solutions/models.md#hands for models of larger scale.

 ## Demo

@@ -19,7 +19,7 @@ Run the following commands to try the demo:
 # detect on camera input
 python demo.py
 # detect on an image
-python demo.py -i /path/to/image
+python demo.py -i /path/to/image -v
 ```
 ### Example outputs

@@ -32,6 +32,7 @@ All files in this directory are licensed under [Apache 2.0 License](./LICENSE).
 ## Reference
-- MediaPipe Handpose: https://github.com/tensorflow/tfjs-models/tree/master/handpose
-- MediaPipe hands model and model card: https://google.github.io/mediapipe/solutions/models.html#hands
+- MediaPipe Handpose: https://developers.google.com/mediapipe/solutions/vision/hand_landmarker
+- MediaPipe hands model and model card: https://github.com/google/mediapipe/blob/master/docs/solutions/models.md#hands
+- Handpose TFJS:https://github.com/tensorflow/tfjs-models/tree/master/handpose
 - Int8 model quantized with rgb evaluation set of FreiHAND: https://lmb.informatik.uni-freiburg.de/resources/datasets/FreihandDataset.en.html
models/human_segmentation_pphumanseg/README.md

@@ -10,7 +10,7 @@ Run the following command to try the demo:
 # detect on camera input
 python demo.py
 # detect on an image
-python demo.py --input /path/to/image
+python demo.py --input /path/to/image -v
 # get help regarding various parameters
 python demo.py --help
models/license_plate_detection_yunet/README.md

@@ -12,7 +12,7 @@ Run the following command to try the demo:
 # detect on camera input
 python demo.py
 # detect on an image
-python demo.py --input /path/to/image
+python demo.py --input /path/to/image -v
 # get help regarding various parameters
 python demo.py --help
 ```
models/object_detection_nanodet/README.md

@@ -12,7 +12,7 @@ Run the following command to try the demo:
 # detect on camera input
 python demo.py
 # detect on an image
-python demo.py --input /path/to/image
+python demo.py --input /path/to/image -v
 ```
 Note:
 - image result saved as "result.jpg"
models/object_detection_yolox/README.md

@@ -18,7 +18,7 @@ Run the following command to try the demo:
 # detect on camera input
 python demo.py
 # detect on an image
-python demo.py --input /path/to/image
+python demo.py --input /path/to/image -v
 ```
 Note:
 - image result saved as "result.jpg"
models/object_tracking_dasiamrpn/README.md

@@ -15,7 +15,7 @@ Run the following command to try the demo:
 # track on camera input
 python demo.py
 # track on video input
-python demo.py --input /path/to/video
+python demo.py --input /path/to/video -v
 # get help regarding various parameters
 python demo.py --help
models/palm_detection_mediapipe/README.md

@@ -4,11 +4,11 @@ This model detects palm bounding boxes and palm landmarks, and is converted from
 - TFLite model to ONNX: https://github.com/onnx/tensorflow-onnx
 - simplified by [onnx-simplifier](https://github.com/daquexian/onnx-simplifier)
-- SSD Anchors are generated from [GenMediaPipePalmDectionSSDAnchors](https://github.com/VimalMollyn/GenMediaPipePalmDectionSSDAnchors)
+SSD Anchors are generated from [GenMediaPipePalmDectionSSDAnchors](https://github.com/VimalMollyn/GenMediaPipePalmDectionSSDAnchors)

 **Note**:
-- Visit https://google.github.io/mediapipe/solutions/models.html#hands for models of larger scale.
+- Visit https://github.com/google/mediapipe/blob/master/docs/solutions/models.md#hands for models of larger scale.

 ## Demo

@@ -18,7 +18,7 @@ Run the following commands to try the demo:
 # detect on camera input
 python demo.py
 # detect on an image
-python demo.py -i /path/to/image
+python demo.py -i /path/to/image -v
 # get help regarding various parameters
 python demo.py --help

@@ -34,6 +34,7 @@ All files in this directory are licensed under [Apache 2.0 License](./LICENSE).
 ## Reference
-- MediaPipe Handpose: https://github.com/tensorflow/tfjs-models/tree/master/handpose
-- MediaPipe hands model and model card: https://google.github.io/mediapipe/solutions/models.html#hands
+- MediaPipe Handpose: https://developers.google.com/mediapipe/solutions/vision/hand_landmarker
+- MediaPipe hands model and model card: https://github.com/google/mediapipe/blob/master/docs/solutions/models.md#hands
+- Handpose TFJS:https://github.com/tensorflow/tfjs-models/tree/master/handpose
 - Int8 model quantized with rgb evaluation set of FreiHAND: https://lmb.informatik.uni-freiburg.de/resources/datasets/FreihandDataset.en.html
\ No newline at end of file
models/person_detection_mediapipe/README.md

@@ -15,7 +15,7 @@ Run the following commands to try the demo:
 # detect on camera input
 python demo.py
 # detect on an image
-python demo.py -i /path/to/image
+python demo.py -i /path/to/image -v
 # get help regarding various parameters
 python demo.py --help

@@ -30,6 +30,6 @@ python demo.py --help
 All files in this directory are licensed under [Apache 2.0 License](LICENSE).

 ## Reference
-- MediaPipe Pose: https://google.github.io/mediapipe/solutions/pose
-- MediaPipe pose model and model card: https://google.github.io/mediapipe/solutions/models.html#pose
+- MediaPipe Pose: https://developers.google.com/mediapipe/solutions/vision/pose_landmarker
+- MediaPipe pose model and model card: https://github.com/google/mediapipe/blob/master/docs/solutions/models.md#pose
 - BlazePose TFJS: https://github.com/tensorflow/tfjs-models/tree/master/pose-detection/src/blazepose_tfjs
models/person_reid_youtureid/README.md

@@ -11,7 +11,7 @@ Note:
 Run the following command to try the demo:
 ```shell
-python demo.py --query_dir /path/to/query --gallery_dir /path/to/gallery
+python demo.py --query_dir /path/to/query --gallery_dir /path/to/gallery -v
 # get help regarding various parameters
 python demo.py --help
models/person_reid_youtureid/demo.py

@@ -43,10 +43,10 @@ parser.add_argument('--topk', type=int, default=10,
                     help='Top-K closest from gallery for each query.')
 parser.add_argument('--model', '-m', type=str, default='person_reid_youtu_2021nov.onnx', help='Path to the model.')
-parser.add_argument('--save', '-s', type=str2bool, default=False, help='Set true to save results. This flag is invalid when using camera.')
-parser.add_argument('--vis', '-v', type=str2bool, default=True, help='Set true to open a window for result visualization. This flag is invalid when using camera.')
+parser.add_argument('--save', '-s', action='store_true', help='Usage: Specify to save file with results (i.e. bounding box, confidence level). Invalid in case of camera input.')
+parser.add_argument('--vis', '-v', action='store_true', help='Usage: Specify to open a new window to show results. Invalid in case of camera input.')
 args = parser.parse_args()

 def readImageFromDirectory(img_dir, w=128, h=256):
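The demo.py change above swaps `type=str2bool` for `action='store_true'`, turning `--save` and `--vis` into plain presence flags; that is why the README commands in this commit append a bare `-v` rather than `-v True`. A minimal sketch of the difference (the `str2bool` helper here is a hypothetical stand-in for the one the old demo.py imported):

```python
import argparse

def str2bool(s):
    # Hypothetical stand-in for the old helper: parses an explicit
    # "true"/"false"-style value passed after the flag.
    return s.lower() in ("true", "1", "yes", "y", "on")

# Old style: the flag takes a value, and defaults apply when it is absent.
old = argparse.ArgumentParser()
old.add_argument('--vis', '-v', type=str2bool, default=True)

# New style: the flag is a switch; present means True, absent means False.
new = argparse.ArgumentParser()
new.add_argument('--vis', '-v', action='store_true')

print(old.parse_args(['-v', 'false']).vis)  # False (value required)
print(new.parse_args(['-v']).vis)           # True  (bare flag)
print(new.parse_args([]).vis)               # False (switch default)
```

A side effect worth noting: with `store_true` the visualization default flips from on to off, so users must now pass `-v` explicitly to see results.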
models/qrcode_wechatqrcode/README.md

@@ -15,7 +15,7 @@ Run the following command to try the demo:
 # detect on camera input
 python demo.py
 # detect on an image
-python demo.py --input /path/to/image
+python demo.py --input /path/to/image -v
 # get help regarding various parameters
 python demo.py --help
models/text_detection_db/README.md

@@ -17,7 +17,7 @@ Run the following command to try the demo:
 # detect on camera input
 python demo.py
 # detect on an image
-python demo.py --input /path/to/image
+python demo.py --input /path/to/image -v
 # get help regarding various parameters
 python demo.py --help
models/text_recognition_crnn/README.md

@@ -42,7 +42,7 @@ Run the demo detecting English:
 # detect on camera input
 python demo.py
 # detect on an image
-python demo.py --input /path/to/image
+python demo.py --input /path/to/image -v
 # get help regarding various parameters
 python demo.py --help
tools/eval/README.md

@@ -19,9 +19,9 @@ Supported datasets:
 - [ImageNet](#imagenet)
 - [WIDERFace](#widerface)
 - [LFW](#lfw)
-- [ICDAR](#ICDAR2003)
+- [ICDAR](#icdar2003)
 - [IIIT5K](#iiit5k)
-- [Mini Supervisely](#mini_supervisely)
+- [Mini Supervisely](#mini-supervisely)

 ## ImageNet
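The two anchor fixes above follow the usual Markdown heading-slug convention: slugs are lowercased and spaces become hyphens, so a `## ICDAR2003` heading is reachable at `#icdar2003` and `## Mini Supervisely` at `#mini-supervisely` (not `#mini_supervisely`). A rough sketch of that slugification (simplified; real renderers also deduplicate repeated headings and handle more punctuation):

```python
import re

def slugify(heading):
    # Simplified GitHub/GitLab-style heading slug: lowercase the text,
    # drop characters that are not word characters, spaces, or hyphens,
    # then collapse runs of whitespace into single hyphens.
    s = heading.strip().lower()
    s = re.sub(r'[^\w\s-]', '', s)
    return re.sub(r'\s+', '-', s)

print(slugify("ICDAR2003"))         # icdar2003
print(slugify("Mini Supervisely"))  # mini-supervisely
```

Under this rule the old `#ICDAR2003` and `#mini_supervisely` links never matched any generated anchor, which is what the +2 −2 change repairs.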