Commit 7e062e54 authored by Wanli, committed by GitHub

update and modify documentation (#161)

Parent 2121e570
......@@ -82,6 +82,10 @@ Some examples are listed below. You can find more in the directory of each model
![handpose estimation](models/handpose_estimation_mediapipe/examples/mphandpose_demo.webp)
+### Person Detection with [MP-PersonDet](./models/person_detection_mediapipe)
+![person det](./models/person_detection_mediapipe/examples/mppersondet_demo.webp)
### QR Code Detection and Parsing with [WeChatQRCode](./models/qrcode_wechatqrcode/)
![qrcode](./models/qrcode_wechatqrcode/examples/wechat_qrcode_demo.gif)
......
......@@ -79,7 +79,7 @@ Benchmark is done with latest `opencv-python==4.7.0.72` and `opencv-contrib-pyth
| [YoutuReID](../models/person_reid_youtureid) | Person Re-Identification | 128x256 | 30.39 | 625.56 | 11117.07 | 195.67 | 898.23 | 14886.02 | 90.07 | 44.61 | 5.58 | --- |
| [MP-PalmDet](../models/palm_detection_mediapipe) | Palm Detection | 192x192 | 6.29 | 86.83 | 872.09 | 38.03 | 142.23 | 1191.81 | 83.20 | 33.81 | 5.17 | --- |
| [MP-HandPose](../models/handpose_estimation_mediapipe) | Hand Pose Estimation | 224x224 | 4.68 | 43.57 | 460.56 | 20.27 | 80.67 | 636.22 | 40.10 | 19.47 | 6.27 | --- |
-| [MP-PersonDet](./models/person_detection_mediapipe) | Person Detection | 224x224 | 13.88 | 98.52 | 1326.56 | 46.07 | 191.41 | 1835.97 | 56.69 | --- | 16.45 | --- |
+| [MP-PersonDet](../models/person_detection_mediapipe) | Person Detection | 224x224 | 13.88 | 98.52 | 1326.56 | 46.07 | 191.41 | 1835.97 | 56.69 | --- | 16.45 | --- |
\*: These models are quantized in per-channel mode, which runs slower than per-tensor quantization on NPU.
......
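The per-channel note above refers to how int8 weight scales are chosen. As a minimal illustration only (using ONNX Runtime's static quantization API, which may differ from the tooling actually used for these models; the model path, input name, and random calibration data are placeholders):

```python
# Minimal sketch: per-tensor vs per-channel int8 quantization with ONNX Runtime.
# "model_fp32.onnx" and the input name "input" are placeholders, and random
# tensors stand in for a real calibration set.
import numpy as np
from onnxruntime.quantization import CalibrationDataReader, QuantType, quantize_static

class RandomCalibrationReader(CalibrationDataReader):
    def __init__(self, input_name="input", shape=(1, 3, 224, 224), n=8):
        self._batches = iter(
            [{input_name: np.random.rand(*shape).astype(np.float32)} for _ in range(n)]
        )

    def get_next(self):
        # Return the next calibration feed, or None when exhausted.
        return next(self._batches, None)

# Per-tensor: a single scale/zero-point per weight tensor.
quantize_static("model_fp32.onnx", "model_int8_per_tensor.onnx",
                RandomCalibrationReader(), per_channel=False,
                weight_type=QuantType.QInt8)

# Per-channel: one scale/zero-point per output channel, usually more accurate
# but, as noted above, potentially slower on NPU.
quantize_static("model_fp32.onnx", "model_int8_per_channel.onnx",
                RandomCalibrationReader(), per_channel=True,
                weight_type=QuantType.QInt8)
```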
......@@ -28,7 +28,7 @@ Run the following command to try the demo:
# detect on camera input
python demo.py
# detect on an image
-python demo.py --input /path/to/image
+python demo.py --input /path/to/image -v
# get help regarding various parameters
python demo.py --help
......@@ -40,13 +40,13 @@ Install latest OpenCV and CMake >= 3.24.0 to get started with:
```shell
# A typical and default installation path of OpenCV is /usr/local
-cmake -B build -D OPENCV_INSTALLATION_PATH /path/to/opencv/installation .
+cmake -B build -D OPENCV_INSTALLATION_PATH=/path/to/opencv/installation .
cmake --build build
# detect on camera input
./build/demo
# detect on an image
-./build/demo -i=/path/to/image
+./build/demo -i=/path/to/image -v
# get help messages
./build/demo -h
```
......
......@@ -22,7 +22,7 @@ Results of accuracy evaluation on [RAF-DB](http://whdeng.cn/RAF/model1.html).
Run the following command to try the demo:
```shell
# recognize the facial expression on images
-python demo.py --input /path/to/image
+python demo.py --input /path/to/image -v
```
### Example outputs
......
......@@ -10,7 +10,7 @@ This model is converted from TFLite to ONNX using the following tools:
**Note**:
- The int8-quantized model may produce invalid results due to a significant drop in accuracy.
-- Visit https://google.github.io/mediapipe/solutions/models.html#hands for models of larger scale.
+- Visit https://github.com/google/mediapipe/blob/master/docs/solutions/models.md#hands for models of larger scale.
## Demo
......@@ -19,7 +19,7 @@ Run the following commands to try the demo:
# detect on camera input
python demo.py
# detect on an image
-python demo.py -i /path/to/image
+python demo.py -i /path/to/image -v
```
### Example outputs
......@@ -32,6 +32,7 @@ All files in this directory are licensed under [Apache 2.0 License](./LICENSE).
## Reference
-- MediaPipe Handpose: https://github.com/tensorflow/tfjs-models/tree/master/handpose
-- MediaPipe hands model and model card: https://google.github.io/mediapipe/solutions/models.html#hands
+- MediaPipe Handpose: https://developers.google.com/mediapipe/solutions/vision/hand_landmarker
+- MediaPipe hands model and model card: https://github.com/google/mediapipe/blob/master/docs/solutions/models.md#hands
+- Handpose TFJS: https://github.com/tensorflow/tfjs-models/tree/master/handpose
- Int8 model quantized with rgb evaluation set of FreiHAND: https://lmb.informatik.uni-freiburg.de/resources/datasets/FreihandDataset.en.html
......@@ -10,7 +10,7 @@ Run the following command to try the demo:
# detect on camera input
python demo.py
# detect on an image
-python demo.py --input /path/to/image
+python demo.py --input /path/to/image -v
# get help regarding various parameters
python demo.py --help
......
......@@ -12,7 +12,7 @@ Run the following command to try the demo:
# detect on camera input
python demo.py
# detect on an image
-python demo.py --input /path/to/image
+python demo.py --input /path/to/image -v
# get help regarding various parameters
python demo.py --help
```
......
......@@ -12,7 +12,7 @@ Run the following command to try the demo:
# detect on camera input
python demo.py
# detect on an image
-python demo.py --input /path/to/image
+python demo.py --input /path/to/image -v
```
Note:
- image result saved as "result.jpg"
......
......@@ -18,7 +18,7 @@ Run the following command to try the demo:
# detect on camera input
python demo.py
# detect on an image
-python demo.py --input /path/to/image
+python demo.py --input /path/to/image -v
```
Note:
- image result saved as "result.jpg"
......
......@@ -15,7 +15,7 @@ Run the following command to try the demo:
# track on camera input
python demo.py
# track on video input
-python demo.py --input /path/to/video
+python demo.py --input /path/to/video -v
# get help regarding various parameters
python demo.py --help
......
......@@ -4,11 +4,11 @@ This model detects palm bounding boxes and palm landmarks, and is converted from
- TFLite model to ONNX: https://github.com/onnx/tensorflow-onnx
- simplified by [onnx-simplifier](https://github.com/daquexian/onnx-simplifier)
-- SSD Anchors are generated from [GenMediaPipePalmDectionSSDAnchors](https://github.com/VimalMollyn/GenMediaPipePalmDectionSSDAnchors)
+SSD Anchors are generated from [GenMediaPipePalmDectionSSDAnchors](https://github.com/VimalMollyn/GenMediaPipePalmDectionSSDAnchors)
**Note**:
-- Visit https://google.github.io/mediapipe/solutions/models.html#hands for models of larger scale.
+- Visit https://github.com/google/mediapipe/blob/master/docs/solutions/models.md#hands for models of larger scale.
## Demo
......@@ -18,7 +18,7 @@ Run the following commands to try the demo:
# detect on camera input
python demo.py
# detect on an image
-python demo.py -i /path/to/image
+python demo.py -i /path/to/image -v
# get help regarding various parameters
python demo.py --help
......@@ -34,6 +34,7 @@ All files in this directory are licensed under [Apache 2.0 License](./LICENSE).
## Reference
-- MediaPipe Handpose: https://github.com/tensorflow/tfjs-models/tree/master/handpose
-- MediaPipe hands model and model card: https://google.github.io/mediapipe/solutions/models.html#hands
+- MediaPipe Handpose: https://developers.google.com/mediapipe/solutions/vision/hand_landmarker
+- MediaPipe hands model and model card: https://github.com/google/mediapipe/blob/master/docs/solutions/models.md#hands
+- Handpose TFJS: https://github.com/tensorflow/tfjs-models/tree/master/handpose
- Int8 model quantized with rgb evaluation set of FreiHAND: https://lmb.informatik.uni-freiburg.de/resources/datasets/FreihandDataset.en.html
\ No newline at end of file
......@@ -15,7 +15,7 @@ Run the following commands to try the demo:
# detect on camera input
python demo.py
# detect on an image
-python demo.py -i /path/to/image
+python demo.py -i /path/to/image -v
# get help regarding various parameters
python demo.py --help
......@@ -30,6 +30,6 @@ python demo.py --help
All files in this directory are licensed under [Apache 2.0 License](LICENSE).
## Reference
-- MediaPipe Pose: https://google.github.io/mediapipe/solutions/pose
-- MediaPipe pose model and model card: https://google.github.io/mediapipe/solutions/models.html#pose
+- MediaPipe Pose: https://developers.google.com/mediapipe/solutions/vision/pose_landmarker
+- MediaPipe pose model and model card: https://github.com/google/mediapipe/blob/master/docs/solutions/models.md#pose
- BlazePose TFJS: https://github.com/tensorflow/tfjs-models/tree/master/pose-detection/src/blazepose_tfjs
......@@ -11,7 +11,7 @@ Note:
Run the following command to try the demo:
```shell
-python demo.py --query_dir /path/to/query --gallery_dir /path/to/gallery
+python demo.py --query_dir /path/to/query --gallery_dir /path/to/gallery -v
# get help regarding various parameters
python demo.py --help
......
......@@ -43,10 +43,10 @@ parser.add_argument('--topk', type=int, default=10,
                    help='Top-K closest from gallery for each query.')
parser.add_argument('--model', '-m', type=str, default='person_reid_youtu_2021nov.onnx',
help='Path to the model.')
-parser.add_argument('--save', '-s', type=str2bool, default=False,
-                    help='Set true to save results. This flag is invalid when using camera.')
-parser.add_argument('--vis', '-v', type=str2bool, default=True,
-                    help='Set true to open a window for result visualization. This flag is invalid when using camera.')
+parser.add_argument('--save', '-s', action='store_true',
+                    help='Usage: Specify to save file with results (i.e. bounding box, confidence level). Invalid in case of camera input.')
+parser.add_argument('--vis', '-v', action='store_true',
+                    help='Usage: Specify to open a new window to show results. Invalid in case of camera input.')
args = parser.parse_args()
def readImageFromDirectory(img_dir, w=128, h=256):
......
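The hunk above switches `--save`/`--vis` from `type=str2bool` with defaults to `action='store_true'`, so saving and visualization are now opt-in; that is why `-v` is appended to the demo commands elsewhere in this commit. A minimal standalone sketch of the new flag behaviour (not the full demo.py):

```python
# Minimal sketch of the updated flag behaviour: store_true flags default to False
# and must be passed explicitly on the command line.
import argparse

parser = argparse.ArgumentParser(description='Illustration of the updated demo flags.')
parser.add_argument('--save', '-s', action='store_true',
                    help='Specify to save results. Invalid in case of camera input.')
parser.add_argument('--vis', '-v', action='store_true',
                    help='Specify to open a new window to show results. Invalid in case of camera input.')

args = parser.parse_args(['-v'])   # simulates: python demo.py ... -v
print(args.vis, args.save)         # True False -> visualize, but do not save
```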
......@@ -15,7 +15,7 @@ Run the following command to try the demo:
# detect on camera input
python demo.py
# detect on an image
-python demo.py --input /path/to/image
+python demo.py --input /path/to/image -v
# get help regarding various parameters
python demo.py --help
......
......@@ -17,7 +17,7 @@ Run the following command to try the demo:
# detect on camera input
python demo.py
# detect on an image
-python demo.py --input /path/to/image
+python demo.py --input /path/to/image -v
# get help regarding various parameters
python demo.py --help
......
......@@ -42,7 +42,7 @@ Run the demo detecting English:
# detect on camera input
python demo.py
# detect on an image
-python demo.py --input /path/to/image
+python demo.py --input /path/to/image -v
# get help regarding various parameters
python demo.py --help
......
......@@ -19,9 +19,9 @@ Supported datasets:
- [ImageNet](#imagenet)
- [WIDERFace](#widerface)
- [LFW](#lfw)
-- [ICDAR](#ICDAR2003)
+- [ICDAR](#icdar2003)
- [IIIT5K](#iiit5k)
-- [Mini Supervisely](#mini_supervisely)
+- [Mini Supervisely](#mini-supervisely)
## ImageNet
......
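The two anchor fixes above follow how GitHub derives heading anchors: lowercase the heading, drop punctuation, and replace spaces with hyphens. An approximate sketch of that rule (the helper below is illustrative, not GitHub's exact implementation):

```python
import re

def heading_anchor(heading: str) -> str:
    # Approximate GitHub-style heading anchors: lowercase, strip punctuation
    # (keeping hyphens and underscores), and turn spaces into hyphens.
    anchor = re.sub(r"[^\w\- ]", "", heading.strip().lower())
    return anchor.replace(" ", "-")

print(heading_anchor("ICDAR2003"))        # icdar2003
print(heading_anchor("Mini Supervisely")) # mini-supervisely
```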