Unverified commit 8d0cb469 authored by Wanli, committed by GitHub

Add hand pose estimation model from Mediapipe (#83)

Parent dfb5c6a9
......@@ -14,23 +14,24 @@ Guidelines:
## Models & Benchmark Results
| Model | Task | Input Size | INTEL-CPU (ms) | RPI-CPU (ms) | JETSON-GPU (ms) | KV3-NPU (ms) | D1-CPU (ms) |
|-------|------|----------|----------------|--------------|-----------------|----------|-------------|
| [YuNet](./models/face_detection_yunet) | Face Detection | 160x120 | 1.45 | 5.21 | 12.18 | 4.04 | 86.69 |
| [SFace](./models/face_recognition_sface) | Face Recognition | 112x112 | 8.65 | 76.95 | 24.88 | 46.25 | --- |
| [LPD-YuNet](./models/license_plate_detection_yunet/) | License Plate Detection | 320x240 | --- | 134.02 | 56.12 | 154.20\* | |
| [DB-IC15](./models/text_detection_db) | Text Detection | 640x480 | 142.91 | 2456.49 | 208.41 | --- | --- |
| [DB-TD500](./models/text_detection_db) | Text Detection | 640x480 | 142.91 | 2572.10 | 210.51 | --- | --- |
| [CRNN-EN](./models/text_recognition_crnn) | Text Recognition | 100x32 | 50.21 | 230.50 | 196.15 | 125.30 | --- |
| [CRNN-CN](./models/text_recognition_crnn) | Text Recognition | 100x32 | 73.52 | 309.60 | 239.76 | 166.79 | --- |
| [PP-ResNet](./models/image_classification_ppresnet) | Image Classification | 224x224 | 56.05 | 440.90 | 98.64 | 75.45 | --- |
| [MobileNet-V1](./models/image_classification_mobilenet) | Image Classification | 224x224 | 9.04 | 67.97 | 33.18 | 145.66\* | --- |
| [MobileNet-V2](./models/image_classification_mobilenet) | Image Classification | 224x224 | 8.86 | 51.64 | 31.92 | 146.31\* | --- |
| [PP-HumanSeg](./models/human_segmentation_pphumanseg) | Human Segmentation | 192x192 | 19.92 | 94.40 | 67.97 | 74.77 | --- |
| [WeChatQRCode](./models/qrcode_wechatqrcode) | QR Code Detection and Parsing | 100x100 | 7.04 | 36.20 | --- | --- | --- |
| [DaSiamRPN](./models/object_tracking_dasiamrpn) | Object Tracking | 1280x720 | 36.15 | 683.90 | 76.82 | --- | --- |
| [YoutuReID](./models/person_reid_youtureid) | Person Re-Identification | 128x256 | 35.81 | 481.54 | 90.07 | 44.61 | --- |
| [MPPalmDet](./models/palm_detection_mediapipe) | Palm Detection | 256x256 | 15.57 | 168.37 | 50.64 | 145.56\* | --- |
| Model | Task | Input Size | INTEL-CPU (ms) | RPI-CPU (ms) | JETSON-GPU (ms) | KV3-NPU (ms) | D1-CPU (ms) |
|---------------------------------------------------------|-------------------------------|------------|----------------|--------------|-----------------|--------------|-------------|
| [YuNet](./models/face_detection_yunet) | Face Detection | 160x120 | 1.45 | 6.22 | 12.18 | 4.04 | 86.69 |
| [SFace](./models/face_recognition_sface) | Face Recognition | 112x112 | 8.65 | 99.20 | 24.88 | 46.25 | --- |
| [LPD-YuNet](./models/license_plate_detection_yunet/) | License Plate Detection | 320x240 | --- | 168.03 | 56.12 | 154.20\* | |
| [DB-IC15](./models/text_detection_db) | Text Detection | 640x480 | 142.91 | 2835.91 | 208.41 | --- | --- |
| [DB-TD500](./models/text_detection_db) | Text Detection | 640x480 | 142.91 | 2841.71 | 210.51 | --- | --- |
| [CRNN-EN](./models/text_recognition_crnn) | Text Recognition | 100x32 | 50.21 | 234.32 | 196.15 | 125.30 | --- |
| [CRNN-CN](./models/text_recognition_crnn) | Text Recognition | 100x32 | 73.52 | 322.16 | 239.76 | 166.79 | --- |
| [PP-ResNet](./models/image_classification_ppresnet) | Image Classification | 224x224 | 56.05 | 602.58 | 98.64 | 75.45 | --- |
| [MobileNet-V1](./models/image_classification_mobilenet) | Image Classification | 224x224 | 9.04 | 92.25 | 33.18 | 145.66\* | --- |
| [MobileNet-V2](./models/image_classification_mobilenet) | Image Classification | 224x224 | 8.86 | 74.03 | 31.92 | 146.31\* | --- |
| [PP-HumanSeg](./models/human_segmentation_pphumanseg) | Human Segmentation | 192x192 | 19.92 | 105.32 | 67.97 | 74.77 | --- |
| [WeChatQRCode](./models/qrcode_wechatqrcode) | QR Code Detection and Parsing | 100x100 | 7.04 | 37.68 | --- | --- | --- |
| [DaSiamRPN](./models/object_tracking_dasiamrpn) | Object Tracking | 1280x720 | 36.15 | 705.48 | 76.82 | --- | --- |
| [YoutuReID](./models/person_reid_youtureid) | Person Re-Identification | 128x256 | 35.81 | 521.98 | 90.07 | 44.61 | --- |
| [MP-PalmDet](./models/palm_detection_mediapipe) | Palm Detection | 256x256 | 15.57 | 168.37 | 50.64 | 145.56\* | --- |
| [MP-HandPose](./models/handpose_estimation_mediapipe) | Hand Pose Estimation | 256x256 | 20.16 | 148.24 | 156.30 | 663.77\* | --- |
\*: These models are quantized in per-channel mode and run slower than per-tensor quantized models on NPU.
......@@ -69,7 +70,11 @@ Some examples are listed below. You can find more in the directory of each model
### Palm Detection with [MP-PalmDet](./models/palm_detection_mediapipe/)
![palm det](./models/palm_detection_mediapipe//examples/mppalmdet_demo.gif)
![palm det](./models/palm_detection_mediapipe/examples/mppalmdet_demo.gif)
### Hand Pose Estimation with [MP-HandPose](models/handpose_estimation_mediapipe/)
![handpose estimation](models/handpose_estimation_mediapipe/examples/mphandpose_demo.gif)
### QR Code Detection and Parsing with [WeChatQRCode](./models/qrcode_wechatqrcode/)
......
Benchmark:
  name: "Hand Pose Estimation Benchmark"
  type: "Recognition"
  data:
    path: "benchmark/data/palm_detection"
    files: ["palm1.jpg", "palm2.jpg", "palm3.jpg"]
    sizes: # [[w1, h1], ...], Omit to run at original scale
      - [256, 256]
  metric:
    warmup: 30
    repeat: 10
    reduction: "median"
  backend: "default"
  target: "cpu"

Model:
  name: "MPHandPose"
  modelPath: "models/handpose_estimation_mediapipe/handpose_estimation_mediapipe_2022may.onnx"
  confThreshold: 0.9
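The metric block above means each entry gets 30 untimed warmup runs followed by 10 timed runs, and the README table reports the median of the timed runs. A minimal sketch of that measurement policy, assuming a `run_once` callable that performs a single forward pass (this helper is illustrative, not part of the benchmark harness):

```python
# Sketch of warmup + repeat + median reduction, as configured above.
import statistics
import time

def measure_median_latency_ms(run_once, warmup=30, repeat=10):
    for _ in range(warmup):              # warmup runs are executed but not timed
        run_once()
    timings = []
    for _ in range(repeat):              # timed runs
        start = time.perf_counter()
        run_once()
        timings.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(timings)    # reduction: "median"
```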
......@@ -198,9 +198,9 @@ data_downloaders = dict(
sha='5b741fbf34c1fbcf59cad8f2a65327a5899e66f1',
filename='person_reid.zip'),
palm_detection=Downloader(name='palm_detection',
url='https://drive.google.com/u/0/uc?id=1qScOzehV8OIzJJLuD_LMvZq15YcWd_VV&export=download',
sha='c0d4f811d38c6f833364b9196a719307598213a1',
filename='palm_detection.zip'),
url='https://drive.google.com/u/0/uc?id=1zYnOsXxYXn-hFIdyIws9louzqjpt8byQ&export=download',
sha='78ed095b685a9bacdd643782716127afe936f366',
filename='palm_detection_20220826.zip'),
license_plate_detection=Downloader(name='license_plate_detection',
url='https://drive.google.com/u/0/uc?id=1cf9MEyUqMMy8lLeDGd1any6tM_SsSmny&export=download',
sha='997acb143ddc4531e6e41365fb7ad4722064564c',
......
......@@ -10,6 +10,7 @@ from .person_reid_youtureid.youtureid import YoutuReID
from .image_classification_mobilenet.mobilenet_v1 import MobileNetV1
from .image_classification_mobilenet.mobilenet_v2 import MobileNetV2
from .palm_detection_mediapipe.mp_palmdet import MPPalmDet
from .handpose_estimation_mediapipe.mp_handpose import MPHandPose
from .license_plate_detection_yunet.lpd_yunet import LPD_YuNet
class Registery:
......@@ -36,4 +37,5 @@ MODELS.register(YoutuReID)
MODELS.register(MobileNetV1)
MODELS.register(MobileNetV2)
MODELS.register(MPPalmDet)
MODELS.register(MPHandPose)
MODELS.register(LPD_YuNet)
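The registry collects every model class so downstream tools such as the benchmark can look them up by class name. The `Registery` class itself is not shown in this hunk; a minimal sketch of the register-and-lookup pattern it implies (hypothetical internals, the real implementation may differ):

```python
# Illustrative sketch of a name-keyed model registry; not the repository's actual Registery code.
class Registery:
    def __init__(self, name):
        self._name = name
        self._dict = {}

    def register(self, item):
        # Model classes are keyed by their class name, e.g. "MPHandPose".
        self._dict[item.__name__] = item

    def get(self, key):
        return self._dict[key]

MODELS = Registery('Models')
```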
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
\ No newline at end of file
# Hand pose estimation from MediaPipe Handpose
This model estimates 21 hand keypoints for each hand detected by the [palm detector](../palm_detection_mediapipe). (The image below is referenced from [MediaPipe Hands Keypoints](https://github.com/tensorflow/tfjs-models/tree/master/hand-pose-detection#mediapipe-hands-keypoints-used-in-mediapipe-hands).)
![MediaPipe Hands Keypoints](./examples/hand_keypoints.png)
This model is converted from TensorFlow.js to ONNX using the following tools (a conversion sketch follows the list):
- tfjs to tf_saved_model: https://github.com/patlevin/tfjs-to-tf/
- tf_saved_model to ONNX: https://github.com/onnx/tensorflow-onnx
- simplified by [onnx-simplifier](https://github.com/daquexian/onnx-simplifier)
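A hedged sketch of that conversion chain using the Python APIs of the three tools; function names, arguments, and file paths here are assumptions and may differ between tool versions:

```python
# Sketch only: TF.js graph model -> TensorFlow SavedModel -> ONNX -> simplified ONNX.
import onnx
from onnxsim import simplify
from tf2onnx import convert as tf2onnx_convert
from tfjs_graph_converter import api as tfjs_api

# 1. TF.js graph model to TensorFlow SavedModel (tfjs-to-tf)
tfjs_api.graph_model_to_saved_model('tfjs_model/', 'saved_model/')

# 2. TensorFlow SavedModel to ONNX (tensorflow-onnx)
tf2onnx_convert.from_saved_model('saved_model/', output_path='handpose.onnx')

# 3. Simplify the exported graph (onnx-simplifier)
simplified_model, ok = simplify(onnx.load('handpose.onnx'))
assert ok, 'onnx-simplifier could not validate the simplified model'
onnx.save(simplified_model, 'handpose_estimation_mediapipe.onnx')
```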
Also note that the model is quantized in per-channel mode with [Intel's neural compressor](https://github.com/intel/neural-compressor), which preserves accuracy better than per-tensor quantization but may run slower.
## Demo
Run the following commands to try the demo:
```bash
# detect on camera input
python demo.py
# detect on an image
python demo.py -i /path/to/image
```
### Example outputs
![webcam demo](./examples/mphandpose_demo.gif)
## License
All files in this directory are licensed under [Apache 2.0 License](./LICENSE).
## Reference
- MediaPipe Handpose: https://github.com/tensorflow/tfjs-models/tree/master/handpose
import sys
import argparse
import numpy as np
import cv2 as cv
from mp_handpose import MPHandPose
sys.path.append('../palm_detection_mediapipe')
from mp_palmdet import MPPalmDet
def str2bool(v):
    if v.lower() in ['on', 'yes', 'true', 'y', 't']:
        return True
    elif v.lower() in ['off', 'no', 'false', 'n', 'f']:
        return False
    else:
        raise NotImplementedError
backends = [cv.dnn.DNN_BACKEND_OPENCV, cv.dnn.DNN_BACKEND_CUDA]
targets = [cv.dnn.DNN_TARGET_CPU, cv.dnn.DNN_TARGET_CUDA, cv.dnn.DNN_TARGET_CUDA_FP16]
help_msg_backends = "Choose one of the computation backends: {:d}: OpenCV implementation (default); {:d}: CUDA"
help_msg_targets = "Chose one of the target computation devices: {:d}: CPU (default); {:d}: CUDA; {:d}: CUDA fp16"
try:
    backends += [cv.dnn.DNN_BACKEND_TIMVX]
    targets += [cv.dnn.DNN_TARGET_NPU]
    help_msg_backends += "; {:d}: TIMVX"
    help_msg_targets += "; {:d}: NPU"
except:
    print('This version of OpenCV does not support TIM-VX and NPU. Visit https://gist.github.com/fengyuentau/5a7a5ba36328f2b763aea026c43fa45f for more information.')
parser = argparse.ArgumentParser(description='Hand Pose Estimation from MediaPipe')
parser.add_argument('--input', '-i', type=str, help='Path to the input image. Omit for using default camera.')
parser.add_argument('--model', '-m', type=str, default='./handpose_estimation_mediapipe_2022may.onnx', help='Path to the model.')
parser.add_argument('--backend', '-b', type=int, default=backends[0], help=help_msg_backends.format(*backends))
parser.add_argument('--target', '-t', type=int, default=targets[0], help=help_msg_targets.format(*targets))
parser.add_argument('--conf_threshold', type=float, default=0.8, help='Filter out hands of confidence < conf_threshold.')
parser.add_argument('--save', '-s', type=str, default=False, help='Set true to save results. This flag is invalid when using camera.')
parser.add_argument('--vis', '-v', type=str2bool, default=True, help='Set true to open a window for result visualization. This flag is invalid when using camera.')
args = parser.parse_args()
def visualize(image, hands, print_result=False):
    output = image.copy()

    for idx, handpose in enumerate(hands):
        conf = handpose[-1]
        bbox = handpose[0:4].astype(np.int32)
        landmarks = handpose[4:-1].reshape(21, 2).astype(np.int32)

        # Print results
        if print_result:
            print('-----------hand {}-----------'.format(idx + 1))
            print('conf: {:.2f}'.format(conf))
            print('hand box: {}'.format(bbox))
            print('hand landmarks: ')
            for l in landmarks:
                print('\t{}'.format(l))

        # Draw lines between the keypoints of each finger
        cv.line(output, landmarks[0], landmarks[1], (255, 255, 255), 2)
        cv.line(output, landmarks[1], landmarks[2], (255, 255, 255), 2)
        cv.line(output, landmarks[2], landmarks[3], (255, 255, 255), 2)
        cv.line(output, landmarks[3], landmarks[4], (255, 255, 255), 2)

        cv.line(output, landmarks[0], landmarks[5], (255, 255, 255), 2)
        cv.line(output, landmarks[5], landmarks[6], (255, 255, 255), 2)
        cv.line(output, landmarks[6], landmarks[7], (255, 255, 255), 2)
        cv.line(output, landmarks[7], landmarks[8], (255, 255, 255), 2)

        cv.line(output, landmarks[0], landmarks[9], (255, 255, 255), 2)
        cv.line(output, landmarks[9], landmarks[10], (255, 255, 255), 2)
        cv.line(output, landmarks[10], landmarks[11], (255, 255, 255), 2)
        cv.line(output, landmarks[11], landmarks[12], (255, 255, 255), 2)

        cv.line(output, landmarks[0], landmarks[13], (255, 255, 255), 2)
        cv.line(output, landmarks[13], landmarks[14], (255, 255, 255), 2)
        cv.line(output, landmarks[14], landmarks[15], (255, 255, 255), 2)
        cv.line(output, landmarks[15], landmarks[16], (255, 255, 255), 2)

        cv.line(output, landmarks[0], landmarks[17], (255, 255, 255), 2)
        cv.line(output, landmarks[17], landmarks[18], (255, 255, 255), 2)
        cv.line(output, landmarks[18], landmarks[19], (255, 255, 255), 2)
        cv.line(output, landmarks[19], landmarks[20], (255, 255, 255), 2)

        # Draw each keypoint
        for p in landmarks:
            cv.circle(output, p, 2, (0, 0, 255), 2)

    return output
if __name__ == '__main__':
    # Palm detector: localizes hands; hand pose is then estimated on each detected palm
    palm_detector = MPPalmDet(modelPath='../palm_detection_mediapipe/palm_detection_mediapipe_2022may.onnx',
                              nmsThreshold=0.3,
                              scoreThreshold=0.8,
                              backendId=args.backend,
                              targetId=args.target)
    # Handpose detector
    handpose_detector = MPHandPose(modelPath=args.model,
                                   confThreshold=args.conf_threshold,
                                   backendId=args.backend,
                                   targetId=args.target)

    # If input is an image
    if args.input is not None:
        image = cv.imread(args.input)

        # Palm detector inference
        palms = palm_detector.infer(image)
        hands = np.empty(shape=(0, 47))  # each row: 4 bbox coords + 21 x 2 landmarks + 1 confidence

        # Estimate the pose of each hand
        for palm in palms:
            # Handpose detector inference
            handpose = handpose_detector.infer(image, palm)
            if handpose is not None:
                hands = np.vstack((hands, handpose))
        # Draw results on the input image
        image = visualize(image, hands, True)

        if len(palms) == 0:
            print('No palm detected!')

        # Save results
        if args.save:
            cv.imwrite('result.jpg', image)
            print('Results saved to result.jpg\n')

        # Visualize results in a new window
        if args.vis:
            cv.namedWindow(args.input, cv.WINDOW_AUTOSIZE)
            cv.imshow(args.input, image)
            cv.waitKey(0)
    else:  # Omit input to call default camera
        deviceId = 0
        cap = cv.VideoCapture(deviceId)

        tm = cv.TickMeter()
        while cv.waitKey(1) < 0:
            hasFrame, frame = cap.read()
            if not hasFrame:
                print('No frames grabbed!')
                break

            # Palm detector inference
            palms = palm_detector.infer(frame)
            hands = np.empty(shape=(0, 47))

            tm.start()
            # Estimate the pose of each hand
            for palm in palms:
                # Handpose detector inference
                handpose = handpose_detector.infer(frame, palm)
                if handpose is not None:
                    hands = np.vstack((hands, handpose))
            tm.stop()
            # Draw results on the input image
            frame = visualize(frame, hands)

            if len(palms) == 0:
                print('No palm detected!')
            else:
                cv.putText(frame, 'FPS: {:.2f}'.format(tm.getFPS()), (0, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255))

            cv.imshow('MediaPipe Handpose Detection Demo', frame)
            tm.reset()
import numpy as np
import cv2 as cv
class MPHandPose:
    def __init__(self, modelPath, confThreshold=0.8, backendId=0, targetId=0):
        self.model_path = modelPath
        self.conf_threshold = confThreshold
        self.backend_id = backendId
        self.target_id = targetId

        self.input_size = np.array([256, 256])  # wh
        self.PALM_LANDMARK_IDS = [0, 5, 9, 13, 17, 1, 2]
        self.PALM_LANDMARKS_INDEX_OF_PALM_BASE = 0
        self.PALM_LANDMARKS_INDEX_OF_MIDDLE_FINGER_BASE = 2
        self.PALM_BOX_SHIFT_VECTOR = [0, -0.4]
        self.PALM_BOX_ENLARGE_FACTOR = 3
        self.HAND_BOX_SHIFT_VECTOR = [0, -0.1]
        self.HAND_BOX_ENLARGE_FACTOR = 1.65

        self.model = cv.dnn.readNet(self.model_path)
        self.model.setPreferableBackend(self.backend_id)
        self.model.setPreferableTarget(self.target_id)
    @property
    def name(self):
        return self.__class__.__name__

    def setBackend(self, backendId):
        self.backend_id = backendId
        self.model.setPreferableBackend(self.backend_id)

    def setTarget(self, targetId):
        self.target_id = targetId
        self.model.setPreferableTarget(self.target_id)
    def _preprocess(self, image, palm):
        '''
        Rotate the hand region so that it is roughly vertical before inference.
        Parameters:
            image - input image of BGR channel order
            palm - palm detection result from MPPalmDet: bounding box [x1, y1, x2, y2]
                   (top-left and bottom-right points) followed by 7 palm landmarks
                   (5 finger base points, 2 palm base points) of shape [7, 2]
        Returns:
            blob - cropped and rotated hand image blob (NHWC, RGB, [0, 1]) for inference
            rotated_palm_bbox - palm bounding box in the rotated image
            angle - rotation angle in degrees
            rotation_matrix - matrix for rotation and de-rotation
        '''
        # Rotate input to have vertically oriented hand image
        # compute rotation
        palm_bbox = palm[0:4].reshape(2, 2)
        palm_landmarks = palm[4:18].reshape(7, 2)
        image = cv.cvtColor(image, cv.COLOR_BGR2RGB)
        p1 = palm_landmarks[self.PALM_LANDMARKS_INDEX_OF_PALM_BASE]
        p2 = palm_landmarks[self.PALM_LANDMARKS_INDEX_OF_MIDDLE_FINGER_BASE]
        radians = np.pi / 2 - np.arctan2(-(p2[1] - p1[1]), p2[0] - p1[0])
        radians = radians - 2 * np.pi * np.floor((radians + np.pi) / (2 * np.pi))
        angle = np.rad2deg(radians)
        # get bbox center
        center_palm_bbox = np.sum(palm_bbox, axis=0) / 2
        # get rotation matrix
        rotation_matrix = cv.getRotationMatrix2D(center_palm_bbox, angle, 1.0)
        # get rotated image
        rotated_image = cv.warpAffine(image, rotation_matrix, (image.shape[1], image.shape[0]))
        # get bounding boxes from rotated palm landmarks
        homogeneous_coord = np.c_[palm_landmarks, np.ones(palm_landmarks.shape[0])]
        rotated_palm_landmarks = np.array([
            np.dot(homogeneous_coord, rotation_matrix[0]),
            np.dot(homogeneous_coord, rotation_matrix[1])])
        # get landmark bounding box
        rotated_palm_bbox = np.array([
            np.amin(rotated_palm_landmarks, axis=1),
            np.amax(rotated_palm_landmarks, axis=1)])  # [top-left, bottom-right]

        # shift bounding box
        wh_rotated_palm_bbox = rotated_palm_bbox[1] - rotated_palm_bbox[0]
        shift_vector = self.PALM_BOX_SHIFT_VECTOR * wh_rotated_palm_bbox
        rotated_palm_bbox = rotated_palm_bbox + shift_vector
        # squarify bounding box
        center_rotated_palm_bbox = np.sum(rotated_palm_bbox, axis=0) / 2
        wh_rotated_palm_bbox = rotated_palm_bbox[1] - rotated_palm_bbox[0]
        new_half_size = np.amax(wh_rotated_palm_bbox) / 2
        rotated_palm_bbox = np.array([
            center_rotated_palm_bbox - new_half_size,
            center_rotated_palm_bbox + new_half_size])

        # enlarge bounding box
        center_rotated_palm_bbox = np.sum(rotated_palm_bbox, axis=0) / 2
        wh_rotated_palm_bbox = rotated_palm_bbox[1] - rotated_palm_bbox[0]
        new_half_size = wh_rotated_palm_bbox * self.PALM_BOX_ENLARGE_FACTOR / 2
        rotated_palm_bbox = np.array([
            center_rotated_palm_bbox - new_half_size,
            center_rotated_palm_bbox + new_half_size])

        # Crop and resize the rotated image by the bounding box
        [[x1, y1], [x2, y2]] = rotated_palm_bbox.astype(np.int32)
        diff = np.maximum([-x1, -y1, x2 - rotated_image.shape[1], y2 - rotated_image.shape[0]], 0)
        [x1, y1, x2, y2] = [x1, y1, x2, y2] + diff
        crop = rotated_image[y1:y2, x1:x2, :]
        crop = cv.copyMakeBorder(crop, diff[1], diff[3], diff[0], diff[2], cv.BORDER_CONSTANT, value=(0, 0, 0))
        blob = cv.resize(crop, dsize=self.input_size, interpolation=cv.INTER_AREA).astype(np.float32) / 255.0

        return blob[np.newaxis, :, :, :], rotated_palm_bbox, angle, rotation_matrix
    def infer(self, image, palm):
        # Preprocess
        input_blob, rotated_palm_bbox, angle, rotation_matrix = self._preprocess(image, palm)

        # Forward
        self.model.setInput(input_blob)
        output_blob = self.model.forward(self.model.getUnconnectedOutLayersNames())

        # Postprocess
        results = self._postprocess(output_blob, rotated_palm_bbox, angle, rotation_matrix)

        return results  # [bbox_coords, landmarks_coords, conf]
    def _postprocess(self, blob, rotated_palm_bbox, angle, rotation_matrix):
        landmarks, conf = blob

        if conf < self.conf_threshold:
            return None

        landmarks = landmarks.reshape(-1, 3)  # shape: (1, 63) -> (21, 3)

        # transform coords back to the input coords
        wh_rotated_palm_bbox = rotated_palm_bbox[1] - rotated_palm_bbox[0]
        scale_factor = wh_rotated_palm_bbox / self.input_size
        landmarks[:, :2] = (landmarks[:, :2] - self.input_size / 2) * scale_factor
        coords_rotation_matrix = cv.getRotationMatrix2D((0, 0), angle, 1.0)
        rotated_landmarks = np.dot(landmarks[:, :2], coords_rotation_matrix[:, :2])
        rotated_landmarks = np.c_[rotated_landmarks, landmarks[:, 2]]

        # invert rotation
        rotation_component = np.array([
            [rotation_matrix[0][0], rotation_matrix[1][0]],
            [rotation_matrix[0][1], rotation_matrix[1][1]]])
        translation_component = np.array([
            rotation_matrix[0][2], rotation_matrix[1][2]])
        inverted_translation = np.array([
            -np.dot(rotation_component[0], translation_component),
            -np.dot(rotation_component[1], translation_component)])
        inverse_rotation_matrix = np.c_[rotation_component, inverted_translation]
        # get box center
        center = np.append(np.sum(rotated_palm_bbox, axis=0) / 2, 1)
        original_center = np.array([
            np.dot(center, inverse_rotation_matrix[0]),
            np.dot(center, inverse_rotation_matrix[1])])
        landmarks = rotated_landmarks[:, :2] + original_center

        # get bounding box from rotated_landmarks
        bbox = np.array([
            np.amin(landmarks, axis=0),
            np.amax(landmarks, axis=0)])  # [top-left, bottom-right]
        # shift bounding box
        wh_bbox = bbox[1] - bbox[0]
        shift_vector = self.HAND_BOX_SHIFT_VECTOR * wh_bbox
        bbox = bbox + shift_vector
        # enlarge bounding box
        center_bbox = np.sum(bbox, axis=0) / 2
        wh_bbox = bbox[1] - bbox[0]
        new_half_size = wh_bbox * self.HAND_BOX_ENLARGE_FACTOR / 2
        bbox = np.array([
            center_bbox - new_half_size,
            center_bbox + new_half_size])

        return np.r_[bbox.reshape(-1), landmarks.reshape(-1), conf[0]]
#
# Copyright (c) 2021 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
version: 1.0

model:                                        # mandatory. used to specify model specific information.
  name: mp_handpose
  framework: onnxrt_qlinearops                # mandatory. supported values are tensorflow, pytorch, pytorch_ipex, onnxrt_integer, onnxrt_qlinear or mxnet; allow new framework backend extension.

quantization:                                 # optional. tuning constraints on model-wise for advanced users to reduce the tuning space.
  approach: post_training_static_quant        # optional. default value is post_training_static_quant.
  calibration:
    dataloader:
      batch_size: 1
      dataset:
        dummy:
          shape: [1, 256, 256, 3]
          low: -1.0
          high: 1.0
          dtype: float32
          label: True

tuning:
  accuracy_criterion:
    relative: 0.02                            # optional. default value is relative, other value is absolute. this example allows relative accuracy loss: 2%.
  exit_policy:
    timeout: 0                                # optional. tuning timeout (seconds). default value is 0 which means early stop. combine with max_trials field to decide when to exit.
  random_seed: 9527                           # optional. random seed for deterministic tuning.
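For reference, a config like the one above can be fed straight to Intel Neural Compressor's 1.x API; the sketch below is hedged (the repository's own tools/quantize scripts, extended in the next hunk, are the maintained path, and exact method names vary between INC versions):

```python
# Sketch only: post-training static quantization driven by the yaml above (INC 1.x experimental API).
from neural_compressor.experimental import Quantization, common

quantizer = Quantization('./inc_configs/mp_handpose.yaml')
quantizer.model = common.Model('handpose_estimation_mediapipe_2022may.onnx')
q_model = quantizer()  # calibrates with the dummy dataset and tunes per accuracy_criterion
q_model.save('handpose_estimation_mediapipe_2022may_int8.onnx')
```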
......@@ -78,6 +78,9 @@ models=dict(
mp_palmdet=Quantize(model_path='../../models/palm_detection_mediapipe/palm_detection_mediapipe_2022may.onnx',
config_path='./inc_configs/mp_palmdet.yaml',
custom_dataset=Dataset(root='../../benchmark/data/palm_detection', dim='hwc', swapRB=True, mean=127.5, std=127.5, toFP32=True)),
mp_handpose=Quantize(model_path='../../models/handpose_estimation_mediapipe/handpose_estimation_mediapipe_2022may.onnx',
config_path='./inc_configs/mp_handpose.yaml',
custom_dataset=Dataset(root='../../benchmark/data/palm_detection', dim='hwc', swapRB=True, mean=127.5, std=127.5, toFP32=True)),
lpd_yunet=Quantize(model_path='../../models/license_plate_detection_yunet/license_plate_detection_lpd_yunet_2022may.onnx',
config_path='./inc_configs/lpd_yunet.yaml',
custom_dataset=Dataset(root='../../benchmark/data/license_plate_detection', size=(320, 240), dim='chw', toFP32=True)),
......
......@@ -6,7 +6,7 @@
import os
import sys
import numpy as ny
import numpy as np
import cv2 as cv
import onnx
......
......@@ -2,4 +2,3 @@ opencv-python>=4.5.4.58
onnx
onnxruntime
neural-compressor