Commit db43b49c authored by Gines Hidalgo

Rewrote README, added community projects and C++ API doc

Signed-off-by: Gines Hidalgo <gineshidalgo99@gmail.com>
Parent fcc44e4b
### Posting rules
1. **No duplicated posts, only 1 new post opened a day, and up to 2 opened a week**. Otherwise, strict user bans will occur.
- Check the [FAQ](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/4_faq.md) section, other GitHub issues, and general documentation before posting (e.g., **low-speed, out-of-memory, output format, 0-people detected, installation issues, ...**).
- Check the [FAQ](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/05_faq.md) section, other GitHub issues, and general documentation before posting (e.g., **low-speed, out-of-memory, output format, 0-people detected, installation issues, ...**).
- Keep posting all your issues in the same post.
- No bans if you are unsure whether the post is duplicated!
2. **Fill in** the entire **Your System Configuration** section if you are facing an error or unexpected behavior. Some posts (e.g., feature requests) might not require it.
......@@ -82,4 +82,4 @@ Select the topic(s) on your post, delete the rest:
- Portable demo or compiled library?
10. If **speed performance** issue:
- Report OpenPose timing speed based on [this link](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/5_maximizing_openpose_speed.md#profiling-speed).
- Report OpenPose timing speed based on the [profiling documentation](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/06_maximizing_openpose_speed.md#profiling-speed).
......@@ -255,9 +255,9 @@ jobs:
run: scripts/CI/run_tests.sh
- name: Docs APT packages
# The Doxygen apt-get version for Ubuntu 20 is 1.8.17, which has some bugs fixed in 1.9.1
# run: sudo apt-get -yq install doxygen doxygen-doc doxygen-latex doxygen-gui graphviz
run: |
# The Doxygen apt-get version for Ubuntu 20 is 1.8.17, which has some bugs fixed in 1.9.1
# run: sudo apt-get -yq install doxygen doxygen-doc doxygen-latex doxygen-gui graphviz
git clone https://github.com/doxygen/doxygen.git && cd doxygen && git checkout Release_1_9_1
mkdir build && cd build
cmake -G "Unix Makefiles" ..
......@@ -272,6 +272,8 @@ jobs:
echo 'Creating .nojekyll and copying log...'
echo "" > doxygen/html/.nojekyll
cp doxygen.log doxygen/html/doxygen.log
# Required so Doxygen links/finds the license file
cp LICENSE doxygen/html/LICENSE
# Required in order to link .github/media/ images with doc without modifying doc links. Remove if using `publish_dir: doxygen/html/`
mkdir -p doxygen_final/web/html/ && mv doxygen/html doxygen_final/web/html/doc
mkdir -p doxygen_final/web/.github/media/ && cp -rf .github/media/ doxygen_final/web/.github/
......
......@@ -10,7 +10,7 @@
[**OpenPose**](https://github.com/CMU-Perceptual-Computing-Lab/openpose) is the **first real-time multi-person system to jointly detect human body, hand, facial, and foot keypoints (135 keypoints in total) on single images**.
It is **authored by** [**Ginés Hidalgo**](https://www.gineshidalgo.com), [**Zhe Cao**](https://people.eecs.berkeley.edu/~zhecao), [**Tomas Simon**](http://www.cs.cmu.edu/~tsimon), [**Shih-En Wei**](https://scholar.google.com/citations?user=sFQD3k4AAAAJ&hl=en), [**Hanbyul Joo**](https://jhugestar.github.io), **and** [**Yaser Sheikh**](http://www.cs.cmu.edu/~yaser). It is **maintained by** [**Ginés Hidalgo**](https://www.gineshidalgo.com) **and** [**Yaadhav Raaj**](https://www.raaj.tech). OpenPose would not be possible without the [**CMU Panoptic Studio dataset**](http://domedb.perception.cs.cmu.edu). We would also like to thank all the people who [have helped OpenPose in any way](doc/8_authors_and_contributors.md).
It is **authored by** [**Ginés Hidalgo**](https://www.gineshidalgo.com), [**Zhe Cao**](https://people.eecs.berkeley.edu/~zhecao), [**Tomas Simon**](http://www.cs.cmu.edu/~tsimon), [**Shih-En Wei**](https://scholar.google.com/citations?user=sFQD3k4AAAAJ&hl=en), [**Hanbyul Joo**](https://jhugestar.github.io), **and** [**Yaser Sheikh**](http://www.cs.cmu.edu/~yaser). It is **maintained by** [**Ginés Hidalgo**](https://www.gineshidalgo.com) **and** [**Yaadhav Raaj**](https://www.raaj.tech). OpenPose would not be possible without the [**CMU Panoptic Studio dataset**](http://domedb.perception.cs.cmu.edu). We would also like to thank all the people who [have helped OpenPose in any way](doc/09_authors_and_contributors.md).
......@@ -27,7 +27,7 @@ It is **authored by** [**Ginés Hidalgo**](https://www.gineshidalgo.com), [**Zhe
2. [Features](#features)
3. [Related Work](#related-work)
4. [Installation](#installation)
5. [Quick Start](#quick-start)
5. [Quick Start Overview](#quick-start-overview)
6. [Send Us Feedback!](#send-us-feedback)
7. [Citation](#citation)
8. [License](#license)
......@@ -35,39 +35,34 @@ It is **authored by** [**Ginés Hidalgo**](https://www.gineshidalgo.com), [**Zhe
## Results
### Body and Foot Estimation
### Whole-body (Body, Foot, Face, and Hands) 2D Pose Estimation
<p align="center">
<img src=".github/media/dance_foot.gif" width="360">
<img src=".github/media/dance_foot.gif" width="310">
<img src=".github/media/pose_face.gif" width="310">
<img src=".github/media/pose_hands.gif" width="310">
<br>
<sup>Testing the <a href="https://www.youtube.com/watch?v=2DiQUX11YaY" target="_blank"><i>Crazy Uptown Funk flashmob in Sydney</i></a> video sequence with OpenPose</sup>
<sup>Testing OpenPose: (Left) <a href="https://www.youtube.com/watch?v=2DiQUX11YaY" target="_blank"><i>Crazy Uptown Funk flashmob in Sydney</i></a> video sequence. (Center and right) Authors <a href="https://www.gineshidalgo.com" target="_blank">Ginés Hidalgo</a> and <a href="http://www.cs.cmu.edu/~tsimon" target="_blank">Tomas Simon</a> testing face and hands</sup>
</p>
### 3D Reconstruction Module (Body, Foot, Face, and Hands)
### Whole-body 3D Pose Reconstruction and Estimation
<p align="center">
<img src=".github/media/openpose3d.gif" width="360">
<br>
<sup>Testing the 3D Reconstruction Module of OpenPose</sup>
</p>
### Body, Foot, Face, and Hands Estimation
<p align="center">
<img src=".github/media/pose_face.gif" width="360">
<img src=".github/media/pose_hands.gif" width="360">
<br>
<sup>Authors <a href="https://www.gineshidalgo.com" target="_blank">Ginés Hidalgo</a> (left image) and <a href="http://www.cs.cmu.edu/~tsimon" target="_blank">Tomas Simon</a> (right image) testing OpenPose</sup>
<sup><a href="https://ziutinyat.github.io/" target="_blank">Tianyi Zhao</a> testing the OpenPose 3D Module</sup>
</p>
### Unity Plugin
<p align="center">
<img src=".github/media/unity_main.png" width="240">
<img src=".github/media/unity_body_foot.png" width="240">
<img src=".github/media/unity_hand_face.png" width="240">
<img src=".github/media/unity_main.png" width="310">
<img src=".github/media/unity_body_foot.png" width="310">
<img src=".github/media/unity_hand_face.png" width="310">
<br>
<sup><a href="https://ziutinyat.github.io/" target="_blank">Tianyi Zhao</a> and <a href="https://www.gineshidalgo.com" target="_blank">Ginés Hidalgo</a> testing the <a href="https://github.com/CMU-Perceptual-Computing-Lab/openpose_unity_plugin" target="_blank">OpenPose Unity Plugin</a></sup>
</p>
### Runtime Analysis
We show an inference-time comparison of 3 popular pose estimation libraries under the same hardware and conditions: OpenPose, Alpha-Pose (fast PyTorch version), and Mask R-CNN. The OpenPose runtime is constant, while the runtime of Alpha-Pose and Mask R-CNN grows linearly with the number of people. More details [**here**](https://arxiv.org/abs/1812.08008).
<p align="center">
<img src=".github/media/openpose_vs_competition.png" width="360">
</p>
......@@ -91,10 +86,10 @@ We show an inference time comparison between the 3 available pose estimation lib
- **OS**: Ubuntu (20, 18, 16, 14), Windows (10, 8), Mac OSX, Nvidia TX2.
- **Hardware compatibility**: CUDA (Nvidia GPU), OpenCL (AMD GPU), and non-GPU (CPU-only) versions.
- **Usage Alternatives**:
- [**Command-line demo**](doc/1_demo.md) for built-in functionality.
- [**C++ API**](examples/tutorial_api_cpp/) and [**Python API**](doc/3_python_api.md) for custom functionality. E.g., adding your custom inputs, pre-processing, post-processing, and output steps.
- [**Command-line demo**](doc/01_demo.md) for built-in functionality.
- [**C++ API**](doc/04_cpp_api.md) and [**Python API**](doc/03_python_api.md) for custom functionality. E.g., adding your custom inputs, pre-processing, post-processing, and output steps.
For further details, check the [major released features](doc/6_major_released_features.md) and [release notes](doc/7_release_notes.md) docs.
For further details, check the [major released features](doc/07_major_released_features.md) and [release notes](doc/08_release_notes.md) docs.
......@@ -102,19 +97,19 @@ For further details, check the [major released features](doc/6_major_released_fe
- [**OpenPose training code**](https://github.com/CMU-Perceptual-Computing-Lab/openpose_train)
- [**OpenPose foot dataset**](https://cmu-perceptual-computing-lab.github.io/foot_keypoint_dataset/)
- [**OpenPose Unity Plugin**](https://github.com/CMU-Perceptual-Computing-Lab/openpose_unity_plugin)
- OpenPose papers published in [**IEEE TPAMI** and **CVPR**](#citation). [Cite them](#citation) in your publications if it helps your research!
- OpenPose papers published in **IEEE TPAMI and CVPR**. Cite them in your publications if OpenPose helps your research! (Links and more details in the [Citation](#citation) section below).
## Installation
If you want to use OpenPose without compiling or writing any code, simply [download and use the latest Windows portable version of OpenPose](doc/installation/0_index.md#windows-portable-demo)! Otherwise, you can also [build OpenPose from source](doc/installation/0_index.md#compiling-and-running-openpose-from-source).
If you want to use OpenPose without installing or writing any code, simply [download and use the latest Windows portable version of OpenPose](doc/installation/0_index.md#windows-portable-demo)!
See [doc/installation/0_index.md](doc/installation/0_index.md) for more details.
Otherwise, you could [build OpenPose from source](doc/installation/0_index.md#compiling-and-running-openpose-from-source). See the [installation doc](doc/installation/0_index.md) for all the alternatives.
## Quick Start
Most users do not need to know C++ or Python, they can simply use the OpenPose Demo in their command-line tool (e.g., PowerShell/Terminal). E.g., this would run OpenPose on the webcam and display the body keypoints:
## Quick Start Overview
Simply use the OpenPose Demo from your favorite command-line tool (e.g., Windows PowerShell or Ubuntu Terminal). E.g., this example runs OpenPose on your webcam and displays the body keypoints:
```
# Ubuntu
./build/examples/openpose/openpose.bin
......@@ -124,7 +119,7 @@ Most users do not need to know C++ or Python, they can simply use the OpenPose D
bin\OpenPoseDemo.exe --video examples\media\video.avi
```
You can also add any of the available flags in any order. Do you also want to add face and/or hands? Add the `--face` and/or `--hand` flags. Do you also want to save the output keypoints on JSON files on disk? Add the `--write_json` flag, etc.
You can also add any of the available flags in any order. E.g., the following example runs on a video (`--video {PATH}`), enables face (`--face`) and hands (`--hand`), and saves the output keypoints as JSON files on disk (`--write_json {PATH}`).
```
# Ubuntu
./build/examples/openpose/openpose.bin --video examples/media/video.avi --face --hand --write_json output_json_folder/
......@@ -134,23 +129,19 @@ You can also add any of the available flags in any order. Do you also want to ad
bin\OpenPoseDemo.exe --video examples\media\video.avi --face --hand --write_json output_json_folder/
```
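For reference, each JSON file written by `--write_json` contains a `people` array with one entry per detected person and flat keypoint arrays (see the output documentation for the exact format). The following is a minimal Python sketch, not part of OpenPose, of how such a file could be read back; the file name is only a hypothetical example, and the field names assume the default output format:
```python
# Minimal sketch: parse one OpenPose JSON file produced by `--write_json`.
# The file name below is hypothetical; adjust it to your own output folder.
import json

with open("output_json_folder/video_000000000000_keypoints.json") as f:
    frame = json.load(f)

for person in frame["people"]:
    # "pose_keypoints_2d" is a flat list: [x0, y0, c0, x1, y1, c1, ...]
    kp = person["pose_keypoints_2d"]
    triplets = [(kp[i], kp[i + 1], kp[i + 2]) for i in range(0, len(kp), 3)]
    # Index 0 is the nose for the default BODY_25 model
    print("Nose (x, y, confidence):", triplets[0])
```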
After [installing](#installation) OpenPose, check [doc/0_index.md](doc/0_index.md) for a quick overview of all the alternatives and tutorials.
Optionally, you can also extend OpenPose's functionality from its Python and C++ APIs. After [installing](doc/installation/0_index.md) OpenPose, check its [official doc](doc/00_index.md) for a quick overview of all the alternatives and tutorials.
## Send Us Feedback!
Our library is open source for research purposes, and we want to continuously improve it! So let us know if you...
1. Find any bug (in functionality or speed).
2. Add some functionality on top of OpenPose which we might want to add.
3. Know how to speed up or improve any part of OpenPose.
4. Want to share your cool demo or project made on top of OpenPose (you can email it to us too!).
Just create a new GitHub issue or a pull request and we will answer as soon as possible!
Our library is open source for research purposes, and we want to improve it! So let us know (create a new GitHub issue or pull request, email us, etc.) if you...
1. Find/fix any bug (in functionality or speed) or know how to speed up or improve any part of OpenPose.
2. Want to add/show some cool functionality/demo/project made on top of OpenPose. We can add your project link to our [Community-based Projects](doc/10_community_projects.md) section or even integrate it with OpenPose!
## Citation
Please cite these papers in your publications if it helps your research. All of OpenPose is based on [OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields](https://arxiv.org/abs/1812.08008), while the hand and face detectors also use [Hand Keypoint Detection in Single Images using Multiview Bootstrapping](https://arxiv.org/abs/1704.07809) (the face detector was trained using the same procedure as the hand detector).
Please cite these papers in your publications if OpenPose helps your research. All of OpenPose is based on [OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields](https://arxiv.org/abs/1812.08008), while the hand and face detectors also use [Hand Keypoint Detection in Single Images using Multiview Bootstrapping](https://arxiv.org/abs/1704.07809) (the face detector was trained using the same procedure as the hand detector).
@article{8765346,
author = {Z. {Cao} and G. {Hidalgo Martinez} and T. {Simon} and S. {Wei} and Y. A. {Sheikh}},
......
......@@ -2,16 +2,16 @@ OpenPose Doc
==========================
The OpenPose documentation is available in 2 different formats; choose your preferred one!
- As a traditional website (recommended): [cmu-perceptual-computing-lab.github.io/openpose/web/html/doc](https://cmu-perceptual-computing-lab.github.io/openpose/web/html/doc/).
- As markdown files: [github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/0_index.md](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/0_index.md).
- As a traditional website (recommended): [cmu-perceptual-computing-lab.github.io/openpose/web/html/doc/](https://cmu-perceptual-computing-lab.github.io/openpose/web/html/doc/).
- As markdown files: [github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/00_index.md](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/00_index.md).
Most users can simply use the OpenPose Demo without reading any C++/Python code. Users who need to add additional functionality (new inputs, outputs, etc.) should check the C++/Python APIs:
- If you face issues with any of these steps, remember to check the [FAQ](installation/4_faq.md) section.
- If you face issues with any of these steps, remember to check the [FAQ](installation/05_faq.md) section.
- The first step for any software, [install it](installation/0_index.md)!
- [OpenPose Demo](1_demo.md): Choose your input (e.g., images, video, webcam), set of algorithms (body, hand, face), output (e.g., display, JSON keypoint saving, image+keypoints), and run OpenPose from your terminal or PowerShell!
- [OpenPose Demo](01_demo.md): Choose your input (e.g., images, video, webcam), set of algorithms (body, hand, face), output (e.g., display, JSON keypoint saving, image+keypoints), and run OpenPose from your terminal or PowerShell!
- E.g.: Given an input video (`--video`), extract body (by default), face (`--face`) and hand (`--hand`) keypoints, save the keypoints in a JSON file (`--write_json`), and display (by default) the results on the screen. You can remove any of the flags to remove that particular functionality or add any other.
```
# Ubuntu
......@@ -21,15 +21,13 @@ Most users can simply use the OpenPose Demo without reading any C++/Python code.
bin\OpenPoseDemo.exe --video examples\media\video.avi --face --hand --write_json output_json_folder/
```
- [Output information](2_output.md): Learn about the output format, keypoint index ordering, etc.
- [Output information](02_output.md): Learn about the output format, keypoint index ordering, etc.
- [OpenPose Python API](3_python_api.md): Almost all the OpenPose functionality, but in Python!
- [OpenPose Python API](03_python_api.md): Almost all the OpenPose functionality, but in Python! If you want to read a specific input, and/or add your custom post-processing function, and/or implement your own display/saving.
- [OpenPose C++ API](../examples/tutorial_api_cpp/): If you want to read a specific input, and/or add your custom post-processing function, and/or implement your own display/saving.
- You should be familiar with the [OpenPose Demo](1_demo.md) and the main OpenPose flags before trying to read the C++ or Python API examples. Otherwise, it will be way harder to follow.
- For quick prototyping: You can easily **create your custom code** on [examples/user_code/](../examples/user_code/) and CMake will automatically compile it together with the whole OpenPose project. See [examples/user_code/README.md](../examples/user_code/README.md) for more details.
- [OpenPose C++ API](04_cpp_api.md): If you want to read a specific input, and/or add your custom post-processing function, and/or implement your own display/saving.
- [Maximizing OpenPose speed and benchmark](5_maximizing_openpose_speed.md): Check the OpenPose Benchmark as well as some hints to speed up and/or reduce the memory requirements for OpenPose.
- [Maximizing OpenPose speed and benchmark](06_maximizing_openpose_speed.md): Check the OpenPose Benchmark as well as some hints to speed up and/or reduce the memory requirements for OpenPose.
- [Calibration toolbox](advanced/calibration_module.md) and [3D OpenPose](advanced/3d_reconstruction_module.md): Calibrate your cameras for 3D OpenPose (or any other stereo vision tasks) and start obtaining 3D keypoints!
......
......@@ -30,7 +30,7 @@ Forget about the OpenPose code, just download the portable Windows binaries (or
## Quick Start
In Ubuntu, Mac, and other Unix systems, use `Terminal` or `Terminator`. In Windows, use `Windows PowerShell`. Watch any YouTube video tutorial if you are not familiar with these tools. Make sure that you are in the **root directory of the project** when running any command (i.e., in the OpenPose folder, not inside `build/`, `windows/`, or `bin/`). In addition, `examples/media/video.avi` and `examples/media` already exist, so there is no need to change any lines of code.
Test OpenPose by running the following. The expected visual result should look like [doc/2_output.md#ui-and-visual-output](2_output.md#ui-and-visual-output).
Test OpenPose by running the following. The expected visual result should look like [doc/02_output.md#ui-and-visual-output](02_output.md#ui-and-visual-output).
```
# Ubuntu and Mac
./build/examples/openpose/openpose.bin --video examples/media/video.avi
......@@ -108,7 +108,7 @@ bin\OpenPoseDemo.exe --face --hand
## Different Outputs (JSON, Images, Video, UI)
All the output options are complementary to each other. E.g., whether you display the images with the skeletons on the UI (or not) is independent of whether you save them on disk (or not).
- Save the skeletons in a set of JSON files with `--write_json {OUTPUT_JSON_PATH}`; see [doc/2_output.md](2_output.md) to understand its format.
- Save the skeletons in a set of JSON files with `--write_json {OUTPUT_JSON_PATH}`; see [doc/02_output.md](02_output.md) to understand its format.
```
# Ubuntu and Mac (same flags for Windows)
./build/examples/openpose/openpose.bin --image_dir examples/media/ --write_json output_jsons/
......@@ -365,4 +365,4 @@ If you only have an integrated Intel Graphics card, then it will most probably b
### FAQ
Check [doc/4_faq.md](4_faq.md) to see if you can find your error, issue, or concern.
Check [doc/05_faq.md](05_faq.md) to see if you can find your error, issue, or concern.
......@@ -13,7 +13,11 @@ OpenPose Doc - Python API
## Introduction
Almost all the OpenPose functionality, but in Python! You should be familiar with the [**OpenPose Demo**](1_demo.md) and the main OpenPose flags before trying to read the C++ or Python API examples. Otherwise, it will be way harder to follow.
Almost all the OpenPose functionality, but in Python!
When should you look at the [Python](03_python_api.md) or [C++](04_cpp_api.md) APIs? If you want to read a specific input, and/or add your custom post-processing function, and/or implement your own display/saving.
You should be familiar with the [**OpenPose Demo**](01_demo.md) and the main OpenPose flags before trying to read the C++ or Python API examples. Otherwise, it will be way harder to follow.
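As a first taste of the API, the sketch below processes a single image and prints the body keypoints. It assumes OpenPose was built with the Python module enabled and that the `openpose` package is importable; the exact wrapper calls (e.g., `op.VectorDatum`) can vary slightly between OpenPose versions, so treat this as an outline and refer to [examples/tutorial_api_python/](../examples/tutorial_api_python/) for the maintained examples:
```python
# Outline of the Python API (adapt paths to your own build; see
# examples/tutorial_api_python/ for the maintained, version-accurate examples).
import cv2
from openpose import pyopenpose as op

# Any command-line flag (without the leading "--") can go into this dict
params = {"model_folder": "models/"}

opWrapper = op.WrapperPython()
opWrapper.configure(params)
opWrapper.start()

# Process a single image
datum = op.Datum()
datum.cvInputData = cv2.imread("examples/media/COCO_val2014_000000000192.jpg")
opWrapper.emplaceAndPop(op.VectorDatum([datum]))

# poseKeypoints is an array of shape (num_people, num_body_parts, 3): x, y, confidence
print("Body keypoints:\n", datum.poseKeypoints)

# Optionally map keypoint indices to part names (see the Keypoint Ordering
# section of doc/02_output.md for the documented C++/Python example)
print(op.getPoseBodyPartMapping(op.PoseModel.BODY_25))
```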
......
OpenPose Doc - C++ API
====================================
## Contents
1. [Introduction](#introduction)
2. [Adding your Custom Code](#adding-your-custom-code)
## Introduction
Extend the OpenPose functionality with all the power and performance of C++!
When should you look at the [Python](03_python_api.md) or [C++](04_cpp_api.md) APIs? If you want to read a specific input, and/or add your custom post-processing function, and/or implement your own display/saving.
You should be familiar with the [**OpenPose Demo**](01_demo.md) and the main OpenPose flags before trying to read the C++ or Python API examples. Otherwise, it will be way harder to follow.
## Adding your Custom Code
Once you are familiar with the [command-line demo](01_demo.md), you should explore the different C++ examples in the [OpenPose C++ API](https://github.com/CMU-Perceptual-Computing-Lab/openpose/tree/master/examples/tutorial_api_cpp) folder.
For quick prototyping, you can simply **duplicate and rename any of the existing sample files** from the [OpenPose C++ API](https://github.com/CMU-Perceptual-Computing-Lab/openpose/tree/master/examples/tutorial_api_cpp) folder into the [examples/user_code/](https://github.com/CMU-Perceptual-Computing-Lab/openpose/tree/master/examples/user_code) folder and start building in there. Add the name of your new file(s) into the [CMake file from that folder](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/examples/user_code/CMakeLists.txt), and CMake will automatically compile it together with the whole OpenPose project.
See [examples/user_code/README.md](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/examples/user_code/README.md) for more details.
......@@ -109,7 +109,7 @@ For problem 2, try the following solutions (in this order):
#### Very Few People Detected
**Q: Low detection rate. It can detect the person in some images (usually higher contrast, with bigger people), but it will fail for most images with low resolution or small people.**
**A**: Images with low resolution or with very small people will simply not work well. However, results can be highly improved by using the maximum accuracy configuration detailed in [doc/1_demo.md#maximum-accuracy-configuration](1_demo.md#maximum-accuracy-configuration).
**A**: Images with low resolution or with very small people will simply not work well. However, results can be highly improved by using the maximum accuracy configuration detailed in [doc/01_demo.md#maximum-accuracy-configuration](01_demo.md#maximum-accuracy-configuration).
......@@ -207,26 +207,26 @@ git submodule update --init --recursive --remote
#### Speed Up, Memory Reduction, and Benchmark
**Q: Low speed** - OpenPose is quite slow, is it normal? How can I speed it up?
**A**: Check [doc/5_maximizing_openpose_speed.md](5_maximizing_openpose_speed.md) to discover the approximate speed of your graphics card and some speed tips.
**A**: Check [doc/06_maximizing_openpose_speed.md](06_maximizing_openpose_speed.md) to discover the approximate speed of your graphics card and some speed tips.
#### How to Measure the Latency Time?
**Q: How to measure/calculate/estimate the latency/lag time?**
**A**: [Profile](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/5_maximizing_openpose_speed.md#profiling-speed) the OpenPose speed. For 1-GPU or CPU-only systems (use `--disable_multi_thread` for simplicity in multi-GPU systems for latency measurement), the latency will be roughly the sum of all the reported measurements.
**A**: [Profile](https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/06_maximizing_openpose_speed.md#profiling-speed) the OpenPose speed. For 1-GPU or CPU-only systems (use `--disable_multi_thread` for simplicity in multi-GPU systems for latency measurement), the latency will be roughly the sum of all the reported measurements.
#### CPU Version Too Slow
**Q: The CPU version is insanely slow compared to the GPU version.**
**A**: Check [doc/5_maximizing_openpose_speed.md#cpu-version](5_maximizing_openpose_speed.md#cpu-version) to discover the approximate speed and some speed tips.
**A**: Check [doc/06_maximizing_openpose_speed.md#cpu-version](06_maximizing_openpose_speed.md#cpu-version) to discover the approximate speed and some speed tips.
#### Profiling Speed and Estimating FPS without Display
Check the [doc/5_maximizing_openpose_speed.md#profiling-speed](5_maximizing_openpose_speed.md#profiling-speed) section.
Check the [doc/06_maximizing_openpose_speed.md#profiling-speed](06_maximizing_openpose_speed.md#profiling-speed) section.
......@@ -241,7 +241,7 @@ Check the [doc/5_maximizing_openpose_speed.md#profiling-speed](5_maximizing_open
### Accuracy Issues
#### Is Maximum Accuracy Configuration Possible on Lower End GPUs?
**Q**: I've read that this command provides the most accurate results possible on OpenPose so far: https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/1_demo.md#maximum-accuracy-configuration. However, an 8GB GPU (e.g., 1080 or 2080) will run out of memory, so is there any method to achieve the same accuracy on GPU using less memory, even if it means sacrificing speed?
**Q**: I've read that this command provides the most accurate results possible on OpenPose so far: https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/01_demo.md#maximum-accuracy-configuration. However, an 8GB GPU (e.g., 1080 or 2080) will run out of memory, so is there any method to achieve the same accuracy on GPU using less memory, even if it means sacrificing speed?
**A**: Unfortunately no, there is no way at the moment. Caffe simply takes too much memory doing that. You can try `--scale_number 3` instead of 4, slightly reducing the `net_resolution` (e.g., `720` vs. `736`), and starting the computer without the GUI (which also takes about 1GB of memory just to keep the computer GUI running).
......
......@@ -48,7 +48,7 @@ Some speed tips to maximize the OpenPose runtime speed while preserving the accu
## Speed Up and Memory Reduction
Some speed tips to maximize the OpenPose speed, keeping in mind the accuracy trade-off:
1. Reduce the `--net_resolution` (e.g., to 320x176) (lower accuracy). Note: For maximum accuracy, follow [doc/1_demo.md#maximum-accuracy-configuration](1_demo.md#maximum-accuracy-configuration).
1. Reduce the `--net_resolution` (e.g., to 320x176) (lower accuracy). Note: For maximum accuracy, follow [doc/01_demo.md#maximum-accuracy-configuration](01_demo.md#maximum-accuracy-configuration).
2. For face, reduce the `--face_net_resolution`. The resolution 320x320 usually works pretty decently.
3. Points 1-2 will also reduce the GPU memory usage (or RAM memory for CPU version).
4. Use the `BODY_25` model for maximum speed. Use `MPI_4_layers` model for minimum GPU memory usage (but lower accuracy, speed, and number of parts).
......@@ -5,11 +5,11 @@ OpenPose Doc - Major Released Features
- Nov 2020: [Compatibility with Nvidia 30XX cards, CUDA 11, and Ubuntu 20](installation/0_index.md)!
- Sep 2019: [**Training code released**](https://github.com/CMU-Perceptual-Computing-Lab/openpose_train)!
- Jan 2019: [**Unity plugin released**](https://github.com/CMU-Perceptual-Computing-Lab/openpose_unity_plugin)!
- Jan 2019: [**Improved Python API**](doc/3_python_api.md) released! Including body, face, hands, and all the functionality of the C++ API!
- Jan 2019: [**Improved Python API**](doc/03_python_api.md) released! Including body, face, hands, and all the functionality of the C++ API!
- Dec 2018: [**Foot dataset released**](https://cmu-perceptual-computing-lab.github.io/foot_keypoint_dataset) and [**new paper released**](https://arxiv.org/abs/1812.08008)!
- Sep 2018: [**Experimental tracker**](1_demo.md#tracking)!
- Sep 2018: [**Experimental tracker**](01_demo.md#tracking)!
- Jun 2018: [**Combined body-foot model released! 40% faster and 5% more accurate**](installation/0_index.md)!
- Jun 2018: [**Python API**](3_python_api.md) released!
- Jun 2018: [**Python API**](03_python_api.md) released!
- Jun 2018: [**OpenCL/AMD graphic card version**](installation/0_index.md) released!
- Jun 2018: [**Calibration toolbox**](advanced/calibration_module.md) released!
- Jun 2018: [**Mac OSX version (CPU)**](installation/0_index.md) released!
......@@ -21,4 +21,4 @@ OpenPose Doc - Major Released Features
- Jun 2017: **Face** released!
- May 2017: **Windows** version!
- Apr 2017: **Body** released!
For further details, check the [release notes](7_release_notes.md).
For further details, check the [release notes](08_release_notes.md).
......@@ -3,14 +3,14 @@ OpenPose Doc - Authors and Contributors
### Authors
## Authors
OpenPose is authored by [Ginés Hidalgo](https://www.gineshidalgo.com), [Zhe Cao](https://people.eecs.berkeley.edu/~zhecao), [Tomas Simon](http://www.cs.cmu.edu/~tsimon), [Shih-En Wei](https://scholar.google.com/citations?user=sFQD3k4AAAAJ&hl=en), [Hanbyul Joo](https://jhugestar.github.io), and [Yaser Sheikh](http://www.cs.cmu.edu/~yaser). It is maintained by [Ginés Hidalgo](https://www.gineshidalgo.com) and [Yaadhav Raaj](https://www.raaj.tech).
OpenPose would not be possible without the [**CMU Panoptic Studio dataset**](http://domedb.perception.cs.cmu.edu). The body pose estimation work is based on the following 2 original repositories: [CVPR 2017 repository](https://github.com/ZheC/Multi-Person-Pose-Estimation) and [ECCV 2016 repository](https://github.com/CMU-Perceptual-Computing-Lab/caffe_rtpose).
### Contributors
## Contributors
We would also like to thank the following people, who have contributed to key components of OpenPose:
1. [Yaadhav Raaj](https://www.raaj.tech): OpenPose maintainer, CPU version, OpenCL version, Mac version, Python API, and person tracker.
2. [Bikramjot Hanzra](https://www.linkedin.com/in/bikz05): Former OpenPose maintainer, CMake (Ubuntu and Windows) version, and initial Travis Build version for Ubuntu.
......
OpenPose Doc - Community-based Projects
====================================
Here we list all the projects created with OpenPose by the community and shared with us. Do you want to share yours? Simply create a pull request and add your demo and a description of it to this file!
1. [Add here your demo name and link](#here_some_full_link): Add here the description of your project.
......@@ -86,7 +86,7 @@ OpenPose will display the cameras sorted by serial number, starting in the left
## Quick Start
Check the [doc/1_demo.md#3-d-reconstruction](../1_demo.md#3-d-reconstruction) for basic examples.
Check the [doc/01_demo.md#3-d-reconstruction](../01_demo.md#3-d-reconstruction) for basic examples.
......@@ -102,7 +102,7 @@ It should be similar to the following image.
<p align="center">
<img src="../../.github/media/openpose3d.gif">
<br>
<sup><a href="https://ziutinyat.github.io/" target="_blank">Tianyi Zhao</a> testing the 3-D Module</sup>
<sup><a href="https://ziutinyat.github.io/" target="_blank">Tianyi Zhao</a> testing the OpenPose 3D Module</sup>
</p>
......
OpenPose Advanced Doc - Demo - Advanced
====================================
This document is a more detailed continuation of [doc/1_demo.md](../1_demo.md), and it assumes the user is quite familiar with the OpenPose demo and the contents of [doc/1_demo.md](../1_demo.md).
This document is a more detailed continuation of [doc/01_demo.md](../01_demo.md), and it assumes the user is quite familiar with the OpenPose demo and the contents of [doc/01_demo.md](../01_demo.md).
......@@ -25,7 +25,7 @@ This document is a more detailed continuation of [doc/1_demo.md](../1_demo.md),
In general, there are 3 ways to reduce the latency (each with some drawbacks):
- Reducing `--output_resolution`: It will slightly reduce the latency and increase the FPS. But the quality of the displayed image will deteriorate.
- Reducing `--net_resolution` and/or `--face_net_resolution` and/or `--hand_net_resolution`: It will increase the FPS and reduce the latency. But the accuracy will drop, especially for small people in the image. Note: For maximum accuracy, follow [doc/1_demo.md#maximum-accuracy-configuration](../1_demo.md#maximum-accuracy-configuration).
- Reducing `--net_resolution` and/or `--face_net_resolution` and/or `--hand_net_resolution`: It will increase the FPS and reduce the latency. But the accuracy will drop, especially for small people in the image. Note: For maximum accuracy, follow [doc/01_demo.md#maximum-accuracy-configuration](../01_demo.md#maximum-accuracy-configuration).
- Enabling `--disable_multi_thread`: The latency should be reduced. But the speed will drop to 1-GPU speed (as it will only use 1 GPU). Note that it is practical only for body; if hands and face are also extracted, it is usually not worth it.
......@@ -217,7 +217,7 @@ Now that you are more familiar with OpenPose, this is a list with all the availa
- DEFINE_int32(write_coco_json_variants, 1, "Add 1 for body, add 2 for foot, 4 for face, and/or 8 for hands. Use 0 to use all the possible candidates. E.g., 7 would mean body+foot+face COCO JSON.");
- DEFINE_int32(write_coco_json_variant, 0, "Currently, this option is experimental and only makes effect on car JSON generation. It selects the COCO variant for cocoJsonSaver.");
- DEFINE_string(write_heatmaps, "", "Directory to write body pose heatmaps in PNG format. At least 1 `add_heatmaps_X` flag must be enabled.");
- DEFINE_string(write_heatmaps_format, "png", "File extension and format for `write_heatmaps`, analogous to `write_images_format`. For lossless compression, recommended `png` for integer `heatmaps_scale` and `float` for floating values. See `doc/2_output.md` for more details.");
- DEFINE_string(write_heatmaps_format, "png", "File extension and format for `write_heatmaps`, analogous to `write_images_format`. For lossless compression, recommended `png` for integer `heatmaps_scale` and `float` for floating values. See `doc/02_output.md` for more details.");
- DEFINE_string(write_keypoint, "", "(Deprecated, use `write_json`) Directory to write the people pose keypoint data. Set format with `write_keypoint_format`.");
- DEFINE_string(write_keypoint_format, "yml", "(Deprecated, use `write_json`) File extension and format for `write_keypoint`: json, xml, yaml & yml. Json not available for OpenCV < 3.0, use `write_json` instead.");
......
......@@ -15,7 +15,7 @@ OpenPose Advanced Doc - Heatmap Output
## Keypoints
Check [doc/2_output.md](../2_output.md) for the basic output information. This document is for users who want to use the heatmaps.
Check [doc/02_output.md](../02_output.md) for the basic output information. This document is for users who want to use the heatmaps.
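As a rough illustration (not taken from this document), a packed heatmap image saved with `--write_heatmaps` in the default `png` format can be inspected with a few lines of Python. The file name below is hypothetical, and the channel packing and value scaling depend on the `--heatmaps_add_*` and `--heatmaps_scale` flags described in the rest of this document:
```python
# Hypothetical sketch: load a packed heatmap image written by `--write_heatmaps`.
import cv2

# Hypothetical output file name; use whatever `--write_heatmaps {PATH}` produced.
heatmaps = cv2.imread("output_heatmaps_folder/video_000000000000_pose_heatmaps.png",
                      cv2.IMREAD_UNCHANGED)
print("Packed heatmap matrix shape:", heatmaps.shape)

# With the default integer `--heatmaps_scale`, values are typically stored in
# [0, 255]; normalize to [0, 1] before further processing or visualization.
normalized = heatmaps.astype("float32") / 255.0
cv2.imshow("OpenPose heatmaps", normalized)
cv2.waitKey(0)
```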
......
......@@ -69,7 +69,7 @@ OpenPose Doc - Installation
1. For maximum speed, you should use OpenPose on a machine with an Nvidia GPU. If so, you must upgrade your Nvidia drivers to the latest version (in the Nvidia "GeForce Experience" software or on its [website](https://www.nvidia.com/Download/index.aspx)).
2. Download the latest OpenPose version from the [Releases](https://github.com/CMU-Perceptual-Computing-Lab/openpose/releases) section.
3. Follow the `Instructions.txt` file inside the downloaded zip file to download the models required by OpenPose (about 500 MB).
4. Then, you can run OpenPose from the PowerShell command-line by following [doc/1_demo.md](../1_demo.md).
4. Then, you can run OpenPose from the PowerShell command-line by following [doc/01_demo.md](../01_demo.md).
Note: If you are using the GPU-accelerated version and are seeing `Cuda check failed (3 vs. 0): initialization error` when running OpenPose, you can fix it by doing one of these:
- Upgrade your Nvidia drivers. If the error persists, make sure your machine does not contain any CUDA version (or, if it does, that it is the same one used by the OpenPose portable demo files). Otherwise, uninstall that CUDA version. If you need to keep that CUDA version installed, [compile OpenPose from Source](#compiling-and-running-openpose-from-source) for that CUDA version instead.
......@@ -85,7 +85,7 @@ The instructions in the following subsections describe the steps to build OpenPo
### Problems and Errors Installing OpenPose
Any problem installing OpenPose while following these guidelines? Check [doc/4_faq.md](../4_faq.md) and/or check existing GitHub issues. If you do not find your issue, post a new one. We will not respond to duplicated issues, nor to GitHub issues about Caffe, OpenCV, or CUDA installation errors, nor to issues that do not fill in all the information that the GitHub template asks for.
Any problem installing OpenPose while following these guidelines? Check [doc/05_faq.md](../05_faq.md) and/or check existing GitHub issues. If you do not find your issue, post a new one. We will not respond to duplicated issues, nor to GitHub issues about Caffe, OpenCV, or CUDA installation errors, nor to issues that do not fill in all the information that the GitHub template asks for.
......@@ -132,7 +132,7 @@ cmake-gui ..
5. Set the `GPU_MODE` flag to the proper value and click `Configure` again:
1. If your machine has an Nvidia GPU, you should most probably not modify this flag and skip this step. Cases in which you might have to change it:
- If you have a Nvidia GPU with 2GB of memory or less: Then you will have to follow some of the tricks in [doc/5_maximizing_openpose_speed.md](../5_maximizing_openpose_speed.md) or change `GPU_MODE` back to `CPU_ONLY`.
- If you have a Nvidia GPU with 2GB of memory or less: Then you will have to follow some of the tricks in [doc/06_maximizing_openpose_speed.md](../06_maximizing_openpose_speed.md) or change `GPU_MODE` back to `CPU_ONLY`.
- If you cannot install CUDA, then you can also set `GPU_MODE` to `CPU_ONLY`.
2. Mac OSX and machines with a non-Nvidia GPU (Intel or AMD GPUs): Set the `GPU_MODE` flag to `CPU_ONLY` (easier to install but slower runtime) or `OPENCL` (GPU-accelerated, it is harder to install but provides a faster runtime speed). For more details on OpenCV support, see [doc/1_prerequisites.md](1_prerequisites.md) and [OpenCL Version](#opencl-version).
3. If your machine does not have any GPU, set the `GPU_MODE` flag to `CPU_ONLY`.
......@@ -216,7 +216,7 @@ We welcome users to send us their installation videos (e.g., sharing them as Git
### Running OpenPose
Check OpenPose was properly installed by running any demo example: [doc/1_demo.md](../1_demo.md).
Check OpenPose was properly installed by running any demo example: [doc/01_demo.md](../01_demo.md).
......@@ -276,7 +276,7 @@ See [doc/advanced/deployment.md](../advanced/deployment.md).
### Maximum Speed
Check the OpenPose Benchmark as well as some hints to speed up and/or reduce the memory requirements to run OpenPose on [doc/5_maximizing_openpose_speed.md](../5_maximizing_openpose_speed.md).
Check the OpenPose Benchmark as well as some hints to speed up and/or reduce the memory requirements to run OpenPose on [doc/06_maximizing_openpose_speed.md](../06_maximizing_openpose_speed.md).
......@@ -297,7 +297,7 @@ export MKL_NUM_THREADS="8"
export OMP_NUM_THREADS="8"
```
Increasing the number of threads results in a higher RAM memory usage. You can check the [doc/5_maximizing_openpose_speed.md](../5_maximizing_openpose_speed.md) for more information about speed and memory requirements in several CPUs and GPUs.
Increasing the number of threads results in higher RAM usage. You can check [doc/06_maximizing_openpose_speed.md](../06_maximizing_openpose_speed.md) for more information about speed and memory requirements on several CPUs and GPUs.
......@@ -313,7 +313,7 @@ Lastly, OpenCL version does not support unfixed `--net_resolution`. So a folder
### COCO and MPI Models
By default, the body `COCO` and `MPI` models are not downloaded (they are slower and less accurate than `BODY_25`, so not useful in most cases!). But you can download them by turning on the `DOWNLOAD_BODY_COCO_MODEL` or `DOWNLOAD_BODY_MPI_MODEL` flags. Check the differences between these models in [doc/4_faq.md#difference-between-body_25-vs-coco-vs-mpi](../4_faq.md#difference-between-body_25-vs-coco-vs-mpi).
By default, the body `COCO` and `MPI` models are not downloaded (they are slower and less accurate than `BODY_25`, so not useful in most cases!). But you can download them by turning on the `DOWNLOAD_BODY_COCO_MODEL` or `DOWNLOAD_BODY_MPI_MODEL` flags. Check the differences between these models in [doc/05_faq.md#difference-between-body_25-vs-coco-vs-mpi](../05_faq.md#difference-between-body_25-vs-coco-vs-mpi).
......
......@@ -156,7 +156,7 @@ Note: This installer will not incorporate any new features, we recommend to use
3. Open the Windows cmd (Windows button + <kbd>X</kbd>, then <kbd>A</kbd>).
4. Go to the OpenPose directory, assuming OpenPose has been downloaded on `C:\openpose`: `cd C:\openpose\`.
5. Run the tutorial commands.
6. Check OpenPose was properly installed by running it on the default images, video or webcam: [doc/1_demo.md](../../1_demo.md).
6. Check OpenPose was properly installed by running it on the default images, video or webcam: [doc/01_demo.md](../../01_demo.md).
......
......@@ -3,11 +3,11 @@ OpenPose Very Advanced Doc - Library Structure
As a user, you do not need to know anything about this section! This section is intended for OpenPose internal developers. It is exposed publicly, but you can skip this whole folder if you are just trying to use OpenPose or create new code/demos using OpenPose.
Even if you want to e.g., change internal functions and/or extend the OpenPose functionality, the easiest solution as a user is to follow the [examples/tutorial_api_cpp](../../examples/tutorial_api_cpp) examples. If the new functionality is cool, make a pull request so we can add it to OpenPose!
Even if you want to e.g., change internal functions and/or extend the OpenPose functionality, the easiest solution as a user is to follow the [OpenPose C++ API](../../doc/04_cpp_api.md). If the new functionality is cool, make a pull request so we can add it to OpenPose!
In order to learn the basics about how OpenPose works internally:
1. See the Doxygen documentation on [http://cmu-perceptual-computing-lab.github.io/openpose/web/html/doc/](http://cmu-perceptual-computing-lab.github.io/openpose/web/html/doc/) or build that Doxygen doc from the source code.
2. Take a look at the [library Quick Start section](../../README.md#quick-start) from the main README (or its Doxygen analog).
2. Take a look at the [library Quick Start section](../../README.md#quick-start-overview) from the main README (or its Doxygen analog).
3. OpenPose Overview: Learn the basics about the library source code in [doc/very_advanced/library_structure/1_library_deep_overview.md](1_library_deep_overview.md).
4. Extending Functionality: Learn how to extend the library in [doc/very_advanced/library_structure/2_library_extend_functionality.md](2_library_extend_functionality.md).
5. Adding An Extra Module: Learn how to add an extra module in [doc/very_advanced/library_structure/3_library_add_new_module.md](3_library_add_new_module.md).
......@@ -5,9 +5,9 @@ If you intend to extend the functionality of our library:
1. Read the [README.md](../../../README.md) page.
2. Check the basic library overview doc on [doc/very_advanced/library_structure/1_library_deep_overview.md](1_library_deep_overview.md).
3. Read, understand and play with the basic real time pose demo source code [examples/openpose/openpose.cpp](../../../examples/openpose/openpose.cpp) and [examples/tutorial_api_cpp](../../../examples/tutorial_api_cpp). It includes all the functionality of our library, and it has been properly commented.
4. Read, understand and play with the other tutorials in [examples/](../../../examples/). It includes more specific examples.
3. Read, understand, and play with the basic real-time pose demo and the C++ examples: [OpenPose demo](../../doc/01_demo.md) and [C++ API](../../doc/04_cpp_api.md). They include all the functionality of our library and have been properly commented.
4. Read, understand and play with the other tutorials in [examples/](https://github.com/CMU-Perceptual-Computing-Lab/openpose/tree/master/examples). It includes more specific examples.
5. Check the basic UML diagram on the [doc/very_advanced/library_structure/UML](UML/) to get an idea of each module relations.
6. Take a look at the structure of the already existing modules.
7. The C++ header files include documentation in [Doxygen](http://www.doxygen.org/) format. Create this documentation by compiling the [include](../../../include/) folder with Doxygen. This documentation is slowly but continuously improved.
7. The C++ header files include documentation in [Doxygen](http://www.doxygen.org/) format. Create this documentation by compiling the [include](https://github.com/CMU-Perceptual-Computing-Lab/openpose/tree/master/include) folder with Doxygen. This documentation is slowly but continuously improved.
8. You can also take a look at the source code or ask us on GitHub.
......@@ -4,8 +4,9 @@
// 2. Extract and render body/hand/face/foot keypoint/heatmap/PAF of that image.
// 3. Save the results on disk.
// 4. Display the rendered pose.
// If the user wants to learn to use the OpenPose C++ library, we highly recommend to start with the examples in
// `examples/tutorial_api_cpp/`.
// For more details on the OpenPose examples, see
// - https://cmu-perceptual-computing-lab.github.io/openpose/web/html/doc/
// - https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/00_index.md
// Command-line user interface
#include <openpose/flags.hpp>
......
# C++ API Examples
See the [OpenPose C++ API doc](../../doc/04_cpp_api.md) for more details on this folder.
This folder provides examples of the basic OpenPose C++ API. The analogous Python API is exposed in [examples/tutorial_api_python/](../tutorial_api_python/).
# Python API Examples
See the [OpenPose Python API doc](../../doc/03_python_api.md) for more details on this folder.
This folder provides examples of the basic OpenPose Python API. The analogous C++ API is exposed in [examples/tutorial_api_cpp/](../tutorial_api_cpp/).
......@@ -16,15 +16,15 @@ You can quickly add your custom code into this folder so that quick prototypes c
5. Then, you can simply modify their content.
6. You can now compile OpenPose after each change in your files:
- Ubuntu:
```
cd build/
make -j`nproc`
```
```
cd build/
make -j`nproc`
```
- Mac:
```
cd build/
make -j`sysctl -n hw.logicalcpu`
```
```
cd build/
make -j`sysctl -n hw.logicalcpu`
```
- Windows: Compile from Visual Studio (F5, F7, green play button, etc.).
5. **Run step 4 every time you make changes to your code**.
......
......@@ -265,7 +265,7 @@ DEFINE_string(write_heatmaps, "", "Directory to write body
" must be enabled.");
DEFINE_string(write_heatmaps_format, "png", "File extension and format for `write_heatmaps`, analogous to `write_images_format`."
" For lossless compression, recommended `png` for integer `heatmaps_scale` and `float` for"
" floating values. See `doc/2_output.md` for more details.");
" floating values. See `doc/02_output.md` for more details.");
DEFINE_string(write_keypoint, "", "(Deprecated, use `write_json`) Directory to write the people pose keypoint data. Set format"
" with `write_keypoint_format`.");
DEFINE_string(write_keypoint_format, "yml", "(Deprecated, use `write_json`) File extension and format for `write_keypoint`: json, xml,"
......
......@@ -321,7 +321,7 @@ namespace op
m.def("get_images_on_directory", &getImagesFromDirectory, "Get Images On Directory");
// Pose Mapping
// Code example in doc/2_output.md, section Keypoint Ordering in C++/Python
// Code example in doc/02_output.md, section Keypoint Ordering in C++/Python
m.def("getPoseBodyPartMapping", &getPoseBodyPartMapping, "getPoseBodyPartMapping");
m.def("getPoseNumberBodyParts", &getPoseNumberBodyParts, "getPoseNumberBodyParts");
m.def("getPosePartPairs", &getPosePartPairs, "getPosePartPairs");
......