OpenPose
====================================

## Introduction

OpenPose is a **library for real-time multi-person keypoint detection and multi-threading written in C++** using OpenCV and Caffe*, authored by [Gines Hidalgo](https://www.linkedin.com/in/gineshidalgo/), [Zhe Cao](http://www.andrew.cmu.edu/user/zhecao), [Tomas Simon](http://www.cs.cmu.edu/~tsimon/), [Shih-En Wei](https://scholar.google.com/citations?user=sFQD3k4AAAAJ&hl=en), [Hanbyul Joo](http://www.cs.cmu.edu/~hanbyulj/) and [Yaser Sheikh](http://www.cs.cmu.edu/~yaser/).

* It uses Caffe, but the code is ready to be ported to other frameworks (e.g., TensorFlow or Torch). If you implement any of those ports, please make a pull request and we will add it!

OpenPose represents the **first real-time system to jointly detect human body, hand and facial keypoints (in total 130 keypoints) on single images**. In addition, the system's computational performance on body keypoint estimation is invariant to the number of detected people in the image.

OpenPose is freely available for non-commercial use, and may be redistributed under these conditions. Please see the [license](LICENSE) for further details. Contact us for commercial purposes.



Library main functionality:

* Multi-person **15- or 18-keypoint body pose** estimation and rendering.

* Multi-person **2x21-keypoint hand** estimation and rendering (coming soon in around 1-2 months!).

* Multi-person **70-keypoint face** estimation and rendering (coming soon in around 2-3 months!).

* Flexible and easy-to-configure **multi-threading** module.

* Image, video, and webcam reader.

* Able to save and load the results in various formats (JSON, XML, PNG, JPG, ...).

* Small display and GUI for simple result visualization.

* All the functionality is wrapped into a **simple-to-use OpenPose Wrapper class**.

The pose estimation work is based on the C++ code from [the ECCV 2016 demo](https://github.com/CMU-Perceptual-Computing-Lab/caffe_rtpose), "Realtime Multiperson Pose Estimation", by [Zhe Cao](http://www.andrew.cmu.edu/user/zhecao), [Tomas Simon](http://www.cs.cmu.edu/~tsimon/), [Shih-En Wei](https://scholar.google.com/citations?user=sFQD3k4AAAAJ&hl=en) and [Yaser Sheikh](http://www.cs.cmu.edu/~yaser/). The [full project repo](https://github.com/ZheC/Multi-Person-Pose-Estimation) includes Matlab and Python versions, as well as training code.



## Results

### Body Estimation
<p align="center">
    <img src="doc/media/dance.gif" width="480">
</p>

## Coming Soon (But Already Working!)

### Body + Hands + Face Estimation
<p align="center">
    <img src="doc/media/pose_face_hands.gif" width="480">
</p>

### Body + Face Estimation
<p align="center">
    <img src="doc/media/pose_face.gif" width="480">
</p>

### Body + Hands
<p align="center">
    <img src="doc/media/pose_hands.gif" width="480">
</p>


## Contents
1. [Installation, Reinstallation and Uninstallation](#installation-reinstallation-and-uninstallation)
2. [Custom Caffe](#custom-caffe)
3. [Quick Start](#quick-start)
    1. [Demo](#demo)
    2. [OpenPose Wrapper](#openpose-wrapper)
    3. [OpenPose Library](#openpose-library)
4. [Output](#output)
    1. [Output Format](#output-format)
    2. [Reading Saved Results](#reading-saved-results)
5. [OpenPose Benchmark](#openpose-benchmark)
6. [Send Us Your Feedback!](#send-us-your-feedback)
7. [Citation](#citation)
8. [Other Contributors](#other-contributors)



## Installation, Reinstallation and Uninstallation
You can find the installation, reinstallation and uninstallation steps on: [doc/installation.md](doc/installation.md).



## Custom Caffe
We only modified some Caffe compilation flags and minor details, so you can use your own Caffe distribution. These are the files we added and modified:

1. Added files: `install_caffe.sh`; as well as `Makefile.config.Ubuntu14.example`, `Makefile.config.Ubuntu16.example`, `Makefile.config.Ubuntu14_cuda_7.example` and `Makefile.config.Ubuntu16_cuda_7.example` (extracted from `Makefile.config.example`). Basically, you must enable cuDNN.
2. Edited file: `Makefile`. Search for "# OpenPose: " to find the edited code. We basically added the C++11 flag to avoid issues on some older computers.
3. Optional - deleted Caffe file: `Makefile.config.example`.
4. In order to link it to OpenPose:
    1. Run `make all && make distribute` in your Caffe version.
    2. Open the OpenPose Makefile config file: `./Makefile.config.UbuntuX.example` (where X depends on your OS and CUDA version).
    3. Modify the Caffe folder directory variable (`CAFFE_DIR`) to your custom Caffe `distribute` folder location in the previous OpenPose Makefile config file.



## Quick Start
Most users should not need to dive deep into the library; they can likely just use the [Demo](#demo) or the simple [OpenPose Wrapper](#openpose-wrapper), and can most probably skip the library details in [OpenPose Library](#openpose-library).



#### Demo
This is your case if you just want to process a folder of images, a video or a webcam stream, and display or save the pose results.

Forget about the OpenPose library details and just read the [doc/demo_overview.md](doc/demo_overview.md) 1-page section.

#### OpenPose Wrapper
This is your case if you want to read a specific image source format, add a specific post-processing function and/or implement your own display/saving.

(Almost) forget about the library; just take a look at the `Wrapper` tutorial in [examples/tutorial_wrapper/](examples/tutorial_wrapper/).

Note: you should not need to modify the OpenPose source code or examples, so you can directly upgrade the OpenPose library at any time in the future without changing your code. You can create your custom code in [examples/user_code/](examples/user_code/) and compile it with `make all` in the OpenPose folder.

#### OpenPose Library
This is your case if you want to change internal functions and/or extend the library's functionality. First, take a look at the [Demo](#demo) and [OpenPose Wrapper](#openpose-wrapper). Second, read the following 2 subsections: OpenPose Overview and Extending Functionality.

1. OpenPose Overview: Learn the basics about the library source code in [doc/library_overview.md](doc/library_overview.md).

2. Extending Functionality: Learn how to extend the library in [doc/library_extend_functionality.md](doc/library_extend_functionality.md).

3. Adding An Extra Module: Learn how to add an extra module in [doc/library_add_new_module.md](doc/library_add_new_module.md).

#### Doxygen Documentation Autogeneration
You can generate the documentation by running the following commands. The documentation will be generated in `doc/doxygen/html/index.html`. You can simply open it with a double click (your default browser should automatically display it).
```
cd doc/
doxygen doc_autogeneration.doxygen
```



## Output
#### Output Format
There are 2 alternatives to save the **(x,y,score) body part locations**. The `write_pose` flag uses the OpenCV cv::FileStorage default formats (JSON, XML and YML). However, the JSON format is only available after OpenCV 3.0. Hence, `write_pose_json` saves the people pose data using a custom JSON writer. For the latter, each JSON file has a `people` array of objects, where each object has an array `body_parts` containing the body part locations and detection confidence formatted as `x1,y1,c1,x2,y2,c2,...`. The coordinates `x` and `y` can be normalized to the range [0,1], [-1,1], [0, source size], [0, output size], etc., depending on the flag `scale_mode`. In addition, `c` is the confidence in the range [0,1].

```
{
    "version":0.1,
    "people":[
        {"body_parts":[1114.15,160.396,0.846207,...]},
        {"body_parts":[...]}
    ]
}
```
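
As a sketch of how this format can be read back (the helper name and file contents here are ours, not part of the library):

```python
import json

def read_pose_json(json_text):
    """Parse OpenPose's custom JSON format into per-person lists of (x, y, c) triples."""
    people = []
    for person in json.loads(json_text)["people"]:
        flat = person["body_parts"]  # flattened as x1, y1, c1, x2, y2, c2, ...
        people.append([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    return people

# One person with a single keypoint, taken from the example above:
example = '{"version":0.1,"people":[{"body_parts":[1114.15,160.396,0.846207]}]}'
print(read_pose_json(example))  # one person, one (x, y, c) triple
```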

The body part order of the COCO (18 body parts) and MPI (15 body parts) keypoints is described in `POSE_BODY_PART_MAPPING` in [include/openpose/pose/poseParameters.hpp](include/openpose/pose/poseParameters.hpp). E.g., for COCO:
```
    POSE_COCO_BODY_PARTS {
        {0,  "Nose"},
        {1,  "Neck"},
        {2,  "RShoulder"},
        {3,  "RElbow"},
        {4,  "RWrist"},
        {5,  "LShoulder"},
        {6,  "LElbow"},
        {7,  "LWrist"},
        {8,  "RHip"},
        {9,  "RKnee"},
        {10, "RAnkle"},
        {11, "LHip"},
        {12, "LKnee"},
        {13, "LAnkle"},
        {14, "REye"},
        {15, "LEye"},
        {16, "REar"},
        {17, "LEar"},
        {18, "Bkg"},
    }
```
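
This mapping can be combined with the flat `body_parts` array from the JSON output to label each keypoint by name. A minimal sketch (the list below is hand-copied from the table above, not imported from the header):

```python
# Hand-copied from POSE_COCO_BODY_PARTS above; index 18 ("Bkg") is the
# background channel, so the detected keypoints use indices 0-17.
COCO_PART_NAMES = [
    "Nose", "Neck", "RShoulder", "RElbow", "RWrist", "LShoulder", "LElbow",
    "LWrist", "RHip", "RKnee", "RAnkle", "LHip", "LKnee", "LAnkle",
    "REye", "LEye", "REar", "LEar",
]

def name_keypoints(body_parts):
    """Pair each flat (x, y, c) triple with its COCO body part name."""
    return {COCO_PART_NAMES[i // 3]: tuple(body_parts[i:i + 3])
            for i in range(0, len(body_parts), 3)}

print(name_keypoints([1114.15, 160.396, 0.846207]))
```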

For the **heat maps storing format**, instead of saving each of the 57 heat maps (18 body parts + background + 2 x 19 PAFs) individually, the library concatenates them into one huge (width x #heat maps) x (height) matrix, i.e., it concatenates the heat maps by columns. E.g., columns [0, individual heat map width] contain the first heat map, columns [individual heat map width + 1, 2 * individual heat map width] contain the second heat map, etc. Note that some image viewers cannot display the resulting images due to their size; however, Chrome and Firefox are able to open them properly.
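
A minimal NumPy sketch of splitting such a column-concatenated matrix back into individual heat maps (the shapes here are illustrative; in practice the image would first be loaded, e.g. with OpenCV):

```python
import numpy as np

def split_heat_maps(concatenated, num_maps):
    """Split a column-concatenated (height, width * num_maps) matrix into individual maps."""
    height, total_width = concatenated.shape
    assert total_width % num_maps == 0, "width must be a multiple of the heat map count"
    map_width = total_width // num_maps
    return [concatenated[:, i * map_width:(i + 1) * map_width] for i in range(num_maps)]

# COCO: 18 body parts + 1 background + 2 * 19 PAF channels = 57 maps
maps = split_heat_maps(np.zeros((4, 57 * 8)), 57)
print(len(maps), maps[0].shape)  # 57 maps, each of shape (4, 8)
```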

The saving order is body parts + background + PAFs. Any of them can be disabled with the program flags. If the background is disabled, then the final image will be body parts + PAFs. The body parts and background follow the order of `POSE_COCO_BODY_PARTS` or `POSE_MPI_BODY_PARTS`, while the PAFs follow the order specified in `POSE_BODY_PART_PAIRS` in `poseParameters.hpp`. E.g., for COCO:
```
    POSE_COCO_PAIRS    {1,2,   1,5,   2,3,   3,4,   5,6,   6,7,   1,8,   8,9,   9,10, 1,11,  11,12, 12,13,  1,0,   0,14, 14,16,  0,15, 15,17,   2,16,  5,17};
```

Each index is the key value corresponding to a body part in `POSE_COCO_BODY_PARTS`, e.g., 0 for "Nose", 1 for "Neck", 2 for "RShoulder", etc.
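
Decoding the pair list against the body part mapping yields the limb connections. A sketch (both lists are hand-copied from the snippets above):

```python
# Hand-copied from POSE_COCO_PAIRS and POSE_COCO_BODY_PARTS above
POSE_COCO_PAIRS = [1,2, 1,5, 2,3, 3,4, 5,6, 6,7, 1,8, 8,9, 9,10, 1,11,
                   11,12, 12,13, 1,0, 0,14, 14,16, 0,15, 15,17, 2,16, 5,17]
PART_NAMES = ["Nose", "Neck", "RShoulder", "RElbow", "RWrist", "LShoulder",
              "LElbow", "LWrist", "RHip", "RKnee", "RAnkle", "LHip", "LKnee",
              "LAnkle", "REye", "LEye", "REar", "LEar"]

# Each consecutive index pair in the flat list is one limb connection:
limbs = [(PART_NAMES[POSE_COCO_PAIRS[i]], PART_NAMES[POSE_COCO_PAIRS[i + 1]])
         for i in range(0, len(POSE_COCO_PAIRS), 2)]
print(limbs[0], len(limbs))  # ('Neck', 'RShoulder') and 19 limbs in total
```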

#### Reading Saved Results
We use standard formats (JSON, XML, PNG, JPG, ...) to save our results, so plenty of third-party frameworks can read them later. You can also directly use our functions in [include/openpose/filestream.hpp](include/openpose/filestream.hpp): in particular, `loadData` (for JSON, XML and YML files) and `loadImage` (for image formats such as PNG or JPG) load the data into cv::Mat format.



## OpenPose Benchmark
An initial library running-time benchmark is available on [OpenPose Benchmark](https://docs.google.com/spreadsheets/d/1-DynFGvoScvfWDA1P4jDInCkbD4lg0IKOYbXgEq0sK0/edit#gid=0). You can comment in that document with your graphics card model and its running time, and we will add your results to the benchmark!



## Send Us Your Feedback!
Our library is open source for research purposes, and we want to continuously improve it! So please let us know if...

1. ... you find any bug (in functionality or speed).

2. ... you added some functionality to some class or wrote a new `Worker<T>` subclass which we might potentially incorporate.

3. ... you know how to speed up or improve any part of the library.

4. ... you have a request about possible functionality.

5. ... etc.

Just comment on GitHub or make a pull request and we will answer as soon as possible! Send us an email if you use the library to make a cool demo or YouTube video!



## Citation
Please cite these papers in your publications if OpenPose helps your research:

    @inproceedings{cao2017realtime,
      author = {Zhe Cao and Tomas Simon and Shih-En Wei and Yaser Sheikh},
      booktitle = {CVPR},
      title = {Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},
      year = {2017}
      }

    @inproceedings{simon2017hand,
      author = {Tomas Simon and Hanbyul Joo and Iain Matthews and Yaser Sheikh},
      booktitle = {CVPR},
      title = {Hand Keypoint Detection in Single Images using Multiview Bootstrapping},
      year = {2017}
      }

    @inproceedings{wei2016cpm,
      author = {Shih-En Wei and Varun Ramakrishna and Takeo Kanade and Yaser Sheikh},
      booktitle = {CVPR},
      title = {Convolutional pose machines},
      year = {2016}
      }



## Other Contributors
We would like to thank the following people who also contributed to OpenPose:

1. [Helen Medina](https://github.com/helen-medina): For porting OpenPose to Windows (Visual Studio), making it work there, and creating the Windows branch.