At this point, this model can be used in PP-Human.
### Custom Action Output
In action recognition based on classification with human ID, the task is defined as an image-level classification task for the corresponding person, and the predicted class is regarded as the action type of the current stage. Therefore, after training and deploying the custom model, you also need to convert the classification model results into the final action recognition results, and modify the visualized output accordingly.
#### Convert to Action Recognition Result
Please modify the [postprocessing function](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pphuman/action_infer.py#L509).
The core code is:
```python
# Get the class with the highest score from the classification model output
cls_id_res = 1
cls_score_res = -1.0
for cls_id in range(len(cls_result[idx])):
    score = cls_result[idx][cls_id]
    if score > cls_score_res:
        cls_id_res = cls_id
        cls_score_res = score

# Currently, class 0 is positive and class 1 is negative.
if cls_id_res == 1 or (cls_id_res == 0 and
                       cls_score_res < self.threshold):
    # If the classification result is not the target action or its confidence
    # does not reach the threshold, determine the action type of the current
    # frame according to the historical results.
    ...
```
#### Modify Visual Output
At present, ID-based action recognition is displayed based on the action recognition results and the predefined category names. For details, please refer to [here](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pipeline.py#L1024-L1043). If the custom action needs to be displayed under another name, please modify this part accordingly so that it outputs the corresponding result.
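As a minimal, hypothetical sketch of such a change (the mapping and helper below are illustrations, not the actual `pipeline.py` code), the display text for each tracked ID can be built from a custom class-name table:

```python
# Hypothetical mapping from the model's class id to the text drawn on screen.
# Replace the names with those of your custom actions.
ACTION_DISPLAY_NAMES = {0: "my_custom_action", 1: "normal"}

def build_action_text(cls_id, score):
    # Fall back to the raw class id if the name is not registered.
    name = ACTION_DISPLAY_NAMES.get(int(cls_id), str(cls_id))
    return "{} {:.2f}".format(name, score)
```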
At this point, this model can be used in PP-Human.
### Custom Action Output
In action recognition based on detection with human ID, the task is defined as detecting a target object in the image of the corresponding person. When the target object is detected, it indicates the behavior of that person during a certain period, and the corresponding class is regarded as the action of the current period. Therefore, after training and deploying the custom model, you also need to convert the detection model results into the final action recognition results, and modify the visualized output accordingly.
#### Convert to Action Recognition Result
Please modify the [postprocessing function](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pphuman/action_infer.py#L338).
The core code is:
```python
# Parse the detection model output and filter out valid detection boxes
# whose confidence is higher than the threshold.
# Currently, class 0 is positive and class 1 is negative.
if valid_boxes.shape[0] >= 1:
    # When there is a valid detection box, update the class and score of
    # the action recognition result accordingly.
    action_ret['class'] = valid_boxes[0, 0]
    action_ret['score'] = valid_boxes[0, 1]
    # Due to the continuity of the action, a valid detection result can be
    # reused for a certain number of frames.
    self.result_history[
        tracker_id] = [0, self.frame_life, valid_boxes[0, 1]]
else:
    # If there is no valid detection box, the result of the current frame is
    # determined according to the historical detection results.
    ...
```
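The filtering mentioned in the first comment above can be sketched roughly as follows; this assumes the usual PaddleDetection detector output layout of `[class_id, score, x1, y1, x2, y2]` per box, and the helper name is only illustrative:

```python
import numpy as np

def filter_valid_boxes(boxes, threshold, positive_class=0):
    # Keep only boxes of the positive (target) class whose confidence
    # exceeds the threshold; boxes is an (N, 6) float array.
    keep = (boxes[:, 0] == positive_class) & (boxes[:, 1] > threshold)
    return boxes[keep]
```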
#### Modify Visual Output
At present, ID-based action recognition is displayed based on the action recognition results and the predefined category names. For details, please refer to [here](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pipeline.py#L1024-L1043). If the custom action needs to be displayed under another name, please modify this part accordingly so that it outputs the corresponding result.
```
vertex_nums: 17 # Corresponding to the V dimension; set it according to the number of keypoints
person_nums: 1 # Corresponding to the M dimension
```
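These entries correspond to the V (keypoint) and M (person) dimensions of the skeleton input, whose full layout is [N, C, T, V, M]. A quick shape sanity check can be done with a dummy tensor; the channel count C=2 (x/y coordinates) and window size T=50 used below are assumptions based on the default PP-Human skeleton configuration:

```python
import numpy as np

# N=batch, C=coordinate channels, T=frames in the window,
# V=keypoints per person, M=persons per sample
N, C, T, V, M = 1, 2, 50, 17, 1
dummy_input = np.random.rand(N, C, T, V, M).astype('float32')
print(dummy_input.shape)  # (1, 2, 50, 17, 1)
```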
### Custom Action Output
In skeleton-based action recognition, the classification result of the model represents the behavior of the person during a certain period of time, and the corresponding class is regarded as the action of the current period. Therefore, after training and deploying the custom model, the model output is directly used as the final result; only the visualized output needs to be modified.
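Since the model output is used directly, the conversion essentially reduces to picking the highest-scoring class. A minimal sketch, assuming `scores` is the per-class probability vector returned for one person's skeleton window and `class_names` is your custom label list:

```python
import numpy as np

def skeleton_action_result(scores, class_names):
    # The class with the highest probability is taken as the action of this period.
    cls_id = int(np.argmax(scores))
    return class_names[cls_id], float(scores[cls_id])
```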
#### Modify Visual Output
At present, ID-based action recognition is displayed based on the action recognition results and the predefined category names. For details, please refer to [here](https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/pipeline/pipeline.py#L1024-L1043). If the custom action needs to be displayed under another name, please modify this part accordingly so that it outputs the corresponding result.