Commit 2ace5663 authored by Liangliang Zhang

Perception: cleaned up folder and added disjoint set.

Parent bb9df884
@@ -117,6 +117,7 @@ function generate_build_targets() {
`bazel query //modules/localization/msf/local_tool/local_visualization/...`
//modules/map/...
//modules/monitor/...
//modules/perception/common/...
//modules/perception/proto/...
//modules/planning/...
//modules/prediction/...
@@ -214,6 +215,7 @@ function cibuild() {
//modules/localization/rtk/...
`bazel query //modules/localization/msf/... except //modules/localization/msf/local_tool/...`
`bazel query //modules/localization/msf/local_tool/local_visualization/...`
//modules/perception/common/...
//modules/perception/proto/...
//modules/planning/...
//modules/prediction/...
load("//tools:cpplint.bzl", "cpplint")
package(default_visibility = ["//visibility:public"])
cc_library(
name = "perception_lib",
srcs = ["perception.cc"],
hdrs = [
"perception.h",
],
deps = [
"//modules/common",
"//modules/common:apollo_app",
"//modules/perception/common",
"//modules/perception/lib/base",
"//modules/perception/obstacle/onboard:camera_subnode",
"//modules/perception/obstacle/onboard:cipv_subnode",
"//modules/perception/obstacle/onboard:fusion_subnode",
"//modules/perception/obstacle/onboard:lane_post_processing_subnode",
"//modules/perception/obstacle/onboard:lidar_subnode",
"//modules/perception/obstacle/onboard:perception_obstacle_shared_data",
"//modules/perception/obstacle/onboard:radar_subnode",
"//modules/perception/obstacle/onboard:ultrasonic_obstacle_subnode",
"//modules/perception/obstacle/onboard:visualization_subnode",
"//modules/perception/onboard",
"//modules/perception/proto:cnn_segmentation_config_lib_proto",
"//modules/perception/proto:perception_proto",
"//modules/perception/traffic_light/onboard",
"@com_google_protobuf//:protobuf",
"@ros//:ros_common",
],
)
cc_binary(
name = "perception",
srcs = ["main.cc"],
data = [
"//modules/perception/conf:perception_adapter_manager_config",
],
deps = [
":perception_lib",
"//external:gflags",
],
)
filegroup(
name = "perception_data",
srcs = glob([
"data/**/*",
]),
)
filegroup(
name = "perception_model",
srcs = glob([
"model/**/*",
]),
)
cpplint()
# Perception 2.5
## Introduction
In Apollo 2.5, the perception module incorporates the capability of using a front camera and a front radar to recognize obstacles and fuse their individual tracks to obtain a final track list. The obstacle submodule detects, classifies and tracks obstacles. The submodule also predicts obstacle motion and position information (e.g., heading and velocity). For lane lines, we construct lane instances by postprocessing lane parsing pixels and calculate the lane's relative location to the ego-vehicle (L0, L1, R0, R1, etc.).
See Also:
[Perception algorithms in Apollo 2.5](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/perception_apollo_2.5.md).
[Guideline of sensor installation for Apollo 2.5](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/Guideline_sensor_Installation_apollo_2.5.md).
## Input
The perception module inputs are:
- Radar data (ROS topic _/apollo/sensor/conti_radar_)
- Image data (ROS topic _/apollo/sensor/camera/obstacle/front_6mm_)
- Extrinsic parameters of radar sensor calibration (from YAML files)
- Extrinsic and intrinsic parameters of front camera calibration (from YAML files)
- Velocity and angular velocity of host vehicle (ROS topic _/apollo/localization/pose_)
## Output
The perception module outputs are:
* The 3D obstacle tracks with the heading, velocity and classification information (ROS topic _/apollo/perception/obstacles_)
* The lane marker information with fitted curve parameters, spatial information (L0, R0, etc.) as well as semantic information (lane type) (ROS topic _/apollo/perception/obstacles_)
- [ ] 1. Set up the general settings in the configuration file `modules/perception/conf/perception_lowcost.conf`.
- [ ] 2. Run the command `./scripts/bootstrap.sh` to launch the web GUI.
- [ ] 3. Select the vehicle model in the web GUI.
- [ ] 4. Launch the perception module using the command `./scripts/perception_lowcost.sh start` or by enabling the perception button on the *Module Controller* page of the web GUI. The command for stopping perception is `./scripts/perception_lowcost.sh stop`. Note: do not mix the two methods; stop the module with the same mechanism you used to start it.
- [ ] 5. Download the demo data from the Apollo [Open Data Platform](http://data.apollo.auto).
## Function enable/disable
The perception framework is a directed acyclic graph (DAG). There are three components in the DAG configuration: sub-nodes, edges and shared data. Each function is implemented as a sub-node in the DAG. Sub-nodes that share data have an edge from producer to consumer.
A typical DAG configuration for a perception module is shown in the example below. The example DAG configuration features the following:
- Default obstacle perception that consists of "CameraProcessSubnode", "RadarProcessSubnode" and "FusionSubnode", as shown in *subnode_config*.
- The "CameraProcessSubnode" and "RadarProcessSubnode" that receive sensor data and output obstacle data independently, i.e., the "CameraObjectData" and "RadarObjectData" in *data_config*.
- The "FusionSubnode" that subscribes to the obstacle data and publishes the final results.
- The "LanePostProcessSubnode" that processes the lane parsing output from the camera detection module and generates lane instances and attributes.
- The edge and data configuration that define the links.
- Each function can be disabled by removing the corresponding sub-node, edge, and shared data configuration. However, you must ensure that all the input and output configurations are correct.
``` protobuf
# Nvidia Driver and CUDA are required for these 2 subnodes
subnode_config {
# Camera node
subnodes {
id: 3
name: "CameraProcessSubnode"
reserve: "device_id:camera;"
type: SUBNODE_IN
}
subnodes {
id: 4
name: "MotionService"
reserve: "device_id:motion_service;"
type: SUBNODE_IN
}
subnodes {
id: 5
name: "LanePostProcessingSubnode"
reserve: "device_id:camera;motion_event_id:1021"
type: SUBNODE_NORMAL
}
subnodes {
id: 2
name: "RadarProcessSubnode"
reserve: "device_id:radar_front;"
type: SUBNODE_IN
}
subnodes {
id: 31
name: "FusionSubnode"
reserve: "pub_driven_event_id:1009;lane_event_id:1010;camera_event_id:1009;radar_event_id:1013;"
type: SUBNODE_OUT
}
}
###################################################################
# Define all edges which link nodes.
edge_config {
# CameraProcessSubnode -> LanePostProcessingSubnode
edges {
id: 106
from_node: 3
to_node: 5
events {
id: 1004
name: "lane_postprocessing"
}
}
# CameraProcessSubnode -> FusionSubnode
edges {
id: 109
from_node: 3
to_node: 31
events {
id: 1009
name: "camera_fusion"
}
}
# LanePostProcessingSubnode -> FusionSubnode
edges {
id: 110
from_node: 5
to_node: 31
events {
id: 1010
name: "lane_fusion"
}
}
# RadarSubnode -> FusionSubnode
edges {
id: 113
from_node: 2
to_node: 31
events {
id: 1013
name: "radar_fusion"
}
}
}
# Shared Data
data_config {
datas {
id: 5
name: "CameraObjectData"
}
datas {
id: 7
name: "CameraSharedData"
}
datas {
id: 8
name: "LaneSharedData"
}
datas {
id: 9
name: "FusionSharedData"
}
datas {
id: 10
name: "RadarObjectData"
}
}
```
**Note**: Nvidia GPU and CUDA are required to run the perception module with Caffe. Apollo provides the CUDA and Caffe libraries in the release docker image. However, the Nvidia GPU driver is not installed in the dev docker image.
To run the perception module with CUDA acceleration, install the exact same version of the Nvidia driver in the docker image that is installed on your host machine, and then build Apollo with the GPU option (i.e., using `./apollo.sh build_gpu` or `./apollo.sh build_opt_gpu`).
See [How to Run Perception Module on Your Local Computer](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_run_perception_module_on_your_local_computer.md).
**Note**
This module contains a redistribution in binary form of a modified version of [caffe](https://github.com/BVLC/caffe).
A copy of Caffe's original copyright statement is included below:
COPYRIGHT
All contributions by the University of California:
Copyright (c) 2014-2017 The Regents of the University of California (Regents)
All rights reserved.
All other contributions:
Copyright (c) 2014-2017, the respective contributors
All rights reserved.
Caffe uses a shared copyright model: each contributor holds copyright over
their contributions to Caffe. The project versioning records all such
contribution and copyright details. If a contributor wants to further mark
their specific copyright on a particular contribution, they should indicate
their copyright solely in the commit message of the change when it is
committed.
LICENSE
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
CONTRIBUTION AGREEMENT
By contributing to the BVLC/caffe repository through pull-request, comment,
or otherwise, the contributor releases their content to the
license and copyright terms herein.
# Perception 3.0
## Introduction
Apollo 3.0 has the following new features:
* **CIPV (Closest In-Path Vehicle) detection and tailgating**: The vehicle in front of the ego-car is detected and its trajectory is estimated for more efficient tailgating and lane keeping when lane detection is unreliable.
* **Asynchronous sensor fusion**: Unlike the previous version, Perception in Apollo 3.0 is capable of consolidating all the information and data points by asynchronously fusing LiDAR, Radar and Camera data. This allows for more comprehensive data capture and reflects more practical sensor environments.
* **Online pose estimation**: This new feature estimates the pose of an ego-vehicle for every single frame. This feature helps to drive through bumps or slopes on the road with more accurate 3D scene understanding.
* **Ultrasonic sensors**: Perception in Apollo 3.0 now works with ultrasonic sensors. The output can be used for Automated Emergency Brake (AEB) and vertical/perpendicular parking.
* **Whole lane line**: Unlike previous lane line segments, this whole lane line feature will provide more accurate and long range detection of lane lines.
* **Visual localization**: Cameras are currently being tested to aid and enhance localization.
* **16 beam LiDAR support**
The perception module incorporates the capability of using a front camera and a front radar to recognize obstacles and fuse their individual tracks to obtain a final track list. The obstacle sub-module detects, classifies and tracks obstacles. This sub-module also predicts obstacle motion and position information (e.g., heading and velocity). For lane lines, we construct lane instances by postprocessing lane parsing pixels and calculate the lane's relative location to the ego-vehicle (L0, L1, R0, R1, etc.).
See Also:
[Perception algorithms in Apollo 3.0](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/perception_apollo_3.0.md).
[Guideline of sensor installation for Apollo 3.0](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/Guideline_sensor_Installation_apollo_3.0.md).
## Input
The perception module inputs are:
- Radar data (ROS topic _/apollo/sensor/conti_radar_)
- Image data (ROS topic _/apollo/sensor/camera/obstacle/front_6mm_)
- Extrinsic parameters of radar sensor calibration (from YAML files)
- Extrinsic and intrinsic parameters of front camera calibration (from YAML files)
- Velocity and angular velocity of host vehicle (ROS topic _/apollo/localization/pose_)
## Output
The perception module outputs are:
* The 3D obstacle tracks with the heading, velocity and classification information (ROS topic _/apollo/perception/obstacles_)
* The lane marker information with fitted curve parameters, spatial information (L0, R0, etc.) as well as semantic information (lane type) (ROS topic _/apollo/perception/obstacles_)
## Setup Instructions
1. Set up the general settings in the configuration file `modules/perception/conf/perception_lowcost.conf`.
2. Run the command `./scripts/bootstrap.sh` to launch the web GUI.
3. Select the vehicle model in the web GUI.
4. Launch the perception module using the command `./scripts/perception_lowcost_vis.sh start` or by enabling the perception button on the *Module Controller* page of the web GUI. The command for stopping perception is `./scripts/perception_lowcost_vis.sh stop`. Note: do not mix the two methods; stop the module with the same mechanism you used to start it.
5. Download the demo data from the Apollo [Open Data Platform](http://data.apollo.auto).
```
Note:
If you are redirected to the Baidu Cloud login page, complete the login and repeat step 5 (click on the Open Data Platform link).
```
## Function enable/disable
The perception framework is a directed acyclic graph (DAG). There are three components in the DAG configuration: sub-nodes, edges and shared data. Each function is implemented as a sub-node in the DAG. Sub-nodes that share data have an edge from producer to consumer.
A typical DAG configuration for a perception module is shown in the example below. The example DAG configuration features the following:
- Default obstacle perception that consists of "CameraProcessSubnode", "RadarProcessSubnode" and "FusionSubnode", as shown in *subnode_config*.
- The "CameraProcessSubnode" and "RadarProcessSubnode" that receive sensor data and output obstacle data independently, i.e., the "CameraObjectData" and "RadarObjectData" in *data_config*.
- The "FusionSubnode" that subscribes to the obstacle data and publishes the final results.
- The "LanePostProcessSubnode" that processes the lane parsing output from the camera detection module and generates lane instances and attributes.
- The edge and data configuration that define the links.
- Each function can be disabled by removing the corresponding sub-node, edge, and shared data configuration. However, you must ensure that all the input and output configurations are correct.
```protobuf
# Nvidia Driver and CUDA are required for these 2 subnodes
subnode_config {
# Camera node
subnodes {
id: 3
name: "CameraProcessSubnode"
reserve: "device_id:camera;"
type: SUBNODE_IN
}
subnodes {
id: 4
name: "MotionService"
reserve: "device_id:motion_service;"
type: SUBNODE_IN
}
subnodes {
id: 5
name: "LanePostProcessingSubnode"
reserve: "device_id:camera;motion_event_id:1021"
type: SUBNODE_NORMAL
}
subnodes {
id: 2
name: "RadarProcessSubnode"
reserve: "device_id:radar_front;"
type: SUBNODE_IN
}
subnodes {
id: 31
name: "FusionSubnode"
reserve: "pub_driven_event_id:1009;lane_event_id:1010;camera_event_id:1009;radar_event_id:1013;"
type: SUBNODE_OUT
}
}
###################################################################
# Define all edges which link nodes.
edge_config {
# CameraProcessSubnode -> LanePostProcessingSubnode
edges {
id: 106
from_node: 3
to_node: 5
events {
id: 1004
name: "lane_postprocessing"
}
}
# CameraProcessSubnode -> FusionSubnode
edges {
id: 109
from_node: 3
to_node: 31
events {
id: 1009
name: "camera_fusion"
}
}
# LanePostProcessingSubnode -> FusionSubnode
edges {
id: 110
from_node: 5
to_node: 31
events {
id: 1010
name: "lane_fusion"
}
}
# RadarSubnode -> FusionSubnode
edges {
id: 113
from_node: 2
to_node: 31
events {
id: 1013
name: "radar_fusion"
}
}
}
# Shared Data
data_config {
datas {
id: 5
name: "CameraObjectData"
}
datas {
id: 7
name: "CameraSharedData"
}
datas {
id: 8
name: "LaneSharedData"
}
datas {
id: 9
name: "FusionSharedData"
}
datas {
id: 10
name: "RadarObjectData"
}
}
```
### Note
1. Nvidia GPU and CUDA are required to run the perception module with Caffe. Apollo provides the CUDA and Caffe libraries in the release docker image. However, the Nvidia GPU driver is not installed in the dev docker image.
2. To run the perception module with CUDA acceleration, install the exact same version of the Nvidia driver in the docker image that is installed on your host machine, and then build Apollo with the GPU option (i.e., using `./apollo.sh build_gpu` or `./apollo.sh build_opt_gpu`).
See [How to Run Perception Module on Your Local Computer](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_run_perception_module_on_your_local_computer.md).
3. This module contains a redistribution in binary form of a modified version of [caffe](https://github.com/BVLC/caffe).
A copy of Caffe's original copyright statement is included below:
```
COPYRIGHT
All contributions by the University of California:
Copyright (c) 2014-2017 The Regents of the University of California (Regents)
All rights reserved.
All other contributions:
Copyright (c) 2014-2017, the respective contributors
All rights reserved.
Caffe uses a shared copyright model: each contributor holds copyright over their contributions to Caffe. The project versioning records all such contribution and copyright details. If a contributor wants to further mark their specific copyright on a particular contribution, they should indicate their copyright solely in the commit message of the change when it is committed.
LICENSE
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
CONTRIBUTION AGREEMENT
By contributing to the BVLC/caffe repository through pull-request, comment, or otherwise, the contributor releases their content to the license and copyright terms herein.
```
# Perception 3.0
## Introduction
Apollo 3.0 has the following new features:
- CIPV (Closest In-Path Vehicle) detection
- Tailgating
- Whole lane line
- Online pose estimation
- Asynchronous sensor fusion
- Visual localization
- Ultrasonic sensors
- 16-beam LiDAR support
The perception module incorporates the capability of using a front camera and a front radar to recognize obstacles and fuse their individual tracks to obtain a final track list. The obstacle sub-module detects, classifies and tracks obstacles. This sub-module also predicts obstacle motion and position information (e.g., heading and velocity). For lane lines, we construct lane instances by postprocessing lane parsing pixels and calculate the lane's relative location to the ego-vehicle (L0, L1, R0, R1, etc.).
See Also:
[Perception algorithms in Apollo 3.0](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/perception_apollo_3.0.md).
[Guideline of sensor installation for Apollo 3.0](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/Guideline_sensor_Installation_apollo_3.0.md).
## Input
The perception module inputs are:
- Radar data (ROS topic /apollo/sensor/conti_radar)
- Image data (ROS topic /apollo/sensor/camera/obstacle/front_6mm)
- Extrinsic parameters of radar sensor calibration (from YAML files)
- Extrinsic and intrinsic parameters of front camera calibration (from YAML files)
- Velocity and angular velocity of host vehicle (ROS topic /apollo/localization/pose)
## Output
The perception module outputs are:
* 3D obstacle tracks with heading, velocity and classification information (ROS topic /apollo/perception/obstacles)
* Lane marker information with fitted curve parameters, spatial information (L0, R0, etc.) as well as semantic information (lane type) (ROS topic /apollo/perception/obstacles)
- [ ] 1. Set up the general settings in the configuration file `modules/perception/conf/perception_lowcost.conf`.
- [ ] 2. Run the command `./scripts/bootstrap.sh` to launch the web GUI.
- [ ] 3. Select the vehicle model in the web GUI.
- [ ] 4. Launch the perception module using the command `./scripts/perception_lowcost_vis.sh start` or by enabling the perception button on the *Module Controller* page of the web GUI. The command for stopping perception is `./scripts/perception_lowcost_vis.sh stop`. Note: do not mix the two methods; stop the module with the same mechanism you used to start it.
- [ ] 5. Download the demo data from the Apollo [Open Data Platform](http://data.apollo.auto).
## Function enable/disable
The perception framework is a directed acyclic graph (DAG). There are three components in the DAG configuration: sub-nodes, edges and shared data. Each function is implemented as a sub-node in the DAG. Sub-nodes that share data have an edge from producer to consumer.
A typical DAG configuration for a perception module is shown in the example below. The example DAG configuration features the following:
- Default obstacle perception that consists of "CameraProcessSubnode", "RadarProcessSubnode" and "FusionSubnode", as shown in *subnode_config*.
- The "CameraProcessSubnode" and "RadarProcessSubnode" that receive sensor data and output obstacle data independently, i.e., the "CameraObjectData" and "RadarObjectData" in *data_config*.
- The "FusionSubnode" that subscribes to the obstacle data and publishes the final results.
- The "LanePostProcessSubnode" that processes the lane parsing output from the camera detection module and generates lane instances and attributes.
- The edge and data configuration that define the links.
- Each function can be disabled by removing the corresponding sub-node, edge, and shared data configuration. However, you must ensure that all the input and output configurations are correct.
``` protobuf
# Nvidia Driver and CUDA are required for these 2 subnodes
subnode_config {
# Camera node
subnodes {
id: 3
name: "CameraProcessSubnode"
reserve: "device_id:camera;"
type: SUBNODE_IN
}
subnodes {
id: 4
name: "MotionService"
reserve: "device_id:motion_service;"
type: SUBNODE_IN
}
subnodes {
id: 5
name: "LanePostProcessingSubnode"
reserve: "device_id:camera;motion_event_id:1021"
type: SUBNODE_NORMAL
}
subnodes {
id: 2
name: "RadarProcessSubnode"
reserve: "device_id:radar_front;"
type: SUBNODE_IN
}
subnodes {
id: 31
name: "FusionSubnode"
reserve: "pub_driven_event_id:1009;lane_event_id:1010;camera_event_id:1009;radar_event_id:1013;"
type: SUBNODE_OUT
}
}
###################################################################
# Define all edges which link nodes.
edge_config {
# CameraProcessSubnode -> LanePostProcessingSubnode
edges {
id: 106
from_node: 3
to_node: 5
events {
id: 1004
name: "lane_postprocessing"
}
}
# CameraProcessSubnode -> FusionSubnode
edges {
id: 109
from_node: 3
to_node: 31
events {
id: 1009
name: "camera_fusion"
}
}
# LanePostProcessingSubnode -> FusionSubnode
edges {
id: 110
from_node: 5
to_node: 31
events {
id: 1010
name: "lane_fusion"
}
}
# RadarSubnode -> FusionSubnode
edges {
id: 113
from_node: 2
to_node: 31
events {
id: 1013
name: "radar_fusion"
}
}
}
# Shared Data
data_config {
datas {
id: 5
name: "CameraObjectData"
}
datas {
id: 7
name: "CameraSharedData"
}
datas {
id: 8
name: "LaneSharedData"
}
datas {
id: 9
name: "FusionSharedData"
}
datas {
id: 10
name: "RadarObjectData"
}
}
```
**Note**: Nvidia GPU and CUDA are required to run the perception module with Caffe. Apollo provides the CUDA and Caffe libraries in the release docker image. However, the Nvidia GPU driver is not installed in the dev docker image.
To run the perception module with CUDA acceleration, install the exact same version of the Nvidia driver in the docker image that is installed on your host machine, and then build Apollo with the GPU option (i.e., using `./apollo.sh build_gpu` or `./apollo.sh build_opt_gpu`).
See [How to Run Perception Module on Your Local Computer](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_run_perception_module_on_your_local_computer.md).
**Note**
This module contains a redistribution in binary form of a modified version of [caffe](https://github.com/BVLC/caffe).
A copy of Caffe's original copyright statement is included below:
COPYRIGHT
All contributions by the University of California:
Copyright (c) 2014-2017 The Regents of the University of California (Regents)
All rights reserved.
All other contributions:
Copyright (c) 2014-2017, the respective contributors
All rights reserved.
Caffe uses a shared copyright model: each contributor holds copyright over
their contributions to Caffe. The project versioning records all such
contribution and copyright details. If a contributor wants to further mark
their specific copyright on a particular contribution, they should indicate
their copyright solely in the commit message of the change when it is
committed.
LICENSE
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
CONTRIBUTION AGREEMENT
By contributing to the BVLC/caffe repository through pull-request, comment,
or otherwise, the contributor releases their content to the
license and copyright terms herein.
load("//tools:cpplint.bzl", "cpplint")
package(default_visibility = ["//visibility:public"])
cc_library(
name = "common",
srcs = [
"geometry_util.cc",
"graph_util.cc",
"perception_gflags.cc",
],
hdrs = [
"geometry_util.h",
"graph_util.h",
"perception_gflags.h",
],
deps = [
":pcl_util",
"//modules/common",
"//modules/common/configs:config_gflags",
"//modules/common/math:geometry",
"@eigen",
],
)
cc_library(
name = "pcl_util",
srcs = [],
hdrs = ["pcl_types.h"],
deps = [
"@pcl",
],
)
cc_library(
name = "convex_hullxy",
srcs = [],
hdrs = ["convex_hullxy.h"],
linkopts = [
"-lqhull",
],
deps = [
"@pcl",
],
)
cc_test(
name = "convex_hullxy_test",
size = "small",
srcs = [
"convex_hullxy_test.cc",
],
data = [
"//modules/perception:perception_data",
],
deps = [
":convex_hullxy",
":pcl_util",
"//framework:cybertron",
"@eigen",
"@gtest//:main",
],
)
cc_test(
name = "common_test",
size = "small",
srcs = [
"geometry_util_test.cc",
"graph_util_test.cc",
],
data = [
"//modules/perception:perception_data",
],
deps = [
":common",
"//modules/perception/obstacle/lidar/object_builder/min_box",
"@eigen",
"@gtest//:main",
],
)
cpplint()
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
/*
* Software License Agreement (BSD License)
*
* Point Cloud Library (PCL) - www.pointclouds.org
* Copyright (c) 2010-2012, Willow Garage, Inc.
*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials provided
* with the distribution.
* * Neither the name of Willow Garage, Inc. nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
* FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
* COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
* BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
* CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
* ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*
*/
#ifndef MODULES_PERCEPTION_OBSTACLE_COMMON_CONVEX_HULLXY_H_
#define MODULES_PERCEPTION_OBSTACLE_COMMON_CONVEX_HULLXY_H_
#include <algorithm>
#include <string>
#include <utility>
#include <vector>
#include "pcl/common/centroid.h"
#include "pcl/surface/convex_hull.h"
#include "pcl/surface/qhull.h"
namespace apollo {
namespace perception {
template <typename PointInT>
class ConvexHull2DXY : public pcl::ConvexHull<PointInT> {
public:
typedef boost::shared_ptr<pcl::ConvexHull<PointInT>> Ptr;
typedef boost::shared_ptr<const pcl::ConvexHull<PointInT>> ConstPtr;
typedef pcl::PointCloud<PointInT> PointCloud;
typedef typename PointCloud::Ptr PointCloudPtr;
typedef typename PointCloud::ConstPtr PointCloudConstPtr;
using pcl::ConvexHull<PointInT>::reconstruct;
using pcl::ConvexHull<PointInT>::compute_area_;
using pcl::ConvexHull<PointInT>::total_area_;
using pcl::ConvexHull<PointInT>::total_volume_;
using pcl::ConvexHull<PointInT>::qhull_flags;
ConvexHull2DXY() = default;
virtual ~ConvexHull2DXY() = default;
void Reconstruct2dxy(PointCloudPtr hull,
std::vector<pcl::Vertices> *polygons) {
hull->header = input_->header;
if (!initCompute() || input_->points.empty() || indices_->empty()) {
hull->points.clear();
return;
}
PerformReconstruction2dxy(hull, polygons, true);
hull->width = static_cast<uint32_t>(hull->points.size());
hull->height = 1;
hull->is_dense = true;
deinitCompute();
}
void PerformReconstruction2dxy(PointCloudPtr hull,
std::vector<pcl::Vertices> *polygons,
bool fill_polygon_data = false) {
int dimension = 2;
// True if qhull should free points in qh_freeqhull() or reallocation
boolT ismalloc = True;
// output from qh_produce_output(), use NULL to skip qh_produce_output()
FILE *outfile = NULL;
#ifndef HAVE_QHULL_2011
if (compute_area_) {
outfile = stderr;
}
#endif
// option flags for qhull, see qh_opt.htm
const char *flags = qhull_flags.c_str();
// error messages from qhull code
FILE *errfile = stderr;
// Array of coordinates for each point
coordT *points = reinterpret_cast<coordT *>(
calloc(indices_->size() * dimension, sizeof(coordT)));
if (points == NULL) {
hull->points.resize(0);
hull->width = hull->height = 0;
polygons->resize(0);
return;
}
// Build input data, using appropriate projection
int j = 0;
for (size_t i = 0; i < indices_->size(); ++i, j += dimension) {
points[j + 0] = static_cast<coordT>(input_->points[(*indices_)[i]].x);
points[j + 1] = static_cast<coordT>(input_->points[(*indices_)[i]].y);
}
// Compute convex hull
int exitcode =
qh_new_qhull(dimension, static_cast<int>(indices_->size()), points,
ismalloc, const_cast<char *>(flags), outfile, errfile);
#ifdef HAVE_QHULL_2011
if (compute_area_) {
qh_prepare_output();
}
#endif
// qh_new_qhull returns 0 on success; also bail out if qhull found no vertices
if (exitcode != 0 || qh num_vertices == 0) {
PCL_ERROR(
"[pcl::%s::performReconstrution2D] "
"ERROR: qhull was unable to compute "
"a convex hull for the given point "
"cloud (%lu)!\n",
getClassName().c_str(), indices_->size());
hull->points.resize(0);
hull->width = hull->height = 0;
polygons->resize(0);
qh_freeqhull(!qh_ALL);
int curlong, totlong;
qh_memfreeshort(&curlong, &totlong);
return;
}
// For 2-D input, qhull reports the hull area in its volume field (totvol)
if (compute_area_) {
total_area_ = qh totvol;
total_volume_ = 0.0;
}
int num_vertices = qh num_vertices;
hull->points.resize(num_vertices);
memset(&hull->points[0], 0, sizeof(PointInT) * hull->points.size());
vertexT *vertex;
int i = 0;
std::vector<std::pair<int, Eigen::Vector4f>,
Eigen::aligned_allocator<std::pair<int, Eigen::Vector4f>>>
idx_points(num_vertices);
idx_points.resize(hull->points.size());
memset(&idx_points[0], 0,
sizeof(std::pair<int, Eigen::Vector4f>) * idx_points.size());
FORALLvertices {
hull->points[i] = input_->points[(*indices_)[qh_pointid(vertex->point)]];
idx_points[i].first = qh_pointid(vertex->point);
++i;
}
// Sort
Eigen::Vector4f centroid;
pcl::compute3DCentroid(*hull, centroid);
for (size_t j = 0; j < hull->points.size(); j++) {
idx_points[j].second[0] = hull->points[j].x - centroid[0];
idx_points[j].second[1] = hull->points[j].y - centroid[1];
}
std::sort(idx_points.begin(), idx_points.end(), pcl::comparePoints2D);
polygons->resize(1);
(*polygons)[0].vertices.resize(hull->points.size());
for (int j = 0; j < static_cast<int>(hull->points.size()); j++) {
hull->points[j] = input_->points[(*indices_)[idx_points[j].first]];
(*polygons)[0].vertices[j] = static_cast<unsigned int>(j);
}
qh_freeqhull(!qh_ALL);
int curlong, totlong;
qh_memfreeshort(&curlong, &totlong);
hull->width = static_cast<uint32_t>(hull->points.size());
hull->height = 1;
hull->is_dense = false;
return;
}
std::string getClassName() const { return ("ConvexHull2DXY"); }
protected:
using pcl::PCLBase<PointInT>::input_;
using pcl::PCLBase<PointInT>::indices_;
using pcl::PCLBase<PointInT>::initCompute;
using pcl::PCLBase<PointInT>::deinitCompute;
};
} // namespace perception
} // namespace apollo
#endif // MODULES_PERCEPTION_OBSTACLE_COMMON_CONVEX_HULLXY_H_
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#include "modules/perception/common/convex_hullxy.h"
#include <fstream>
#include "boost/algorithm/string.hpp"
#include "gtest/gtest.h"
#include "cybertron/common/log.h"
#include "modules/perception/common/pcl_types.h"
namespace apollo {
namespace perception {
using pcl_util::Point;
using pcl_util::PointCloud;
using pcl_util::PointCloudPtr;
using pcl_util::PointDCloud;
static bool ConstructPointCloud(std::vector<pcl_util::PointCloudPtr>* clouds) {
std::string pcd_data(
"modules/perception/data/obstacle_common_test/"
"QB9178_3_1461381834_1461382134_30651.pcd");
std::ifstream cluster_ifs(pcd_data.c_str(), std::ifstream::in);
std::string point_buf;
while (cluster_ifs.good()) {
getline(cluster_ifs, point_buf);
std::vector<std::string> point_strs;
boost::algorithm::split(point_strs, point_buf,
boost::algorithm::is_any_of(" "));
if (point_strs.size() <= 1 || (point_strs.size() - 1) % 4 != 0) {
continue;
}
std::stringstream ss;
ss << point_buf;
int point_num = 0;
ss >> point_num;
int exact_point_num = (point_strs.size() - 1) / 4;
if (point_num != exact_point_num) {
continue;
}
uint64_t intensity;
pcl_util::PointCloudPtr cluster_cloud(new pcl_util::PointCloud);
for (int i = 0; i < exact_point_num; ++i) {
pcl_util::Point p;
ss >> p.x >> p.y >> p.z >> intensity;
p.intensity = static_cast<uint8_t>(intensity);
cluster_cloud->points.push_back(p);
}
clouds->push_back(cluster_cloud);
}
return true;
}
TEST(ConvexHull2DXYTest, Reconstruct2dxy) {
std::vector<pcl_util::PointCloudPtr> clouds;
ConstructPointCloud(&clouds);
ASSERT_EQ(5, clouds.size());
ConvexHull2DXY<pcl_util::Point> convex_hull;
EXPECT_EQ(convex_hull.getClassName(), "ConvexHull2DXY");
std::vector<pcl::Vertices> poly_vt;
PointCloudPtr plane_hull(new PointCloud);
// case 0
pcl_util::PointCloudPtr cloud(new pcl_util::PointCloud);
pcl_util::Point pt;
pt.x = 1.0;
pt.y = 0.0;
pt.z = 0.0;
cloud->push_back(pt);
pt.x = 1.0;
pt.y = 1.0;
pt.z = 0.0;
cloud->push_back(pt);
pt.x = 0.0;
pt.y = 0.0;
pt.z = 0.0;
cloud->push_back(pt);
poly_vt.clear();
plane_hull->clear();
convex_hull.setInputCloud(cloud);
convex_hull.setDimension(2);
convex_hull.Reconstruct2dxy(plane_hull, &poly_vt);
EXPECT_EQ(3, poly_vt[0].vertices.size());
// case 1
poly_vt.clear();
plane_hull->clear();
convex_hull.setInputCloud(clouds[0]);
convex_hull.setDimension(2);
convex_hull.Reconstruct2dxy(plane_hull, &poly_vt);
EXPECT_EQ(1, poly_vt.size());
EXPECT_EQ(3, poly_vt[0].vertices.size());
// case 2
poly_vt.clear();
plane_hull->clear();
convex_hull.setInputCloud(clouds[1]);
convex_hull.setDimension(2);
convex_hull.Reconstruct2dxy(plane_hull, &poly_vt);
EXPECT_EQ(1, poly_vt.size());
EXPECT_EQ(5, poly_vt[0].vertices.size());
// case 3
poly_vt.clear();
plane_hull->clear();
convex_hull.setInputCloud(clouds[2]);
convex_hull.setDimension(2);
convex_hull.Reconstruct2dxy(plane_hull, &poly_vt);
EXPECT_EQ(1, poly_vt.size());
EXPECT_EQ(4, poly_vt[0].vertices.size());
// case 4
poly_vt.clear();
plane_hull->clear();
convex_hull.setInputCloud(clouds[3]);
convex_hull.setDimension(2);
convex_hull.Reconstruct2dxy(plane_hull, &poly_vt);
EXPECT_EQ(1, poly_vt.size());
EXPECT_EQ(6, poly_vt[0].vertices.size());
// case 5
poly_vt.clear();
plane_hull->clear();
convex_hull.setInputCloud(clouds[4]);
convex_hull.setDimension(2);
convex_hull.Reconstruct2dxy(plane_hull, &poly_vt);
EXPECT_EQ(1, poly_vt.size());
EXPECT_EQ(6, poly_vt[0].vertices.size());
}
} // namespace perception
} // namespace apollo
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#include "modules/perception/common/geometry_util.h"
#include "modules/common/math/math_utils.h"
namespace apollo {
namespace perception {
using pcl_util::Point;
using pcl_util::PointCloud;
using pcl_util::PointCloudPtr;
/*
* Transform point cloud methods
*/
void TransformPointCloud(pcl_util::PointCloudPtr cloud,
const std::vector<int>& indices,
pcl_util::PointDCloud* trans_cloud) {
if (trans_cloud->size() != indices.size()) {
trans_cloud->resize(indices.size());
}
for (size_t i = 0; i < indices.size(); ++i) {
const Point& p = cloud->at(indices[i]);
Eigen::Vector3d v(p.x, p.y, p.z);
pcl_util::PointD& tp = trans_cloud->at(i);
tp.x = v.x();
tp.y = v.y();
tp.z = v.z();
tp.intensity = p.intensity;
}
}
void TransformPointCloud(pcl_util::PointCloudPtr cloud,
const Eigen::Matrix4d& pose_velodyne,
pcl_util::PointDCloudPtr trans_cloud) {
Eigen::Matrix4d pose = pose_velodyne;
if (trans_cloud->size() != cloud->size()) {
trans_cloud->resize(cloud->size());
}
for (size_t i = 0; i < cloud->points.size(); ++i) {
const Point& p = cloud->at(i);
Eigen::Vector4d v(p.x, p.y, p.z, 1);
v = pose * v;
pcl_util::PointD& tp = trans_cloud->at(i);
tp.x = v.x();
tp.y = v.y();
tp.z = v.z();
tp.intensity = p.intensity;
}
}
/*
* Vector & Matrix related methods
*/
void TransAffineToMatrix4(const Eigen::Vector3d& translation,
const Eigen::Vector4d& rotation,
Eigen::Matrix4d* trans_matrix) {
const double t_x = translation(0);
const double t_y = translation(1);
const double t_z = translation(2);
const double qx = rotation(0);
const double qy = rotation(1);
const double qz = rotation(2);
const double qw = rotation(3);
(*trans_matrix) << 1 - 2 * (qy * qy + qz * qz), 2 * (qx * qy - qz * qw),
2 * (qx * qz + qy * qw), t_x, 2 * (qx * qy + qz * qw),
1 - 2 * (qx * qx + qz * qz), 2 * (qy * qz - qx * qw), t_y,
2 * (qx * qz - qy * qw), 2 * (qy * qz + qx * qw),
1 - 2 * (qx * qx + qy * qy), t_z, 0, 0, 0, 1;
}
void ComputeMostConsistentBboxDirection(const Eigen::Vector3f& previous_dir,
Eigen::Vector3f* current_dir) {
float dot_val_00 =
previous_dir(0) * (*current_dir)(0) + previous_dir(1) * (*current_dir)(1);
float dot_val_01 =
previous_dir(0) * (*current_dir)(1) - previous_dir(1) * (*current_dir)(0);
if (fabs(dot_val_00) >= fabs(dot_val_01)) {
if (dot_val_00 < 0) {
(*current_dir) = -(*current_dir);
}
} else {
if (dot_val_01 < 0) {
(*current_dir) =
Eigen::Vector3f((*current_dir)(1), -(*current_dir)(0), 0);
} else {
(*current_dir) =
Eigen::Vector3f(-(*current_dir)(1), (*current_dir)(0), 0);
}
}
}
double VectorCosTheta2dXy(const Eigen::Vector3f& v1,
const Eigen::Vector3f& v2) {
double v1_len = sqrt((v1.head(2).cwiseProduct(v1.head(2))).sum());
double v2_len = sqrt((v2.head(2).cwiseProduct(v2.head(2))).sum());
double cos_theta =
(v1.head(2).cwiseProduct(v2.head(2))).sum() / (v1_len * v2_len);
return cos_theta;
}
double VectorTheta2dXy(const Eigen::Vector3f& v1, const Eigen::Vector3f& v2) {
double v1_len = sqrt((v1.head(2).cwiseProduct(v1.head(2))).sum());
double v2_len = sqrt((v2.head(2).cwiseProduct(v2.head(2))).sum());
double cos_theta =
(v1.head(2).cwiseProduct(v2.head(2))).sum() / (v1_len * v2_len);
double sin_theta = (v1(0) * v2(1) - v1(1) * v2(0)) / (v1_len * v2_len);
cos_theta = common::math::Clamp(cos_theta, -1.0, 1.0);
double theta = acos(cos_theta);
if (sin_theta < 0) {
theta = -theta;
}
return theta;
}
} // namespace perception
} // namespace apollo
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#ifndef MODULES_PERCEPTION_OBSTACLE_COMMON_GEOMETRY_UTIL_H_
#define MODULES_PERCEPTION_OBSTACLE_COMMON_GEOMETRY_UTIL_H_
#include <cfloat>
#include <algorithm>
#include <vector>
#include "Eigen/Core"
#include "cybertron/common/log.h"
#include "modules/perception/common/pcl_types.h"
namespace apollo {
namespace perception {
template <typename PointT>
void TransformPointCloud(const Eigen::Matrix4d& trans_mat,
pcl::PointCloud<PointT>* cloud_in_out) {
for (std::size_t i = 0; i < cloud_in_out->size(); ++i) {
PointT& p = cloud_in_out->at(i);
Eigen::Vector4d v(p.x, p.y, p.z, 1);
v = trans_mat * v;
p.x = v.x();
p.y = v.y();
p.z = v.z();
}
}
template <typename PointT>
void TransformPointCloud(const Eigen::Matrix4d& trans_mat,
typename pcl::PointCloud<PointT>::Ptr cloud_in_out) {
CHECK_NOTNULL(cloud_in_out.get());
TransformPointCloud(trans_mat, cloud_in_out.get());
}
template <typename PointType>
void TransformPoint(const PointType& point_in, const Eigen::Matrix4d& trans_mat,
PointType* point_out) {
Eigen::Vector4d v(point_in.x, point_in.y, point_in.z, 1);
v = trans_mat * v;
*point_out = point_in;
point_out->x = v.x();
point_out->y = v.y();
point_out->z = v.z();
}
template <typename PointType>
void TransformPointCloud(const pcl::PointCloud<PointType>& cloud_in,
const Eigen::Matrix4d& trans_mat,
pcl::PointCloud<PointType>* cloud_out) {
if (cloud_out->points.size() < cloud_in.points.size()) {
cloud_out->points.resize(cloud_in.points.size());
}
for (std::size_t i = 0; i < cloud_in.size(); ++i) {
const PointType& p = cloud_in.at(i);
Eigen::Vector4d v(p.x, p.y, p.z, 1);
v = trans_mat * v;
PointType& pd = cloud_out->points[i];
pd.x = v.x();
pd.y = v.y();
pd.z = v.z();
}
}
void TransformPointCloud(pcl_util::PointCloudPtr cloud,
const std::vector<int>& indices,
pcl_util::PointDCloud* trans_cloud);
void TransformPointCloud(pcl_util::PointCloudPtr cloud,
const Eigen::Matrix4d& pose_velodyne,
pcl_util::PointDCloudPtr trans_cloud);
/*
* Other point cloud related methods
*/
template <typename PointT>
void GetCloudMinMax3D(typename pcl::PointCloud<PointT>::Ptr cloud,
Eigen::Vector4f* min_point, Eigen::Vector4f* max_point) {
Eigen::Vector4f& min_pt = *min_point;
Eigen::Vector4f& max_pt = *max_point;
min_pt[0] = min_pt[1] = min_pt[2] = FLT_MAX;
max_pt[0] = max_pt[1] = max_pt[2] = -FLT_MAX;
if (cloud->is_dense) {
for (size_t i = 0; i < cloud->points.size(); ++i) {
min_pt[0] = std::min(min_pt[0], cloud->points[i].x);
max_pt[0] = std::max(max_pt[0], cloud->points[i].x);
min_pt[1] = std::min(min_pt[1], cloud->points[i].y);
max_pt[1] = std::max(max_pt[1], cloud->points[i].y);
min_pt[2] = std::min(min_pt[2], cloud->points[i].z);
max_pt[2] = std::max(max_pt[2], cloud->points[i].z);
}
} else {
for (size_t i = 0; i < cloud->points.size(); ++i) {
if (!pcl_isfinite(cloud->points[i].x) ||
!pcl_isfinite(cloud->points[i].y) ||
!pcl_isfinite(cloud->points[i].z)) {
continue;
}
min_pt[0] = std::min(min_pt[0], cloud->points[i].x);
max_pt[0] = std::max(max_pt[0], cloud->points[i].x);
min_pt[1] = std::min(min_pt[1], cloud->points[i].y);
max_pt[1] = std::max(max_pt[1], cloud->points[i].y);
min_pt[2] = std::min(min_pt[2], cloud->points[i].z);
max_pt[2] = std::max(max_pt[2], cloud->points[i].z);
}
}
}
template <typename PointT>
void ComputeBboxSizeCenter(typename pcl::PointCloud<PointT>::Ptr cloud,
const Eigen::Vector3d& direction,
Eigen::Vector3d* size, Eigen::Vector3d* center) {
Eigen::Vector3d dir(direction[0], direction[1], 0);
dir.normalize();
Eigen::Vector3d ortho_dir(-dir[1], dir[0], 0.0);
Eigen::Vector3d z_dir(dir.cross(ortho_dir));
Eigen::Vector3d min_pt(DBL_MAX, DBL_MAX, DBL_MAX);
Eigen::Vector3d max_pt(-DBL_MAX, -DBL_MAX, -DBL_MAX);
Eigen::Vector3d loc_pt;
for (std::size_t i = 0; i < cloud->size(); i++) {
Eigen::Vector3d pt = Eigen::Vector3d(cloud->points[i].x, cloud->points[i].y,
cloud->points[i].z);
loc_pt[0] = pt.dot(dir);
loc_pt[1] = pt.dot(ortho_dir);
loc_pt[2] = pt.dot(z_dir);
for (int j = 0; j < 3; j++) {
min_pt[j] = std::min(min_pt[j], loc_pt[j]);
max_pt[j] = std::max(max_pt[j], loc_pt[j]);
}
}
*size = max_pt - min_pt;
*center = dir * ((max_pt[0] + min_pt[0]) * 0.5) +
ortho_dir * ((max_pt[1] + min_pt[1]) * 0.5) + z_dir * min_pt[2];
}
template <typename PointT>
Eigen::Vector3d GetCloudBarycenter(
typename pcl::PointCloud<PointT>::Ptr cloud) {
int point_num = cloud->points.size();
Eigen::Vector3d barycenter(0, 0, 0);
for (int i = 0; i < point_num; i++) {
const PointT& pt = cloud->points[i];
barycenter[0] += pt.x;
barycenter[1] += pt.y;
barycenter[2] += pt.z;
}
if (point_num > 0) {
barycenter[0] /= point_num;
barycenter[1] /= point_num;
barycenter[2] /= point_num;
}
return barycenter;
}
void TransAffineToMatrix4(const Eigen::Vector3d& translation,
const Eigen::Vector4d& rotation,
Eigen::Matrix4d* trans_matrix);
void ComputeMostConsistentBboxDirection(const Eigen::Vector3f& previous_dir,
Eigen::Vector3f* current_dir);
double VectorCosTheta2dXy(const Eigen::Vector3f& v1, const Eigen::Vector3f& v2);
double VectorTheta2dXy(const Eigen::Vector3f& v1, const Eigen::Vector3f& v2);
} // namespace perception
} // namespace apollo
#endif // MODULES_PERCEPTION_OBSTACLE_COMMON_GEOMETRY_UTIL_H_
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#include "modules/perception/common/geometry_util.h"
#include <cfloat>
#include <fstream>
#include <string>
#include <vector>
#include "boost/algorithm/string.hpp"
#include "gtest/gtest.h"
#include "cybertron/common/log.h"
#include "modules/perception/obstacle/lidar/object_builder/min_box/min_box.h"
namespace apollo {
namespace perception {
using pcl_util::PointCloud;
using pcl_util::PointCloudPtr;
using pcl_util::PointDCloud;
namespace {
const double EPSILON = 1e-6;
}
static bool ConstructPointCloud(std::vector<pcl_util::PointCloudPtr>* clouds) {
std::string pcd_data(
"modules/perception/data/obstacle_common_test/"
"QB9178_3_1461381834_1461382134_30651.pcd");
std::ifstream cluster_ifs(pcd_data.c_str(), std::ifstream::in);
std::string point_buf;
while (cluster_ifs.good()) {
getline(cluster_ifs, point_buf);
std::vector<std::string> point_strs;
boost::algorithm::split(point_strs, point_buf,
boost::algorithm::is_any_of(" "));
if (point_strs.size() <= 1 || (point_strs.size() - 1) % 4 != 0) {
continue;
}
std::stringstream ss;
ss << point_buf;
int point_num = 0;
ss >> point_num;
int exact_point_num = (point_strs.size() - 1) / 4;
if (point_num != exact_point_num) {
continue;
}
uint64_t intensity;
pcl_util::PointCloudPtr cluster_cloud(new pcl_util::PointCloud);
for (int i = 0; i < exact_point_num; ++i) {
pcl_util::Point p;
ss >> p.x >> p.y >> p.z >> intensity;
p.intensity = static_cast<uint8_t>(intensity);
cluster_cloud->points.push_back(p);
}
clouds->push_back(cluster_cloud);
}
return true;
}
class GeometryUtilTest : public testing::Test {
protected:
GeometryUtilTest() {}
virtual ~GeometryUtilTest() {}
void SetUp() {
ConstructPointCloud(&clouds_);
ASSERT_EQ(5, clouds_.size());
Eigen::Vector3d translation(-0.0189, 1.0061, 1);
Eigen::Vector4d rotation(0.0099, -0.0029, 0.6989, 0.7151);
TransAffineToMatrix4(translation, rotation, &trans_matrix_);
}
void TearDown() {}
protected:
std::vector<pcl_util::PointCloudPtr> clouds_;
Eigen::Matrix4d trans_matrix_;
};
TEST_F(GeometryUtilTest, TransformPointCloud1) {
pcl_util::PointCloudPtr in_out_cloud(new pcl_util::PointCloud);
pcl::copyPointCloud(*clouds_[0], *in_out_cloud);
TransformPointCloud<pcl_util::Point>(trans_matrix_, in_out_cloud);
EXPECT_NEAR(64.184380, in_out_cloud->at(0).x, EPSILON);
EXPECT_NEAR(13.708398, in_out_cloud->at(0).y, EPSILON);
EXPECT_NEAR(1.715922, in_out_cloud->at(0).z, EPSILON);
}
TEST_F(GeometryUtilTest, TransformPointCloud2) {
pcl_util::Point point_in, point_out;
point_in.x = 1.0;
point_in.y = 1.0;
point_in.z = 1.0;
TransformPoint(point_in, trans_matrix_, &point_out);
EXPECT_NEAR(-0.985772, point_out.x, EPSILON);
EXPECT_NEAR(2.010278, point_out.y, EPSILON);
EXPECT_NEAR(2.027878, point_out.z, EPSILON);
pcl_util::PointCloudPtr in_out_cloud(new pcl_util::PointCloud);
pcl::copyPointCloud(*clouds_[0], *in_out_cloud);
TransformPointCloud<pcl_util::Point>(trans_matrix_, in_out_cloud);
EXPECT_NEAR(64.184380, in_out_cloud->at(0).x, EPSILON);
EXPECT_NEAR(13.708398, in_out_cloud->at(0).y, EPSILON);
EXPECT_NEAR(1.715922, in_out_cloud->at(0).z, EPSILON);
pcl_util::PointCloudPtr out_cloud(new pcl_util::PointCloud);
TransformPointCloud(*clouds_[0], trans_matrix_, out_cloud.get());
EXPECT_NEAR(64.184380, out_cloud->at(0).x, EPSILON);
EXPECT_NEAR(13.708398, out_cloud->at(0).y, EPSILON);
EXPECT_NEAR(1.715922, out_cloud->at(0).z, EPSILON);
}
TEST_F(GeometryUtilTest, TransformPointCloud3) {
pcl_util::PointCloudPtr in_cloud(new pcl_util::PointCloud);
for (int i = 0; i < 10; ++i) {
pcl_util::Point pt;
pt.x = static_cast<float>(i);
pt.y = static_cast<float>(i);
pt.z = static_cast<float>(i);
pt.intensity = 123.0f;
in_cloud->push_back(pt);
}
pcl_util::PointDCloud out_cloud;
std::vector<int> indices{0, 2, 5, 9};
TransformPointCloud(in_cloud, indices, &out_cloud);
EXPECT_EQ(out_cloud.points.size(), 4);
EXPECT_NEAR(out_cloud.points[0].x, 0.0, 1e-6);
EXPECT_NEAR(out_cloud.points[1].y, 2.0, 1e-6);
EXPECT_NEAR(out_cloud.points[2].z, 5.0, 1e-6);
EXPECT_EQ(out_cloud.points[3].intensity, 123u);
}
TEST_F(GeometryUtilTest, TransformPointCloud) {
pcl_util::PointDCloudPtr trans_cloud(new pcl_util::PointDCloud);
TransformPointCloud(clouds_[0], trans_matrix_, trans_cloud);
EXPECT_NEAR(64.184377, trans_cloud->at(0).x, EPSILON);
EXPECT_NEAR(13.708398, trans_cloud->at(0).y, EPSILON);
EXPECT_NEAR(1.715922, trans_cloud->at(0).z, EPSILON);
}
TEST_F(GeometryUtilTest, GetCloudMinMax3D) {
pcl_util::PointCloudPtr in_cloud(new pcl_util::PointCloud);
in_cloud->is_dense = true;
for (int i = 0; i < 10; ++i) {
pcl_util::Point pt;
pt.x = static_cast<float>(i);
pt.y = static_cast<float>(i);
pt.z = static_cast<float>(i);
pt.intensity = 123.0f;
in_cloud->push_back(pt);
}
Eigen::Vector4f min_point;
Eigen::Vector4f max_point;
GetCloudMinMax3D<pcl_util::Point>(in_cloud, &min_point, &max_point);
EXPECT_NEAR(min_point.x(), 0.0, EPSILON);
EXPECT_NEAR(min_point.y(), 0.0, EPSILON);
EXPECT_NEAR(min_point.z(), 0.0, EPSILON);
EXPECT_NEAR(max_point.x(), 9.0, EPSILON);
EXPECT_NEAR(max_point.y(), 9.0, EPSILON);
EXPECT_NEAR(max_point.z(), 9.0, EPSILON);
in_cloud->is_dense = false;
pcl_util::Point pt;
pt.x = NAN;
pt.y = NAN;
pt.z = NAN;
in_cloud->push_back(pt);
GetCloudMinMax3D<pcl_util::Point>(in_cloud, &min_point, &max_point);
EXPECT_NEAR(min_point.x(), 0.0, EPSILON);
EXPECT_NEAR(min_point.y(), 0.0, EPSILON);
EXPECT_NEAR(min_point.z(), 0.0, EPSILON);
EXPECT_NEAR(max_point.x(), 9.0, EPSILON);
EXPECT_NEAR(max_point.y(), 9.0, EPSILON);
EXPECT_NEAR(max_point.z(), 9.0, EPSILON);
}
TEST_F(GeometryUtilTest, ComputeBboxSizeCenter) {
std::vector<std::shared_ptr<Object>> objects;
for (size_t i = 0; i < clouds_.size(); ++i) {
std::shared_ptr<Object> object(new Object);
object->cloud = clouds_[i];
objects.push_back(object);
}
EXPECT_EQ(5, objects.size());
ObjectBuilderOptions options;
MinBoxObjectBuilder min_box_object_builder;
EXPECT_TRUE(min_box_object_builder.Build(options, &objects));
Eigen::Vector3d old_dir = objects[0]->direction;
Eigen::Vector3d old_size = Eigen::Vector3d(
objects[0]->length, objects[0]->width, objects[0]->height);
Eigen::Vector3d old_center = objects[0]->center;
Eigen::Vector3d new_size = old_size;
Eigen::Vector3d new_center = old_center;
ComputeBboxSizeCenter<pcl_util::Point>(objects[0]->cloud, old_dir, &new_size,
&new_center);
EXPECT_NEAR(0.149998, new_size[0], EPSILON);
EXPECT_NEAR(0.056100, new_size[1], EPSILON);
EXPECT_NEAR(0.732072, new_size[2], EPSILON);
EXPECT_NEAR(14.186950, new_center[0], EPSILON);
EXPECT_NEAR(-63.964300, new_center[1], EPSILON);
EXPECT_NEAR(0.374468, new_center[2], EPSILON);
}
TEST_F(GeometryUtilTest, GetCloudBarycenter) {
Eigen::Vector3f barycenter;
// case 1
barycenter =
GetCloudBarycenter<apollo::perception::pcl_util::Point>(clouds_[0])
.cast<float>();
EXPECT_NEAR(14.188400, barycenter[0], EPSILON);
// case 2
barycenter =
GetCloudBarycenter<apollo::perception::pcl_util::Point>(clouds_[1])
.cast<float>();
EXPECT_NEAR(15.661365, barycenter[0], EPSILON);
// case 3
barycenter =
GetCloudBarycenter<apollo::perception::pcl_util::Point>(clouds_[2])
.cast<float>();
EXPECT_NEAR(29.376224, barycenter[0], EPSILON);
// case 4
barycenter =
GetCloudBarycenter<apollo::perception::pcl_util::Point>(clouds_[3])
.cast<float>();
EXPECT_NEAR(13.144600, barycenter[0], EPSILON);
// case 5
barycenter =
GetCloudBarycenter<apollo::perception::pcl_util::Point>(clouds_[4])
.cast<float>();
EXPECT_NEAR(14.668054, barycenter[0], EPSILON);
}
TEST_F(GeometryUtilTest, TransAffineToMatrix4) {
Eigen::Vector3d translation(-0.0189, 1.0061, 1);
Eigen::Vector4d rotation(0.0099, -0.0029, 0.6989, 0.7151);
Eigen::Matrix4d trans_matrix;
TransAffineToMatrix4(translation, rotation, &trans_matrix);
AINFO << "trans_matrix: " << trans_matrix;
}
TEST_F(GeometryUtilTest, ComputeMostConsistentBboxDirection) {
Eigen::Vector3f previous_dir(1.0, 0.0, 0.0);
Eigen::Vector3f current_dir(0.0, 1.0, 0.0);
ComputeMostConsistentBboxDirection(previous_dir, &current_dir);
EXPECT_NEAR(-1.0, current_dir[0], EPSILON);
EXPECT_NEAR(0.0, current_dir[1], EPSILON);
EXPECT_NEAR(0.0, current_dir[2], EPSILON);
}
TEST_F(GeometryUtilTest, VectorCosTheta2dXy) {
// case 1
Eigen::Vector3f track_motion_dir(1.0, 1.0, 0.0);
Eigen::Vector3f anchor_point_shift_dir(1.0, 1.0, 0.0);
track_motion_dir.normalize();
anchor_point_shift_dir.normalize();
double cos_theta =
VectorCosTheta2dXy(track_motion_dir, anchor_point_shift_dir);
EXPECT_NEAR(1.0, cos_theta, EPSILON);
// case 2
track_motion_dir << 2.0, 2.2, 3.1;
anchor_point_shift_dir << 1.1, 2.0, 3.5;
cos_theta = VectorCosTheta2dXy(track_motion_dir, anchor_point_shift_dir);
track_motion_dir.normalize();
anchor_point_shift_dir.normalize();
EXPECT_NEAR(0.972521, cos_theta, EPSILON);
// case 3
track_motion_dir << 25.0, 72.2, 3.24;
anchor_point_shift_dir << 31.1, 98.0, 24.5;
cos_theta = VectorCosTheta2dXy(track_motion_dir, anchor_point_shift_dir);
track_motion_dir.normalize();
anchor_point_shift_dir.normalize();
EXPECT_NEAR(0.999661, cos_theta, EPSILON);
}
TEST_F(GeometryUtilTest, VectorTheta2dXy) {
// case 1
Eigen::Vector3f coord_dir(0, 1.0, 0);
Eigen::Vector3f dir_velo3(1.0, 0.0, 0.0);
double theta = VectorCosTheta2dXy(coord_dir, dir_velo3);
EXPECT_NEAR(0.0, theta, EPSILON);
// case 2
coord_dir << 1.0, 0.0, 0.0;
dir_velo3 << 1.0, 0.0, 0.0;
theta = VectorCosTheta2dXy(coord_dir, dir_velo3);
EXPECT_NEAR(1.0, theta, EPSILON);
// case 3
coord_dir << 0.3, 0.4, -3.2;
coord_dir.normalize();
dir_velo3 << 2.3, -21.4, 4.2;
dir_velo3.normalize();
theta = VectorCosTheta2dXy(coord_dir, dir_velo3);
EXPECT_NEAR(-0.731302, theta, EPSILON);
// case 4
coord_dir << 2.0, 3.0, -4.6;
coord_dir.normalize();
dir_velo3 << -1.0, 2.4, -3.2;
dir_velo3.normalize();
theta = VectorCosTheta2dXy(coord_dir, dir_velo3);
EXPECT_NEAR(0.554700, theta, EPSILON);
}
} // namespace perception
} // namespace apollo
......@@ -3,20 +3,24 @@ load("//tools:cpplint.bzl", "cpplint")
package(default_visibility = ["//visibility:public"])
cc_library(
name = "motion",
name = "disjoint_set",
srcs = [
"plane_motion.cc",
"disjoint_set.cc",
],
hdrs = [
"plane_motion.h",
"disjoint_set.h",
],
copts = [
"-Wno-deprecated",
)
cc_test(
name = "disjoint_set_test",
size = "small",
srcs = [
"disjoint_set_test.cc",
],
deps = [
"//modules/common",
"//framework:cybertron",
"//modules/perception/obstacle/base",
":disjoint_set",
"@gtest//:main",
],
)
......
......@@ -14,25 +14,62 @@
* limitations under the License.
*****************************************************************************/
#include "modules/perception/obstacle/camera/lane_post_process/common/type.h"
#include "modules/perception/common/graph/disjoint_set.h"
namespace apollo {
namespace perception {
namespace common {
void LaneObjectsToLaneMarkerProto(const LaneObjects& lane_objects,
LaneMarkers* lane_markers) {
for (auto& lane_obj : lane_objects) {
if (lane_obj.spatial == SpatialLabelType::L_0) {
lane_obj.ToLaneMarkerProto(lane_markers->mutable_left_lane_marker());
} else if (lane_obj.spatial == SpatialLabelType::R_0) {
lane_obj.ToLaneMarkerProto(lane_markers->mutable_right_lane_marker());
} else if (lane_obj.spatial == SpatialLabelType::L_1) {
lane_obj.ToLaneMarkerProto(lane_markers->add_next_left_lane_marker());
} else if (lane_obj.spatial == SpatialLabelType::R_1) {
lane_obj.ToLaneMarkerProto(lane_markers->add_next_right_lane_marker());
Universe::Universe(const int elements_num)
: elts_(elements_num), sets_num_(elements_num) {
for (int i = 0; i < elements_num; ++i) {
elts_[i].rank = 0;
elts_[i].size = 1;
elts_[i].p = i;
}
}
void Universe::Reset(const int elements_num) {
sets_num_ = elements_num;
elts_.resize(elements_num);
for (int i = 0; i < elements_num; ++i) {
elts_[i].rank = 0;
elts_[i].size = 1;
elts_[i].p = i;
}
}
int Universe::Find(const int x) {
int y = x;
while (y != elts_[y].p) {
y = elts_[y].p;
}
int w = x;
while (true) {
const int z = elts_[w].p;
if (z == w) {
break;
}
elts_[w].p = y;
w = z;
}
return y;
}
void Universe::Join(const int x, const int y) {
if (elts_[x].rank > elts_[y].rank) {
elts_[y].p = x;
elts_[x].size += elts_[y].size;
} else {
elts_[x].p = y;
elts_[y].size += elts_[x].size;
if (elts_[x].rank == elts_[y].rank) {
++elts_[y].rank;
}
}
--sets_num_;
}
} // namespace common
} // namespace perception
} // namespace apollo
......@@ -14,43 +14,53 @@
* limitations under the License.
*****************************************************************************/
// Tracker which uses deep learning ROI features from detection
#ifndef MODULES_PERCEPTION_COMMON_GRAPH_DISJOINT_SET_H_
#define MODULES_PERCEPTION_COMMON_GRAPH_DISJOINT_SET_H_
#ifndef MODULES_PERCEPTION_OBSTACLE_CAMERA_TRACKER_DLF_AFFINITY_TRACKER_H_
#define MODULES_PERCEPTION_OBSTACLE_CAMERA_TRACKER_DLF_AFFINITY_TRACKER_H_
#include <limits>
#include <unordered_map>
#include <vector>
#include "modules/perception/obstacle/camera/tracker/base_affinity_tracker.h"
namespace apollo {
namespace perception {
namespace common {
// Note: a partition of a set U is defined to be a collection P of subsets of U
// such that U is the union of all sets in P and P is pairwise disjoint. U is
// called the universe of P and P is a partition of U
class Universe {
struct Element {
int rank = 0;
int p = 0;
int size = 1;
};
class DLFAffinityTracker : public BaseAffinityTracker {
public:
DLFAffinityTracker() : BaseAffinityTracker() {}
Universe() = default;
explicit Universe(const int elements_num);
~Universe() = default;
// @brief: reset each element
void Reset(const int elements_num);
virtual ~DLFAffinityTracker() {}
// @brief: find and return input element's parent
int Find(const int x);
bool Init() override;
// @brief: join the two elements into one set
void Join(const int x, const int y);
bool GetAffinityMatrix(
const cv::Mat &img, const std::vector<Tracked> &tracked,
const std::vector<Detected> &detected,
std::vector<std::vector<float>> *affinity_matrix) override;
// @brief: get size of input element
int GetSize(const int x) const { return elts_[x].size; }
bool UpdateTracked(const cv::Mat &img, const std::vector<Detected> &detected,
std::vector<Tracked> *tracked) override;
// @brief: get sets number
int GetSetsNum() const { return sets_num_; }
private:
// Thresholds are fine-tuned and detector-dependant
const float kConfThreshold_ = 1.0f;
const float kFilterThreshold_ = 0.0f;
std::vector<Element> elts_;
int sets_num_ = 0;
};
} // namespace common
} // namespace perception
} // namespace apollo
#endif // MODULES_PERCEPTION_OBSTACLE_CAMERA_TRACKER_DLF_AFFINITY_TRACKER_H_
#endif // MODULES_PERCEPTION_COMMON_GRAPH_DISJOINT_SET_H_
 * limitations under the License.
 *****************************************************************************/
#include "modules/perception/common/graph/disjoint_set.h"
#include <memory>
#include "gtest/gtest.h"
namespace apollo {
namespace perception {
namespace common {
class DisjointSetTest : public testing::Test {
protected:
  void SetUp() override { universe_ = std::make_shared<Universe>(16); }
std::shared_ptr<Universe> universe_;
};
TEST_F(DisjointSetTest, test_get_sets_num) {
EXPECT_EQ(16, universe_->GetSetsNum());
}
TEST_F(DisjointSetTest, test_get_size) {
EXPECT_EQ(1, universe_->GetSize(0));
EXPECT_EQ(1, universe_->GetSize(5));
EXPECT_EQ(1, universe_->GetSize(12));
}
TEST_F(DisjointSetTest, test_reset) {
universe_->Reset(10);
EXPECT_EQ(10, universe_->GetSetsNum());
EXPECT_EQ(1, universe_->GetSize(7));
}
TEST_F(DisjointSetTest, test_find_join) {
EXPECT_EQ(6, universe_->Find(6));
EXPECT_EQ(9, universe_->Find(9));
universe_->Join(6, 9);
EXPECT_EQ(9, universe_->Find(6));
EXPECT_EQ(2, universe_->GetSize(9));
EXPECT_EQ(15, universe_->GetSetsNum());
universe_->Join(9, 1);
EXPECT_EQ(9, universe_->Find(1));
EXPECT_EQ(3, universe_->GetSize(9));
EXPECT_EQ(14, universe_->GetSetsNum());
universe_->Join(2, 4);
universe_->Join(4, 9);
EXPECT_EQ(9, universe_->Find(4));
EXPECT_EQ(5, universe_->GetSize(9));
EXPECT_EQ(12, universe_->GetSetsNum());
}
} // namespace common
} // namespace perception
} // namespace apollo
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#include "modules/perception/common/graph_util.h"
#include <cstdio>
#include <queue>
namespace apollo {
namespace perception {
void ConnectedComponentAnalysis(const std::vector<std::vector<int>>& graph,
std::vector<std::vector<int>>* components) {
  const int no_item = static_cast<int>(graph.size());
  std::vector<int> visited(no_item, 0);
std::queue<int> que;
std::vector<int> component;
components->clear();
for (int i = 0; i < no_item; ++i) {
if (visited[i]) {
continue;
}
component.push_back(i);
que.push(i);
visited[i] = 1;
while (!que.empty()) {
int id = que.front();
que.pop();
for (size_t j = 0; j < graph[id].size(); ++j) {
int nb_id = graph[id][j];
if (visited[nb_id] == 0) {
component.push_back(nb_id);
que.push(nb_id);
visited[nb_id] = 1;
}
}
}
components->push_back(component);
component.clear();
}
}
} // namespace perception
} // namespace apollo
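The BFS routine above can be exercised outside the Apollo tree; the function below mirrors ConnectedComponentAnalysis so the example is self-contained (the name `ComponentsBfs` is illustrative):

```cpp
#include <cassert>
#include <queue>
#include <vector>

// BFS connected-component analysis over an adjacency-list graph,
// mirroring ConnectedComponentAnalysis above.
void ComponentsBfs(const std::vector<std::vector<int>>& graph,
                   std::vector<std::vector<int>>* components) {
  const int n = static_cast<int>(graph.size());
  std::vector<int> visited(n, 0);
  components->clear();
  for (int i = 0; i < n; ++i) {
    if (visited[i]) continue;
    // Start a new component from the first unvisited node.
    std::vector<int> component{i};
    std::queue<int> que;
    que.push(i);
    visited[i] = 1;
    while (!que.empty()) {
      const int id = que.front();
      que.pop();
      for (const int nb : graph[id]) {
        if (!visited[nb]) {
          component.push_back(nb);
          que.push(nb);
          visited[nb] = 1;
        }
      }
    }
    components->push_back(component);
  }
}
```

On the toy graph {0–1, 2–3, isolated 4} this yields the three components {0, 1}, {2, 3}, and {4}, in discovery order.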
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#ifndef MODULES_PERCEPTION_COMMON_GRAPH_UTIL_H_
#define MODULES_PERCEPTION_COMMON_GRAPH_UTIL_H_
#include <vector>
namespace apollo {
namespace perception {
// BFS-based connected component analysis
void ConnectedComponentAnalysis(const std::vector<std::vector<int>>& graph,
std::vector<std::vector<int>>* components);
} // namespace perception
} // namespace apollo
#endif  // MODULES_PERCEPTION_COMMON_GRAPH_UTIL_H_
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#include "modules/perception/common/graph_util.h"
#include "Eigen/Core"
#include "gtest/gtest.h"
#include "cybertron/common/log.h"
namespace apollo {
namespace perception {
class GraphUtilTest : public testing::Test {
protected:
GraphUtilTest() = default;
virtual ~GraphUtilTest() = default;
void SetUp() {}
void TearDown() {}
};
TEST_F(GraphUtilTest, ConnectedComponentAnalysis) {
Eigen::MatrixXf association_mat(3, 4);
association_mat << 0.3, 1.2, 4.0, 3.0, 0.9, 2.0, 3.0, 8.0, 4.0, 3.0, 0.3, 0.1;
  const float connected_threshold = 2.1f;
// Compute connected components within given threshold
int no_track = association_mat.rows();
int no_obj = association_mat.cols();
std::vector<std::vector<int>> nb_graph;
nb_graph.resize(no_track + no_obj);
for (int i = 0; i < no_track; i++) {
for (int j = 0; j < no_obj; j++) {
if (association_mat(i, j) <= connected_threshold) {
nb_graph[i].push_back(no_track + j);
nb_graph[j + no_track].push_back(i);
}
}
}
std::vector<std::vector<int>> components;
ConnectedComponentAnalysis(nb_graph, &components);
EXPECT_EQ(2, components.size());
EXPECT_EQ(4, components[0].size());
EXPECT_EQ(0, components[0][0]);
EXPECT_EQ(3, components[0][1]);
EXPECT_EQ(4, components[0][2]);
EXPECT_EQ(1, components[0][3]);
EXPECT_EQ(3, components[1].size());
EXPECT_EQ(2, components[1][0]);
EXPECT_EQ(5, components[1][1]);
EXPECT_EQ(6, components[1][2]);
}
} // namespace perception
} // namespace apollo
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#ifndef MODULES_PERCEPTION_COMMON_PCL_TYPES_H_
#define MODULES_PERCEPTION_COMMON_PCL_TYPES_H_
#include "pcl/common/time.h"
#include "pcl/common/transforms.h"
#include "pcl/kdtree/kdtree_flann.h"
#include "pcl/point_cloud.h"
#include "pcl/point_types.h"
#include "pcl/search/impl/kdtree.hpp"
namespace apollo {
namespace perception {
namespace pcl_util {
typedef pcl::PointIndices PointIndices;
typedef pcl::PointIndices::Ptr PointIndicesPtr;
typedef pcl::PointXY Point2d;
typedef pcl::PointCloud<Point2d> PointCloud2d;
typedef pcl::PointCloud<Point2d>::Ptr PointCloud2dPtr;
typedef pcl::PointCloud<Point2d>::ConstPtr PointCloud2dConstPtr;
struct PointXYZIH;
typedef PointXYZIH Point;
typedef pcl::PointCloud<Point> PointCloud;
typedef pcl::PointCloud<Point>::Ptr PointCloudPtr;
typedef pcl::PointCloud<Point>::ConstPtr PointCloudConstPtr;
typedef pcl::PointXYZRGB CPoint;
typedef pcl::PointCloud<CPoint> CPointCloud;
typedef pcl::PointCloud<CPoint>::Ptr CPointCloudPtr;
typedef pcl::PointCloud<CPoint>::ConstPtr CPointCloudConstPtr;
typedef pcl::KdTreeFLANN<Point> KdTree;
struct BoundingCube {
float x; // center of box
float y; // center of box
float z; // center of box
float length;
float width;
float height;
float yaw;
float trans_x; // center of points
float trans_y; // center of points
float trans_z; // center of points
};
// using double type to define x, y, z.
struct PointD {
double x;
double y;
double z;
uint8_t intensity;
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
} EIGEN_ALIGN16;
typedef ::pcl::PointCloud<PointD> PointDCloud;
typedef ::pcl::PointCloud<PointD>::Ptr PointDCloudPtr;
typedef ::pcl::PointCloud<PointD>::ConstPtr PointDCloudConstPtr;
struct PointXYZIH {
PCL_ADD_POINT4D;
float intensity;
float h; // height from ground
PointXYZIH() {
x = 0.0f;
y = 0.0f;
z = 0.0f;
h = 0.0f;
intensity = 0.0f;
data[3] = 1.0f;
}
explicit PointXYZIH(float intensity_) {
x = 0.0f;
y = 0.0f;
z = 0.0f;
data[3] = 1.0f;
intensity = intensity_;
}
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
} EIGEN_ALIGN16;
struct PointXYZIRT {
float x;
float y;
float z;
unsigned char intensity;
unsigned char ring;
double timestamp;
EIGEN_MAKE_ALIGNED_OPERATOR_NEW // make sure our new allocators are aligned
} EIGEN_ALIGN16; // enforce SSE padding for correct memory alignment
struct PointXYZIT {
float x;
float y;
float z;
unsigned char intensity;
double timestamp;
EIGEN_MAKE_ALIGNED_OPERATOR_NEW // make sure our new allocators are aligned
} EIGEN_ALIGN16; // enforce SSE padding for correct memory alignment
struct RawPointXYZIT {
PCL_ADD_POINT4D
uint8_t intensity;
double timestamp;
EIGEN_MAKE_ALIGNED_OPERATOR_NEW // make sure our new allocators are aligned
} EIGEN_ALIGN16; // enforce SSE padding for correct memory alignment
struct PointXYZIRTd {
double x;
double y;
double z;
unsigned char intensity;
unsigned char ring;
double timestamp;
EIGEN_MAKE_ALIGNED_OPERATOR_NEW // make sure our new allocators are aligned
} EIGEN_ALIGN16; // enforce SSE padding for correct memory alignment
struct PointXYZITd {
double x;
double y;
double z;
unsigned char intensity;
double timestamp;
EIGEN_MAKE_ALIGNED_OPERATOR_NEW // make sure our new allocators are aligned
} EIGEN_ALIGN16; // enforce SSE padding for correct memory alignment
} // namespace pcl_util
} // namespace perception
} // namespace apollo
POINT_CLOUD_REGISTER_POINT_STRUCT(
apollo::perception::pcl_util::PointD,
(double, x, x)(double, y, y)(double, z, z)(uint8_t, intensity, intensity))
POINT_CLOUD_REGISTER_POINT_STRUCT(apollo::perception::pcl_util::PointXYZIH,
(float, x, x)(float, y, y)(float, z, z)(
float, intensity, intensity)(float, h, h))
POINT_CLOUD_REGISTER_POINT_STRUCT(apollo::perception::pcl_util::PointXYZIT,
(float, x, x)(float, y, y)(float, z, z)(
uint8_t, intensity,
intensity)(double, timestamp, timestamp))
POINT_CLOUD_REGISTER_POINT_STRUCT(apollo::perception::pcl_util::RawPointXYZIT,
(float, x, x)(float, y, y)(float, z, z)(
uint8_t, intensity,
intensity)(double, timestamp, timestamp))
POINT_CLOUD_REGISTER_POINT_STRUCT(
apollo::perception::pcl_util::PointXYZIRT,
(float, x, x)(float, y, y)(float, z, z)(uint8_t, intensity, intensity)(
uint8_t, ring, ring)(double, timestamp, timestamp))
POINT_CLOUD_REGISTER_POINT_STRUCT(apollo::perception::pcl_util::PointXYZITd,
(double, x, x)(double, y, y)(double, z, z)(
uint8_t, intensity,
intensity)(double, timestamp, timestamp))
POINT_CLOUD_REGISTER_POINT_STRUCT(
apollo::perception::pcl_util::PointXYZIRTd,
(double, x, x)(double, y, y)(double, z, z)(uint8_t, intensity, intensity)(
uint8_t, ring, ring)(double, timestamp, timestamp))
#endif // MODULES_PERCEPTION_COMMON_PCL_TYPES_H_
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#include "modules/perception/common/perception_gflags.h"
DEFINE_string(perception_adapter_config_filename,
"modules/perception/conf/adapter.conf",
"The adapter config filename");
/// obstacle/base/object.cc
DEFINE_bool(is_serialize_point_cloud, false,
"serialize and output object cloud");
/// obstacle/onboard/hdmap_input.cc
DEFINE_double(map_radius, 60.0, "get map radius of car center");
DEFINE_int32(map_sample_step, 1, "step for sample road boundary points");
/// obstacle/onboard/lidar_process.cc
DEFINE_bool(enable_hdmap_input, false, "enable hdmap input for roi filter");
DEFINE_string(onboard_roi_filter, "DummyROIFilter", "onboard roi filter");
DEFINE_string(onboard_segmentor, "DummySegmentation", "onboard segmentation");
DEFINE_string(onboard_object_builder, "DummyObjectBuilder",
"onboard object builder");
DEFINE_string(onboard_object_filter, "DummyObjectFilter",
"onboard object filter");
DEFINE_string(onboard_tracker, "DummyTracker", "onboard tracker");
DEFINE_string(onboard_type_fuser, "DummyTypeFuser", "onboard type fuser");
DEFINE_int32(tf2_buff_in_ms, 10, "the tf2 buff size in ms");
DEFINE_int32(localization_buffer_size, 40, "localization buffer size");
DEFINE_string(lidar_tf2_frame_id, "novatel", "the tf2 transform frame id");
DEFINE_string(lidar_tf2_child_frame_id, "velodyne64",
"the tf2 transform child frame id");
DEFINE_string(camera_tf2_frame_id, "velodyne64", "the tf2 transform frame id");
DEFINE_string(camera_tf2_child_frame_id, "camera",
"the tf2 transform child frame id");
DEFINE_string(obstacle_module_name, "perception_obstacle",
"perception obstacle module name");
DEFINE_bool(enable_visualization, false, "enable visualization for debug");
/// obstacle/perception.cc
/* dag streaming config for Apollo 2.0 */
DEFINE_string(dag_config_path, "modules/perception/conf/dag_streaming.config",
"Onboard DAG Streaming config.");
/// obstacle/onboard/radar_process_subnode.cc
DEFINE_string(onboard_radar_detector, "DummyRadarDetector",
"onboard radar detector");
DEFINE_string(radar_tf2_frame_id, "novatel", "the tf2 transform frame id");
DEFINE_string(radar_tf2_child_frame_id, "radar",
"the tf2 transform child frame id");
DEFINE_double(front_radar_forward_distance, 120.0,
"get front radar forward distance");
DEFINE_string(radar_extrinsic_file,
"modules/perception/data/params/radar_extrinsics.yaml",
"radar extrinsic file");
DEFINE_string(short_camera_extrinsic_file,
"modules/perception/data/params/short_camera_extrinsics.yaml",
"short_camera extrinsic file");
/// obstacle/onboard/camera_process_subnode.cc
// Ex: /apollo/modules/perception/data/yolo_camera_detector_test/test.jpg
DEFINE_string(image_file_path, "", "Debug image file");
DEFINE_bool(image_file_debug, false, "Debug ROS to CV image");
/// modules/perception/lib/config_manager/calibration_config_manager.cc
DEFINE_string(front_camera_extrinsics_file,
"modules/perception/data/params/front_camera_extrinsics.yaml",
"front_camera extrinsic file");
DEFINE_string(front_camera_intrinsics_file,
"modules/perception/data/params/front_camera_intrinsics.yaml",
"front_camera intrinsic file");
/// obstacle/onboard/fusion_subnode.cc
DEFINE_string(onboard_fusion, "ProbabilisticFusion",
"fusion name which enabled onboard");
DEFINE_double(query_signal_range, 100.0, "max distance to front signals");
DEFINE_bool(output_raw_img, false, "write raw image to disk");
DEFINE_bool(output_debug_img, false, "write debug image to disk");
/// Temporarily change Kalman motion fusion to config here.
DEFINE_double(q_matrix_coefficient_amplifier, 0.5,
"Kalman filter matrix Q coefficients");
DEFINE_double(r_matrix_amplifier, 1, "Kalman filter matrix r coefficients");
DEFINE_double(p_matrix_amplifier, 1, "Kalman filter matrix p coefficients");
DEFINE_double(a_matrix_covariance_coeffcient_1, 0.05,
"Kalman filter matrix a coefficients, a_matrix_(0, 2)");
DEFINE_double(a_matrix_covariance_coeffcient_2, 0.05,
"Kalman filter matrix a coefficients, a_matrix_(1, 3)");
/// calibration_config_manager.cc
DEFINE_int32(obs_camera_detector_gpu, 0, "device id for camera detector");
// obstacle/onboard/lane_post_processing_subnode.cc
DEFINE_string(onboard_lane_post_processor, "CCLanePostProcessor",
"onboard lane post-processing algorithm name");
/// visualization
DEFINE_bool(show_radar_objects, false, "");
DEFINE_bool(show_camera_objects2d, false, "");
DEFINE_bool(show_camera_objects, false, "");
DEFINE_bool(show_camera_parsing, false, "");
DEFINE_bool(show_fused_objects, false, "");
DEFINE_bool(show_fusion_association, false, "");
DEFINE_bool(capture_screen, false, "");
DEFINE_string(screen_output_dir, "./", "");
DEFINE_string(frame_visualizer, "GLFusionVisualizer", "");
DEFINE_bool(async_fusion, false, "enable async fusion using imf");
DEFINE_bool(use_distance_angle_fusion, true,
"use distance angle prob distance in fusion");
DEFINE_bool(publish_fusion_event, false, "publish fusion event");
DEFINE_bool(bag_mode, false, "run perception in bag mode");
DEFINE_bool(show_motion, false, "visualize motion and object trajectories");
DEFINE_bool(skip_camera_frame, false, "skip camera frame");
DEFINE_int32(camera_hz, 30, "camera hz");
DEFINE_string(fusion_publish_sensor_id, "velodyne_64", "fusion publish id");
DEFINE_int32(pbf_fusion_assoc_distance_percent, 20, "fusion distance percent");
DEFINE_double(pbf_distance_speed_cos_diff, 0.5, "fusion velocity cosine diff");
DEFINE_string(cc_lane_post_processor_config_file,
"modules/perception/model/camera/lane_post_process_config.pb.txt",
"The config file of cc_lane_post_processor.");
DEFINE_string(probabilistic_fusion_config_file,
"modules/perception/model/probabilistic_fusion_config.pb.txt",
"The config file of probabilistic_fusion.");
DEFINE_string(yolo_config_filename, "config.pt", "Yolo config filename.");
DEFINE_string(
yolo_camera_detector_config,
"modules/perception/model/camera/yolo_camera_detector_config.pb.txt",
"Yolo camera detector config filename.");
DEFINE_bool(use_whole_lane_line, false, "Use whole lane line model or not");
DEFINE_string(modest_radar_detector_config,
"modules/perception/model/modest_radar_detector_config.pb.txt",
"modest radar detector config filename.");
DEFINE_string(tracker_config, "modules/perception/model/tracker_config.pb.txt",
"tracker config filename.");
DEFINE_string(sequence_type_fuser_config,
"modules/perception/model/sequence_type_fuser_config.pb.txt",
"sequence_type_fuser config filename.");
DEFINE_string(async_fusion_config,
"modules/perception/model/async_fusion_config.pb.txt",
"async_fuser config filename.");
DEFINE_string(
geometry_camera_converter_config,
"modules/perception/model/camera/geometry_camera_converter_config.pb.txt",
"geometry_camera_converter config filename.");
DEFINE_string(cnn_segmentation_config,
"modules/perception/model/cnn_segmentation_config.pb.txt",
"cnn segmentation config filename.");
DEFINE_string(hdmap_roi_filter_config,
"modules/perception/model/hdmap_roi_filter_config.pb.txt",
"hdmap ROI filter config filename.");
DEFINE_string(low_object_filter_config,
"modules/perception/model/low_object_filter_config.pb.txt",
"low object filter config filename.");
DEFINE_string(traffic_light_multi_camera_projection_config,
"modules/perception/model/traffic_light/"
"multi_camera_projection_config.pb.txt",
"traffic light multi camera projection config filename.");
DEFINE_string(traffic_light_recognizer_config,
"modules/perception/model/traffic_light/"
"recognizer_config.pb.txt",
"traffic light recognizer config filename.");
DEFINE_string(traffic_light_preprocessor_config,
"modules/perception/model/traffic_light/"
"preprocessor_config.pb.txt",
"traffic light preprocessor config filename.");
DEFINE_string(traffic_light_rectifier_config,
"modules/perception/model/traffic_light/"
"rectifier_config.pb.txt",
"traffic light rectifier config filename.");
DEFINE_string(traffic_light_reviser_config,
"modules/perception/model/traffic_light/"
"reviser_config.pb.txt",
"traffic light reviser config filename.");
DEFINE_string(traffic_light_subnode_config,
"modules/perception/model/traffic_light/"
"subnode_config.pb.txt",
"traffic light subnode config filename.");
DEFINE_double(light_height_adjust, 0, "adjust height without changing code");
DEFINE_string(traffic_light_rectifier, "",
"the rectifier enabled for traffic_light");
DEFINE_string(traffic_light_recognizer, "",
"the recognizer enabled for traffic_light");
DEFINE_string(traffic_light_reviser, "",
"the reviser enabled for traffic_light");
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#ifndef MODULES_PERCEPTION_COMMON_PERCEPTION_GFLAGS_H_
#define MODULES_PERCEPTION_COMMON_PERCEPTION_GFLAGS_H_
#include "gflags/gflags.h"
#include "modules/common/configs/config_gflags.h"
DECLARE_string(perception_adapter_config_filename);
/// obstacle/base/object.cc
DECLARE_bool(is_serialize_point_cloud);
/// obstacle/onboard/hdmap_input.cc
DECLARE_double(map_radius);
DECLARE_int32(map_sample_step);
/// obstacle/onboard/lidar_process_subnode.cc
DECLARE_bool(enable_hdmap_input);
DECLARE_string(onboard_roi_filter);
DECLARE_string(onboard_segmentor);
DECLARE_string(onboard_object_builder);
DECLARE_string(onboard_object_filter);
DECLARE_string(onboard_tracker);
DECLARE_string(onboard_type_fuser);
DECLARE_int32(tf2_buff_in_ms);
DECLARE_string(lidar_tf2_frame_id);
DECLARE_string(lidar_tf2_child_frame_id);
DECLARE_string(camera_tf2_frame_id);
DECLARE_string(camera_tf2_child_frame_id);
DECLARE_string(obstacle_module_name);
DECLARE_bool(enable_visualization);
/// obstacle/onboard/radar_process_subnode.cc
DECLARE_double(front_radar_forward_distance);
DECLARE_string(onboard_radar_detector);
DECLARE_int32(localization_buffer_size);
DECLARE_string(radar_tf2_frame_id);
DECLARE_string(radar_tf2_child_frame_id);
DECLARE_string(radar_extrinsic_file);
DECLARE_string(short_camera_extrinsic_file);
/// obstacle/onboard/camera_process_subnode.cc
DECLARE_string(image_file_path);
DECLARE_bool(image_file_debug);
/// camera config
DECLARE_string(front_camera_extrinsics_file);
DECLARE_string(front_camera_intrinsics_file);
/// obstacle/onboard/fusion_subnode.cc
DECLARE_string(onboard_fusion);
/// traffic_light/onboard/preprocessor.cc
DECLARE_double(query_signal_range);
DECLARE_bool(output_raw_img);
DECLARE_bool(output_debug_img);
/// perception.cc
DECLARE_string(dag_config_path);
/// pbf_kalman_motion_fusion.cc
DECLARE_double(q_matrix_coefficient_amplifier);
DECLARE_double(r_matrix_amplifier);
DECLARE_double(p_matrix_amplifier);
DECLARE_double(a_matrix_covariance_coeffcient_1);
DECLARE_double(a_matrix_covariance_coeffcient_2);
/// calibration_config_manager.cc
DECLARE_int32(obs_camera_detector_gpu);
// obstacle/onboard/lane_post_processing_subnode.cc
DECLARE_string(onboard_lane_post_processor);
/// visualization
DECLARE_bool(show_camera_objects);
DECLARE_bool(show_radar_objects);
DECLARE_bool(show_fused_objects);
DECLARE_bool(show_fusion_association);
DECLARE_bool(capture_screen);
DECLARE_string(screen_output_dir);
DECLARE_bool(show_camera_objects2d);
DECLARE_bool(show_camera_parsing);
DECLARE_string(frame_visualizer);
DECLARE_bool(show_motion);
// async fusion using imf
DECLARE_bool(async_fusion);
DECLARE_bool(use_distance_angle_fusion);
DECLARE_bool(publish_fusion_event);
DECLARE_bool(bag_mode);
DECLARE_bool(skip_camera_frame);
DECLARE_int32(camera_hz);
DECLARE_string(fusion_publish_sensor_id);
DECLARE_int32(pbf_fusion_assoc_distance_percent);
DECLARE_double(pbf_distance_speed_cos_diff);
DECLARE_string(cc_lane_post_processor_config_file);
DECLARE_string(probabilistic_fusion_config_file);
DECLARE_string(yolo_config_filename);
DECLARE_string(yolo_camera_detector_config);
DECLARE_bool(use_whole_lane_line);
DECLARE_string(modest_radar_detector_config);
DECLARE_string(tracker_config);
DECLARE_string(sequence_type_fuser_config);
DECLARE_string(async_fusion_config);
DECLARE_string(geometry_camera_converter_config);
DECLARE_string(cnn_segmentation_config);
DECLARE_string(hdmap_roi_filter_config);
DECLARE_string(low_object_filter_config);
DECLARE_string(traffic_light_multi_camera_projection_config);
DECLARE_string(traffic_light_recognizer_config);
DECLARE_string(traffic_light_preprocessor_config);
DECLARE_string(traffic_light_rectifier_config);
DECLARE_string(traffic_light_reviser_config);
DECLARE_string(traffic_light_subnode_config);
DECLARE_double(light_height_adjust);
DECLARE_string(traffic_light_rectifier);
DECLARE_string(traffic_light_recognizer);
DECLARE_string(traffic_light_reviser);
#endif // MODULES_PERCEPTION_COMMON_PERCEPTION_GFLAGS_H_
load("//tools:cpplint.bzl", "cpplint")
package(default_visibility = ["//visibility:public"])
cc_library(
name = "sequence_type_fuser",
srcs = [
"fuser_util.cc",
"sequence_type_fuser.cc",
],
hdrs = [
"base_type_fuser.h",
"fuser_util.h",
"sequence_type_fuser.h",
],
deps = [
":object_sequence",
"//modules/common",
"//framework:cybertron",
"//modules/perception/common",
"//modules/perception/common:pcl_util",
"//modules/perception/lib/base",
"//modules/perception/lib/config_manager",
"//modules/perception/obstacle/common",
"//modules/perception/proto:sequence_type_fuser_config_lib_proto",
"@eigen",
],
)
cc_library(
name = "object_sequence",
srcs = [
"object_sequence.cc",
],
hdrs = [
"object_sequence.h",
],
deps = [
"//modules/common",
"//framework:cybertron",
"//modules/perception/common",
"//modules/perception/common:pcl_util",
"//modules/perception/lib/base",
"//modules/perception/lib/config_manager",
"//modules/perception/obstacle/common",
"//modules/perception/obstacle/lidar/interface",
"@eigen",
],
)
cc_test(
name = "object_sequence_test",
size = "small",
srcs = [
"object_sequence_test.cc",
],
deps = [
":object_sequence",
"@gtest//:main",
],
)
cc_test(
name = "sequence_type_fuser_test",
size = "small",
srcs = [
"sequence_type_fuser_test.cc",
],
data = [
"//modules/perception:perception_data",
"//modules/perception:perception_model",
"//modules/perception/conf:perception_config",
],
linkopts = [
"-lqhull",
],
deps = [
":sequence_type_fuser",
"//framework:cybertron",
"//modules/perception/lib/config_manager",
"//modules/perception/obstacle/lidar/object_builder/min_box",
"//modules/perception/obstacle/lidar/tracker/hm_tracker",
"@gtest//:main",
"@pcl",
],
)
cpplint()
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#ifndef MODULES_PERCEPTION_COMMON_SEQUENCE_TYPE_FUSER_BASE_TYPE_FUSER_H_
#define MODULES_PERCEPTION_COMMON_SEQUENCE_TYPE_FUSER_BASE_TYPE_FUSER_H_
// SAMPLE CODE:
//
// class MyTypeFuser : public BaseTypeFuser {
// public:
// MyTypeFuser() : BaseTypeFuser() {}
// virtual ~MyTypeFuser() {}
//
// virtual bool Init() override {
// // Do something.
// return true;
// }
//
// virtual bool FuseType(const TypeFuserOptions& options,
// std::vector<std::shared_ptr<Object>>* objects)
// override {
// // Do something.
// return true;
// }
//
// virtual std::string name() const override {
// return "MyTypeFuser";
// }
//
// };
//
// // Register plugin.
// REGISTER_TYPEFUSER(MyTypeFuser);
////////////////////////////////////////////////////////
// USING CODE:
//
// BaseTypeFuser* type_fuser =
// BaseTypeFuserRegisterer::GetInstanceByName("MyTypeFuser");
// using type_fuser to do something.
#include <memory>
#include <string>
#include <vector>
#include "modules/common/macro.h"
#include "modules/perception/lib/base/registerer.h"
#include "modules/perception/obstacle/base/object.h"
namespace apollo {
namespace perception {
struct TypeFuserOptions {
double timestamp = 0.0;
};
class BaseTypeFuser {
public:
/**
* @brief Constructor
*/
BaseTypeFuser() {}
/**
 * @brief Destructor
*/
virtual ~BaseTypeFuser() {}
/**
* @brief Initialize configuration
* @return True if initialize successfully, false otherwise
*/
virtual bool Init() = 0;
/**
* @brief Fuse type for each object
* @param options Some algorithm options
* @param objects The objects with initial object type
* @return True if fuse type successfully, false otherwise
*/
virtual bool FuseType(const TypeFuserOptions& options,
std::vector<std::shared_ptr<Object>>* objects) = 0;
/**
* @brief Get module name
* @return Name of module
*/
virtual std::string name() const = 0;
private:
DISALLOW_COPY_AND_ASSIGN(BaseTypeFuser);
};
REGISTER_REGISTERER(BaseTypeFuser);
#define REGISTER_TYPEFUSER(name) REGISTER_CLASS(BaseTypeFuser, name)
} // namespace perception
} // namespace apollo
#endif // MODULES_PERCEPTION_COMMON_SEQUENCE_TYPE_FUSER_BASE_TYPE_FUSER_H_
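REGISTER_REGISTERER and REGISTER_TYPEFUSER expand to a self-registering factory; the macros themselves live in modules/perception/lib/base/registerer.h and are not shown in this commit. A rough sketch of the underlying pattern — every name below (`Fuser`, `FuserRegistry`, `REGISTER_FUSER`, `GetInstanceByName`) is illustrative, not Apollo's actual macro expansion:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <memory>
#include <string>

// Toy stand-in for BaseTypeFuser so the sketch compiles on its own.
class Fuser {
 public:
  virtual ~Fuser() = default;
  virtual std::string name() const = 0;
};

// Global name -> factory map, analogous to what REGISTER_REGISTERER sets up.
std::map<std::string, std::function<std::unique_ptr<Fuser>()>>&
FuserRegistry() {
  static std::map<std::string, std::function<std::unique_ptr<Fuser>()>> reg;
  return reg;
}

// Analog of REGISTER_TYPEFUSER(name): a namespace-scope initializer that
// inserts a factory into the registry before main() runs.
#define REGISTER_FUSER(cls)                       \
  static const bool cls##_registered = [] {       \
    FuserRegistry()[#cls] = [] {                  \
      return std::unique_ptr<Fuser>(new cls());   \
    };                                            \
    return true;                                  \
  }()

class MyTypeFuser : public Fuser {
 public:
  std::string name() const override { return "MyTypeFuser"; }
};
REGISTER_FUSER(MyTypeFuser);

// Analog of BaseTypeFuserRegisterer::GetInstanceByName.
std::unique_ptr<Fuser> GetInstanceByName(const std::string& n) {
  auto it = FuserRegistry().find(n);
  return it == FuserRegistry().end() ? nullptr : it->second();
}
```

The point of the pattern is that adding a new fuser only requires defining the class and one registration line; no central switch statement has to be edited.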
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#include "modules/perception/common/sequence_type_fuser/fuser_util.h"
#include <cfloat>
#include <cmath>
#include <iomanip>
#include "cybertron/common/log.h"
namespace apollo {
namespace perception {
namespace fuser_util {
void FromStdVector(const std::vector<float>& src_prob, Vectord* dst_prob) {
  (*dst_prob)(0) = src_prob[0];
  const std::size_t max_type =
      static_cast<std::size_t>(ObjectType::MAX_OBJECT_TYPE);
  for (std::size_t i = 3; i < max_type; ++i) {
    (*dst_prob)(i - 2) = static_cast<double>(src_prob[i]);
  }
}
void FromEigenVector(const Vectord& src_prob, std::vector<float>* dst_prob) {
  dst_prob->assign(static_cast<int>(ObjectType::MAX_OBJECT_TYPE), 0.0f);
  dst_prob->at(0) = static_cast<float>(src_prob(0));
  const std::size_t max_type =
      static_cast<std::size_t>(ObjectType::MAX_OBJECT_TYPE);
  for (std::size_t i = 3; i < max_type; ++i) {
    dst_prob->at(i) = static_cast<float>(src_prob(i - 2));
  }
}
void ToLog(Vectord* prob) {
for (std::size_t i = 0; i < VALID_OBJECT_TYPE; ++i) {
(*prob)(i) = log((*prob)(i));
}
}
void ToExp(Vectord* prob) {
double min_value = prob->minCoeff();
for (std::size_t i = 0; i < VALID_OBJECT_TYPE; ++i) {
(*prob)(i) = exp((*prob)(i)-min_value);
}
}
void Normalize(Vectord* prob) {
double sum = prob->sum();
sum = sum < 1e-9 ? 1e-9 : sum;
*prob /= sum;
}
void NormalizeRow(Matrixd* prob) {
for (std::size_t row = 0; row < VALID_OBJECT_TYPE; ++row) {
double sum = 0.0;
for (std::size_t col = 0; col < VALID_OBJECT_TYPE; ++col) {
sum += (*prob)(row, col);
}
sum = sum < 1e-9 ? 1e-9 : sum;
for (std::size_t col = 0; col < VALID_OBJECT_TYPE; ++col) {
(*prob)(row, col) /= sum;
}
}
}
void PrintProbability(const std::vector<float>& prob, const std::string& name) {
ADEBUG << name << ": ";
float max_prob = -FLT_MAX;
std::size_t max_id = 0;
for (std::size_t i = 0; i < prob.size(); ++i) {
ADEBUG << std::setprecision(3) << prob[i] << " ";
if (prob[i] > max_prob) {
max_prob = prob[i];
max_id = i;
}
}
ADEBUG << " max_type: " << max_id << std::endl;
}
bool LoadSingleMatrix(std::ifstream& fin, Matrixd* matrix) {
for (std::size_t row = 0; row < VALID_OBJECT_TYPE; ++row) {
for (std::size_t col = 0; col < VALID_OBJECT_TYPE; ++col) {
fin >> (*matrix)(row, col);
}
}
return !fin.fail();
}
bool LoadSingleMatrixFile(const std::string& filename, Matrixd* matrix) {
if (matrix == nullptr) {
return false;
}
std::ifstream fin(filename);
if (!fin.is_open()) {
AERROR << "Fail to open file: " << filename;
return false;
}
if (!LoadSingleMatrix(fin, matrix)) {
AERROR << "Fail to load matrix from file: " << filename;
return false;
}
fin.close();
return true;
}
bool LoadMultipleMatricesFile(
const std::string& filename,
std::unordered_map<std::string, Matrixd>* matrices) {
if (matrices == nullptr) {
return false;
}
std::ifstream fin(filename);
if (!fin.is_open()) {
AERROR << "Fail to open file: " << filename;
return false;
}
matrices->clear();
std::size_t num = 0;
fin >> num;
for (std::size_t i = 0; i < num; ++i) {
std::string name;
fin >> name;
Matrixd matrix;
LoadSingleMatrix(fin, &matrix);
matrices->emplace(name, matrix);
}
fin.close();
return true;
}
} // namespace fuser_util
} // namespace perception
} // namespace apollo
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#ifndef MODULES_PERCEPTION_COMMON_SEQUENCE_TYPE_FUSER_FUSER_UTIL_H_
#define MODULES_PERCEPTION_COMMON_SEQUENCE_TYPE_FUSER_FUSER_UTIL_H_
#include <fstream>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>
#include "Eigen/Dense"
#include "modules/perception/obstacle/base/types.h"
namespace apollo {
namespace perception {
enum {
VALID_OBJECT_TYPE = static_cast<int>(ObjectType::MAX_OBJECT_TYPE) - 2,
};
typedef Eigen::Matrix<double, VALID_OBJECT_TYPE, 1> Vectord;
typedef Eigen::Matrix<int, VALID_OBJECT_TYPE, 1> Vectori;
typedef Eigen::Matrix<double, VALID_OBJECT_TYPE, VALID_OBJECT_TYPE> Matrixd;
namespace fuser_util {
/**
* @brief Convert probability format from std to eigen vector
* @param src_prob Input probability stored in std vector
* @param dst_prob Output probability stored in eigen vector
*/
void FromStdVector(const std::vector<float>& src_prob, Vectord* dst_prob);
/**
* @brief Convert probability format from eigen to std vector
* @param src_prob Input probability stored in eigen vector
* @param dst_prob Output probability stored in std vector
*/
void FromEigenVector(const Vectord& src_prob, std::vector<float>* dst_prob);
/**
* @brief Transform probability into log space
* @param prob In/out probability, updated in place
*/
void ToLog(Vectord* prob);
/**
* @brief Transform log-space probability back via exponentiation
* @param prob In/out probability, updated in place
*/
void ToExp(Vectord* prob);
/**
* @brief Normalize the probability so that it sums to one
* @param prob In/out probability, updated in place
*/
void Normalize(Vectord* prob);
/**
* @brief Normalize each row of the probability matrix
* @param prob In/out probability matrix, updated in place
*/
void NormalizeRow(Matrixd* prob);
/**
* @brief Print probability
* @param prob Probability to be printed
* @param name Name of probability to be printed
*/
void PrintProbability(const std::vector<float>& prob, const std::string& name);
/**
* @brief Load a matrix from input file stream
* @param fin The input file stream
* @param matrix The loaded Matrix
* @return True if load successfully, false otherwise
*/
bool LoadSingleMatrix(std::ifstream& fin, Matrixd* matrix);
/**
* @brief Load a matrix from file
* @param filename The file name
* @param matrix The loaded Matrix
* @return True if load successfully, false otherwise
*/
bool LoadSingleMatrixFile(const std::string& filename, Matrixd* matrix);
/**
* @brief Load multiple matrices from file
* @param filename The file name
* @param matrices The loaded Matrices
* @return True if load successfully, false otherwise
*/
bool LoadMultipleMatricesFile(
const std::string& filename,
std::unordered_map<std::string, Matrixd>* matrices);
} // namespace fuser_util
} // namespace perception
} // namespace apollo
#endif // MODULES_PERCEPTION_COMMON_SEQUENCE_TYPE_FUSER_FUSER_UTIL_H_
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#include "modules/perception/common/sequence_type_fuser/object_sequence.h"
#include <utility>
#include "cybertron/common/log.h"
namespace apollo {
namespace perception {
bool ObjectSequence::AddTrackedFrameObjects(
const std::vector<std::shared_ptr<Object>>& objects, double timestamp) {
std::lock_guard<std::mutex> lock(mutex_);
for (const auto& obj : objects) {
int& track_id = obj->track_id;
auto iter = sequence_.find(track_id);
if (iter == sequence_.end()) {
auto res = sequence_.insert(std::make_pair(
track_id, std::map<int64_t, std::shared_ptr<Object>>()));
if (!res.second) {
AERROR << "Fail to insert track.";
return false;
}
iter = res.first;
}
auto res =
iter->second.insert(std::make_pair(DoubleToMapKey(timestamp), obj));
if (!res.second) {
AERROR << "Fail to insert object.";
return false;
}
}
RemoveStaleTracks(timestamp);
current_ = timestamp;
return true;
}
bool ObjectSequence::GetTrackInTemporalWindow(
int track_id, std::map<int64_t, std::shared_ptr<Object>>* track,
double window_time) {
if (track == nullptr) {
return false;
}
track->clear();
std::lock_guard<std::mutex> lock(mutex_);
double start_time = current_ - window_time;
auto iter = sequence_.find(track_id);
if (iter == sequence_.end()) {
return false;
}
for (auto& tobj : iter->second) {
if (MapKeyToDouble(tobj.first) >= start_time) {
track->insert(tobj);
}
}
return true;
}
void ObjectSequence::RemoveStaleTracks(double current_stamp) {
for (auto outer_iter = sequence_.begin(); outer_iter != sequence_.end();) {
CHECK(outer_iter->second.size() > 0) << "Find empty tracks.";
auto& track = outer_iter->second;
if (current_stamp - MapKeyToDouble(track.rbegin()->first) >
s_max_time_out_) {
sequence_.erase(outer_iter++);
continue;
}
for (auto inner_iter = track.begin(); inner_iter != track.end();) {
if (current_stamp - MapKeyToDouble(inner_iter->first) > s_max_time_out_) {
track.erase(inner_iter++);
} else {
break;  // keys are time-ordered; the remaining objects are recent enough
}
}
if (track.empty()) {  // all elements removed
sequence_.erase(outer_iter++);
} else {
++outer_iter;
}
}
}
} // namespace perception
} // namespace apollo
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#ifndef MODULES_PERCEPTION_COMMON_SEQUENCE_TYPE_FUSER_OBJECT_SEQUENCE_H_
#define MODULES_PERCEPTION_COMMON_SEQUENCE_TYPE_FUSER_OBJECT_SEQUENCE_H_
#include <map>
#include <memory>
#include <mutex>
#include <vector>
#include "modules/perception/obstacle/base/object.h"
namespace apollo {
namespace perception {
class ObjectSequence {
public:
/**
* @brief Constructor
*/
ObjectSequence() = default;
/**
* @brief Destructor
*/
~ObjectSequence() = default;
/**
* @brief Add objects of one single frame to the current sequence
* @param objects The frame objects to be added to the stored sequence
* @param timestamp The frame timestamp
* @return True if add successfully, false otherwise
*/
bool AddTrackedFrameObjects(
const std::vector<std::shared_ptr<Object>>& objects, double timestamp);
/**
* @brief Get tracked objects in a time window
* @param track_id The track id of object sequence
* @param track The output tracked objects
* @param window_time The time interval
* @return True if get track successfully, false otherwise
*/
bool GetTrackInTemporalWindow(
int track_id, std::map<int64_t, std::shared_ptr<Object>>* track,
double window_time);
protected:
/**
* @brief Remove too old tracks
* @param current_stamp Current timestamp
*/
void RemoveStaleTracks(double current_stamp);
private:
const double kEps = 1e-9;
int64_t DoubleToMapKey(const double d) {
return static_cast<int64_t>(d / kEps);
}
double MapKeyToDouble(const int64_t key) { return key * kEps; }
double current_ = 0.0;
std::map<int, std::map<int64_t, std::shared_ptr<Object>>> sequence_;
std::mutex mutex_;
static constexpr double s_max_time_out_ = 5.0; // 5 seconds
};
} // namespace perception
} // namespace apollo
#endif // MODULES_PERCEPTION_COMMON_SEQUENCE_TYPE_FUSER_OBJECT_SEQUENCE_H_
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#include "modules/perception/common/sequence_type_fuser/object_sequence.h"
#include "gtest/gtest.h"
namespace apollo {
namespace perception {
class ObjectSequenceTest : public testing::Test {
protected:
ObjectSequenceTest() {}
virtual ~ObjectSequenceTest() {}
void SetUp() {}
void TearDown() {}
void BuildObjects();
protected:
ObjectSequence sequence_;
std::vector<std::vector<std::shared_ptr<Object>>> objects_;
std::vector<double> timestamps_;
};
void ObjectSequenceTest::BuildObjects() {
static const int num = 10;
objects_.resize(num);
timestamps_.resize(num);
for (int i = 0; i < num; ++i) {
timestamps_[i] = static_cast<double>(i) * 0.1;
objects_[i].resize(2);
objects_[i][0].reset(new Object);
objects_[i][0]->track_id = 0;
objects_[i][1].reset(new Object);
objects_[i][1]->track_id = i + 1;
}
timestamps_.back() = 5.8;
}
TEST_F(ObjectSequenceTest, TestAddAndGet) {
BuildObjects();
EXPECT_EQ(objects_.size(), timestamps_.size());
for (std::size_t i = 0; i < objects_.size() - 1; ++i) {
sequence_.AddTrackedFrameObjects(objects_[i], timestamps_[i]);
}
double window_time = 5.0;
std::map<int64_t, std::shared_ptr<Object>> tracked_objects;
sequence_.GetTrackInTemporalWindow(0, &tracked_objects, window_time);
EXPECT_EQ(tracked_objects.size(), 9);
sequence_.GetTrackInTemporalWindow(1, &tracked_objects, window_time);
EXPECT_EQ(tracked_objects.size(), 1);
sequence_.AddTrackedFrameObjects(objects_.back(), timestamps_.back());
sequence_.GetTrackInTemporalWindow(0, &tracked_objects, window_time);
EXPECT_EQ(tracked_objects.size(), 2);
// window_time = 9.5;
sequence_.GetTrackInTemporalWindow(10, &tracked_objects, window_time);
EXPECT_EQ(tracked_objects.size(), 1);
sequence_.GetTrackInTemporalWindow(9, &tracked_objects, window_time);
EXPECT_EQ(tracked_objects.size(), 1);
EXPECT_FALSE(
sequence_.GetTrackInTemporalWindow(8, &tracked_objects, window_time));
EXPECT_FALSE(
sequence_.GetTrackInTemporalWindow(1, &tracked_objects, window_time));
}
} // namespace perception
} // namespace apollo
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#include "modules/perception/common/sequence_type_fuser/sequence_type_fuser.h"
#include <algorithm>
#include <cfloat>
#include <cmath>
#include "cybertron/common/log.h"
#include "modules/common/util/file.h"
#include "modules/perception/common/perception_gflags.h"
namespace apollo {
namespace perception {
using apollo::common::util::GetAbsolutePath;
using apollo::common::util::GetProtoFromFile;
bool SequenceTypeFuser::Init() {
if (!GetProtoFromFile(FLAGS_sequence_type_fuser_config, &config_)) {
AERROR << "Cannot get config proto from file: "
<< FLAGS_sequence_type_fuser_config;
return false;
}
// get the transition matrix
const std::string& transition_property_file_path =
config_.transition_property_file_path();
if (!fuser_util::LoadSingleMatrixFile(transition_property_file_path,
&transition_matrix_)) {
AERROR << "Fail to load single matrix file from: "
<< transition_property_file_path;
return false;
}
transition_matrix_ += Matrixd::Ones() * 1e-6;
// NormalizeRow normalizes every row in one pass, so a single call suffices.
fuser_util::NormalizeRow(&transition_matrix_);
ADEBUG << "transition matrix\n" << transition_matrix_;
for (std::size_t i = 0; i < VALID_OBJECT_TYPE; ++i) {
for (std::size_t j = 0; j < VALID_OBJECT_TYPE; ++j) {
transition_matrix_(i, j) = log(transition_matrix_(i, j));
}
}
ADEBUG << std::endl << transition_matrix_;
// get classifier property
const std::string& classifiers_property_file_path =
config_.classifiers_property_file_path();
if (!fuser_util::LoadMultipleMatricesFile(classifiers_property_file_path,
&smooth_matrices_)) {
AERROR << "Fail to load multiple matrices from file: "
<< classifiers_property_file_path;
return false;
}
for (auto& pair : smooth_matrices_) {
fuser_util::NormalizeRow(&pair.second);
pair.second.transposeInPlace();
ADEBUG << "Source: " << pair.first << "\n\n" << pair.second;
}
confidence_smooth_matrix_ = Matrixd::Identity();
auto iter = smooth_matrices_.find("Confidence");
if (iter != smooth_matrices_.end()) {
confidence_smooth_matrix_ = iter->second;
smooth_matrices_.erase(iter);
}
ADEBUG << "Confidence: \n" << confidence_smooth_matrix_;
return true;
}
bool SequenceTypeFuser::FuseType(
const TypeFuserOptions& options,
std::vector<std::shared_ptr<Object>>* objects) {
if (objects == nullptr) {
return false;
}
if (options.timestamp > 0.0) {
sequence_.AddTrackedFrameObjects(*objects, options.timestamp);
std::map<int64_t, std::shared_ptr<Object>> tracked_objects;
for (auto& object : *objects) {
if (object->is_background) {
object->type_probs.assign(static_cast<int>(ObjectType::MAX_OBJECT_TYPE),
0);
object->type = ObjectType::UNKNOWN_UNMOVABLE;
continue;
}
const int track_id = object->track_id;
sequence_.GetTrackInTemporalWindow(track_id, &tracked_objects,
config_.temporal_window());
if (tracked_objects.empty()) {
AERROR << "Found zero-length track, so skip.";
continue;
}
if (object != tracked_objects.rbegin()->second) {
AERROR << "Found out-of-order timestamps in the track, so skip.";
continue;
}
if (!FuseWithCCRF(&tracked_objects)) {
AERROR << "Failed to fuse types, so break.";
break;
}
}
}
return true;
}
bool SequenceTypeFuser::FuseWithCCRF(
std::map<int64_t, std::shared_ptr<Object>>* tracked_objects) {
if (tracked_objects == nullptr || tracked_objects->size() == 0) {
return false;
}
/// rectify object type with smooth matrices
fused_oneshot_probs_.resize(tracked_objects->size());
std::size_t i = 0;
for (auto& pair : *tracked_objects) {
std::shared_ptr<Object>& object = pair.second;
if (!RectifyObjectType(object, &fused_oneshot_probs_[i++])) {
AERROR << "Failed to fuse one shot probs in sequence.";
return false;
}
}
/// use Viterbi algorithm to infer the state
std::size_t length = tracked_objects->size();
fused_sequence_probs_.resize(length);
state_back_trace_.resize(length);
fused_sequence_probs_[0] = fused_oneshot_probs_[0];
/// add prior knowledge to suppress the sudden-appeared object types.
fused_sequence_probs_[0] += transition_matrix_.row(0).transpose();
for (std::size_t i = 1; i < length; ++i) {
for (std::size_t right = 0; right < VALID_OBJECT_TYPE; ++right) {
double max_prob = -DBL_MAX;
std::size_t id = 0;
for (std::size_t left = 0; left < VALID_OBJECT_TYPE; ++left) {
const double prob = fused_sequence_probs_[i - 1](left) +
transition_matrix_(left, right) * s_alpha_ +
fused_oneshot_probs_[i](right);
if (prob > max_prob) {
max_prob = prob;
id = left;
}
}
fused_sequence_probs_[i](right) = max_prob;
state_back_trace_[i](right) = id;
}
}
std::shared_ptr<Object> object = tracked_objects->rbegin()->second;
RecoverFromLogProb(&fused_sequence_probs_.back(), &object->type_probs,
&object->type);
return true;
}
bool SequenceTypeFuser::RectifyObjectType(const std::shared_ptr<Object>& object,
Vectord* log_prob) {
if (object == nullptr || log_prob == nullptr) {
return false;
}
log_prob->setZero();
Vectord single_prob;
fuser_util::FromStdVector(object->type_probs, &single_prob);
auto iter = smooth_matrices_.find("CNNSegClassifier");
if (iter == smooth_matrices_.end()) {
AERROR << "Failed to find CNNSegmentation classifier property.";
return false;
}
static const Vectord epsilon = Vectord::Ones() * 1e-6;
single_prob = iter->second * single_prob + epsilon;
fuser_util::Normalize(&single_prob);
double conf = object->score;
single_prob = conf * single_prob +
(1.0 - conf) * confidence_smooth_matrix_ * single_prob;
fuser_util::ToLog(&single_prob);
*log_prob += single_prob;
return true;
}
bool SequenceTypeFuser::RecoverFromLogProb(Vectord* prob,
std::vector<float>* dst,
ObjectType* type) {
fuser_util::ToExp(prob);
fuser_util::Normalize(prob);
fuser_util::FromEigenVector(*prob, dst);
*type = static_cast<ObjectType>(
std::distance(dst->begin(), std::max_element(dst->begin(), dst->end())));
return true;
}
} // namespace perception
} // namespace apollo
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#ifndef MODULES_PERCEPTION_COMMON_SEQUENCE_TYPE_FUSER_SEQUENCE_TYPE_FUSER_H_
#define MODULES_PERCEPTION_COMMON_SEQUENCE_TYPE_FUSER_SEQUENCE_TYPE_FUSER_H_
#include <map>
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>
#include "modules/perception/proto/sequence_type_fuser_config.pb.h"
#include "modules/perception/common/sequence_type_fuser/base_type_fuser.h"
#include "modules/perception/common/sequence_type_fuser/fuser_util.h"
#include "modules/perception/common/sequence_type_fuser/object_sequence.h"
namespace apollo {
namespace perception {
class SequenceTypeFuser : public BaseTypeFuser {
public:
/**
* @brief Constructor
*/
SequenceTypeFuser() {}
/**
* @brief Destructor
*/
virtual ~SequenceTypeFuser() {}
/**
* @brief Initialize configuration
* @return True if initialize successfully, false otherwise
*/
bool Init() override;
/**
* @brief Fuse type over the sequence for each object
* @param options Some algorithm options declared in BaseTypeFuser
* @param objects The objects with initial object type
* @return True if fuse type successfully, false otherwise
*/
bool FuseType(const TypeFuserOptions& options,
std::vector<std::shared_ptr<Object>>* objects) override;
/**
* @brief Get module name
* @return Name of module
*/
std::string name() const override { return "SequenceTypeFuser"; }
protected:
/**
* @brief Fuse type over object sequence by a linear-chain CRF.
* The fusion problem is modeled as inferring the discrete state
* in a chain CRF. Note, log(P({X}|O)) = sigma_i{E_unary(X_i,O)} +
* sigma_ij{E_pairwise(X_i,X_j)} - logZ,
* E_unary(X_i,O) = sigma{logP(classifier)},
* E_pairwise(X_i,X_j) = log{Transition(X_i,X_j)}.
* Maximize the sequence probability P(X_t|{X}^opt,O) based on the
* Viterbi algorithm.
* @param tracked_objects The tracked objects as a sequence
* @return True if fuse successfully, false otherwise
*/
bool FuseWithCCRF(
std::map<int64_t, std::shared_ptr<Object>>* tracked_objects);
/**
* @brief Rectify the initial object type based on smooth matrices
* @param object The object with initial type probabilities
* @param log_prob The output rectified type probabilities
* @return True if rectify successfully, false otherwise
*/
bool RectifyObjectType(const std::shared_ptr<Object>& object,
Vectord* log_prob);
/**
* @brief Recover type probabilities and object type from the input
* log probabilities
* @param prob The input type probabilities in the log space
* @param dst The output normalized type probabilities
* @param type The output object type with the max probability
* @return True if recover successfully, false otherwise
*/
bool RecoverFromLogProb(Vectord* prob, std::vector<float>* dst,
ObjectType* type);
protected:
ObjectSequence sequence_;
Matrixd transition_matrix_;
Matrixd confidence_smooth_matrix_;
std::unordered_map<std::string, Matrixd> smooth_matrices_;
// Note all probabilities are in the log space
std::vector<Vectord> fused_oneshot_probs_;
std::vector<Vectord> fused_sequence_probs_;
std::vector<Vectori> state_back_trace_;
static constexpr double s_alpha_ = 1.8;
sequence_type_fuser_config::ModelConfigs config_;
private:
DISALLOW_COPY_AND_ASSIGN(SequenceTypeFuser);
};
REGISTER_TYPEFUSER(SequenceTypeFuser);
} // namespace perception
} // namespace apollo
#endif // MODULES_PERCEPTION_COMMON_SEQUENCE_TYPE_FUSER_SEQUENCE_TYPE_FUSER_H_
/******************************************************************************
* Copyright 2017 The Apollo Authors. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*****************************************************************************/
#include "modules/perception/common/sequence_type_fuser/sequence_type_fuser.h"
#include "gtest/gtest.h"
#include "modules/perception/common/perception_gflags.h"
namespace apollo {
namespace perception {
class SequenceTypeFuserTest : public testing::Test {
protected:
SequenceTypeFuserTest() {}
virtual ~SequenceTypeFuserTest() {}
void SetUp() {
RegisterFactorySequenceTypeFuser();
fuser_.reset(new SequenceTypeFuser);
}
void TearDown() {}
void BuildObjects();
std::size_t IdMap(std::size_t i);
void GenerateSmoothProb(std::vector<float>* prob, std::size_t id, float seed);
protected:
std::shared_ptr<SequenceTypeFuser> fuser_;
std::vector<std::vector<std::shared_ptr<Object>>> objects_;
std::vector<double> timestamps_;
static const std::size_t s_sequence_length_;
static const std::size_t s_object_num_;
};
const std::size_t SequenceTypeFuserTest::s_sequence_length_ = 10;
const std::size_t SequenceTypeFuserTest::s_object_num_ = 4;
std::size_t SequenceTypeFuserTest::IdMap(std::size_t i) {
if (i == 0) {
return 0;
} else {
return i + 2;
}
}
void SequenceTypeFuserTest::GenerateSmoothProb(std::vector<float>* prob,
std::size_t id, float seed) {
float p = (1.f - seed) / (VALID_OBJECT_TYPE - 1);
for (std::size_t i = 0; i < VALID_OBJECT_TYPE; ++i) {
prob->at(IdMap(i)) = p;
}
prob->at(IdMap(id)) = seed;
}
void SequenceTypeFuserTest::BuildObjects() {
objects_.resize(s_sequence_length_);
timestamps_.resize(s_sequence_length_);
for (size_t i = 0; i < s_sequence_length_; ++i) {
timestamps_[i] = static_cast<double>(i) * 0.1;
objects_[i].resize(s_object_num_);
for (size_t j = 0; j < s_object_num_; ++j) {
objects_[i][j].reset(new Object);
objects_[i][j]->track_id = static_cast<int>(j);
objects_[i][j]->score = 0.95;
std::vector<float>& type_probs = objects_[i][j]->type_probs;
type_probs.resize(static_cast<int>(ObjectType::MAX_OBJECT_TYPE), 0.0);
GenerateSmoothProb(&type_probs, j, 0.7);
std::cout << j << "th object prob: ";
for (size_t m = 0; m < type_probs.size(); ++m) {
std::cout << type_probs[m] << " ";
}
std::cout << std::endl;
}
}
}
TEST_F(SequenceTypeFuserTest, TestFuseType) {
BuildObjects();
EXPECT_TRUE(fuser_->Init());
EXPECT_STREQ(fuser_->name().c_str(), "SequenceTypeFuser");
TypeFuserOptions options;
for (size_t i = 0; i < s_sequence_length_; ++i) {
options.timestamp = timestamps_[i];
EXPECT_TRUE(fuser_->FuseType(options, &objects_[i]));
}
for (size_t j = 0; j < s_object_num_; ++j) {
EXPECT_EQ(static_cast<std::size_t>(objects_.back()[j]->type), IdMap(j));
}
}
} // namespace perception
} // namespace apollo
package(default_visibility = ["//visibility:public"])
filegroup(
name = "perception_adapter_manager_config",
srcs = [
"adapter.conf",
],
)
filegroup(
name = "perception_config",
srcs = [
"dag_camera_obstacle.config",
"dag_camera_obstacle_lane_vis.config",
"dag_camera_obstacle_vis.config",
"dag_streaming.config",
"dag_streaming_lowcost.config",
"perception.conf",
"perception_lowcost.conf",
],
)
config: {
type: POINT_CLOUD
mode: RECEIVE_ONLY
message_history_limit: 1
}
config: {
type: VLP16_POINT_CLOUD
mode: RECEIVE_ONLY
message_history_limit: 1
}
config: {
type: LOCALIZATION
mode: RECEIVE_ONLY
message_history_limit: 1
}
config {
type: CHASSIS
mode: RECEIVE_ONLY
message_history_limit: 1
}
config: {
type: CONTI_RADAR
mode: RECEIVE_ONLY
message_history_limit: 2
}
config: {
type: IMAGE_FRONT
mode: RECEIVE_ONLY
message_history_limit: 5
}
config {
type: PERCEPTION_OBSTACLES
mode: PUBLISH_ONLY
message_history_limit: 50
}
config {
type: PERCEPTION_LANE_MASK
mode: PUBLISH_ONLY
message_history_limit: 5
}
config: {
type: IMAGE_SHORT
mode: RECEIVE_ONLY
message_history_limit: 5
}
config {
type: IMAGE_LONG
mode: RECEIVE_ONLY
message_history_limit: 5
}
config {
type: TRAFFIC_LIGHT_DETECTION
mode: PUBLISH_ONLY
message_history_limit: 50
}
config {
type: RELATIVE_MAP
mode: RECEIVE_ONLY
message_history_limit: 1
}
is_ros: true
# Nvidia Driver on host is required for this config
subnode_config {
# Camera node. publish pb with non-zero int
subnodes {
id: 3
name: "CameraProcessSubnode"
reserve: "device_id:camera;pb_obj:1;pb_ln_msk_:0;"
type: SUBNODE_IN
}
}
###################################################################
# Define all edges which link nodes.
edge_config {
}
# Shared Data
data_config {
datas {
id: 5
name: "CameraObjectData"
}
datas {
id: 7
name: "CameraSharedData"
}
}
# Nvidia Driver on host is required for this config
subnode_config {
# Camera node. publish pb with non-zero int
subnodes {
id: 3
name: "CameraProcessSubnode"
reserve: "device_id:camera;pb_obj:0;pb_ln_msk_:0;"
type: SUBNODE_IN
}
subnodes {
id: 2
name: "MotionService"
reserve: "device_id:motion_service;"
type: SUBNODE_IN
}
subnodes {
id: 5
name: "LanePostProcessingSubnode"
# reserve: "device_id:camera;publish:2;"
reserve: "device_id:camera;publish:2;motion_event_id:1021"
type: SUBNODE_NORMAL
}
# subnodes {
# id: 31
# name: "AsyncFusionSubnode"
# reserve: "camera_event_id:1009;lane_event_id:1010"
# type: SUBNODE_NORMAL
# }
# Visualization node
subnodes {
id: 41
name: "VisualizationSubnode"
# reserve: "vis_driven_event_id:1012;camera_event_id:1008;lane_event_id:1012"
reserve: "vis_driven_event_id:1012;camera_event_id:1008;motion_event_id:1020;lane_event_id:1012"
type: SUBNODE_OUT
}
}
###################################################################
# Define all edges which link nodes.
edge_config {
# CameraProcessSubnode -> VisualizationSubnode
edges {
id: 106
from_node: 3
to_node: 5
events {
id: 1004
name: "lane_postprocessing"
}
}
edges {
id: 108
from_node: 3
to_node: 41
events {
id: 1008
name: "camera_visualization"
}
}
# edges {
# id: 109
# from_node: 3
# to_node: 31
# events {
# id: 1009
# name: "camera_fusion"
# }
# }
# edges {
# id: 110
# from_node: 5
# to_node: 31
# events {
# id: 1010
# name: "lane_fusion"
# }
# }
# edges {
# id: 111
# from_node: 31
# to_node: 41
# events {
# id: 1011
# name: "fusion_visualization"
# }
# }
edges {
id: 112
from_node: 5
to_node: 41
events {
id: 1012
name: "lane_visualization"
}
}
edges {
id: 120
from_node: 2
to_node: 41
events {
id: 1020
name: "motion_visualization"
}
}
# edges {
# id: 121
# from_node: 2
# to_node: 5
# events {
# id: 1021
# name: "motion_lane"
# }
# }
}
# Shared Data
data_config {
datas {
id: 5
name: "CameraObjectData"
}
datas {
id: 7
name: "CameraSharedData"
}
datas {
id: 8
name: "LaneSharedData"
}
# datas {
# id: 9
# name: "FusionSharedData"
# }
}
# Nvidia Driver on host is required for this config
subnode_config {
#Camera node. publish pb with non-zero int
subnodes {
id: 3
name : "CameraProcessSubnode"
type : SUBNODE_IN
}
#Lane Post Processing node
subnodes {
id: 5
name: "LanePostProcessingSubnode"
reserve : "device_id:camera;publish:2;"
type: SUBNODE_NORMAL
}
#Visualization node
subnodes {
id: 41
name : "VisualizationSubnode"
reserve : "vis_driven_event_id:1010;camera_event_id:1008;lane_event_id:1010"
}
}
################################################################## #
#Define all edges which link nodes.
edge_config {
# CameraProcessSubnode -> LanePostProcessingSubnode
edges {
id: 106
from_node: 3
to_node: 5
events {
id: 1004
name: "lane_postprocessing"
}
}
# CameraProcessSubnode -> VisualizationSubnode
edges {
id: 108
from_node: 3
to_node: 41
events {
id: 1008
name: "camera_visualization"
}
}
# LanePostProcessingSubnode -> VisualizationSubnode
edges {
id: 109
from_node: 5
to_node: 41
events {
id: 1010
name: "lane_visualization"
}
}
}
# Shared Data
data_config {
datas {
id: 5
name: "CameraObjectData"
}
datas {
id: 8
name: "LaneSharedData"
}
datas {
id: 7
name: "CameraSharedData"
}
}
# Nvidia Driver on host is required for this config
subnode_config {
# Camera node. publish pb with non-zero int
subnodes {
id: 3
name: "CameraProcessSubnode"
reserve: "device_id:camera;pb_obj:0;pb_ln_msk_:0;"
type: SUBNODE_IN
}
subnodes {
id: 2
name: "MotionService"
reserve: "device_id:motion_service;"
type: SUBNODE_IN
}
subnodes {
id: 5
name: "LanePostProcessingSubnode"
reserve: "device_id:camera;publish:2;motion_event_id:1021"
type: SUBNODE_NORMAL
}
subnodes {
id: 4
name: "RadarProcessSubnode"
reserve: "device_id:radar_front;"
type: SUBNODE_IN
}
subnodes {
id: 31
name: "FusionSubnode"
reserve: "pub_driven_event_id:1009;lane_event_id:1010;camera_event_id:1009;radar_event_id:1013;motion_event_id:1022"
type: SUBNODE_NORMAL
}
# Visualization node
subnodes {
id: 41
name: "VisualizationSubnode"
reserve: "vis_driven_event_id:1011;motion_event_id:1020;fusion_event_id:1011;"
type: SUBNODE_OUT
}
}
###################################################################
# Define all edges which link nodes.
edge_config {
# CameraProcessSubnode -> LanePostProcessingSubnode
edges {
id: 106
from_node: 3
to_node: 5
events {
id: 1004
name: "lane_postprocessing"
}
}
# CameraProcessSubnode -> FusionSubnode
edges {
id: 109
from_node: 3
to_node: 31
events {
id: 1009
name: "camera_fusion"
}
}
# LanePostProcessingSubnode -> FusionSubnode
edges {
id: 110
from_node: 5
to_node: 31
events {
id: 1010
name: "lane_fusion"
}
}
# RadarProcessSubnode -> FusionSubnode
edges {
id: 113
from_node: 4
to_node: 31
events {
id: 1013
name: "radar_fusion"
}
}
# FusionSubnode -> VisualizationSubnode
edges {
id: 111
from_node: 31
to_node: 41
events {
id: 1011
name: "fusion_visualization"
}
}
# MotionService -> VisualizationSubnode
edges {
id: 120
from_node: 2
to_node: 41
events {
id: 1020
name: "motion_visualization"
}
}
# edges {
# id: 120
# from_node: 2
# to_node: 31
# events {
# id: 1022
# name: "motion_fusion"
# }
# }
# edges {
# id: 121
# from_node: 2
# to_node: 5
# events {
# id: 1021
# name: "motion_lane"
# }
# }
}
# Shared Data
data_config {
datas {
id: 5
name: "CameraObjectData"
}
datas {
id: 7
name: "CameraSharedData"
}
datas {
id: 8
name: "LaneSharedData"
}
datas {
id: 9
name: "FusionSharedData"
}
datas {
id: 10
name: "RadarObjectData"
}
datas {
id: 11
name: "LidarObjectData"
}
datas {
id: 12
name: "SceneSharedData"
}
}
# Nvidia Driver and CUDA are required for these 2 subnodes
subnode_config {
# Camera node
subnodes {
id: 3
name: "CameraProcessSubnode"
reserve: "device_id:camera;"
type: SUBNODE_IN
}
subnodes {
id: 5
name: "LanePostProcessingSubnode"
type: SUBNODE_NORMAL
}
subnodes {
id: 2
name: "RadarProcessSubnode"
reserve: "device_id:radar_front;"
type: SUBNODE_IN
}
subnodes {
id: 31
name: "FusionSubnode"
reserve: "pub_driven_event_id:1009;lane_event_id:1010;camera_event_id:1009;radar_event_id:1013;"
type: SUBNODE_NORMAL
}
# Visualization node
subnodes {
id: 41
name: "VisualizationSubnode"
reserve: "vis_driven_event_id:1011;fusion_event_id:1011;"
type: SUBNODE_OUT
}
}
###################################################################
# Define all edges which link nodes.
edge_config {
# CameraProcessSubnode -> LanePostProcessingSubnode
edges {
id: 106
from_node: 3
to_node: 5
events {
id: 1004
name: "lane_postprocessing"
}
}
# CameraProcessSubnode -> VisualizationSubnode
edges {
id: 108
from_node: 3
to_node: 41
events {
id: 1008
name: "camera_visualization"
}
}
# CameraProcessSubnode -> FusionSubnode
edges {
id: 109
from_node: 3
to_node: 31
events {
id: 1009
name: "camera_fusion"
}
}
# LanePostProcessingSubnode -> FusionSubnode
edges {
id: 110
from_node: 5
to_node: 31
events {
id: 1010
name: "lane_fusion"
}
}
# FusionSubnode -> VisualizationSubnode
edges {
id: 111
from_node: 31
to_node: 41
events {
id: 1011
name: "fusion_visualization"
}
}
# RadarProcessSubnode -> VisualizationSubnode
edges {
id: 112
from_node: 2
to_node: 41
events {
id: 1012
name: "radar_visualization"
}
}
# RadarProcessSubnode -> FusionSubnode
edges {
id: 113
from_node: 2
to_node: 31
events {
id: 1013
name: "radar_fusion"
}
}
}
# Shared Data
data_config {
datas {
id: 5
name: "CameraObjectData"
}
datas {
id: 7
name: "CameraSharedData"
}
datas {
id: 8
name: "LaneSharedData"
}
datas {
id: 9
name: "FusionSharedData"
}
datas {
id: 10
name: "RadarObjectData"
}
}
# Nvidia Driver and CUDA are required for these 2 subnodes
subnode_config {
# Camera node
subnodes {
id: 3
name: "CameraProcessSubnode"
reserve: "device_id:camera;"
type: SUBNODE_IN
}
subnodes {
id: 5
name: "LanePostProcessingSubnode"
type: SUBNODE_NORMAL
}
subnodes {
id: 2
name: "RadarProcessSubnode"
reserve: "device_id:radar_front;"
type: SUBNODE_IN
}
subnodes {
id: 31
name: "FusionSubnode"
reserve: "pub_driven_event_id:1009;lane_event_id:1010;camera_event_id:1009;radar_event_id:1013;"
type: SUBNODE_NORMAL
}
# Visualization node
subnodes {
id: 41
name: "VisualizationSubnode"
reserve: "vis_driven_event_id:1011;fusion_event_id:1011;"
type: SUBNODE_OUT
}
}
###################################################################
# Define all edges which link nodes.
edge_config {
# CameraProcessSubnode -> LanePostProcessingSubnode
edges {
id: 106
from_node: 3
to_node: 5
events {
id: 1004
name: "lane_postprocessing"
}
}
# CameraProcessSubnode -> VisualizationSubnode
edges {
id: 108
from_node: 3
to_node: 41
events {
id: 1008
name: "camera_visualization"
}
}
# CameraProcessSubnode -> FusionSubnode
edges {
id: 109
from_node: 3
to_node: 31
events {
id: 1009
name: "camera_fusion"
}
}
# LanePostProcessingSubnode -> FusionSubnode
edges {
id: 110
from_node: 5
to_node: 31
events {
id: 1010
name: "lane_fusion"
}
}
# FusionSubnode -> VisualizationSubnode
edges {
id: 111
from_node: 31
to_node: 41
events {
id: 1011
name: "fusion_visualization"
}
}
# RadarProcessSubnode -> VisualizationSubnode
edges {
id: 112
from_node: 2
to_node: 41
events {
id: 1012
name: "radar_visualization"
}
}
# RadarProcessSubnode -> FusionSubnode
edges {
id: 113
from_node: 2
to_node: 31
events {
id: 1013
name: "radar_fusion"
}
}
# LanePostProcessingSubnode -> VisualizationSubnode
edges {
id: 114
from_node: 5
to_node: 41
events {
id: 1014
name: "lane_visualization"
}
}
}
# Shared Data
data_config {
datas {
id: 5
name: "CameraObjectData"
}
datas {
id: 7
name: "CameraSharedData"
}
datas {
id: 8
name: "LaneSharedData"
}
datas {
id: 9
name: "FusionSharedData"
}
datas {
id: 10
name: "RadarObjectData"
}
}
# Nvidia Driver on host is required for this config
subnode_config {
# Camera node. publish pb with non-zero int
subnodes {
id: 3
name: "CameraProcessSubnode"
reserve: "device_id:camera;pb_obj:1;pb_ln_msk_:0;"
type: SUBNODE_IN
}
# Visualization node with OpenGL
subnodes {
id: 41
name: "VisualizationSubnode"
reserve: "vis_driven_event_id:1008;camera_event_id:1008"
}
}
###################################################################
# Define all edges which link nodes.
edge_config {
# CameraProcessSubnode -> VisualizationSubnode
edges {
id: 108
from_node: 3
to_node: 41
events {
id: 1008
name: "camera_visualization"
}
}
}
# Shared Data
data_config {
datas {
id: 5
name: "CameraObjectData"
}
datas {
id: 7
name: "CameraSharedData"
}
}
# Define all nodes in DAG streaming.
subnode_config {
# 64-Lidar Input nodes.
subnodes {
id: 1
name: "Lidar64ProcessSubnode"
reserve: "device_id:velodyne64;"
type: SUBNODE_IN
}
# Front radar Input nodes.
subnodes {
id: 2
name: "RadarProcessSubnode"
reserve: "device_id:radar;"
type: SUBNODE_IN
}
# Fusion node.
subnodes {
id: 31
name: "FusionSubnode"
reserve: "pub_driven_event_id:1001;lidar_event_id:1001;radar_event_id:1002;"
type: SUBNODE_OUT
}
# TrafficLight Preprocess node.
subnodes {
id: 41
name: "TLPreprocessorSubnode"
type: SUBNODE_IN
}
# TrafficLight process node.
subnodes {
id: 42
name: "TLProcSubnode"
type: SUBNODE_OUT
}
}
###################################################################
# Define all edges which link nodes.
edge_config {
# Lidar64ProcessSubnode -> FusionSubnode
edges {
id: 101
from_node: 1
to_node: 31
events {
id: 1001
name: "lidar_fusion"
}
}
# RadarProcessSubnode -> FusionSubnode
edges {
id: 102
from_node: 2
to_node: 31
events {
id: 1002
name: "radar_fusion"
}
}
# TLPreprocessorSubnode -> TLProcSubnode
edges {
id: 201
from_node: 41
to_node: 42
events {
id: 1003
name: "traffic_light"
}
}
}
data_config {
datas {
id: 1
name: "LidarObjectData"
}
datas {
id: 2
name: "RadarObjectData"
}
datas {
id: 3
name: "TLPreprocessingData"
}
datas {
id: 4
name: "SceneSharedData"
}
}
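Configs like the one above are easy to break by hand: an edge can reference an undeclared subnode id, or an event id can be reused across edges. The following consistency check is a sketch only (not part of the Apollo loader), with the ids hard-coded from the lidar/radar/traffic-light config above.

```python
# Hypothetical consistency check for a DAG streaming config (not Apollo
# code): every edge must connect two declared subnodes, and each event id
# may appear on only one edge.
subnodes = {1, 2, 31, 41, 42}  # ids from the subnode_config above
edges = [                      # (from_node, to_node, event_id)
    (1, 31, 1001),             # lidar_fusion
    (2, 31, 1002),             # radar_fusion
    (41, 42, 1003),            # traffic_light
]

def validate(subnodes, edges):
    event_ids = [event_id for _, _, event_id in edges]
    if len(event_ids) != len(set(event_ids)):
        raise ValueError("duplicate event id")
    for from_node, to_node, _ in edges:
        if from_node not in subnodes or to_node not in subnodes:
            raise ValueError("edge references an undeclared subnode")
    return True

print(validate(subnodes, edges))  # True
```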
# Define all nodes in DAG streaming.
subnode_config {
# 64-Lidar Input nodes.
subnodes {
id: 1
name: "Lidar64ProcessSubnode"
reserve: "device_id:velodyne_64;"
type: SUBNODE_IN
}
# Front radar Input nodes.
#subnodes {
# id: 2
# name: "RadarProcessSubnode"
# reserve: "device_id:radar;"
# type: SUBNODE_IN
#}
# Camera node
subnodes {
id: 3
name: "CameraProcessSubnode"
reserve: "device_id:camera;pb_obj:0;pb_ln_msk_:1;"
type: SUBNODE_IN
}
# Fusion node.
subnodes {
id: 31
name: "FusionSubnode"
#reserve: "pub_driven_event_id:1001;lidar_event_id:1001;radar_event_id:1002;camera_event_id:1009"
reserve: "pub_driven_event_id:1001;lidar_event_id:1001;camera_event_id:1009"
type: SUBNODE_OUT
}
# Visualization node
subnodes {
id: 41
name: "VisualizationSubnode"
reserve: "vis_driven_event_id:1011;fusion_event_id:1011;"
type: SUBNODE_OUT
}
}
###################################################################
# Define all edges which link nodes.
edge_config {
# Lidar64ProcessSubnode -> FusionSubnode
edges {
id: 101
from_node: 1
to_node: 31
events {
id: 1001
name: "lidar_fusion"
}
}
# RadarProcessSubnode -> FusionSubnode
#edges {
# id: 102
# from_node: 2
# to_node: 31
# events {
# id: 1002
# name: "radar_fusion"
# }
#}
# CameraProcessSubnode -> FusionSubnode
edges {
id: 109
from_node: 3
to_node: 31
events {
id: 1009
name: "camera_fusion"
}
}
# FusionSubnode -> VisualizationSubnode
edges {
id: 111
from_node: 31
to_node: 41
events {
id: 1011
name: "fusion_visualization"
}
}
}
data_config {
datas {
id: 5
name: "CameraObjectData"
}
datas {
id: 6
name: "LaneSharedData"
}
datas {
id: 7
name: "CameraSharedData"
}
datas {
id: 1
name: "LidarObjectData"
}
datas {
id: 2
name: "RadarObjectData"
}
datas {
id: 3
name: "TLPreprocessingData"
}
datas {
id: 4
name: "SceneSharedData"
}
datas {
id: 9
name: "FusionSharedData"
}
}
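Throughout these configs, per-subnode parameters travel in the reserve field as a single semicolon-separated key:value string (e.g. "device_id:camera;pb_obj:0;pb_ln_msk_:0;"). A minimal parser for that convention might look like the sketch below; the format is inferred from the examples above, and the actual Apollo parsing code may differ.

```python
def parse_reserve(reserve):
    """Split a reserve string like "device_id:camera;pb_obj:0;" into a dict.

    Keys and values are kept as strings; a trailing semicolon yields an
    empty item, which is skipped.
    """
    fields = {}
    for item in reserve.strip().split(";"):
        if not item:
            continue
        key, _, value = item.partition(":")
        fields[key] = value
    return fields

conf = parse_reserve("vis_driven_event_id:1011;fusion_event_id:1011;")
print(conf["vis_driven_event_id"])  # 1011 (as a string)
```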
*.jpg filter=lfs diff=lfs merge=lfs -text
\ No newline at end of file
*.pcd filter=lfs diff=lfs merge=lfs -text
\ No newline at end of file
*.pcd filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
\ No newline at end of file
11989 588.419051 33.330463 45.010161 40.746964 -0.002165 -0.010956 -0.639926 0.768355
11990 588.519102 33.330382 45.010216 40.747129 -0.002152 -0.010966 -0.639940 0.768343
11991 588.619153 33.330347 45.010282 40.747194 -0.002133 -0.010945 -0.639944 0.768341
11992 588.719204 33.330361 45.010357 40.747331 -0.002134 -0.010937 -0.639935 0.768348
11993 588.819255 33.330328 45.010420 40.747546 -0.002154 -0.010940 -0.639938 0.768346
11994 588.919307 33.330290 45.010480 40.747756 -0.002180 -0.010939 -0.639941 0.768343
11995 589.019364 33.330300 45.010551 40.747892 -0.002173 -0.010936 -0.639935 0.768348
11996 589.119415 33.330241 45.010605 40.748109 -0.002156 -0.010970 -0.639944 0.768340
*.pcd filter=lfs diff=lfs merge=lfs -text
\ No newline at end of file
3 14.1913 -63.8893 1.10654 195 14.1589 -64.0393 0.374468 255 14.215 -64.0288 0.747298 235
76 16.9306 -61.465 1.0824 213 16.8195 -61.4375 1.08166 230 16.6193 -61.4629 1.0813 209 16.2218 -61.5197 1.08084 214 16.0306 -61.5781 1.08097 222 15.8243 -61.5776 1.08028 255 15.6305 -61.625 1.08028 229 15.4268 -61.633 1.07972 253 15.2285 -61.658 1.07957 255 15.0367 -61.7132 1.07971 228 17.4557 -61.4336 1.41825 255 14.8141 -61.6394 1.07802 255 17.252 -61.4495 1.41751 255 14.6329 -61.7381 1.07879 224 14.5319 -61.7434 1.07856 253 14.3331 -61.7672 1.07828 253 16.7467 -61.4999 1.41595 255 14.1343 -61.7867 1.07811 251 13.9433 -61.8443 1.07832 229 16.3453 -61.5441 1.41497 255 16.145 -61.5679 1.41446 255 15.9466 -61.5988 1.41411 255 15.7497 -61.6349 1.41386 255 15.5469 -61.6471 1.41316 255 15.3479 -61.6706 1.41285 255 15.1509 -61.7048 1.41261 255 17.4532 -61.4412 1.80751 213 14.9525 -61.7325 1.41225 255 17.2452 -61.4416 1.80612 201 14.7298 -61.6584 1.40992 255 17.0394 -61.4491 1.80494 202 14.6566 -61.7807 1.41188 255 16.9362 -61.4507 1.80432 201 14.4512 -61.7776 1.41095 255 16.7411 -61.4958 1.80414 203 14.2537 -61.8033 1.41075 255 14.0539 -61.8223 1.41028 255 16.3475 -61.5688 1.80354 185 16.1475 -61.5945 1.80291 185 15.9446 -61.608 1.80198 185 15.7453 -61.6344 1.8014 185 15.5478 -61.6679 1.80102 185 15.3498 -61.6952 1.80065 185 15.1494 -61.7158 1.79997 185 17.6189 -61.3696 2.22456 215 14.9495 -61.7376 1.79933 185 17.4256 -61.4226 2.22452 230 14.6915 -61.5137 1.79238 184 17.2187 -61.4268 2.2229 255 17.1202 -61.446 2.22267 231 14.4474 -61.7788 1.79749 200 16.9124 -61.4454 2.22092 255 14.2535 -61.82 1.79757 185 16.7193 -61.4945 2.22098 230 14.0558 -61.8487 1.79719 185 16.315 -61.5303 2.2188 255 16.1233 -61.5869 2.21902 231 15.9129 -61.5713 2.21689 255 15.7201 -61.6227 2.21697 231 15.5163 -61.6273 2.2157 255 15.3247 -61.6833 2.21597 231 15.12 -61.6863 2.21454 255 14.9291 -61.745 2.21494 233 14.72 -61.7272 2.21287 255 14.6236 -61.7521 2.21296 228 14.4211 -61.7569 2.21181 255 14.2299 -61.8134 2.21221 230 14.0256 -61.8128 2.21077 255 17.332 -61.4452 0.731885 235 14.7769 -61.7469 
0.368853 255 17.1314 -61.4687 0.731805 235 16.932 -61.4988 0.731629 235 16.731 -61.5226 0.731403 235 16.5321 -61.5534 0.731243 235 13.9016 -61.9409 0.368936 255 13.9427 -61.8782 0.729348 235
4 29.7949 -55.0599 2.18182 207 29.3628 -54.6707 2.16454 182 29.2374 -54.6428 2.16191 201 29.1098 -54.8174 2.16491 206
7 13.2007 -53.6192 0.347656 255 13.0187 -53.6039 0.347528 255 13.5003 -53.2234 0.658881 235 13.3293 -53.2498 0.658768 235 13.1559 -53.266 0.658572 235 12.9868 -53.299 0.658524 235 12.8205 -53.3396 0.658683 235
11 15.1235 -53.3189 -0.318568 255 14.7625 -53.3181 -0.317596 255 15.0667 -53.3429 -0.0216523 255 14.8923 -53.3648 -0.0215248 255 14.3726 -53.4369 -0.0210415 255 14.1057 -53.4458 -0.0207527 255 15.0182 -53.372 0.347728 255 14.8459 -53.3978 0.347835 255 14.6635 -53.3899 0.347712 255 14.499 -53.4471 0.347762 255 13.9987 -53.0904 0.658651 235
*.pcd filter=lfs diff=lfs merge=lfs -text
\ No newline at end of file
04/09: front camera calibration is copied from MKZ8 as of now.