Commit aa469fe5 authored by Xiangquan Xiao

Robot: Fix typos.

Parent edf4c7ec
@@ -707,7 +707,7 @@ function print_usage() {
${BLUE}build_control${NONE}: compile control and its dependencies.
${BLUE}build_prediction${NONE}: compile prediction and its dependencies.
${BLUE}build_pnc${NONE}: compile pnc and its dependencies.
-${BLUE}build_no_perception [dbg|opt]${NONE}: run build, skip building perception module, useful when some perception dependencies are not satisified, e.g., CUDA, CUDNN, LIDAR, etc.
+${BLUE}build_no_perception [dbg|opt]${NONE}: run build, skip building perception module, useful when some perception dependencies are not satisfied, e.g., CUDA, CUDNN, LIDAR, etc.
${BLUE}build_prof${NONE}: build for gprof support.
${BLUE}buildify${NONE}: fix style of BUILD files
${BLUE}check${NONE}: run build/lint/test, please make sure it passes before checking in new code
......
@@ -56,7 +56,7 @@ function print_usage() {
${BLUE}version${NONE}: display current commit and date
${BLUE}push${NONE}: pushes the images to Docker hub
${BLUE}gen${NONE}: generate a docker release image
-${BLUE}ota_gen${NONE}: generate a ota docker release image
+${BLUE}ota_gen${NONE}: generate an ota docker release image
"
}
......
@@ -50,7 +50,7 @@ As shown in figure below, three cameras' channel data on the button sections and
Place the mouse on the image of the camera channel, you can double-click the left button to highlight the corresponding data channel on the left menu bar. Right click on the image to bring up menu for deleting the camera channel.
-Play and Pause buttons: when clicking the `Play` button, all channels will be showed. While when clicking the `Pause` button, all channels will stop showing on the tool.
+Play and Pause buttons: when clicking the `Play` button, all channels will be shown. While when clicking the `Pause` button, all channels will stop showing on the tool.
## Cyber_monitor
......
Migration guide from Apollo ROS
================================
-This article describes the essential changes for projects to migrate from Apollo ROS (Apollo 3.0 and before) to Apollo Cyber RT (Apollo 3.5 and after). We will be using the very first ROS project talker/listener as example to demostrate step by step migration instruction.
+This article describes the essential changes for projects to migrate from Apollo ROS (Apollo 3.0 and before) to Apollo Cyber RT (Apollo 3.5 and after). We will be using the very first ROS project talker/listener as example to demonstrate step by step migration instruction.
## Build system
......
@@ -53,7 +53,7 @@ Dag file is the config file of module topology. You can define components used a
## Launch files
-The Launch file provides a easy way to start modules. By defining one or multiple dag files in the launch file, you can start multiple modules at the same time.
+The Launch file provides an easy way to start modules. By defining one or multiple dag files in the launch file, you can start multiple modules at the same time.
## Record file
......
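To make the launch-file description in the hunk above concrete, here is a minimal sketch in the Cyber RT launch format; the module name and file paths are hypothetical, so treat this as an illustration of the shape, not a file from the repository:

```xml
<!-- hypothetical example.launch: starts one module from its dag file -->
<cyber>
  <module>
    <name>example</name>
    <dag_conf>/apollo/modules/example/dag/example.dag</dag_conf>
    <process_name>example</process_name>
  </module>
</cyber>
```

Listing several `<module>` entries in one launch file is what lets multiple modules start together.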
@@ -49,7 +49,7 @@ As shown in figure below, three cameras' channel data on the button sections and
Place the mouse on the image of the camera channel, you can double-click the left button to highlight the corresponding data channel on the left menu bar. Right click on the image to bring up menu for deleting the camera channel.
-Play and Pause buttons: when clicking the `Play` button, all channels will be showed. While when clicking the `Pause` button, all channels will stop showing on the tool.
+Play and Pause buttons: when clicking the `Play` button, all channels will be shown. While when clicking the `Pause` button, all channels will stop showing on the tool.
## Cyber_monitor
......
# Migration guide from Apollo ROS
-This article describes the essential changes for projects to migrate from Apollo ROS (Apollo 3.0 and before) to Apollo Cyber RT (Apollo 3.5 and after). We will be using the very first ROS project talker/listener as example to demostrate step by step migration instruction.
+This article describes the essential changes for projects to migrate from Apollo ROS (Apollo 3.0 and before) to Apollo Cyber RT (Apollo 3.5 and after). We will be using the very first ROS project talker/listener as example to demonstrate step by step migration instruction.
## Build system
......
@@ -52,7 +52,7 @@ Dag file is the config file of module topology. You can define components used a
## Launch files
-The Launch file provides a easy way to start modules. By defining one or multiple dag files in the launch file, you can start multiple modules at the same time.
+The Launch file provides an easy way to start modules. By defining one or multiple dag files in the launch file, you can start multiple modules at the same time.
## Record file
......
@@ -10,7 +10,7 @@ Please follow the steps below to add a new GPS Receiver.
3. In `config.proto`, add the new data format for the new GPS receiver
4. In function `create_parser` from file data_parser.cpp, add the new parser instance for the new GPS receiver
-Let's look at how to add the GPS Receiver using the above mentioned steps for Reciever: `u-blox`.
+Let's look at how to add the GPS Receiver using the above-mentioned steps for Receiver: `u-blox`.
### Step 1
......
@@ -43,7 +43,7 @@ class NewController : public Controller {
To add the new controller configuration complete the following steps:
-1. Define a `proto` for the new controller configurations and parameters based on the algorithm requirements. A example `proto` definition of `LatController` can be found at: `modules/control/proto/lat_controller_conf.proto`
+1. Define a `proto` for the new controller configurations and parameters based on the algorithm requirements. An example `proto` definition of `LatController` can be found at: `modules/control/proto/lat_controller_conf.proto`
2. After defining the new controller `proto`, e.g., `new_controller_conf.proto`, type the following:
```protobuf
......
@@ -66,4 +66,4 @@ transform:
y: -0.235
z: 1.256
```
-If angles are not zero, they need to be calibrated and represented in quarternion (see above stransformation->rotation).
+If angles are not zero, they need to be calibrated and represented in quaternion (see above stransformation->rotation).
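The hunk above says non-zero calibration angles must be expressed as a quaternion. As an illustrative sketch of that conversion (assuming the common intrinsic ZYX yaw-pitch-roll convention; this is not Apollo's calibration code):

```python
import math

def euler_to_quaternion(roll, pitch, yaw):
    """Convert ZYX (yaw-pitch-roll) Euler angles in radians to a
    unit quaternion (qw, qx, qy, qz)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    qw = cr * cp * cy + sr * sp * sy
    qx = sr * cp * cy - cr * sp * sy
    qy = cr * sp * cy + sr * cp * sy
    qz = cr * cp * sy - sr * sp * cy
    return qw, qx, qy, qz

# Zero angles give the identity rotation.
print(euler_to_quaternion(0.0, 0.0, 0.0))  # (1.0, 0.0, 0.0, 0.0)
```

With all angles zero the quaternion is the identity, which matches the doc's point that only non-zero angles need this treatment.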
@@ -31,7 +31,7 @@ This step involves two major tasks,
- Smooth the trajectory to get better riding comfort experience and to make it easier for the control module to track
- Ensure collision avoidance
-The received raw trajectory is taken as an initial guess for optimization to iterate on. The generated result is a set of points that are not distributed evenly but are closer to eachother near the turning while those on a linear path are more spread-out.
+The received raw trajectory is taken as an initial guess for optimization to iterate on. The generated result is a set of points that are not distributed evenly but are closer to each other near the turning while those on a linear path are more spread-out.
This not only ensures better turns, but as time/space is fixed, the nearer the points, the slower the speed of the ego-car. Which also means that velocity tracking in this step is possible but more reasonable acceleration, braking and steering.
![](images/os_step2.png)
......
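The hunk above notes that with a fixed time step, closely spaced trajectory points imply a slower ego car. A toy sketch of that relationship (hypothetical points; the fixed sampling step is an assumption for illustration):

```python
import math

def speeds_from_points(points, dt):
    """Given trajectory points sampled at a fixed time step dt, the
    implied speed over each segment is distance / dt, so densely
    spaced points (e.g. inside a turn) imply a slower ego car."""
    speeds = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
    return speeds

# Points bunch up toward the "turn": the implied speed drops there.
pts = [(0.0, 0.0), (2.0, 0.0), (3.0, 0.0), (3.5, 0.5)]
print(speeds_from_points(pts, dt=0.1))
```

Each successive segment here is shorter, so the computed speeds decrease, mirroring the "nearer the points, the slower the speed" behavior.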
@@ -78,7 +78,7 @@ Firstly, please make sure you have already finished setting up the **Apollo Fuel
https://github.com/ApolloAuto/apollo/blob/master/modules/tools/fuel_proxy/README.md
-This is **essential** before you can get enjoy control calibration or other Apollo Fuel based cloud service.
+This is **essential** before you can get enjoy control calibration or other Apollo Fuel-based cloud service.
## Folder Structure Requirement
......
@@ -2,12 +2,12 @@
## Introduction
-Simulation is a vital part of autonomous driving especially in Apollo where most of the testing happens via our simulation platform. In order to have a more accurate approach to our testing environment, Apollo 5.0 introduces Dynamic model which is used to achieve Control-in-loop Simulation to represent the ego car's actual dynamic charateristics. It is possible to gauge the how the ego car would actually perform while driving in autonomous mode, but virtually using the Control-in-loop simulation making it secure and efficient. It can also accelerate your development cycle as you can maximize your testing and enhance your algorithms even before entering the vehicle.
+Simulation is a vital part of autonomous driving especially in Apollo where most of the testing happens via our simulation platform. In order to have a more accurate approach to our testing environment, Apollo 5.0 introduces Dynamic model which is used to achieve Control-in-loop Simulation to represent the ego car's actual dynamic characteristics. It is possible to gauge the how the ego car would actually perform while driving in autonomous mode, but virtually using the Control-in-loop simulation making it secure and efficient. It can also accelerate your development cycle as you can maximize your testing and enhance your algorithms even before entering the vehicle.
The architecture diagram for how Dynamic model works is included below:
![](images/architecture.png)
-The Control module recieves input via planning and the vehicle and uses it effectively to generate the output path which is then fed into the Dynamic model.
+The Control module receives input via planning and the vehicle and uses it effectively to generate the output path which is then fed into the Dynamic model.
## Examples
......
@@ -4,7 +4,7 @@ June 27, 2018
## Introduction
-Apollo 3.0 introduced a production level solution for the low-cost, closed venue driving scenario that is used as the foundation for comercialized products. The Perception module introduced a few major features to provide more diverse functionalities and a more reliable, robust perception in AV performance, which are:
+Apollo 3.0 introduced a production level solution for the low-cost, closed venue driving scenario that is used as the foundation for commercialized products. The Perception module introduced a few major features to provide more diverse functionalities and a more reliable, robust perception in AV performance, which are:
* **CIPV(Closest In-Path Vehicle) detection and Tailgaiting**: The vehicle in front of the ego-car is detected and its trajectory is estimated for more efficient tailgating and lane keeping when lane detection is unreliable.
* **Asynchronous sensor fusion**: unlike the previous version, Perception in Apollo 3.0 is capable of consolidating all the information and data points by asynchronously fusing LiDAR, Radar and Camera data. Such conditions allow for more comprehensive data capture and reflect more practical sensor environments.
@@ -26,7 +26,7 @@ Apollo 3.0 *does not* support a high curvature road, roads without lane lines in
- ***Road without lane line marks***
- ***Intersections***
- ***Dotted lane lines***
-- ***Public roads with alot of pedestrians or cars***
+- ***Public roads with a lot of pedestrians or cars***
## Perception module
The flow chart of Apollo 3.0 Perception module:
@@ -72,8 +72,8 @@ The figure above depicts visualization of the Perception output in Apollo 3.0. T
### Radar + Camera Output Fusion
Given multiple sensors, their output should be combined in a synergic fashion. Apollo 3.0. introduces a sensor set with a radar and a camera. For this process, both sensors need to be calibrated. Each sensor will be calibrated using the same method introduced in Apollo 2.0. After calibration, the output will be represented in a 3-D world coordinate system and each output will be fused by their similarity in location, size, time and the utility of each sensor. After learning the utility function of each sensor, the camera contributes more on lateral distance and the radar contributes more on longitudinal distance measurement. Asynchronous sensor fusion algorithm can also be used as an option.
-### Psuedo Lane
-All lane detection results will be combined spatially and temporarily to induce the psuedo lane which will be fed to Planning and Control modules. Some lane lines would be incorrect or missing in a certain frame. To provide the smooth lane line output, the history of lane lines using vehicle odometry is used. As the vehicle moves, the odometer of each frame is saved and lane lines in previous frames will be also saved in the history buffer. The detected lane line which does not match with the history lane lines will be removed and the history output will replace the lane line and be provided to the planning module.
+### Pseudo Lane
+All lane detection results will be combined spatially and temporarily to induce the pseudo lane which will be fed to Planning and Control modules. Some lane lines would be incorrect or missing in a certain frame. To provide the smooth lane line output, the history of lane lines using vehicle odometry is used. As the vehicle moves, the odometer of each frame is saved and lane lines in previous frames will be also saved in the history buffer. The detected lane line which does not match with the history lane lines will be removed and the history output will replace the lane line and be provided to the planning module.
### Ultrasonic Sensors
Apollo 3.0 supports ultrasonic sensors. Each ultrasonic sensor provides the distance of a detected object through the CANBus. The distance measurement from the ultrasonic sensor is then gathered and broadcasted as a ROS topic. In the future, after fusing ultrasonic sensor output, the map of objects and boundary will be published as a ROS output.
......
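The pseudo-lane hunk above describes keeping a history of lane lines and rejecting detections that disagree with it. A minimal one-dimensional sketch of that idea (hypothetical offsets and tolerance; not Apollo's actual implementation, which works on full lane-line geometry with odometry compensation):

```python
def smooth_lane(detected, history, match_tol=0.5):
    """Keep a history of lane-line lateral offsets; if the new
    detection disagrees with the latest history entry by more than
    match_tol, output the history instead of the detection."""
    if history and abs(detected - history[-1]) > match_tol:
        return history[-1]      # reject the outlier detection
    history.append(detected)    # accept it and remember it
    return detected

history = []
print(smooth_lane(1.00, history))  # 1.0  (first detection accepted)
print(smooth_lane(1.05, history))  # 1.05 (close to history: accepted)
print(smooth_lane(3.00, history))  # 1.05 (outlier: history wins)
```

The third call shows the behavior the doc describes: a detection that does not match the history is dropped and the history output replaces it.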
@@ -98,7 +98,7 @@ message GaussianInfo {
optional double sigma_x = 1;
optional double sigma_y = 2;
optional double correlation = 3;
-// Information of representive uncertainty area
+// Information of representative uncertainty area
optional double area_probability = 4;
optional double ellipse_a = 5;
optional double ellipse_b = 6;
......
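The `ellipse_a`/`ellipse_b` fields in the hunk above suggest an uncertainty ellipse derived from the 2-D Gaussian. One standard way to obtain such semi-axes from `sigma_x`, `sigma_y` and `correlation` is via the eigenvalues of the covariance matrix; this is an illustrative sketch, not the module's actual computation:

```python
import math

def uncertainty_ellipse(sigma_x, sigma_y, correlation):
    """Semi-axes of the 1-sigma uncertainty ellipse of a 2-D Gaussian.
    Covariance = [[sx^2, r*sx*sy], [r*sx*sy, sy^2]]; the axes are the
    square roots of its eigenvalues."""
    vxx, vyy = sigma_x ** 2, sigma_y ** 2
    vxy = correlation * sigma_x * sigma_y
    # Closed-form eigenvalues of a symmetric 2x2 matrix.
    mean = (vxx + vyy) / 2.0
    diff = math.sqrt(((vxx - vyy) / 2.0) ** 2 + vxy ** 2)
    return math.sqrt(mean + diff), math.sqrt(mean - diff)

# With zero correlation the axes are just the two sigmas.
print(uncertainty_ellipse(2.0, 1.0, 0.0))  # (2.0, 1.0)
```

Non-zero `correlation` rotates the ellipse and stretches the major axis, which is why the proto carries the correlation alongside the two sigmas.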
@@ -41,7 +41,7 @@ std::string MPCControllerSubmodule::Name() const {
}
bool MPCControllerSubmodule::Init() {
-// TODO(SHU): seprate common_control conf from controller conf
+// TODO(SHU): separate common_control conf from controller conf
if (!cyber::common::GetProtoFromFile(FLAGS_mpc_controller_conf_file,
&mpc_controller_conf_)) {
AERROR << "Unable to load control conf file: " +
......
@@ -36,7 +36,7 @@
You can set `gnss_mode` in `apollo/localization/conf/localization.conf` to decide which mode you want to use. The default mode is gnss best pose.
### LiDAR Localization Setting
-We provide three modes of LiDAR localization: intensity, altitude, and fusion. You can set the parameter `lidar_localization_mode` in file localization.conf to choose the mode. Considering computing ability of different platforms, we provide `lidar_filter_size`, `lidar_thread_num` and `point_cloud_step` to adjust the computation cost. Futhermore, we provide three yaw optimization methods in LiDAR localization algorithm. You can set the parameter `lidar_yaw_align_mode` in file localization.conf to choose the mode.
+We provide three modes of LiDAR localization: intensity, altitude, and fusion. You can set the parameter `lidar_localization_mode` in file localization.conf to choose the mode. Considering computing ability of different platforms, we provide `lidar_filter_size`, `lidar_thread_num` and `point_cloud_step` to adjust the computation cost. Furthermore, we provide three yaw optimization methods in LiDAR localization algorithm. You can set the parameter `lidar_yaw_align_mode` in file localization.conf to choose the mode.
## Generate Localization Map
Localization map is used for LiDAR-based localization, which is a grid-cell representation of the environment. Each cell stores the statistics of laser reflection intensity and altitude. The map is organized as a group of map nodes. For more information, please refer to `apollo/modules/localization/msf/local_map`.
......
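The hunk above names the tunable parameters in localization.conf. Apollo conf files use gflags syntax; as a sketch of how these flags might look, with illustrative values only (the flag names come from the text above, the numbers are hypothetical):

```
--lidar_localization_mode=2
--lidar_yaw_align_mode=2
--lidar_filter_size=17
--lidar_thread_num=2
--point_cloud_step=2
```

Smaller filter sizes and larger point-cloud steps reduce the computation cost, at the price of localization accuracy.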
@@ -52,9 +52,9 @@ Note: The team is working to add additional driving scenarios into our planner.
In order to safely and smoothly pass through a traffic light, we created 3 driving scenarios:
-- **Protected**: In this scenario, our ego car has to pass through a intersection with a clear traffic light indicator. A left arrow or right arrow in green for the corresponding turn.
-- **Unprotected Left**: In this scenario, our ego car will have to make a left turn without a distinct light, meaning the car would need to yeild to oncoming traffic. Just like in the unprotected STOP scenario, our ego car would have to creep to ensure that it is safe to cross the intersection before safely moving through the lane.
-- **Unprotected Right**: In this scenario, our ego car is expected to make an unprotected right turn while yeilding to oncoming traffic. Our ego car will need to creep slowly and gauge the traffic and then make a safe turn.
+- **Protected**: In this scenario, our ego car has to pass through an intersection with a clear traffic light indicator. A left arrow or right arrow in green for the corresponding turn.
+- **Unprotected Left**: In this scenario, our ego car will have to make a left turn without a distinct light, meaning the car would need to yield to oncoming traffic. Just like in the unprotected STOP scenario, our ego car would have to creep to ensure that it is safe to cross the intersection before safely moving through the lane.
+- **Unprotected Right**: In this scenario, our ego car is expected to make an unprotected right turn while yielding to oncoming traffic. Our ego car will need to creep slowly and gauge the traffic and then make a safe turn.
As discussed above, based on the three driving scenarios, the following 3 steps are performed:
......
@@ -98,7 +98,7 @@ Status LaneChangeDecider::Process(
}
return Status::OK();
} else if (prev_status->status() == ChangeLaneStatus::CHANGE_LANE_FAILED) {
-// TODO(SHU): add a optimization_failure counter to enter
+// TODO(SHU): add an optimization_failure counter to enter
// change_lane_failed status
if (now - prev_status->timestamp() < FLAGS_change_lane_fail_freeze_time) {
// RemoveChangeLane(reference_line_info);
......
@@ -20,7 +20,7 @@ The Prediction module only predicts the behavior of obstacles and not the EGO ca
## Functionalities
-Based on the figure below, the prediction module comprises of 4 main functionalities: Container, Scenario, Evaluator and Predictor. Container, Evaluator and Predictor existed in Apollo 3.0. In Apollo 3.5, we introduced the Scenario functionality as we have moved towards a more scenario-based approach for Apollo's autonomous driving capabilities.
+Based on the figure below, the prediction module comprises 4 main functionalities: Container, Scenario, Evaluator and Predictor. Container, Evaluator and Predictor existed in Apollo 3.0. In Apollo 3.5, we introduced the Scenario functionality as we have moved towards a more scenario-based approach for Apollo's autonomous driving capabilities.
![](images/prediction.png)
### Container
......
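The four functionalities named in the hunk above form a pipeline. A minimal sketch of that flow with stub functions (the function signatures, dict fields, and values here are hypothetical, not Apollo's actual API):

```python
# Container gathers subscribed input, Scenario picks the driving
# context, Evaluator scores candidates, Predictor emits trajectories.

def container(messages):
    """Collect subscribed messages (e.g. perception obstacles)."""
    return {"obstacles": messages}

def scenario(frame):
    """Select a scenario for the frame; 'junction' vs 'cruise' here."""
    near = any(o.get("near_junction") for o in frame["obstacles"])
    return "junction" if near else "cruise"

def evaluator(frame):
    """Attach a stub lane-sequence probability to each obstacle."""
    return [{**o, "prob": 0.9} for o in frame["obstacles"]]

def predictor(evaluated):
    """Emit one (obstacle id, probability) prediction per obstacle."""
    return [(o["id"], o["prob"]) for o in evaluated]

frame = container([{"id": 1, "near_junction": False}])
print(scenario(frame), predictor(evaluator(frame)))  # cruise [(1, 0.9)]
```

The ordering matters: the Scenario stage added in Apollo 3.5 sits between data collection and evaluation, so evaluators can be chosen per driving context.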
@@ -118,7 +118,7 @@ class Recorder(object):
"""
Save the full data into <disk>/data/bag/ReusedRecordsPool,
which will be cleared every time the smart recorder get started.
-Meanwhile restore the messages we are interested in to <disk>/data/bag/<task_id> directory.
+Meanwhile, restore the messages we are interested in to <disk>/data/bag/<task_id> directory.
"""
reuse_pool_dir = os.path.join(disk, 'data/bag', 'ReusedRecordsPool')
task_id = datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S')
......