PaddleDetection takes a rather principled approach to configuration management. We aim to automate the configuration workflow and to reduce configuration errors.
## Rationale
Presently, configuration in mainstream frameworks is usually dictionary based: the global config is simply a giant, loosely defined Python dictionary.
...
This approach is error prone, e.g., misspelled or displaced keys may lead to serious errors in the training process, causing time loss and wasted resources.
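To illustrate the failure mode, here is a hypothetical dictionary-based config (not PaddleDetection code): a misspelled key is silently accepted, and the consumer quietly falls back to a stale value.

```python
# Hypothetical dictionary-based configuration: nothing validates the keys.
config = {
    'learning_rate': 0.01,
    'num_classes': 81,
}

# Typo: 'learing_rate' is silently stored as a brand-new key...
config['learing_rate'] = 0.001

# ...so the consumer quietly keeps using the old value, with no error raised.
lr = config.get('learning_rate', 0.01)  # still 0.01
```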
To avoid the common pitfalls, with automation and static analysis in mind, we propose a configuration design that is user-friendly, easy to maintain, and extensible.
## Design
The design utilizes some of Python's reflection mechanism to extract configuration schematics from Python class definitions.
...
To be specific, it extracts information from class constructor arguments, including names, docstrings, default values, and data types (if type hints are available).
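The gist of this extraction can be sketched with Python's standard `inspect` module (a simplified illustration, not the actual ppdet implementation):

```python
import inspect

def extract_schema(cls):
    # Inspect the constructor to recover the configurable options.
    sig = inspect.signature(cls.__init__)
    schema = {}
    for name, param in sig.parameters.items():
        if name == 'self':
            continue
        schema[name] = {
            'default': None if param.default is inspect.Parameter.empty
                       else param.default,
            'type': None if param.annotation is inspect.Parameter.empty
                    else param.annotation,
        }
    # The class docstring can supply per-module descriptions as well.
    schema['__doc__'] = inspect.getdoc(cls)
    return schema

class AnchorGenerator:
    """Generate anchors for region proposals."""
    def __init__(self, stride: int = 16, scales: list = [8, 16, 32]):
        self.stride, self.scales = stride, scales

print(extract_schema(AnchorGenerator))
```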
This approach advocates modular and testable design, leading to a unified and extensible code base.
### API
Most of the functionality is exposed in `ppdet.core.workspace` module.
...
- `load_config` and `merge_config`: load a yaml file and merge config settings from the command line.
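A typical usage sketch of these functions (names from the list above; the exact call sites in the training scripts may differ):

```python
from ppdet.core.workspace import load_config, merge_config, create

# Load base settings from a yaml file into the global config
# (example config path).
cfg = load_config('configs/faster_rcnn_r50_1x.yml')

# Merge overrides, e.g., parsed from command line '-o' options.
merge_config({'max_iters': 90000})

# Instantiate a registered module; its constructor arguments are
# resolved from the (merged) global configuration.
model = create(cfg.architecture)
```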
### Example
Take the `RPNHead` module for example: it is composed of several PaddlePaddle operators. We first wrap those operators into classes, then pass in instances of these classes when instantiating the `RPNHead` module.
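A much simplified sketch of this pattern follows (the real `RPNHead` takes several such wrappers; names and arguments here are abbreviated for illustration):

```python
from ppdet.core.workspace import register, serializable

@serializable
class AnchorGenerator(object):
    # Wraps the anchor generation operator; constructor arguments
    # automatically become configurable options.
    def __init__(self, anchor_sizes=[32, 64, 128, 256, 512],
                 aspect_ratios=[0.5, 1.0, 2.0]):
        self.anchor_sizes = anchor_sizes
        self.aspect_ratios = aspect_ratios

@register
class RPNHead(object):
    # '__inject__' marks arguments that are themselves configurable
    # modules; instances are created and passed in by the framework.
    __inject__ = ['anchor_generator']

    def __init__(self, anchor_generator=AnchorGenerator().__dict__):
        super(RPNHead, self).__init__()
        self.anchor_generator = anchor_generator
```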
A small utility (`tools/configure.py`) is included to simplify the configuration process; it provides 4 commands to walk users through it:
...
## FAQ
**Q:** Some configuration options are used by multiple modules (e.g., `num_classes`); how do I avoid duplication in config files?
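**A:** PaddleDetection provides the `__shared__` annotation for exactly this purpose: list the option in the class definition and it resolves against a single global config value. A minimal sketch (module name and default are illustrative):

```python
from ppdet.core.workspace import register

@register
class BBoxHead(object):
    # 'num_classes' is declared shared: its value is resolved from the
    # global config entry 'num_classes', so it is specified only once
    # in the yaml file and reused by every module that declares it.
    __shared__ = ['num_classes']

    def __init__(self, num_classes=81):
        self.num_classes = num_classes
```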
**Q:** Can training alternate with evaluation runs?

**A:** Yes. Simply pass in `--eval` to evaluate at every `snapshot_iter`; the interval can be changed via `snapshot_iter` in the configuration file. If the evaluation dataset is large and evaluation becomes time-consuming, we suggest decreasing the evaluation frequency or evaluating after training. When evaluation is performed during training, the best model with the highest mAP is saved at each `snapshot_iter`; `best_model` is saved in the same path as `model_final`.
**Q:** Where are inference visualizations saved, and how can the inference behavior be adjusted?

**A:** The visualization files are saved in `output` by default; to specify a different path, simply add an `--output_dir=` flag. `--draw_threshold` is an optional argument (default 0.5); different thresholds will produce different results depending on the calculation of [NMS](https://ieeexplore.ieee.org/document/1699659). To run inference with a customized model path, set `-o weights=` to the desired path.