Commit f3d6fdeb authored by linjintao

Merge branch 'polish_docs' into 'master'

Fix typo in docs

See merge request open-mmlab/mmaction-lite!386
......@@ -34,9 +34,9 @@ test_cfg = dict(average_clips=None) # Config for testing hyperparameters for TSN
dataset_type = 'RawframeDataset' # Type of dataset for training, validation and testing
data_root = 'data/kinetics400/rawframes_train/' # Root path to data for training
data_root_val = 'data/kinetics400/rawframes_val/' # Root path to data for validation and testing
-ann_file_train = 'data/kinetics400/kinetics_train_list.txt' # Path to the annotation file for training
-ann_file_val = 'data/kinetics400/kinetics_val_list.txt' # Path to the annotation file for validation
-ann_file_test = 'data/kinetics400/kinetics_val_list.txt' # Path to the annotation file for testing
+ann_file_train = 'data/kinetics400/kinetics400_train_list_rawframes.txt' # Path to the annotation file for training
+ann_file_val = 'data/kinetics400/kinetics400_val_list_rawframes.txt' # Path to the annotation file for validation
+ann_file_test = 'data/kinetics400/kinetics400_val_list_rawframes.txt' # Path to the annotation file for testing
img_norm_cfg = dict( # Config of image normalization used in data pipeline
mean=[123.675, 116.28, 103.53], # Mean values of different channels to normalize
std=[58.395, 57.12, 57.375], # Std values of different channels to normalize
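As a quick illustration (not part of the original config), these values describe a per-channel `(x - mean) / std` normalization applied to RGB values in the 0-255 range:

```python
# Illustration only: per-channel normalization with the config's
# mean/std values; input pixels are RGB in the 0-255 range.
mean = [123.675, 116.28, 103.53]
std = [58.395, 57.12, 57.375]

def normalize_pixel(rgb):
    """Map one RGB pixel to the normalized range the model expects."""
    return [(v - m) / s for v, m, s in zip(rgb, mean, std)]

# A pixel exactly at the channel means maps to all zeros.
print(normalize_pixel([123.675, 116.28, 103.53]))
```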
......
......@@ -223,7 +223,7 @@ We adopt distributed training for both single machine and multiple machines.
Supposing that the server has 8 GPUs, 8 processes will be started and each process runs on a single GPU.
Each process keeps an isolated model, data loader, and optimizer.
-Model parameters are only synchronized once at the begining.
+Model parameters are only synchronized once at the beginning.
After a forward and backward pass, gradients will be allreduced among all GPUs,
and the optimizer will update model parameters.
Since the gradients are allreduced, the model parameters stay the same for all processes after the iteration.
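The synchronization logic above can be sketched with a toy allreduce (plain Python, no actual `torch.distributed`; the worker count and gradient values are made up for illustration):

```python
# Toy sketch of one training iteration under data parallelism:
# each worker computes gradients on its own mini-batch, the gradients
# are allreduced (averaged here), and every worker applies the same
# update, so parameters remain identical without re-synchronizing.
def allreduce_mean(grads_per_worker):
    n = len(grads_per_worker)
    return [sum(g) / n for g in zip(*grads_per_worker)]

def sgd_step(params, grads, lr=0.1):
    return [p - lr * g for p, g in zip(params, grads)]

params = [1.0, -2.0]                      # synchronized once at the beginning
local_grads = [[0.5, 1.0], [1.5, 0.0]]    # two workers, different mini-batches
avg_grads = allreduce_mean(local_grads)   # same result on every worker
updated = [sgd_step(params, avg_grads) for _ in local_grads]
assert updated[0] == updated[1]           # parameters stay in lockstep
```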
......@@ -294,7 +294,7 @@ You can check [slurm_train.sh](/tools/slurm_train.sh) for full arguments and env
If you just have multiple machines connected with Ethernet, you can refer to
the PyTorch [launch utility](https://pytorch.org/docs/stable/distributed.html#launch-utility).
-Usually it is slow if you do not have high speed networking like infiniband.
+Usually it is slow if you do not have high speed networking like InfiniBand.
### Launch multiple jobs on a single machine
......
......@@ -55,7 +55,7 @@ you need to install the prebuilt PyTorch with CUDA 9.2.
conda install pytorch=1.3.1 cudatoolkit=9.2 torchvision=0.4.2 -c pytorch
```
-If you build PyTorch from source instead of installing the prebuilt pacakge, you can use more CUDA versions such as 9.0.
+If you build PyTorch from source instead of installing the prebuilt package, you can use more CUDA versions such as 9.0.
c. Clone the mmaction repository
......
......@@ -37,7 +37,7 @@ model = dict(
init_std=0.01))
```
-Note that the `pretrained='torchvision://resnet50'` setting is used for initilizing backbone.
+Note that the `pretrained='torchvision://resnet50'` setting is used for initializing backbone.
If you are training a new model from ImageNet-pretrained weights, this is for you.
However, this setting is not related to our task at hand.
What we need is `load_from`, which will be discussed later.
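The two settings can be contrasted in a small config sketch (the checkpoint path below is a placeholder, and the surrounding keys are abbreviated; check them against your actual config):

```python
# Sketch: `pretrained` under the backbone initializes only the backbone
# (here from torchvision's ImageNet ResNet-50), while the top-level
# `load_from` key loads a full checkpoint of the whole model.
model = dict(
    backbone=dict(
        type='ResNet',
        pretrained='torchvision://resnet50',  # backbone-only initialization
        depth=50))
# Placeholder path -- point this at a real checkpoint for your task.
load_from = 'checkpoints/my_full_model.pth'
```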
......
......@@ -94,7 +94,7 @@ There are two ways to work with custom datasets.
a pickle or json file, then you can simply use `RawframeDataset`, `VideoDataset` or `ActivityNetDataset`.
After the data pre-processing, the users need to further modify the config files to use the dataset.
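For orientation, a rawframe-style annotation list commonly stores one clip per line as `frame_directory total_frames label`; the sketch below parses that layout (verify the exact format against your generated files):

```python
# Sketch: parse a rawframe-style annotation list where each line reads
# "<frame_directory> <total_frames> <label>".
def parse_rawframe_list(text):
    samples = []
    for line in text.strip().splitlines():
        frame_dir, total_frames, label = line.split()
        samples.append(dict(frame_dir=frame_dir,
                            total_frames=int(total_frames),
                            label=int(label)))
    return samples

ann = "some/video_001 300 4\nsome/video_002 250 7"
samples = parse_rawframe_list(ann)
print(samples[0]['frame_dir'], samples[0]['total_frames'])
```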
-Here is an example of using a custom dataset in Rawframe format.
+Here is an example of using a custom dataset in rawframe format.
In `configs/task/method/my_custom_config.py`:
......
......@@ -51,7 +51,7 @@ mkdir /mnt/SSD/kinetics400_extracted_val/
ln -s /mnt/SSD/kinetics400_extracted_val/ ../../../data/kinetics400/rawframes_val/
```
-If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-comsuming), consider running the following script to extract **RGB-only** frames.
+If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
```shell
bash extract_rgb_frames.sh
......
......@@ -18,7 +18,7 @@ This part is **optional** if you only want to use the video loader.
Before extracting, please refer to [install.md](/docs/install.md) for installing [dense_flow](https://github.com/open-mmlab/denseflow).
-If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-comsuming), consider running the following script to extract **RGB-only** frames.
+If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
First, you can run the following script to soft link the extracted frames.
......
......@@ -25,7 +25,7 @@ mkdir /mnt/SSD/mmit_extracted/
ln -s /mnt/SSD/mmit_extracted/ ../../../data/mmit/rawframes
```
-If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-comsuming), consider running the following script to extract **RGB-only** frames.
+If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
```shell
bash extract_rgb_frames.sh
......
......@@ -33,7 +33,7 @@ mkdir /mnt/SSD/sthv1_extracted/
ln -s /mnt/SSD/sthv1_extracted/ ../../../data/sthv1/rawframes
```
-If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-comsuming), consider running the following script to extract **RGB-only** frames.
+If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
```shell
cd $MMACTION/tools/data/sthv1/
......
......@@ -32,7 +32,7 @@ mkdir /mnt/SSD/sthv2_extracted/
ln -s /mnt/SSD/sthv2_extracted/ ../../../data/sthv2/rawframes
```
-If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-comsuming), consider running the following script to extract **RGB-only** frames.
+If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
```shell
cd $MMACTION/tools/data/sthv2/
......
-# Preparing THUMOS-14
+# Preparing THUMOS'14
For basic dataset information, you can refer to the dataset [website](https://www.crcv.ucf.edu/THUMOS14/download.html).
Before we start, please make sure that the directory is located at `$MMACTION/tools/data/thumos14/`.
......@@ -37,7 +37,7 @@ mkdir /mnt/SSD/thumos14_extracted/
ln -s /mnt/SSD/thumos14_extracted/ ../data/thumos14/rawframes/
```
-If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-comsuming), consider running the following script to extract **RGB-only** frames.
+If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
```shell
cd $MMACTION/tools/data/thumos14/
......@@ -62,10 +62,10 @@ bash fetch_tag_proposals.sh
## Step 5. Check Directory Structure
-After the whole data process for THUMOS-14 preparation,
-you will get the rawframes (RGB + Flow), videos and annotation files for THUMOS-14.
+After the whole data process for THUMOS'14 preparation,
+you will get the rawframes (RGB + Flow), videos and annotation files for THUMOS'14.
-In the context of the whole project (for THUMOS-14 only), the folder structure will look like:
+In the context of the whole project (for THUMOS'14 only), the folder structure will look like:
```
mmaction
......@@ -103,4 +103,4 @@ mmaction
│ │ │ │ ├── video_test_0000001
```
-For training and evaluating on THUMOS-14, please refer to [getting_started.md](/docs/getting_started.md).
+For training and evaluating on THUMOS'14, please refer to [getting_started.md](/docs/getting_started.md).
......@@ -35,7 +35,7 @@ mkdir /mnt/SSD/ucf101_extracted/
ln -s /mnt/SSD/ucf101_extracted/ ../../../data/ucf101/rawframes
```
-If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-comsuming), consider running the following script to extract **RGB-only** frames.
+If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
```shell
bash extract_rgb_frames.sh
......