Commit 51dc5ec4 authored by linjintao

Fix args from _ to -

Parent 7ae125a0
......@@ -28,7 +28,7 @@ python tools/train.py ${CONFIG_FILE} [optional arguments]
Example: train the I3D model on the Kinetics-400 dataset in a deterministic manner, with periodic validation.
```shell
python tools/train.py configs/recognition/i3d/i3d_r50_32x2x1_100e_kinetics400_rgb.py \
--work_dir work_dirs/i3d_r50_32x2x1_100e_kinetics400_rgb \
--work-dir work_dirs/i3d_r50_32x2x1_100e_kinetics400_rgb \
--validate --seed 0 --deterministic
```
......
......@@ -27,7 +27,7 @@ python tools/train.py ${CONFIG_FILE} [optional arguments]
Example: train the R(2+1)D model on the Kinetics-400 dataset in a deterministic manner, with periodic validation.
```shell
python tools/train.py configs/recognition/r2plus1d/r2plus1d_r34_8x8x1_180e_kinetics400_rgb.py \
--work_dir work_dirs/r2plus1d_r34_3d_8x8x1_180e_kinetics400_rgb \
--work-dir work_dirs/r2plus1d_r34_3d_8x8x1_180e_kinetics400_rgb \
--validate --seed 0 --deterministic
```
......@@ -43,7 +43,7 @@ Example: test R(2+1)D model on Kinetics-400 dataset and dump the result to a jso
```shell
python tools/test.py configs/recognition/r2plus1d/r2plus1d_r34_8x8x1_180e_kinetics400_rgb.py \
checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
--out result.json --average_clips=prob
--out result.json --average-clips=prob
```
For more details, you can refer to the **Test a dataset** part in [getting_started](/docs/getting_started.md#test-a-dataset).
......@@ -27,7 +27,7 @@ python tools/train.py ${CONFIG_FILE} [optional arguments]
Example: train the SlowFast model on the Kinetics-400 dataset in a deterministic manner, with periodic validation.
```shell
python tools/train.py configs/recognition/slowfast/slowfast_r50_4x16x1_256e_kinetics400_rgb.py \
--work_dir work_dirs/slowfast_r50_4x16x1_256e_kinetics400_rgb \
--work-dir work_dirs/slowfast_r50_4x16x1_256e_kinetics400_rgb \
--validate --seed 0 --deterministic
```
......@@ -43,7 +43,7 @@ Example: test SlowFast model on Kinetics-400 dataset and dump the result to a js
```shell
python tools/test.py configs/recognition/slowfast/slowfast_r50_4x16x1_256e_kinetics400_rgb.py \
checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
--out result.json --average_clips=prob
--out result.json --average-clips=prob
```
For more details, you can refer to the **Test a dataset** part in [getting_started](/docs/getting_started.md#test-a-dataset).
......@@ -29,7 +29,7 @@ python tools/train.py ${CONFIG_FILE} [optional arguments]
Example: train the SlowOnly model on the Kinetics-400 dataset in a deterministic manner, with periodic validation.
```shell
python tools/train.py configs/recognition/slowonly/slowonly_r50_4x16x1_256e_kinetics400_rgb.py \
--work_dir work_dirs/slowonly_r50_4x16x1_256e_kinetics400_rgb \
--work-dir work_dirs/slowonly_r50_4x16x1_256e_kinetics400_rgb \
--validate --seed 0 --deterministic
```
......@@ -45,7 +45,7 @@ Example: test SlowOnly model on Kinetics-400 dataset and dump the result to a js
```shell
python tools/test.py configs/recognition/slowonly/slowonly_r50_4x16x1_256e_kinetics400_rgb.py \
checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
--out result.json --average_clips=prob
--out result.json --average-clips=prob
```
For more details, you can refer to the **Test a dataset** part in [getting_started](/docs/getting_started.md#test-a-dataset).
......@@ -42,7 +42,7 @@ python tools/train.py ${CONFIG_FILE} [optional arguments]
Example: train the TSM model on the Kinetics-400 dataset in a deterministic manner, with periodic validation.
```shell
python tools/train.py configs/recognition/tsm/tsm_r50_1x1x8_50e_kinetics400_rgb.py \
--work_dir work_dirs/tsm_r50_1x1x8_100e_kinetics400_rgb \
--work-dir work_dirs/tsm_r50_1x1x8_100e_kinetics400_rgb \
--validate --seed 0 --deterministic
```
......
......@@ -75,7 +75,7 @@ python tools/train.py ${CONFIG_FILE} [optional arguments]
Example: train the TSN model on the Kinetics-400 dataset in a deterministic manner, with periodic validation.
```shell
python tools/train.py configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py \
--work_dir work_dirs/tsn_r50_1x1x3_100e_kinetics400_rgb \
--work-dir work_dirs/tsn_r50_1x1x3_100e_kinetics400_rgb \
--validate --seed 0 --deterministic
```
......
......@@ -46,8 +46,8 @@ You can use the following command to extract frames.
```shell
python build_rawframes.py ${SRC_FOLDER} ${OUT_FOLDER} [--task ${TASK}] [--level ${LEVEL}] \
[--num_worker ${NUM_WORKER}] [--flow_type ${FLOW_TYPE}] [--out_format ${OUT_FORMAT}] \
[--ext ${EXT}] [--new_width ${NEW_WIDTH}] [--new_height ${NEW_HEIGHT}] [--new_short ${NEW_SHORT}]
[--num-worker ${NUM_WORKER}] [--flow-type ${FLOW_TYPE}] [--out-format ${OUT_FORMAT}] \
[--ext ${EXT}] [--new-width ${NEW_WIDTH}] [--new-height ${NEW_HEIGHT}] [--new-short ${NEW_SHORT}]
[--resume]
```
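For instance, extracting both RGB and TV-L1 flow frames from a two-level directory of mp4 videos might look like the following sketch (the Kinetics-400 paths are placeholders borrowed from the scripts later in this commit):
```shell
python build_rawframes.py ../../data/kinetics400/videos_train/ ../../data/kinetics400/rawframes_train/ \
    --level 2 --flow-type tvl1 --ext mp4 --task both --new-width 340 --new-height 256
```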
......@@ -79,9 +79,9 @@ We provide a convenient script to generate annotation file list. You can use the
```shell
cd $MMACTION
python tools/data/build_file_list.py ${DATASET} ${SRC_FOLDER} [--rgb_prefix ${RGB_PREFIX}] \
[--flow_x_prefix ${FLOW_X_PREFIX}] [--flow_y_prefix ${FLOW_Y_PREFIX}] [--num_split ${NUM_SPLIT}] \
[--subset ${SUBSET}] [--level ${LEVEL}] [--format ${FORMAT}] [--out_root_path ${OUT_ROOT_PATH}] \
python tools/data/build_file_list.py ${DATASET} ${SRC_FOLDER} [--rgb-prefix ${RGB_PREFIX}] \
[--flow-x-prefix ${FLOW_X_PREFIX}] [--flow-y-prefix ${FLOW_Y_PREFIX}] [--num-split ${NUM_SPLIT}] \
[--subset ${SUBSET}] [--level ${LEVEL}] [--format ${FORMAT}] [--out-root-path ${OUT_ROOT_PATH}] \
[--shuffle]
```
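For example, a training file list for Kinetics-400 rawframes could be generated as below (the paths assume the standard data layout used elsewhere in this repo):
```shell
cd $MMACTION
python tools/data/build_file_list.py kinetics400 data/kinetics400/rawframes_train/ \
    --level 2 --format rawframes --num-split 1 --subset train --shuffle
```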
......
......@@ -46,12 +46,12 @@ You can use the following commands to test a dataset.
```shell
# single-gpu testing
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}] \
[--proc_per_gpu ${NUM_PROC_PER_GPU}] [--gpu_collect] [--tmpdir ${TMPDIR}] [--average_clips ${AVG_TYPE}] \
[--gpu-collect] [--tmpdir ${TMPDIR}] [--average-clips ${AVG_TYPE}] \
[--launcher ${JOB_LAUNCHER}] [--local_rank ${LOCAL_RANK}]
# multi-gpu testing
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}] \
[--proc_per_gpu ${NUM_PROC_PER_GPU}] [--gpu_collect] [--tmpdir ${TMPDIR}] [--average_clips ${AVG_TYPE}] \
[--gpu-collect] [--tmpdir ${TMPDIR}] [--average-clips ${AVG_TYPE}] \
[--launcher ${JOB_LAUNCHER}] [--local_rank ${LOCAL_RANK}]
```
......@@ -59,9 +59,8 @@ Optional arguments:
- `GPU_NUM`: Number of GPUs used to test the model. If not specified, it will be set to 1.
- `RESULT_FILE`: Filename of the output results. If not specified, the results will not be saved to a file.
- `EVAL_METRICS`: Items to be evaluated on the results. Allowed values depend on the dataset, e.g., `top_k_accuracy`, `mean_class_accuracy` are available for all datasets in recognition, `mean_average_precision` for Multi-Moments in Time, `AR@AN` for ActivityNet, etc.
- `NUM_PROC_PER_GPU`: Number of processes per GPU. If not specified, only one process will be assigned for a single gpu.
- `--gpu_collect`: If specified, recognition results will be collected using gpu communication. Otherwise, it will save the results on different gpus to `TMPDIR` and collect them by the rank 0 worker.
- `TMPDIR`: Temporary directory used for collecting results from multiple workers, available when `--gpu_collect` is not specified.
- `--gpu-collect`: If specified, recognition results will be collected using gpu communication. Otherwise, it will save the results on different gpus to `TMPDIR` and collect them by the rank 0 worker.
- `TMPDIR`: Temporary directory used for collecting results from multiple workers, available when `--gpu-collect` is not specified.
- `AVG_TYPE`: Type of averaging applied over the test clips. If set to `prob`, it will apply softmax before averaging the clip scores. Otherwise, it will directly average the clip scores.
- `JOB_LAUNCHER`: Items for distributed job initialization launcher. Allowed choices are `none`, `pytorch`, `slurm`, `mpi`. In particular, if set to `none`, it will test in a non-distributed mode.
- `LOCAL_RANK`: ID for local rank. If not specified, it will be set to 0.
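As a sketch of how these options combine, a single-GPU test run that dumps results to a file and averages clip probabilities could look like this (the config and checkpoint paths are placeholders):
```shell
python tools/test.py configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json --average-clips=prob
```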
......@@ -247,7 +246,7 @@ According to the [Linear Scaling Rule](https://arxiv.org/abs/1706.02677), you ne
python tools/train.py ${CONFIG_FILE} [optional arguments]
```
If you want to specify the working directory in the command, you can add an argument `--work_dir ${YOUR_WORK_DIR}`.
If you want to specify the working directory in the command, you can add an argument `--work-dir ${YOUR_WORK_DIR}`.
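For example (the config and directory below are placeholders):
```shell
python tools/train.py configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py \
    --work-dir work_dirs/tsn_r50_1x1x3_100e_kinetics400_rgb
```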
### Train with multiple GPUs
......@@ -258,10 +257,10 @@ If you want to specify the working directory in the command, you can add an argu
Optional arguments are:
- `--validate` (**strongly recommended**): Perform evaluation every k epochs during training (the default k is 5, which can be modified by changing the `interval` value in the `evaluation` dict in each config file).
- `--work_dir ${WORK_DIR}`: Override the working directory specified in the config file.
- `--resume_from ${CHECKPOINT_FILE}`: Resume from a previous checkpoint file.
- `--work-dir ${WORK_DIR}`: Override the working directory specified in the config file.
- `--resume-from ${CHECKPOINT_FILE}`: Resume from a previous checkpoint file.
- `--gpus ${GPU_NUM}`: Number of gpus to use, which is only applicable to non-distributed training.
- `--gpu_ids ${GPU_IDS}`: IDs of gpus to use, which is only applicable to non-distributed training.
- `--gpu-ids ${GPU_IDS}`: IDs of gpus to use, which is only applicable to non-distributed training.
- `--seed ${SEED}`: Seed for the random state in python, numpy and pytorch.
- `--deterministic`: If specified, it will set deterministic options for the CUDNN backend.
- `JOB_LAUNCHER`: Items for distributed job initialization launcher. Allowed choices are `none`, `pytorch`, `slurm`, `mpi`. In particular, if set to `none`, it will train in a non-distributed mode.
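A sketch combining several of these options in a non-distributed run (the config, work dir and GPU count are placeholders):
```shell
python tools/train.py configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py \
    --work-dir work_dirs/tsn_r50_1x1x3_100e_kinetics400_rgb \
    --gpus 4 --validate --seed 0 --deterministic
```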
......@@ -274,7 +273,7 @@ Difference between `resume-from` and `load-from`:
Here is an example of using 8 GPUs to resume training from a TSN checkpoint.
```shell
./tools/dist_train.sh configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py 8 --resume_from work_dirs/tsn_r50_1x1x3_100e_kinetics400_rgb/latest.pth
./tools/dist_train.sh configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py 8 --resume-from work_dirs/tsn_r50_1x1x3_100e_kinetics400_rgb/latest.pth
```
### Train with multiple machines
......
......@@ -22,19 +22,19 @@ def parse_args():
parser.add_argument(
'src_folder', type=str, help='root directory for the frames or videos')
parser.add_argument(
'--rgb_prefix', type=str, default='img_', help='prefix of rgb frames')
'--rgb-prefix', type=str, default='img_', help='prefix of rgb frames')
parser.add_argument(
'--flow_x_prefix',
'--flow-x-prefix',
type=str,
default='flow_x_',
help='prefix of flow x frames')
parser.add_argument(
'--flow_y_prefix',
'--flow-y-prefix',
type=str,
default='flow_y_',
help='prefix of flow y frames')
parser.add_argument(
'--num_split',
'--num-split',
type=int,
default=3,
help='number of split to file list')
......@@ -57,7 +57,7 @@ def parse_args():
choices=['rawframes', 'videos'],
help='data format')
parser.add_argument(
'--out_root_path',
'--out-root-path',
type=str,
default='data/',
help='root path for output')
......
......@@ -84,18 +84,18 @@ def parse_args():
default=2,
help='directory level of data')
parser.add_argument(
'--num_worker',
'--num-worker',
type=int,
default=8,
help='number of workers to build rawframes')
parser.add_argument(
'--flow_type',
'--flow-type',
type=str,
default=None,
choices=[None, 'tvl1', 'warp_tvl1', 'farn', 'brox'],
help='flow type to be generated')
parser.add_argument(
'--out_format',
'--out-format',
type=str,
default='jpg',
choices=['jpg', 'h5', 'png'],
......@@ -107,15 +107,15 @@ def parse_args():
choices=['avi', 'mp4', 'webm'],
help='video file extensions')
parser.add_argument(
'--new_width', type=int, default=0, help='resize image width')
'--new-width', type=int, default=0, help='resize image width')
parser.add_argument(
'--new_height', type=int, default=0, help='resize image height')
'--new-height', type=int, default=0, help='resize image height')
parser.add_argument(
'--new_short',
'--new-short',
type=int,
default=0,
help='resize image short side length keeping ratio')
parser.add_argument('--num_gpu', type=int, default=8, help='number of GPU')
parser.add_argument('--num-gpu', type=int, default=8, help='number of GPU')
parser.add_argument(
'--resume',
action='store_true',
......
#!/usr/bin/env bash
cd ../
python build_rawframes.py ../../data/kinetics400/videos_train/ ../../data/kinetics400/rawframes_train/ --level 2 --flow_type tvl1 --ext mp4 --task both --new_width 340 --new_height 256
python build_rawframes.py ../../data/kinetics400/videos_train/ ../../data/kinetics400/rawframes_train/ --level 2 --flow-type tvl1 --ext mp4 --task both --new-width 340 --new-height 256
echo "Raw frames (RGB and tv-l1) Generated for train set"
python build_rawframes.py ../../data/kinetics400/videos_val/ ../../data/kinetics400/rawframes_val/ --level 2 --flow_type tvl1 --ext mp4 --task both --new_width 340 --new_height 256
python build_rawframes.py ../../data/kinetics400/videos_val/ ../../data/kinetics400/rawframes_val/ --level 2 --flow-type tvl1 --ext mp4 --task both --new-width 340 --new-height 256
echo "Raw frames (RGB and tv-l1) Generated for val set"
cd kinetics400/
#!/usr/bin/env bash
cd ../
python build_rawframes.py ../../data/kinetics400/videos_train/ ../../data/kinetics400/rawframes_train/ --level 2 --ext mp4 --task rgb --new_width 340 --new_height 256
python build_rawframes.py ../../data/kinetics400/videos_train/ ../../data/kinetics400/rawframes_train/ --level 2 --ext mp4 --task rgb --new-width 340 --new-height 256
echo "Raw frames (RGB only) generated for train set"
python build_rawframes.py ../../data/kinetics400/videos_val/ ../../data/kinetics400/rawframes_val/ --level 2 --ext mp4 --task rgb --new_width 340 --new_height 256
python build_rawframes.py ../../data/kinetics400/videos_val/ ../../data/kinetics400/rawframes_val/ --level 2 --ext mp4 --task rgb --new-width 340 --new-height 256
echo "Raw frames (RGB only) generated for val set"
cd kinetics400/
#!/usr/bin/env bash
cd ../../../
PYTHONPATH=. python tools/data/build_file_list.py kinetics400 data/kinetics400/rawframes_train/ --level 2 --format rawframes --num_split 1 --subset train --shuffle
PYTHONPATH=. python tools/data/build_file_list.py kinetics400 data/kinetics400/rawframes_train/ --level 2 --format rawframes --num-split 1 --subset train --shuffle
echo "Train filelist for rawframes generated."
PYTHONPATH=. python tools/data/build_file_list.py kinetics400 data/kinetics400/rawframes_val/ --level 2 --format rawframes --num_split 1 --subset val --shuffle
PYTHONPATH=. python tools/data/build_file_list.py kinetics400 data/kinetics400/rawframes_val/ --level 2 --format rawframes --num-split 1 --subset val --shuffle
echo "Val filelist for rawframes generated."
cd tools/data/kinetics400/
#!/usr/bin/env bash
cd ../../../
PYTHONPATH=. python tools/data/build_file_list.py kinetics400 data/kinetics400/videos_train/ --level 2 --format videos --num_split 1 --subset train --shuffle
PYTHONPATH=. python tools/data/build_file_list.py kinetics400 data/kinetics400/videos_train/ --level 2 --format videos --num-split 1 --subset train --shuffle
echo "Train filelist for video generated."
PYTHONPATH=. python tools/data/build_file_list.py kinetics400 data/kinetics400/videos_val/ --level 2 --format videos --num_split 1 --subset val --shuffle
PYTHONPATH=. python tools/data/build_file_list.py kinetics400 data/kinetics400/videos_val/ --level 2 --format videos --num-split 1 --subset val --shuffle
echo "Val filelist for video generated."
cd tools/data/kinetics400/
......@@ -64,7 +64,7 @@ bash extract_frames.sh
```
These two commands above generate images of size 340x256. If you want to generate images with a short edge of 320 pixels (320p),
you can change the args `--new_width 340 --new_height 256` to `--new_short 320`.
you can change the args `--new-width 340 --new-height 256` to `--new-short 320`.
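A sketch of the 320p variant, reusing the placeholder paths from the extraction script above:
```shell
python build_rawframes.py ../../data/kinetics400/videos_train/ ../../data/kinetics400/rawframes_train/ \
    --level 2 --flow-type tvl1 --ext mp4 --task both --new-short 320
```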
More details can be found in [data_preparation](/docs/data_preparation.md).
## Step 4. Generate File List
......
#!/usr/bin/env bash
cd ../
python build_rawframes.py ../../data/mit/videos/training ../../data/mit/rawframes/training/ --level 2 --flow_type tvl1 --ext mp4 --task both
python build_rawframes.py ../../data/mit/videos/training ../../data/mit/rawframes/training/ --level 2 --flow-type tvl1 --ext mp4 --task both
echo "Raw frames (RGB and tv-l1) Generated for train set"
python build_rawframes.py ../../data/mit/videos/validation/ ../../data/mit/rawframes/validation/ --level 2 --flow_type tvl1 --ext mp4 --task both
python build_rawframes.py ../../data/mit/videos/validation/ ../../data/mit/rawframes/validation/ --level 2 --flow-type tvl1 --ext mp4 --task both
echo "Raw frames (RGB and tv-l1) Generated for val set"
cd mit/
#!/usr/bin/env bash
cd ../../../
PYTHONPATH=. python tools/data/build_file_list.py mit data/mit/rawframes/training/ --level 2 --format rawframes --num_split 1 --subset train --shuffle
PYTHONPATH=. python tools/data/build_file_list.py mit data/mit/rawframes/training/ --level 2 --format rawframes --num-split 1 --subset train --shuffle
echo "Train filelist for rawframes generated."
PYTHONPATH=. python tools/data/build_file_list.py mit data/mit/rawframes/validation/ --level 2 --format rawframes --num_split 1 --subset val --shuffle
PYTHONPATH=. python tools/data/build_file_list.py mit data/mit/rawframes/validation/ --level 2 --format rawframes --num-split 1 --subset val --shuffle
echo "Val filelist for rawframes generated."
cd tools/data/mit/
#!/usr/bin/env bash
cd ../../../
PYTHONPATH=. python tools/data/build_file_list.py mit data/mit/videos/training/ --level 2 --format videos --num_split 1 --subset train --shuffle
PYTHONPATH=. python tools/data/build_file_list.py mit data/mit/videos/training/ --level 2 --format videos --num-split 1 --subset train --shuffle
echo "Train filelist for videos generated."
PYTHONPATH=. python tools/data/build_file_list.py mit data/mit/videos/validation/ --level 2 --format videos --num_split 1 --subset val --shuffle
PYTHONPATH=. python tools/data/build_file_list.py mit data/mit/videos/validation/ --level 2 --format videos --num-split 1 --subset val --shuffle
echo "Val filelist for videos generated."
cd tools/data/mit/
#!/usr/bin/env bash
cd ../
python build_rawframes.py ../../data/mmit/videos/ ../../../data/mmit/rawframes/ --task both --level 2 --flow_type tvl1 --ext mp4
python build_rawframes.py ../../data/mmit/videos/ ../../../data/mmit/rawframes/ --task both --level 2 --flow-type tvl1 --ext mp4
echo "Raw frames (RGB and Flow) Generated"
cd mmit/
#!/usr/bin/env bash
cd ../../../
PYTHONPATH=. python tools/data/build_file_list.py mmit data/mmit/rawframes/ --level 2 --format rawframes --num_split 1 --subset train --shuffle
PYTHONPATH=. python tools/data/build_file_list.py mmit data/mmit/rawframes/ --level 2 --format rawframes --num-split 1 --subset train --shuffle
echo "Train filelist for rawframes generated."
PYTHONPATH=. python tools/data/build_file_list.py mmit data/mmit/rawframes/ --level 2 --format rawframes --num_split 1 --subset val --shuffle
PYTHONPATH=. python tools/data/build_file_list.py mmit data/mmit/rawframes/ --level 2 --format rawframes --num-split 1 --subset val --shuffle
echo "Val filelist for rawframes generated."
cd tools/data/mmit/
#!/usr/bin/env bash
cd ../../../
PYTHONPATH=. python tools/data/build_file_list.py mmit data/mmit/videos/ --level 2 --format videos --num_split 1 --subset train --shuffle
PYTHONPATH=. python tools/data/build_file_list.py mmit data/mmit/videos/ --level 2 --format videos --num-split 1 --subset train --shuffle
echo "Train filelist for videos generated."
PYTHONPATH=. python tools/data/build_file_list.py mmit data/mmit/videos/ --level 2 --format videos --num_split 1 --subset val --shuffle
PYTHONPATH=. python tools/data/build_file_list.py mmit data/mmit/videos/ --level 2 --format videos --num-split 1 --subset val --shuffle
echo "Val filelist for videos generated."
cd tools/data/mmit/
#!/usr/bin/env bash
cd ../
python build_rawframes.py ../../data/sthv1/videos/ ../../data/sthv1/rawframes/ --task both --level 1 --flow_type tvl1 --ext webm
python build_rawframes.py ../../data/sthv1/videos/ ../../data/sthv1/rawframes/ --task both --level 1 --flow-type tvl1 --ext webm
echo "Raw frames (RGB and tv-l1) Generated"
cd sthv1/
#!/usr/bin/env bash
cd ../../../
PYTHONPATH=. python tools/data/build_file_list.py sthv1 data/sthv1/rawframes/ --num_split 1 --level 1 --subset train --format rawframe --shuffle
PYTHONPATH=. python tools/data/build_file_list.py sthv1 data/sthv1/rawframes/ --num_split 1 --level 1 --subset val --format rawframe --shuffle
PYTHONPATH=. python tools/data/build_file_list.py sthv1 data/sthv1/rawframes/ --num-split 1 --level 1 --subset train --format rawframe --shuffle
PYTHONPATH=. python tools/data/build_file_list.py sthv1 data/sthv1/rawframes/ --num-split 1 --level 1 --subset val --format rawframe --shuffle
echo "Filelist for rawframes generated."
cd tools/data/sthv1/
#!/usr/bin/env bash
cd ../../../
PYTHONPATH=. python tools/data/build_file_list.py sthv1 data/sthv1/videos/ --num_split 1 --level 1 --subset train --format videos --shuffle
PYTHONPATH=. python tools/data/build_file_list.py sthv1 data/sthv1/videos/ --num_split 1 --level 1 --subset val --format videos --shuffle
PYTHONPATH=. python tools/data/build_file_list.py sthv1 data/sthv1/videos/ --num-split 1 --level 1 --subset train --format videos --shuffle
PYTHONPATH=. python tools/data/build_file_list.py sthv1 data/sthv1/videos/ --num-split 1 --level 1 --subset val --format videos --shuffle
echo "Filelist for videos generated."
cd tools/data/sthv1/
#!/usr/bin/env bash
cd ../
python build_rawframes.py ../../data/sthv2/videos/ ../../data/sthv2/rawframes/ --task both --level 1 --flow_type tvl1 --ext webm
python build_rawframes.py ../../data/sthv2/videos/ ../../data/sthv2/rawframes/ --task both --level 1 --flow-type tvl1 --ext webm
echo "Raw frames (RGB and tv-l1) Generated"
cd sthv2/
#!/usr/bin/env bash
cd ../../../
PYTHONPATH=. python tools/data/build_file_list.py sthv2 data/sthv2/rawframes/ --num_split 1 --level 1 --subset train --format rawframe --shuffle
PYTHONPATH=. python tools/data/build_file_list.py sthv2 data/sthv2/rawframes/ --num_split 1 --level 1 --subset val --format rawframe --shuffle
PYTHONPATH=. python tools/data/build_file_list.py sthv2 data/sthv2/rawframes/ --num-split 1 --level 1 --subset train --format rawframe --shuffle
PYTHONPATH=. python tools/data/build_file_list.py sthv2 data/sthv2/rawframes/ --num-split 1 --level 1 --subset val --format rawframe --shuffle
echo "Filelist for rawframes generated."
cd tools/data/sthv2/
#!/usr/bin/env bash
cd ../../../
PYTHONPATH=. python tools/data/build_file_list.py sthv2 data/sthv2/videos/ --num_split 1 --level 1 --subset train --format videos --shuffle
PYTHONPATH=. python tools/data/build_file_list.py sthv2 data/sthv2/videos/ --num_split 1 --level 1 --subset val --format videos --shuffle
PYTHONPATH=. python tools/data/build_file_list.py sthv2 data/sthv2/videos/ --num-split 1 --level 1 --subset train --format videos --shuffle
PYTHONPATH=. python tools/data/build_file_list.py sthv2 data/sthv2/videos/ --num-split 1 --level 1 --subset val --format videos --shuffle
echo "Filelist for videos generated."
cd tools/data/sthv2/
#!/usr/bin/env bash
cd ../
python build_rawframes.py ../../data/thumos14/videos/validation/ ../../data/thumos14/rawframes/validation/ --level 1 --flow_type tvl1 --ext mp4 --task both
python build_rawframes.py ../../data/thumos14/videos/validation/ ../../data/thumos14/rawframes/validation/ --level 1 --flow-type tvl1 --ext mp4 --task both
echo "Raw frames (RGB and tv-l1) Generated for val set"
python build_rawframes.py ../../data/thumos14/videos/test/ ../../data/thumos14/rawframes/test/ --level 1 --flow_type tvl1 --ext mp4 --task both
python build_rawframes.py ../../data/thumos14/videos/test/ ../../data/thumos14/rawframes/test/ --level 1 --flow-type tvl1 --ext mp4 --task both
echo "Raw frames (RGB and tv-l1) Generated for test set"
cd thumos14/
......@@ -3,6 +3,6 @@
FLOW_TYPE=$1
cd ../
python build_rawframes.py ../../data/ucf101/videos/ ../../data/ucf101/rawframes/ --task both --level 2 --flow_type ${FLOW_TYPE}
python build_rawframes.py ../../data/ucf101/videos/ ../../data/ucf101/rawframes/ --task both --level 2 --flow-type ${FLOW_TYPE}
echo "Raw frames (RGB and Flow) Generated"
cd ucf101/
......@@ -80,8 +80,8 @@ def extract_dense_flow(path,
and 'farneback'. Default: 'tvl1'.
"""
frames = []
if osp.exists(path):
frames = []
vid = cv2.VideoCapture(path)
flag, f = vid.read()
while flag:
......
......@@ -21,4 +21,4 @@ srun -p ${PARTITION} \
--cpus-per-task=${CPUS_PER_TASK} \
--kill-on-bad-exit=1 \
${SRUN_ARGS} \
python -u tools/train.py ${CONFIG} --work_dir=${WORK_DIR} --launcher="slurm" ${PY_ARGS}
python -u tools/train.py ${CONFIG} --work-dir=${WORK_DIR} --launcher="slurm" ${PY_ARGS}
......@@ -28,21 +28,16 @@ def parse_args():
' "top_k_accuracy", "mean_class_accuracy" for video dataset')
parser.add_argument('--show', action='store_true', help='show results')
parser.add_argument(
'--proc_per_gpu',
default=1,
type=int,
help='Number of processes per GPU')
parser.add_argument(
'--gpu_collect',
'--gpu-collect',
action='store_true',
help='whether to use gpu to collect results')
parser.add_argument(
'--tmpdir',
help='tmp directory used for collecting results from multiple '
'workers, available when gpu_collect is not specified')
'workers, available when gpu-collect is not specified')
parser.add_argument('--options', nargs='+', help='custom options')
parser.add_argument(
'--average_clips',
'--average-clips',
choices=['score', 'prob'],
default='score',
help='average type when averaging test clips')
......
......@@ -19,9 +19,9 @@ from mmaction.utils import collect_env, get_root_logger
def parse_args():
parser = argparse.ArgumentParser(description='Train a recognizer')
parser.add_argument('config', help='train config file path')
parser.add_argument('--work_dir', help='the dir to save logs and models')
parser.add_argument('--work-dir', help='the dir to save logs and models')
parser.add_argument(
'--resume_from', help='the checkpoint file to resume from')
'--resume-from', help='the checkpoint file to resume from')
parser.add_argument(
'--validate',
action='store_true',
......@@ -33,7 +33,7 @@ def parse_args():
help='number of gpus to use '
'(only applicable to non-distributed training)')
group_gpus.add_argument(
'--gpu_ids',
'--gpu-ids',
type=int,
nargs='+',
help='ids of gpus to use '
......