From 0bd0372b725670a9393abdbb43efce60a70465ac Mon Sep 17 00:00:00 2001 From: Jintao Lin Date: Sun, 12 Jul 2020 23:27:32 +0800 Subject: [PATCH] Extract rgb frames without denseflow (#14) --- docs/data_preparation.md | 14 ++--- docs/getting_started.md | 2 +- .../data/activitynet/preparing_activitynet.md | 2 +- tools/data/build_rawframes.py | 57 +++++++++++++++---- .../kinetics400/extract_rgb_frames_opencv.sh | 10 ++++ .../data/kinetics400/preparing_kinetics400.md | 12 +++- tools/data/mit/extract_rgb_frames_opencv.sh | 10 ++++ tools/data/mit/preparing_mit.md | 15 +++-- tools/data/mmit/extract_rgb_frames_opencv.sh | 8 +++ tools/data/mmit/preparing_mmit.md | 10 +++- tools/data/sthv1/extract_rgb_frames_opencv.sh | 7 +++ tools/data/sthv1/preparing_sthv1.md | 25 +++++--- tools/data/sthv2/extract_rgb_frames_opencv.sh | 7 +++ tools/data/sthv2/preparing_sthv2.md | 23 +++++--- .../thumos14/extract_rgb_frames_opencv.sh | 10 ++++ tools/data/thumos14/preparing_thumos14.md | 21 ++++--- .../data/ucf101/extract_rgb_frames_opencv.sh | 7 +++ tools/data/ucf101/preparing_ucf101.md | 10 +++- 18 files changed, 195 insertions(+), 55 deletions(-) create mode 100644 tools/data/kinetics400/extract_rgb_frames_opencv.sh create mode 100644 tools/data/mit/extract_rgb_frames_opencv.sh create mode 100644 tools/data/mmit/extract_rgb_frames_opencv.sh create mode 100644 tools/data/sthv1/extract_rgb_frames_opencv.sh create mode 100644 tools/data/sthv2/extract_rgb_frames_opencv.sh create mode 100644 tools/data/thumos14/extract_rgb_frames_opencv.sh create mode 100644 tools/data/ucf101/extract_rgb_frames_opencv.sh diff --git a/docs/data_preparation.md b/docs/data_preparation.md index 0d93355..0c5f5e1 100644 --- a/docs/data_preparation.md +++ b/docs/data_preparation.md @@ -11,7 +11,7 @@ To make video decoding faster, we support several efficient video loading librar ## Supported Datasets The supported datasets are listed below. -We provide shell scripts for data preparation under the path `$MMACTION/tools/data/`. +We provide shell scripts for data preparation under the path `$MMACTION2/tools/data/`. To ease usage, we provide tutorials of data deployment for each dataset. - [UCF101](https://www.crcv.ucf.edu/data/UCF101.php): See [preparing_ucf101.md](/tools/data/ucf101/preparing_ucf101.md). @@ -28,7 +28,7 @@ Now, you can switch to [getting_started.md](getting_started.md) to train and tes ## Getting Data The following guide is helpful when you want to experiment with custom dataset. -Similar to the datasets stated above, it is recommended organizing in `$MMACTION/data/$DATASET`. +Similar to the datasets stated above, it is recommended organizing in `$MMACTION2/data/$DATASET`. ### Prepare videos @@ -67,10 +67,10 @@ python build_rawframes.py ${SRC_FOLDER} ${OUT_FOLDER} [--task ${TASK}] [--level The recommended practice is 1. set `$OUT_FOLDER` to be a folder located in SSD. -2. symlink the link `$OUT_FOLDER` to `$MMACTION/data/$DATASET/rawframes`. +2. symlink the link `$OUT_FOLDER` to `$MMACTION2/data/$DATASET/rawframes`. ```shell -ln -s ${YOUR_FOLDER} $MMACTION/data/$DATASET/rawframes +ln -s ${YOUR_FOLDER} $MMACTION2/data/$DATASET/rawframes ``` ### Generate file list @@ -78,7 +78,7 @@ ln -s ${YOUR_FOLDER} $MMACTION/data/$DATASET/rawframes We provide a convenient script to generate annotation file list. You can use the following command to extract frames. 
```shell -cd $MMACTION +cd $MMACTION2 python tools/data/build_file_list.py ${DATASET} ${SRC_FOLDER} [--rgb-prefix ${RGB_PREFIX}] \ [--flow-x-prefix ${FLOW_X_PREFIX}] [--flow-y-prefix ${FLOW_Y_PREFIX}] [--num-split ${NUM_SPLIT}] \ [--subset ${SUBSET}] [--level ${LEVEL}] [--format ${FORMAT}] [--out-root-path ${OUT_ROOT_PATH}] \ @@ -87,8 +87,8 @@ python tools/data/build_file_list.py ${DATASET} ${SRC_FOLDER} [--rgb-prefix ${RG - `DATASET`: Dataset to be prepared, e.g., `ucf101`, `kinetics400`, `thumos14`, `sthv1`, `sthv2`, etc. - `SRC_FOLDER`: Folder of the corresponding data format: - - "$MMACTION/data/$DATASET/rawframes" if `--format rawframes`. - - "$MMACTION/data/$DATASET/videos" if `--format videos`. + - "$MMACTION2/data/$DATASET/rawframes" if `--format rawframes`. + - "$MMACTION2/data/$DATASET/videos" if `--format videos`. - `RGB_PREFIX`: Name prefix of rgb frames. - `FLOW_X_PREFIX`: Name prefix of x flow frames. - `FLOW_Y_PREFIX`: Name prefix of y flow frames. diff --git a/docs/getting_started.md b/docs/getting_started.md index a51491b..a8e9e5b 100644 --- a/docs/getting_started.md +++ b/docs/getting_started.md @@ -5,7 +5,7 @@ For installation instructions, please see [install.md](install.md). ## Datasets -It is recommended to symlink the dataset root to `$MMACTION/data`. +It is recommended to symlink the dataset root to `$MMACTION2/data`. If your folder structure is different, you may need to change the corresponding paths in config files. ``` diff --git a/tools/data/activitynet/preparing_activitynet.md b/tools/data/activitynet/preparing_activitynet.md index 62e72fb..0b9ab21 100644 --- a/tools/data/activitynet/preparing_activitynet.md +++ b/tools/data/activitynet/preparing_activitynet.md @@ -2,7 +2,7 @@ For basic dataset information, please refer to the official [website](http://activity-net.org/). Here, we use the ActivityNet rescaled feature provided in this [repo](https://github.com/wzmsltw/BSN-boundary-sensitive-network#code-and-data-preparation). -Before we start, please make sure that current working directory is `$MMACTION/tools/data/activitynet/`. +Before we start, please make sure that current working directory is `$MMACTION2/tools/data/activitynet/`. ## Step 1. Download Annotations First of all, you can run the following script to download annotation files. diff --git a/tools/data/build_rawframes.py b/tools/data/build_rawframes.py index e46c035..b4666b0 100644 --- a/tools/data/build_rawframes.py +++ b/tools/data/build_rawframes.py @@ -3,8 +3,12 @@ import glob import os import os.path as osp import sys +import warnings from multiprocessing import Pool +import mmcv +import numpy as np + def extract_frame(vid_item, dev_id=0): """Generate optical flow using dense flow. 
@@ -19,21 +23,53 @@ def extract_frame(vid_item, dev_id=0):
     """
     full_path, vid_path, vid_id, method, task = vid_item
     if ('/' in vid_path):
-        vid_name = vid_path.split('.')[0].split('/')[0]
-        out_full_path = osp.join(args.out_dir, vid_name)
+        act_name = osp.basename(osp.dirname(vid_path))
+        out_full_path = osp.join(args.out_dir, act_name)
     else:
         out_full_path = args.out_dir
 
     if task == 'rgb':
-        if args.new_short == 0:
-            cmd = osp.join(
-                f"denseflow '{full_path}' -b=20 -s=0 -o='{out_full_path}'"
-                f' -nw={args.new_width} -nh={args.new_height} -v')
+        if args.use_opencv:
+            # Unlike denseflow, OpenCV will not create a sub directory
+            # named after the video, so append the video name here
+            video_name = osp.splitext(osp.basename(vid_path))[0]
+            out_full_path = osp.join(out_full_path, video_name)
+
+            vr = mmcv.VideoReader(full_path)
+            for i in range(len(vr)):
+                if vr[i] is not None:
+                    w, h, c = np.shape(vr[i])
+                    if args.new_short == 0:
+                        if args.new_width == 0 or args.new_height == 0:
+                            out_img = vr[i]  # keep the original size
+                        else:
+                            out_img = mmcv.imresize(
+                                vr[i], (args.new_width, args.new_height))
+                    else:
+                        if min(h, w) == h:
+                            new_h = args.new_short
+                            new_w = int((new_h / h) * w)
+                        else:
+                            new_w = args.new_short
+                            new_h = int((new_w / w) * h)
+                        out_img = mmcv.imresize(vr[i], (new_h, new_w))
+                    mmcv.imwrite(out_img,
+                                 f'{out_full_path}/img_{i + 1:05d}.jpg')
+                else:
+                    warnings.warn(
+                        'Length inconsistent! '
+                        f'Early stop with {i + 1} out of {len(vr)} frames.')
+                    break
         else:
-            cmd = osp.join(
-                f"denseflow '{full_path}' -b=20 -s=0 -o='{out_full_path}'"
-                f' -ns={args.new_short} -v')
-        os.system(cmd)
+            if args.new_short == 0:
+                cmd = osp.join(
+                    f"denseflow '{full_path}' -b=20 -s=0 -o='{out_full_path}'"
+                    f' -nw={args.new_width} -nh={args.new_height} -v')
+            else:
+                cmd = osp.join(
+                    f"denseflow '{full_path}' -b=20 -s=0 -o='{out_full_path}'"
+                    f' -ns={args.new_short} -v')
+            os.system(cmd)
     elif task == 'flow':
         if args.new_short == 0:
             cmd = osp.join(
@@ -106,6 +142,10 @@ def parse_args():
         default='avi',
         choices=['avi', 'mp4', 'webm'],
         help='video file extensions')
+    parser.add_argument(
+        '--use-opencv',
+        action='store_true',
+        help='Whether to use opencv to extract rgb frames')
     parser.add_argument(
         '--new-width', type=int, default=0, help='resize image width')
     parser.add_argument(
diff --git a/tools/data/kinetics400/extract_rgb_frames_opencv.sh b/tools/data/kinetics400/extract_rgb_frames_opencv.sh
new file mode 100644
index 0000000..57f90e8
--- /dev/null
+++ b/tools/data/kinetics400/extract_rgb_frames_opencv.sh
@@ -0,0 +1,10 @@
+#!/usr/bin/env bash
+
+cd ../
+python build_rawframes.py ../../data/kinetics400/videos_train/ ../../data/kinetics400/rawframes_train/ --level 2 --ext mp4 --task rgb --new-width 340 --new-height 256 --use-opencv
+echo "Raw frames (RGB only) generated for train set"
+
+python build_rawframes.py ../../data/kinetics400/videos_val/ ../../data/kinetics400/rawframes_val/ --level 2 --ext mp4 --task rgb --new-width 340 --new-height 256 --use-opencv
+echo "Raw frames (RGB only) generated for val set"
+
+cd kinetics400/
diff --git a/tools/data/kinetics400/preparing_kinetics400.md b/tools/data/kinetics400/preparing_kinetics400.md
index 6d1edeb..072c033 100644
--- a/tools/data/kinetics400/preparing_kinetics400.md
+++ b/tools/data/kinetics400/preparing_kinetics400.md
@@ -1,7 +1,7 @@
 # Preparing Kinetics-400
 
 For basic dataset information, please refer to the official [website](https://deepmind.com/research/open-source/open-source-datasets/kinetics/).
-Before we start, please make sure that the directory is located at `$MMACTION/tools/data/kinetics400/`.
+Before we start, please make sure that the directory is located at `$MMACTION2/tools/data/kinetics400/`.
 
 ## Step 1. Prepare Annotations
 
@@ -51,19 +51,25 @@ mkdir /mnt/SSD/kinetics400_extracted_val/
 ln -s /mnt/SSD/kinetics400_extracted_val/ ../../../data/kinetics400/rawframes_val/
 ```
 
-If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
+If you only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames using denseflow.
 
 ```shell
 bash extract_rgb_frames.sh
 ```
 
+If you didn't install denseflow, you can still extract RGB frames using OpenCV with the following script, which also resizes the frames to 340x256.
+
+```shell
+bash extract_rgb_frames_opencv.sh
+```
+
 If both are required, run the following script to extract frames.
 
 ```shell
 bash extract_frames.sh
 ```
 
-These two commands above can generate images with size 340x256, if you want to generate images with short edge 320 (320p),
+These three commands above generate images with size 340x256. If you want to generate images with short edge 320 (320p),
 you can change the args `--new-width 340 --new-height 256` to `--new-short 320`.
 More details can be found in [data_preparation](/docs/data_preparation.md)
 
diff --git a/tools/data/mit/extract_rgb_frames_opencv.sh b/tools/data/mit/extract_rgb_frames_opencv.sh
new file mode 100644
index 0000000..0eb784c
--- /dev/null
+++ b/tools/data/mit/extract_rgb_frames_opencv.sh
@@ -0,0 +1,10 @@
+#!/usr/bin/env bash
+
+cd ../
+python build_rawframes.py ../../data/mit/videos/training ../../data/mit/rawframes/training/ --level 2 --ext mp4 --task rgb --use-opencv
+echo "Raw frames (RGB only) generated for train set"
+
+python build_rawframes.py ../../data/mit/videos/validation ../../data/mit/rawframes/validation/ --level 2 --ext mp4 --task rgb --use-opencv
+echo "Raw frames (RGB only) generated for val set"
+
+cd mit/
diff --git a/tools/data/mit/preparing_mit.md b/tools/data/mit/preparing_mit.md
index 015df71..29766b6 100644
--- a/tools/data/mit/preparing_mit.md
+++ b/tools/data/mit/preparing_mit.md
@@ -1,7 +1,7 @@
 # Preparing Moments in Time
 
 For basic dataset information, you can refer to the dataset [website](http://moments.csail.mit.edu/).
-Before we start, please make sure that the directory is located at `$MMACTION/tools/data/mit/`.
+Before we start, please make sure that the directory is located at `$MMACTION2/tools/data/mit/`.
 
 ## Step 1. Prepare Annotations and Videos
 
@@ -17,10 +17,7 @@ This part is **optional** if you only want to use the video loader.
 
 Before extracting, please refer to [install.md](/docs/install.md) for installing [dense_flow](https://github.com/open-mmlab/denseflow).
 
-
-If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
-
-Fist, You can run the following script to soft link the extracted frames.
+If you have plenty of SSD space, we recommend extracting frames there for better I/O performance. You can run the following script to soft link the extracted frames.
 ```shell
 # execute these two line (Assume the SSD is mounted at "/mnt/SSD/")
@@ -28,10 +25,18 @@ mkdir /mnt/SSD/mit_extracted/
 ln -s /mnt/SSD/mit_extracted/ ../../../data/mit/rawframes
 ```
 
-If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
+If you only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames using denseflow.
 
 ```shell
 bash extract_rgb_frames.sh
 ```
 
+If you didn't install denseflow, you can still extract RGB frames using OpenCV with the following script, but it will keep the original size of the images.
+
+```shell
+bash extract_rgb_frames_opencv.sh
+```
+
 If both are required, run the following script to extract frames.
 
 ```shell
diff --git a/tools/data/mmit/extract_rgb_frames_opencv.sh b/tools/data/mmit/extract_rgb_frames_opencv.sh
new file mode 100644
index 0000000..acaed71
--- /dev/null
+++ b/tools/data/mmit/extract_rgb_frames_opencv.sh
@@ -0,0 +1,8 @@
+#!/usr/bin/env bash
+
+cd ../
+python build_rawframes.py ../../data/mmit/videos/ ../../data/mmit/rawframes/ --task rgb --level 2 --ext mp4 --use-opencv
+
+echo "Generate raw frames (RGB only)"
+
+cd mmit/
diff --git a/tools/data/mmit/preparing_mmit.md b/tools/data/mmit/preparing_mmit.md
index a7f890e..dc03c87 100644
--- a/tools/data/mmit/preparing_mmit.md
+++ b/tools/data/mmit/preparing_mmit.md
@@ -1,7 +1,7 @@
 # Preparing Multi-Moments in Time
 
 For basic dataset information, you can refer to the dataset [website](moments.csail.mit.edu).
-Before we start, please make sure that the directory is located at `$MMACTION/tools/data/mmit/`.
+Before we start, please make sure that the directory is located at `$MMACTION2/tools/data/mmit/`.
 
 ## Step 1. Prepare Annotations and Videos
 
@@ -25,12 +25,18 @@ mkdir /mnt/SSD/mmit_extracted/
 ln -s /mnt/SSD/mmit_extracted/ ../../../data/mmit/rawframes
 ```
 
-If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
+If you only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames using denseflow.
 
 ```shell
 bash extract_rgb_frames.sh
 ```
 
+If you didn't install denseflow, you can still extract RGB frames using OpenCV with the following script, but it will keep the original size of the images.
+
+```shell
+bash extract_rgb_frames_opencv.sh
+```
+
 If both are required, run the following script to extract frames using "tvl1" algorithm.
 
 ```shell
diff --git a/tools/data/sthv1/extract_rgb_frames_opencv.sh b/tools/data/sthv1/extract_rgb_frames_opencv.sh
new file mode 100644
index 0000000..293b888
--- /dev/null
+++ b/tools/data/sthv1/extract_rgb_frames_opencv.sh
@@ -0,0 +1,7 @@
+#!/usr/bin/env bash
+
+cd ../
+python build_rawframes.py ../../data/sthv1/videos/ ../../data/sthv1/rawframes/ --task rgb --level 1 --ext webm --use-opencv
+echo "Generate raw frames (RGB only)"
+
+cd sthv1/
diff --git a/tools/data/sthv1/preparing_sthv1.md b/tools/data/sthv1/preparing_sthv1.md
index d0032ee..9e2981d 100644
--- a/tools/data/sthv1/preparing_sthv1.md
+++ b/tools/data/sthv1/preparing_sthv1.md
@@ -1,20 +1,20 @@
 # Preparing Something-Something V1
 
 For basic dataset information, you can refer to the dataset [website](https://20bn.com/datasets/something-something/v1).
-Before we start, please make sure that the directory is located at `$MMACTION/tools/data/sthv1/`.
+Before we start, please make sure that the directory is located at `$MMACTION2/tools/data/sthv1/`.
 
 ## Step 1. Prepare Annotations
 
-First of all, you have to sign in and download annotations to `$MMACTION/data/sthv1/annotations` on the official [website](https://20bn.com/datasets/something-something/v1).
+First of all, you have to sign in and download annotations to `$MMACTION2/data/sthv1/annotations` on the official [website](https://20bn.com/datasets/something-something/v1).
 
 ## Step 2. Prepare Videos
 
-Then, you can download all data parts to `$MMACTION/data/sthv1/` and use the following command to extract.
+Then, you can download all data parts to `$MMACTION2/data/sthv1/` and use the following command to extract.
 
 ```shell
-cd $MMACTION/data/sthv1/
+cd $MMACTION2/data/sthv1/
 cat 20bn-something-something-v1-?? | tar zx
-cd $MMACTION/tools/data/sthv1/
+cd $MMACTION2/tools/data/sthv1/
 ```
 
 ## Step 3. Extract RGB and Flow
@@ -33,17 +33,24 @@ mkdir /mnt/SSD/sthv1_extracted/
 ln -s /mnt/SSD/sthv1_extracted/ ../../../data/sthv1/rawframes
 ```
 
-If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
+If you only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames using denseflow.
 
 ```shell
-cd $MMACTION/tools/data/sthv1/
+cd $MMACTION2/tools/data/sthv1/
 bash extract_rgb_frames.sh
 ```
 
+If you didn't install denseflow, you can still extract RGB frames using OpenCV with the following script, but it will keep the original size of the images.
+
+```shell
+cd $MMACTION2/tools/data/sthv1/
+bash extract_rgb_frames_opencv.sh
+```
+
 If both are required, run the following script to extract frames.
 
 ```shell
-cd $MMACTION/tools/data/sthv1/
+cd $MMACTION2/tools/data/sthv1/
 bash extract_frames.sh
 ```
 
@@ -52,7 +59,7 @@ bash extract_frames.sh
 you can run the follow script to generate file list in the format of rawframes and videos.
 
 ```shell
-cd $MMACTION/tools/data/sthv1/
+cd $MMACTION2/tools/data/sthv1/
 bash generate_{rawframes, videos}_filelist.sh
 ```
 
diff --git a/tools/data/sthv2/extract_rgb_frames_opencv.sh b/tools/data/sthv2/extract_rgb_frames_opencv.sh
new file mode 100644
index 0000000..f5cb560
--- /dev/null
+++ b/tools/data/sthv2/extract_rgb_frames_opencv.sh
@@ -0,0 +1,7 @@
+#!/usr/bin/env bash
+
+cd ../
+python build_rawframes.py ../../data/sthv2/videos/ ../../data/sthv2/rawframes/ --task rgb --level 1 --ext webm --use-opencv
+echo "Generate raw frames (RGB only)"
+
+cd sthv2/
diff --git a/tools/data/sthv2/preparing_sthv2.md b/tools/data/sthv2/preparing_sthv2.md
index 8aac233..440b627 100644
--- a/tools/data/sthv2/preparing_sthv2.md
+++ b/tools/data/sthv2/preparing_sthv2.md
@@ -1,18 +1,18 @@
 # Preparing Something-Something V2
 
 For basic dataset information, you can refer to the dataset [website](https://20bn.com/datasets/something-something/v2).
-Before we start, please make sure that the directory is located at `$MMACTION/tools/data/sthv2/`.
+Before we start, please make sure that the directory is located at `$MMACTION2/tools/data/sthv2/`.
 
 ## Step 1. Prepare Annotations
 
-First of all, you have to sign in and download annotations to `$MMACTION/data/sthv2/annotations` on the official [website](https://20bn.com/datasets/something-something/v2).
+First of all, you have to sign in and download annotations to `$MMACTION2/data/sthv2/annotations` on the official [website](https://20bn.com/datasets/something-something/v2).
 
 ## Step 2. Prepare Videos
 
-Then, you can download all data parts to `$MMACTION/data/sthv2/` and use the following command to extract.
+Then, you can download all data parts to `$MMACTION2/data/sthv2/` and use the following command to extract.
 
 ```shell
-cd $MMACTION/data/sthv2/
+cd $MMACTION2/data/sthv2/
 cat 20bn-something-something-v2-?? | tar zx
 ```
 
@@ -32,17 +32,24 @@ mkdir /mnt/SSD/sthv2_extracted/
 ln -s /mnt/SSD/sthv2_extracted/ ../../../data/sthv2/rawframes
 ```
 
-If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
+If you only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames using denseflow.
 
 ```shell
-cd $MMACTION/tools/data/sthv2/
+cd $MMACTION2/tools/data/sthv2/
 bash extract_rgb_frames.sh
 ```
 
+If you didn't install denseflow, you can still extract RGB frames using OpenCV with the following script, but it will keep the original size of the images.
+
+```shell
+cd $MMACTION2/tools/data/sthv2/
+bash extract_rgb_frames_opencv.sh
+```
+
 If both are required, run the following script to extract frames.
 
 ```shell
-cd $MMACTION/tools/data/sthv2/
+cd $MMACTION2/tools/data/sthv2/
 bash extract_frames.sh
 ```
 
@@ -51,7 +58,7 @@ bash extract_frames.sh
 you can run the follow script to generate file list in the format of rawframes and videos.
 
 ```shell
-cd $MMACTION/tools/data/sthv2/
+cd $MMACTION2/tools/data/sthv2/
 bash generate_{rawframes, videos}_filelist.sh
 ```
 
diff --git a/tools/data/thumos14/extract_rgb_frames_opencv.sh b/tools/data/thumos14/extract_rgb_frames_opencv.sh
new file mode 100644
index 0000000..1fa914a
--- /dev/null
+++ b/tools/data/thumos14/extract_rgb_frames_opencv.sh
@@ -0,0 +1,10 @@
+#!/usr/bin/env bash
+
+cd ../
+python build_rawframes.py ../../data/thumos14/videos/validation/ ../../data/thumos14/rawframes/validation/ --level 1 --ext mp4 --task rgb --use-opencv
+echo "Raw frames (RGB only) generated for val set"
+
+python build_rawframes.py ../../data/thumos14/videos/test/ ../../data/thumos14/rawframes/test/ --level 1 --ext mp4 --task rgb --use-opencv
+echo "Raw frames (RGB only) generated for test set"
+
+cd thumos14/
diff --git a/tools/data/thumos14/preparing_thumos14.md b/tools/data/thumos14/preparing_thumos14.md
index b765338..513be4b 100644
--- a/tools/data/thumos14/preparing_thumos14.md
+++ b/tools/data/thumos14/preparing_thumos14.md
@@ -1,14 +1,14 @@
 # Preparing THUMOS'14
 
 For basic dataset information, you can refer to the dataset [website](https://www.crcv.ucf.edu/THUMOS14/download.html).
-Before we start, please make sure that the directory is located at `$MMACTION/tools/data/thumos14/`.
+Before we start, please make sure that the directory is located at `$MMACTION2/tools/data/thumos14/`.
 
 ## Step 1. Prepare Annotations
 
 First of all, run the following script to prepare annotations.
 
 ```shell
-cd $MMACTION/tools/data/thumos14/
+cd $MMACTION2/tools/data/thumos14/
 bash download_annotations.sh
 ```
 
@@ -17,7 +17,7 @@ bash download_annotations.sh
 Then, you can run the following script to prepare videos.
 ```shell
-cd $MMACTION/tools/data/thumos14/
+cd $MMACTION2/tools/data/thumos14/
 bash download_videos.sh
 ```
 
@@ -37,17 +37,24 @@ mkdir /mnt/SSD/thumos14_extracted/
 ln -s /mnt/SSD/thumos14_extracted/ ../data/thumos14/rawframes/
 ```
 
-If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
+If you only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames using denseflow.
 
 ```shell
-cd $MMACTION/tools/data/thumos14/
+cd $MMACTION2/tools/data/thumos14/
 bash extract_rgb_frames.sh
 ```
 
+If you didn't install denseflow, you can still extract RGB frames using OpenCV with the following script, but it will keep the original size of the images.
+
+```shell
+cd $MMACTION2/tools/data/thumos14/
+bash extract_rgb_frames_opencv.sh
+```
+
 If both are required, run the following script to extract frames.
 
 ```shell
-cd $MMACTION/tools/data/thumos14/
+cd $MMACTION2/tools/data/thumos14/
 bash extract_frames.sh tvl1
 ```
 
@@ -56,7 +63,7 @@ bash extract_frames.sh tvl1
 You can run the follow script to fetch pre-computed tag proposals.
 
 ```shell
-cd $MMACTION/tools/data/thumos14/
+cd $MMACTION2/tools/data/thumos14/
 bash fetch_tag_proposals.sh
 ```
 
diff --git a/tools/data/ucf101/extract_rgb_frames_opencv.sh b/tools/data/ucf101/extract_rgb_frames_opencv.sh
new file mode 100644
index 0000000..c6df18a
--- /dev/null
+++ b/tools/data/ucf101/extract_rgb_frames_opencv.sh
@@ -0,0 +1,7 @@
+#!/usr/bin/env bash
+
+cd ../
+python build_rawframes.py ../../data/ucf101/videos/ ../../data/ucf101/rawframes/ --task rgb --level 2 --ext avi --use-opencv
+echo "Generate raw frames (RGB only)"
+
+cd ucf101/
diff --git a/tools/data/ucf101/preparing_ucf101.md b/tools/data/ucf101/preparing_ucf101.md
index af84bad..061f2a8 100644
--- a/tools/data/ucf101/preparing_ucf101.md
+++ b/tools/data/ucf101/preparing_ucf101.md
@@ -1,7 +1,7 @@
 # Preparing UCF-101
 
 For basic dataset information, you can refer to the dataset [website](https://www.crcv.ucf.edu/data/UCF101.php).
-Before we start, please make sure that the directory is located at `$MMACTION/tools/data/ucf101/`.
+Before we start, please make sure that the directory is located at `$MMACTION2/tools/data/ucf101/`.
 
 ## Step 1. Prepare Annotations
 
@@ -35,12 +35,18 @@ mkdir /mnt/SSD/ucf101_extracted/
 ln -s /mnt/SSD/ucf101_extracted/ ../../../data/ucf101/rawframes
 ```
 
-If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
+If you only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames using denseflow.
 
 ```shell
 bash extract_rgb_frames.sh
 ```
 
+If you didn't install denseflow, you can still extract RGB frames using OpenCV with the following script, but it will keep the original size of the images.
+
+```shell
+bash extract_rgb_frames_opencv.sh
+```
+
 If both are required, run the following script to extract frames using "tvl1" algorithm.
 
 ```shell
-- 
GitLab
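
The OpenCV code path that `--use-opencv` enables in `build_rawframes.py` can also be exercised on its own when debugging a single video. The sketch below shows the same idea with `mmcv` only: read a video, optionally rescale the short side, and dump JPEG frames. It is a minimal sketch, not part of the patch; the function name, the paths, and the default `new_short=256` are illustrative assumptions.

```python
import os.path as osp

import mmcv


def extract_rgb_frames(video_path, out_dir, new_short=256):
    """Dump the RGB frames of one video as JPEG files using mmcv/OpenCV."""
    vr = mmcv.VideoReader(video_path)
    for i in range(len(vr)):
        frame = vr[i]
        if frame is None:
            # the container reported more frames than could be decoded
            break
        h, w = frame.shape[:2]
        if new_short > 0:
            # rescale so that the short side equals `new_short`
            scale = new_short / min(h, w)
            frame = mmcv.imresize(frame, (int(w * scale), int(h * scale)))
        # mmcv.imwrite creates the output directory if it does not exist yet
        mmcv.imwrite(frame, osp.join(out_dir, f'img_{i + 1:05d}.jpg'))


if __name__ == '__main__':
    # illustrative paths; adjust them to your own data layout
    extract_rgb_frames('demo/demo.mp4', 'data/demo_rawframes/demo', 256)
```

Functionally this mirrors what `build_rawframes.py --task rgb --use-opencv --new-short 256` does for each video, minus the multiprocessing and the class/level directory bookkeeping handled by the full script.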