Unverified commit 0bd0372b, authored by Jintao Lin, committed by GitHub

Extract rgb frames without denseflow (#14)

Parent 0483cef1
......@@ -11,7 +11,7 @@ To make video decoding faster, we support several efficient video loading librar
## Supported Datasets
The supported datasets are listed below.
We provide shell scripts for data preparation under the path `$MMACTION/tools/data/`.
We provide shell scripts for data preparation under the path `$MMACTION2/tools/data/`.
To ease usage, we provide tutorials of data deployment for each dataset.
- [UCF101](https://www.crcv.ucf.edu/data/UCF101.php): See [preparing_ucf101.md](/tools/data/ucf101/preparing_ucf101.md).
......@@ -28,7 +28,7 @@ Now, you can switch to [getting_started.md](getting_started.md) to train and tes
## Getting Data
The following guide is helpful when you want to experiment with custom dataset.
Similar to the datasets stated above, it is recommended to organize them in `$MMACTION/data/$DATASET`.
Similar to the datasets stated above, it is recommended to organize them in `$MMACTION2/data/$DATASET`.
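For instance, a minimal sketch of the expected layout for a hypothetical custom dataset (the name `mydataset` and the `annotations` folder are assumptions for illustration; `videos` and `rawframes` are the folders used by the commands below):

```shell
# create the conventional sub-folders under the dataset root
mkdir -p $MMACTION2/data/mydataset/videos
mkdir -p $MMACTION2/data/mydataset/rawframes
mkdir -p $MMACTION2/data/mydataset/annotations
```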
### Prepare videos
......@@ -67,10 +67,10 @@ python build_rawframes.py ${SRC_FOLDER} ${OUT_FOLDER} [--task ${TASK}] [--level
The recommended practice is
1. set `$OUT_FOLDER` to be a folder located on the SSD.
2. symlink `$OUT_FOLDER` to `$MMACTION/data/$DATASET/rawframes`.
2. symlink `$OUT_FOLDER` to `$MMACTION2/data/$DATASET/rawframes`.
```shell
ln -s ${YOUR_FOLDER} $MMACTION/data/$DATASET/rawframes
ln -s ${YOUR_FOLDER} $MMACTION2/data/$DATASET/rawframes
```
### Generate file list
......@@ -78,7 +78,7 @@ ln -s ${YOUR_FOLDER} $MMACTION/data/$DATASET/rawframes
We provide a convenient script to generate annotation file lists. You can use the following command to generate the file lists (a concrete example is given after the argument descriptions below).
```shell
cd $MMACTION
cd $MMACTION2
python tools/data/build_file_list.py ${DATASET} ${SRC_FOLDER} [--rgb-prefix ${RGB_PREFIX}] \
[--flow-x-prefix ${FLOW_X_PREFIX}] [--flow-y-prefix ${FLOW_Y_PREFIX}] [--num-split ${NUM_SPLIT}] \
[--subset ${SUBSET}] [--level ${LEVEL}] [--format ${FORMAT}] [--out-root-path ${OUT_ROOT_PATH}] \
......@@ -87,8 +87,8 @@ python tools/data/build_file_list.py ${DATASET} ${SRC_FOLDER} [--rgb-prefix ${RG
- `DATASET`: Dataset to be prepared, e.g., `ucf101`, `kinetics400`, `thumos14`, `sthv1`, `sthv2`, etc.
- `SRC_FOLDER`: Folder of the corresponding data format:
- "$MMACTION/data/$DATASET/rawframes" if `--format rawframes`.
- "$MMACTION/data/$DATASET/videos" if `--format videos`.
- "$MMACTION2/data/$DATASET/rawframes" if `--format rawframes`.
- "$MMACTION2/data/$DATASET/videos" if `--format videos`.
- `RGB_PREFIX`: Name prefix of RGB frames.
- `FLOW_X_PREFIX`: Name prefix of x-direction flow frames.
- `FLOW_Y_PREFIX`: Name prefix of y-direction flow frames.
......
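For instance, a minimal sketch of generating a rawframes file list for UCF-101 (assuming frames were already extracted to `$MMACTION2/data/ucf101/rawframes` with a 2-level directory structure; all other flags keep their defaults):

```shell
cd $MMACTION2
python tools/data/build_file_list.py ucf101 data/ucf101/rawframes/ --level 2 --format rawframes
```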
......@@ -5,7 +5,7 @@ For installation instructions, please see [install.md](install.md).
## Datasets
It is recommended to symlink the dataset root to `$MMACTION/data`.
It is recommended to symlink the dataset root to `$MMACTION2/data`.
If your folder structure is different, you may need to change the corresponding paths in config files.
```
......
......@@ -2,7 +2,7 @@
For basic dataset information, please refer to the official [website](http://activity-net.org/).
Here, we use the ActivityNet rescaled feature provided in this [repo](https://github.com/wzmsltw/BSN-boundary-sensitive-network#code-and-data-preparation).
Before we start, please make sure that current working directory is `$MMACTION/tools/data/activitynet/`.
Before we start, please make sure that the current working directory is `$MMACTION2/tools/data/activitynet/`.
## Step 1. Download Annotations
First of all, you can run the following script to download annotation files.
......
......@@ -3,8 +3,12 @@ import glob
import os
import os.path as osp
import sys
import warnings
from multiprocessing import Pool
import mmcv
import numpy as np
def extract_frame(vid_item, dev_id=0):
"""Generate optical flow using dense flow.
......@@ -19,21 +23,50 @@ def extract_frame(vid_item, dev_id=0):
"""
full_path, vid_path, vid_id, method, task = vid_item
if ('/' in vid_path):
vid_name = vid_path.split('.')[0].split('/')[0]
out_full_path = osp.join(args.out_dir, vid_name)
act_name = osp.basename(osp.dirname(vid_path))
out_full_path = osp.join(args.out_dir, act_name)
else:
out_full_path = args.out_dir
if task == 'rgb':
if args.new_short == 0:
cmd = osp.join(
f"denseflow '{full_path}' -b=20 -s=0 -o='{out_full_path}'"
f' -nw={args.new_width} -nh={args.new_height} -v')
if args.use_opencv:
# Not like using denseflow,
# Use OpenCV will not make a sub directory with the video name
video_name = osp.splitext(osp.basename(vid_path))[0]
out_full_path = osp.join(out_full_path, video_name)
vr = mmcv.VideoReader(full_path)
for i in range(len(vr)):
if vr[i] is not None:
w, h, c = np.shape(vr[i])
if args.new_short == 0:
out_img = mmcv.imresize(vr[i], (args.new_width,
args.new_height))
else:
if min(h, w) == h:
new_h = args.new_short
new_w = int((new_h / h) * w)
else:
new_w = args.new_short
new_h = int((new_w / w) * h)
out_img = mmcv.imresize(vr[i], (new_h, new_w))
mmcv.imwrite(out_img,
f'{out_full_path}/img_{i + 1:05d}.jpg')
else:
warnings.warn(
'Length inconsistent!'
f'Early stop with {i + 1} out of {len(vr)} frames.')
break
else:
cmd = osp.join(
f"denseflow '{full_path}' -b=20 -s=0 -o='{out_full_path}'"
f' -ns={args.new_short} -v')
os.system(cmd)
if args.new_short == 0:
cmd = osp.join(
f"denseflow '{full_path}' -b=20 -s=0 -o='{out_full_path}'"
f' -nw={args.new_width} -nh={args.new_height} -v')
else:
cmd = osp.join(
f"denseflow '{full_path}' -b=20 -s=0 -o='{out_full_path}'"
f' -ns={args.new_short} -v')
os.system(cmd)
elif task == 'flow':
if args.new_short == 0:
cmd = osp.join(
......@@ -106,6 +139,10 @@ def parse_args():
        default='avi',
        choices=['avi', 'mp4', 'webm'],
        help='video file extensions')
    parser.add_argument(
        '--use-opencv',
        action='store_true',
        help='Whether to use opencv to extract rgb frames')
    parser.add_argument(
        '--new-width', type=int, default=0, help='resize image width')
    parser.add_argument(
......
#!/usr/bin/env bash
cd ../
python build_rawframes.py ../../data/kinetics400/videos_train/ ../../data/kinetics400/rawframes_train/ --level 2 --ext mp4 --task rgb --new-width 340 --new-height 256 --use-opencv
echo "Raw frames (RGB only) generated for train set"
python build_rawframes.py ../../data/kinetics400/videos_val/ ../../data/kinetics400/rawframes_val/ --level 2 --ext mp4 --task rgb --new-width 340 --new-height 256 --use-opencv
echo "Raw frames (RGB only) generated for val set"
cd kinetics400/
# Preparing Kinetics-400
For basic dataset information, please refer to the official [website](https://deepmind.com/research/open-source/open-source-datasets/kinetics/).
Before we start, please make sure that the directory is located at `$MMACTION/tools/data/kinetics400/`.
Before we start, please make sure that the directory is located at `$MMACTION2/tools/data/kinetics400/`.
## Step 1. Prepare Annotations
......@@ -51,19 +51,25 @@ mkdir /mnt/SSD/kinetics400_extracted_val/
ln -s /mnt/SSD/kinetics400_extracted_val/ ../../../data/kinetics400/rawframes_val/
```
If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
If you only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames using denseflow.
```shell
bash extract_rgb_frames.sh
```
If you didn't install denseflow, you can still extract RGB frames using OpenCV by the following script, but it will keep the original size of the images.
```shell
bash extract_rgb_frames_opencv.sh
```
If both are required, run the following script to extract frames.
```shell
bash extract_frames.sh
```
These two commands above can generate images with size 340x256, if you want to generate images with short edge 320 (320p),
These three commands above can generate images with size 340x256. If you want to generate images whose short edge is 320 (320p),
you can change the args `--new-width 340 --new-height 256` to `--new-short 320`, as shown in the sketch below.
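For example, a sketch of the train-set command from `extract_rgb_frames_opencv.sh`, with the width/height args replaced by `--new-short` as described above (run it from `$MMACTION2/tools/data/`):

```shell
python build_rawframes.py ../../data/kinetics400/videos_train/ ../../data/kinetics400/rawframes_train/ --level 2 --ext mp4 --task rgb --new-short 320 --use-opencv
```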
More details can be found in [data_preparation](/docs/data_preparation.md).
......
#!/usr/bin/env bash
cd ../
python build_rawframes.py ../../data/mit/videos/training ../../data/mit/rawframes/training/ --level 2 --ext mp4 --task rgb --use-opencv
echo "Raw frames (RGB only) generated for train set"
python build_rawframes.py ../../data/mit/videos/validation ../../data/mit/rawframes/validation/ --level 2 --ext mp4 --task rgb --use-opencv
echo "Raw frames (RGB only) generated for val set"
cd mit/
# Preparing Moments in Time
For basic dataset information, you can refer to the dataset [website](http://moments.csail.mit.edu/).
Before we start, please make sure that the directory is located at `$MMACTION/tools/data/mit/`.
Before we start, please make sure that the directory is located at `$MMACTION2/tools/data/mit/`.
## Step 1. Prepare Annotations and Videos
......@@ -17,10 +17,7 @@ This part is **optional** if you only want to use the video loader.
Before extracting, please refer to [install.md](/docs/install.md) for installing [dense_flow](https://github.com/open-mmlab/denseflow).
If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
Fist, You can run the following script to soft link the extracted frames.
If you have plenty of SSD space, then we recommend extracting frames there for better I/O performance. You can run the following script to soft link the extracted frames.
```shell
# execute these two lines (assuming the SSD is mounted at "/mnt/SSD/")
......@@ -28,10 +25,18 @@ mkdir /mnt/SSD/mit_extracted/
ln -s /mnt/SSD/mit_extracted/ ../../../data/mit/rawframes
```
If you only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames using denseflow.
```shell
bash extract_rgb_frames.sh
```
If you didn't install denseflow, you can still extract RGB frames using OpenCV by the following script, but it will keep the original size of the images.
```shell
bash extract_rgb_frames_opencv.sh
```
If both are required, run the following script to extract frames.
```shell
......
#!/usr/bin/env bash
cd ../
python build_rawframes.py ../../data/mmit/videos/ ../../data/mmit/rawframes/ --task rgb --level 2 --ext mp4 --use-opencv
echo "Genearte raw frames (RGB only)"
cd mmit/
# Preparing Multi-Moments in Time
For basic dataset information, you can refer to the dataset [website](http://moments.csail.mit.edu/).
Before we start, please make sure that the directory is located at `$MMACTION/tools/data/mmit/`.
Before we start, please make sure that the directory is located at `$MMACTION2/tools/data/mmit/`.
## Step 1. Prepare Annotations and Videos
......@@ -25,12 +25,18 @@ mkdir /mnt/SSD/mmit_extracted/
ln -s /mnt/SSD/mmit_extracted/ ../../../data/mmit/rawframes
```
If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
If you only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames using denseflow.
```shell
bash extract_rgb_frames.sh
```
If you didn't install denseflow, you can still extract RGB frames using OpenCV by the following script, but it will keep the original size of the images.
```shell
bash extract_rgb_frames_opencv.sh
```
If both are required, run the following script to extract frames using the "tvl1" algorithm.
```shell
......
#!/usr/bin/env bash
cd ../
python build_rawframes.py ../../data/sthv1/videos/ ../../data/sthv1/rawframes/ --task rgb --level 1 --ext webm --use-opencv
echo "Genearte raw frames (RGB only)"
cd sthv1/
# Preparing Something-Something V1
For basic dataset information, you can refer to the dataset [website](https://20bn.com/datasets/something-something/v1).
Before we start, please make sure that the directory is located at `$MMACTION/tools/data/sthv1/`.
Before we start, please make sure that the directory is located at `$MMACTION2/tools/data/sthv1/`.
## Step 1. Prepare Annotations
First of all, you have to sign in and download annotations to `$MMACTION/data/sthv1/annotations` on the official [website](https://20bn.com/datasets/something-something/v1).
First of all, you have to sign in and download annotations to `$MMACTION2/data/sthv1/annotations` on the official [website](https://20bn.com/datasets/something-something/v1).
## Step 2. Prepare Videos
Then, you can download all data parts to `$MMACTION/data/sthv1/` and use the following command to extract.
Then, you can download all data parts to `$MMACTION2/data/sthv1/` and use the following command to extract.
```shell
cd $MMACTION/data/sthv1/
cd $MMACTION2/data/sthv1/
cat 20bn-something-something-v1-?? | tar zx
cd $MMACTION/tools/data/sthv1/
cd $MMACTION2/tools/data/sthv1/
```
## Step 3. Extract RGB and Flow
......@@ -33,17 +33,24 @@ mkdir /mnt/SSD/sthv1_extracted/
ln -s /mnt/SSD/sthv1_extracted/ ../../../data/sthv1/rawframes
```
If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
If you only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames using denseflow.
```shell
cd $MMACTION/tools/data/sthv1/
cd $MMACTION2/tools/data/sthv1/
bash extract_rgb_frames.sh
```
If you didn't install denseflow, you can still extract RGB frames using OpenCV by the following script, but it will keep the original size of the images.
```shell
cd $MMACTION2/tools/data/sthv1/
bash extract_rgb_frames_opencv.sh
```
If both are required, run the following script to extract frames.
```shell
cd $MMACTION/tools/data/sthv1/
cd $MMACTION2/tools/data/sthv1/
bash extract_frames.sh
```
......@@ -52,7 +59,7 @@ bash extract_frames.sh
you can run the following script to generate file lists in the formats of rawframes and videos (a concrete example follows the snippet below).
```shell
cd $MMACTION/tools/data/sthv1/
cd $MMACTION2/tools/data/sthv1/
bash generate_{rawframes, videos}_filelist.sh
```
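For instance, assuming the shorthand above expands to two separate scripts named `generate_rawframes_filelist.sh` and `generate_videos_filelist.sh`, generating only the rawframes file list would be:

```shell
cd $MMACTION2/tools/data/sthv1/
bash generate_rawframes_filelist.sh
```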
......
#!/usr/bin/env bash
cd ../
python build_rawframes.py ../../data/sthv2/videos/ ../../data/sthv2/rawframes/ --task rgb --level 1 --ext webm --use-opencv
echo "Genearte raw frames (RGB only)"
cd sthv2/
# Preparing Something-Something V2
For basic dataset information, you can refer to the dataset [website](https://20bn.com/datasets/something-something/v2).
Before we start, please make sure that the directory is located at `$MMACTION/tools/data/sthv2/`.
Before we start, please make sure that the directory is located at `$MMACTION2/tools/data/sthv2/`.
## Step 1. Prepare Annotations
First of all, you have to sign in and download annotations to `$MMACTION/data/sthv2/annotations` on the official [website](https://20bn.com/datasets/something-something/v2).
First of all, you have to sign in and download annotations to `$MMACTION2/data/sthv2/annotations` on the official [website](https://20bn.com/datasets/something-something/v2).
## Step 2. Prepare Videos
Then, you can download all data parts to `$MMACTION/data/sthv2/` and use the following command to extract.
Then, you can download all data parts to `$MMACTION2/data/sthv2/` and use the following command to extract.
```shell
cd $MMACTION/data/sthv2/
cd $MMACTION2/data/sthv2/
cat 20bn-something-something-v2-?? | tar zx
```
......@@ -32,17 +32,24 @@ mkdir /mnt/SSD/sthv2_extracted/
ln -s /mnt/SSD/sthv2_extracted/ ../../../data/sthv2/rawframes
```
If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
If you only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames using denseflow.
```shell
cd $MMACTION/tools/data/sthv2/
cd $MMACTION2/tools/data/sthv2/
bash extract_rgb_frames.sh
```
If you didn't install denseflow, you can still extract RGB frames using OpenCV by the following script, but it will keep the original size of the images.
```shell
cd $MMACTION2/tools/data/sthv2/
bash extract_rgb_frames_opencv.sh
```
If both are required, run the following script to extract frames.
```shell
cd $MMACTION/tools/data/sthv2/
cd $MMACTION2/tools/data/sthv2/
bash extract_frames.sh
```
......@@ -51,7 +58,7 @@ bash extract_frames.sh
you can run the following script to generate file lists in the formats of rawframes and videos.
```shell
cd $MMACTION/tools/data/sthv2/
cd $MMACTION2/tools/data/sthv2/
bash generate_{rawframes, videos}_filelist.sh
```
......
#!/usr/bin/env bash
cd ../
python build_rawframes.py ../../data/thumos14/videos/validation/ ../../data/thumos14/rawframes/validation/ --level 1 --ext mp4 --task rgb --use-opencv
echo "Raw frames (RGB only) generated for val set"
python build_rawframes.py ../../data/thumos14/videos/test/ ../../data/thumos14/rawframes/test/ --level 1 --ext mp4 --task rgb --use-opencv
echo "Raw frames (RGB only) generated for test set"
cd thumos14/
# Preparing THUMOS'14
For basic dataset information, you can refer to the dataset [website](https://www.crcv.ucf.edu/THUMOS14/download.html).
Before we start, please make sure that the directory is located at `$MMACTION/tools/data/thumos14/`.
Before we start, please make sure that the directory is located at `$MMACTION2/tools/data/thumos14/`.
## Step 1. Prepare Annotations
First of all, run the following script to prepare annotations.
```shell
cd $MMACTION/tools/data/thumos14/
cd $MMACTION2/tools/data/thumos14/
bash download_annotations.sh
```
......@@ -17,7 +17,7 @@ bash download_annotations.sh
Then, you can run the following script to prepare videos.
```shell
cd $MMACTION/tools/data/thumos14/
cd $MMACTION2/tools/data/thumos14/
bash download_videos.sh
```
......@@ -37,17 +37,24 @@ mkdir /mnt/SSD/thumos14_extracted/
ln -s /mnt/SSD/thumos14_extracted/ ../data/thumos14/rawframes/
```
If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
If you only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames using denseflow.
```shell
cd $MMACTION/tools/data/thumos14/
cd $MMACTION2/tools/data/thumos14/
bash extract_rgb_frames.sh
```
If you didn't install denseflow, you can still extract RGB frames using OpenCV by the following script, but it will keep the original size of the images.
```shell
cd $MMACTION2/tools/data/thumos14/
bash extract_rgb_frames_opencv.sh
```
If both are required, run the following script to extract frames.
```shell
cd $MMACTION/tools/data/thumos14/
cd $MMACTION2/tools/data/thumos14/
bash extract_frames.sh tvl1
```
......@@ -56,7 +63,7 @@ bash extract_frames.sh tvl1
You can run the following script to fetch pre-computed tag proposals.
```shell
cd $MMACTION/tools/data/thumos14/
cd $MMACTION2/tools/data/thumos14/
bash fetch_tag_proposals.sh
```
......
#!/usr/bin/env bash
cd ../
python build_rawframes.py ../../data/ucf101/videos/ ../../data/ucf101/rawframes/ --task rgb --level 2 --ext avi --use-opencv
echo "Genearte raw frames (RGB only)"
cd ucf101/
# Preparing UCF-101
For basic dataset information, you can refer to the dataset [website](https://www.crcv.ucf.edu/data/UCF101.php).
Before we start, please make sure that the directory is located at `$MMACTION/tools/data/ucf101/`.
Before we start, please make sure that the directory is located at `$MMACTION2/tools/data/ucf101/`.
## Step 1. Prepare Annotations
......@@ -35,12 +35,18 @@ mkdir /mnt/SSD/ucf101_extracted/
ln -s /mnt/SSD/ucf101_extracted/ ../../../data/ucf101/rawframes
```
If you didn't install dense_flow in the installation or only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames.
If you only want to play with RGB frames (since extracting optical flow can be time-consuming), consider running the following script to extract **RGB-only** frames using denseflow.
```shell
bash extract_rgb_frames.sh
```
If you didn't install denseflow, you can still extract RGB frames using OpenCV by the following script, but it will keep the original size of the images.
```shell
bash extract_rgb_frames_opencv.sh
```
If both are required, run the following script to extract frames using the "tvl1" algorithm.
```shell
......