diff --git a/CHANGELOG.md b/CHANGELOG.md
index 0d7967d97241799c823fd7edde5b47dde0a775ad..f8f048aa878333567978d4b09f181e100cc19c82 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -13,6 +13,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - interactor: add HRNet interactive segmentation serverless function ()
 - Added GPU implementation for SiamMask, reworked tracking approach ()
 - Progress bar for manifest creating ()
+- Add a tutorial on attaching cloud storage AWS-S3 ()
+  and Azure Blob Container ()
 
 ### Changed
diff --git a/README.md b/README.md
index 0e9406753edf6ce6c6735febe413cf178bc9d534..b6f6eb9fdefcdd9bef1efde67f48e513e866976a 100644
--- a/README.md
+++ b/README.md
@@ -86,7 +86,7 @@ For more information about supported formats look at the
 | [Object reidentification](/serverless/openvino/omz/intel/person-reidentification-retail-300/nuclio) | reid | OpenVINO | X | |
 | [Semantic segmentation for ADAS](/serverless/openvino/omz/intel/semantic-segmentation-adas-0001/nuclio) | detector | OpenVINO | X | |
 | [Text detection v4](/serverless/openvino/omz/intel/text-detection-0004/nuclio) | detector | OpenVINO | X | |
-| [SiamMask](/serverless/pytorch/foolwood/siammask/nuclio) | tracker | PyTorch | X | |
+| [SiamMask](/serverless/pytorch/foolwood/siammask/nuclio) | tracker | PyTorch | X | X |
 | [f-BRS](/serverless/pytorch/saic-vul/fbrs/nuclio) | interactor | PyTorch | X | |
 | [HRNet](/serverless/pytorch/saic-vul/hrnet/nuclio) | interactor | PyTorch | | X |
 | [Inside-Outside Guidance](/serverless/pytorch/shiyinzhang/iog/nuclio) | interactor | PyTorch | X | |
diff --git a/site/content/en/docs/administration/advanced/installation_automatic_annotation.md b/site/content/en/docs/administration/advanced/installation_automatic_annotation.md
index 27d686a4e8e9e2c15fcab4bf09a660636198f255..8e1465addcb0bd177aef9b3e43e106f91822d606 100644
--- a/site/content/en/docs/administration/advanced/installation_automatic_annotation.md
+++ b/site/content/en/docs/administration/advanced/installation_automatic_annotation.md
@@ -38,7 +38,7 @@ description: 'Information about the installation of components needed for semi-a
   wget https://github.com/nuclio/nuclio/releases/download//nuctl--linux-amd64
   ```
 
-  After downloading the nuclio, give it a proper permission and do a softlink
+  After downloading nuclio, give it the proper permissions and create a softlink.
 
   ```
   sudo chmod +x nuctl--linux-amd64
@@ -94,7 +94,7 @@ description: 'Information about the installation of components needed for semi-a
 - The number of GPU deployed functions will be limited to your GPU memory.
 - See [deploy_gpu.sh](https://github.com/openvinotoolkit/cvat/blob/develop/serverless/deploy_gpu.sh) script for more examples.
-- For some models (namely [SiamMask](/docs/manual/advanced/ai-tools#trackers) you need an [Nvidia driver](https://www.nvidia.com/en-us/drivers/unix/)
+- For some models (namely [SiamMask](/docs/manual/advanced/ai-tools#trackers)) you need an [Nvidia driver](https://www.nvidia.com/en-us/drivers/unix/)
   version greater than or equal to 450.80.02.
 
 **Note for Windows users:**
diff --git a/site/content/en/docs/administration/advanced/mounting_cloud_storages.md b/site/content/en/docs/administration/advanced/mounting_cloud_storages.md
index 873c5c13cf11a06e3a63ec1d0f4a59f5d45c5364..7f97463b541ea4cd101ec9df0283d83e14e11c44 100644
--- a/site/content/en/docs/administration/advanced/mounting_cloud_storages.md
+++ b/site/content/en/docs/administration/advanced/mounting_cloud_storages.md
@@ -205,9 +205,9 @@ Follow the first 7 mounting steps above.
 1. Edit `/etc/fstab` with the blobfuse script.
    Add the following line(replace paths):
-```bash
-/absolute/path/to/azure_fuse fuse allow_other,user,_netdev
-```
+   ```bash
+   /absolute/path/to/azure_fuse fuse allow_other,user,_netdev
+   ```
 
 ##### Using systemd
diff --git a/site/content/en/docs/getting_started.md b/site/content/en/docs/getting_started.md
index 787a4f7ba1901ca8bee6043acb878fffae1aac51..0ec6adea2685f9f3115ed0de7dcfe87c9d47e6ee 100644
--- a/site/content/en/docs/getting_started.md
+++ b/site/content/en/docs/getting_started.md
@@ -8,17 +8,21 @@ This section contains basic information and links to sections necessary for a qu
 
 ## Installation
 
-First step is to install CVAT on your system. Use the [Installation Guide](/docs/administration/basics/installation/).
+The first step is to install CVAT on your system:
+- [Installation on Ubuntu](/docs/administration/basics/installation/#ubuntu-1804-x86_64amd64)
+- [Installation on Windows 10](/docs/administration/basics/installation/#windows-10)
+- [Installation on Mac OS](/docs/administration/basics/installation/#mac-os-mojave)
 
-## Getting started in CVAT
+To learn how to create a superuser and log in to CVAT,
+go to the [authorization](/docs/manual/basics/authorization/) section.
 
-To find out more, go to the [authorization](/docs/manual/basics/authorization/) section.
+## Getting started in CVAT
 
 To create a task, go to `Tasks` section. Click `Create new task` to go to the task creation page.
 Set the name of the future task.
-Set the label using the constructor: first click "add label", then enter the name of the label and choose the color.
+Set the label using the constructor: first click `Add label`, then enter the name of the label and choose the color.
 
 ![](/images/create_a_new_task.gif)
 
@@ -26,10 +30,12 @@ You need to upload images or videos for your future annotation. To do so, simply
 
 To learn more, go to [creating an annotation task](/docs/manual/basics/creating_an_annotation_task/)
 
-## Basic annotation
+## Annotation
+
+### Basic
 
 When the task is created, you will see a corresponding message in the top right corner.
-Click the "Open task" button to go to the task page.
+Click the `Open task` button to go to the task page.
 
 Once on the task page, open a link to the job in the jobs list.
 
@@ -44,16 +50,24 @@ Choose a correct section for your type of the task and start annotation.
 | Cuboids | [Annotation with cuboids](/docs/manual/advanced/annotation-with-cuboids/) | [Editing the cuboid](/docs/manual/advanced/annotation-with-cuboids/editing-the-cuboid/) |
 | Tag | [Annotation with tags](/docs/manual/advanced/annotation-with-tags/) | |
 
+### Advanced
+
+CVAT also supports automatic and semi-automatic annotation, which can
+significantly speed up the annotation process:
+- [OpenCV tools](/docs/manual/advanced/opencv-tools/) - tools included in CVAT by default.
+- [AI tools](/docs/manual/advanced/ai-tools/) - tools requiring installation.
+- [Automatic annotation](/docs/manual/advanced/automatic-annotation/) - automatic annotation using DL models.
+
 ## Dump annotation
 
 ![](/images/image028.jpg)
 
 1. To download the annotations, first you have to save all changes.
-   Click the Save button or press `Ctrl+S`to save annotations quickly.
+   Click the `Save` button or press `Ctrl+S` to save annotations quickly.
 
-2. After you saved the changes, click the Menu button.
+2. After you have saved the changes, click the `Menu` button.
 
-3. Then click the Dump Annotation button.
+3. Then click the `Dump Annotation` button.
 
 4. Lastly choose a format of the dump annotation file.
diff --git a/site/content/en/docs/manual/advanced/models.md b/site/content/en/docs/manual/advanced/models.md
index aef49c79f4e99e4da81c9f458bc9a9e0cc1f0705..6713f88e3e750cb6853755852930c29c78455d05 100644
--- a/site/content/en/docs/manual/advanced/models.md
+++ b/site/content/en/docs/manual/advanced/models.md
@@ -4,6 +4,10 @@ linkTitle: 'Models'
 weight: 13
 ---
 
+To deploy the models, you will need to install the necessary components described in the
+[Semi-automatic and Automatic Annotation guide](/docs/administration/advanced/installation_automatic_annotation/).
+To learn how to deploy a model, read the [Serverless tutorial](/docs/manual/advanced/serverless-tutorial/).
+
 The Models page contains a list of deep learning (DL) models deployed for semi-automatic and automatic annotation.
 To open the Models page, click the Models button on the navigation bar.
 The list of models is presented in the form of a table. The parameters indicated for each model are the following:
@@ -20,5 +24,3 @@ The list of models is presented in the form of a table. The parameters indicated
 - `Labels` - list of the supported labels (only for the models of the `detectors` type)
 
 ![](/images/image099.jpg)
-
-Read how to install your model [here](/docs/administration/basics/installation/#semi-automatic-and-automatic-annotation).
diff --git a/site/content/en/images/create_a_new_task.gif b/site/content/en/images/create_a_new_task.gif
index 4ab75145e3fddd2df7d641d4fd48afdc9b5a3f6d..2cc683ceca1183d7c06d943004ad9e2317701325 100644
Binary files a/site/content/en/images/create_a_new_task.gif and b/site/content/en/images/create_a_new_task.gif differ
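Reviewer note on the `/etc/fstab` hunk in the mounting guide: the line shown there follows blobfuse's fstab convention, where the first field is a wrapper script that invokes blobfuse with your connection config. A hypothetical fully spelled-out entry — the mount point and the trailing dump/pass fields below are illustrative additions, not from the guide — might look like:

```
# <wrapper script>              <mount point>   <type> <options>                 <dump> <pass>
/absolute/path/to/azure_fuse    /mnt/blobfuse   fuse   allow_other,user,_netdev  0      0
```

The `_netdev` option defers mounting until the network is up, which matters because blobfuse talks to Azure at mount time.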
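Reviewer note: the "permissions + softlink" step touched by the installation-guide hunk can be sketched as a shell sequence. This is a minimal demonstration in a scratch directory, with a dummy file standing in for the downloaded binary — the real release name includes a version number that the guide elides, and the real softlink usually points into a directory on your `PATH` such as `/usr/local/bin`:

```shell
# Scratch-directory sketch of the chmod + softlink step from the guide.
# A dummy file stands in for the downloaded nuctl release; in a real install
# you would chmod the actual nuctl-<version>-linux-amd64 binary and link it
# into a directory on your PATH (e.g. /usr/local/bin/nuctl).
workdir="$(mktemp -d)"
touch "$workdir/nuctl-linux-amd64"                      # stand-in for the downloaded binary
chmod +x "$workdir/nuctl-linux-amd64"                   # give it execute permission
ln -sf "$workdir/nuctl-linux-amd64" "$workdir/nuctl"    # create the softlink
ls -l "$workdir/nuctl"
```

After linking, `nuctl` on the `PATH` resolves to the versioned binary, so upgrading later only means repointing the symlink.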