Unverified commit 584392aa authored by Timur Osmanov, committed by GitHub

Updating documentation (#3800)

* update docs

* update README.md

* fix mistake

* update CHANGELOG.md and fix remark issues

* revert changes to backup_guide.md
Parent 0faba29b
@@ -13,6 +13,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- interactor: add HRNet interactive segmentation serverless function (<https://github.com/openvinotoolkit/cvat/pull/3740>)
- Added GPU implementation for SiamMask, reworked tracking approach (<https://github.com/openvinotoolkit/cvat/pull/3571>)
- Progress bar for manifest creating (<https://github.com/openvinotoolkit/cvat/pull/3712>)
- Add a tutorial on attaching AWS S3 cloud storage (<https://github.com/openvinotoolkit/cvat/pull/3745>)
  and Azure Blob Container (<https://github.com/openvinotoolkit/cvat/pull/3778>)
### Changed
......
@@ -86,7 +86,7 @@ For more information about supported formats look at the
| [Object reidentification](/serverless/openvino/omz/intel/person-reidentification-retail-300/nuclio) | reid | OpenVINO | X | |
| [Semantic segmentation for ADAS](/serverless/openvino/omz/intel/semantic-segmentation-adas-0001/nuclio) | detector | OpenVINO | X | |
| [Text detection v4](/serverless/openvino/omz/intel/text-detection-0004/nuclio) | detector | OpenVINO | X | |
| [SiamMask](/serverless/pytorch/foolwood/siammask/nuclio) | tracker | PyTorch | X | |
| [SiamMask](/serverless/pytorch/foolwood/siammask/nuclio) | tracker | PyTorch | X | X |
| [f-BRS](/serverless/pytorch/saic-vul/fbrs/nuclio) | interactor | PyTorch | X | |
| [HRNet](/serverless/pytorch/saic-vul/hrnet/nuclio) | interactor | PyTorch | | X |
| [Inside-Outside Guidance](/serverless/pytorch/shiyinzhang/iog/nuclio) | interactor | PyTorch | X | |
......
@@ -38,7 +38,7 @@ description: 'Information about the installation of components needed for semi-a
wget https://github.com/nuclio/nuclio/releases/download/<version>/nuctl-<version>-linux-amd64
```
After downloading the nuclio, give it a proper permission and do a softlink
After downloading nuclio, give it the proper permissions and create a softlink.
```
sudo chmod +x nuctl-<version>-linux-amd64
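# One possible way to create the softlink mentioned above; the /usr/local/bin
# target path is an assumption for illustration, not taken from this diff.
sudo ln -sf $(pwd)/nuctl-<version>-linux-amd64 /usr/local/bin/nuctl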
@@ -94,7 +94,7 @@ description: 'Information about the installation of components needed for semi-a
- The number of deployed GPU functions is limited by your GPU memory.
- See the [deploy_gpu.sh](https://github.com/openvinotoolkit/cvat/blob/develop/serverless/deploy_gpu.sh)
  script for more examples.
- For some models (namely [SiamMask](/docs/manual/advanced/ai-tools#trackers) you need an [Nvidia driver](https://www.nvidia.com/en-us/drivers/unix/)
- For some models (namely [SiamMask](/docs/manual/advanced/ai-tools#trackers)) you need an [Nvidia driver](https://www.nvidia.com/en-us/drivers/unix/)
version greater than or equal to 450.80.02.
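For reference, one quick way to confirm the installed driver meets this requirement (a minimal sketch, assuming `nvidia-smi` is available on the host):

```bash
# Print the installed Nvidia driver version; it should be >= 450.80.02
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```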
**Note for Windows users:**
......
@@ -205,9 +205,9 @@ Follow the first 7 mounting steps above.
1. Edit `/etc/fstab` with the blobfuse script. Add the following line (replace the paths):
```bash
/absolute/path/to/azure_fuse </path/to/desired/mountpoint> fuse allow_other,user,_netdev
```
```bash
/absolute/path/to/azure_fuse </path/to/desired/mountpoint> fuse allow_other,user,_netdev
```
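As an illustration only, a filled-in entry and a way to apply it without rebooting might look like this (the paths below are hypothetical placeholders, not taken from the guide):

```bash
# Hypothetical example: azure_fuse script in /opt, mount point /mnt/blobcontainer
/opt/azure_fuse /mnt/blobcontainer fuse allow_other,user,_netdev

# Mount everything listed in /etc/fstab to verify the new entry
sudo mount -a
```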
##### <a name="azure_using_systemd">Using systemd</a>
......
@@ -8,17 +8,21 @@ This section contains basic information and links to sections necessary for a qu
## Installation
First step is to install CVAT on your system. Use the [Installation Guide](/docs/administration/basics/installation/).
The first step is to install CVAT on your system:
- [Installation on Ubuntu](/docs/administration/basics/installation/#ubuntu-1804-x86_64amd64)
- [Installation on Windows 10](/docs/administration/basics/installation/#windows-10)
- [Installation on Mac OS](/docs/administration/basics/installation/#mac-os-mojave)
## Getting started in CVAT
To learn how to create a superuser and log in to CVAT,
go to the [authorization](/docs/manual/basics/authorization/) section.
To find out more, go to the [authorization](/docs/manual/basics/authorization/) section.
## Getting started in CVAT
To create a task, go to the `Tasks` section. Click `Create new task` to go to the task creation page.
Set the name of the future task.
Set the label using the constructor: first click "add label", then enter the name of the label and choose the color.
Set the label using the constructor: first click `Add label`, then enter the name of the label and choose the color.
![](/images/create_a_new_task.gif)
@@ -26,10 +30,12 @@ You need to upload images or videos for your future annotation. To do so, simply
To learn more, go to [creating an annotation task](/docs/manual/basics/creating_an_annotation_task/).
## Basic annotation
## Annotation
### Basic
When the task is created, you will see a corresponding message in the top right corner.
Click the "Open task" button to go to the task page.
Click the `Open task` button to go to the task page.
Once on the task page, open a link to the job in the jobs list.
@@ -44,16 +50,24 @@ Choose a correct section for your type of the task and start annotation.
| Cuboids | [Annotation with cuboids](/docs/manual/advanced/annotation-with-cuboids/) | [Editing the cuboid](/docs/manual/advanced/annotation-with-cuboids/editing-the-cuboid/) |
| Tag | [Annotation with tags](/docs/manual/advanced/annotation-with-tags/) | |
### Advanced
In CVAT you can use automatic and semi-automatic annotation,
which can significantly speed up the annotation process:
- [OpenCV tools](/docs/manual/advanced/opencv-tools/) - tools included in CVAT by default.
- [AI tools](/docs/manual/advanced/ai-tools/) - tools requiring installation.
- [Automatic annotation](/docs/manual/advanced/automatic-annotation/) - automatic annotation using DL models.
## Dump annotation
![](/images/image028.jpg)
1. To download the annotations, you first have to save all changes.
Click the Save button or press `Ctrl+S`to save annotations quickly.
Click the `Save` button or press `Ctrl+S` to save annotations quickly.
2. After you saved the changes, click the Menu button.
2. After you have saved the changes, click the `Menu` button.
3. Then click the Dump Annotation button.
3. Then click the `Dump Annotation` button.
4. Lastly, choose a format for the dump annotation file.
......
@@ -4,6 +4,10 @@ linkTitle: 'Models'
weight: 13
---
To deploy the models, you will need to install the necessary components by following the
[Semi-automatic and Automatic Annotation guide](/docs/administration/advanced/installation_automatic_annotation/).
To learn how to deploy a model, read the [Serverless tutorial](/docs/manual/advanced/serverless-tutorial/).
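As a minimal sketch of that workflow (assuming the helper scripts shipped in the CVAT repository accept function directories as arguments; the SiamMask path is taken from the table earlier in this diff):

```bash
# Deploy the GPU build of SiamMask as a nuclio serverless function
./serverless/deploy_gpu.sh serverless/pytorch/foolwood/siammask/nuclio
```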
The Models page contains a list of deep learning (DL) models deployed for semi-automatic and automatic annotation.
To open the Models page, click the Models button on the navigation bar.
The list of models is presented in the form of a table. The parameters indicated for each model are the following:
@@ -20,5 +24,3 @@ The list of models is presented in the form of a table. The parameters indicated
- `Labels` - list of the supported labels (only for the models of the `detectors` type)
![](/images/image099.jpg)
Read how to install your model [here](/docs/administration/basics/installation/#semi-automatic-and-automatic-annotation).