1. 22 May 2023, 1 commit
    • Update Django to 4.x (#6122) · cfe2ea38
      Andrey Zhavoronkov authored
      - Reduced rest_api testing time by ~25% in my environment: 430s vs 560s
      - Enabled gzip compression (a minimal sketch follows this list)
      - Fixed webhook tests that were not actually waiting for the required number
      of delivered messages in the response
      - Fixed `preview` tests
  2. 20 May 2023, 1 commit
  3. 19 May 2023, 2 commits
  4. 18 May 2023, 1 commit
  5. 17 May 2023, 1 commit
  6. 16 May 2023, 1 commit
  7. 15 May 2023, 3 commits
    • Add an option to call Nuclio functions via the dashboard (#6146) · 7d21a73a
      Roman Donchenko authored
      Currently, this only happens when CVAT itself runs in Kubernetes. The new
      option lets CVAT use a Nuclio instance that is deployed to Kubernetes
      without CVAT being deployed to Kubernetes itself, or use a Nuclio instance
      deployed on another machine.
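      For context, a rough sketch of what invoking a function through the Nuclio dashboard looks like; the endpoint, headers, function name, and dashboard address below are assumptions for illustration, not taken from this PR:
      ```
      # Sketch: the request goes to the Nuclio dashboard, which proxies it to the
      # function wherever it happens to be deployed (Kubernetes or another machine).
      import requests

      DASHBOARD_URL = "http://nuclio-dashboard:8070"  # hypothetical dashboard address

      response = requests.post(
          f"{DASHBOARD_URL}/api/function_invocations",     # assumed dashboard invocation endpoint
          headers={
              "x-nuclio-function-name": "openvino-dextr",  # hypothetical function name
              "x-nuclio-function-namespace": "nuclio",
          },
          json={"image": "<base64-encoded frame>"},
      )
      print(response.status_code)
      ```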
    • Fix the CPU HRNet Nuclio function (#6150) · c005d6c0
      Roman Donchenko authored
      ### Motivation and context
      It was broken for two reasons:
      
      * Due to some changes on the PyTorch website, the old way of installing
      the PyTorch packages now installs the ROCm version rather than the CPU
      version (and it doesn't work due to a missing dependency).
      
      * The newest version of NumPy doesn't work with HRNet due to the
      latter's usage of `np.int`.
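      For context, a small illustration of the `np.int` problem (not code from the PR): NumPy 1.24 removed the long-deprecated `np.int` alias, so any code still using it breaks, and the fix is to switch to the builtin `int` or an explicit dtype.
      ```
      import numpy as np

      # indices = np.zeros(5, dtype=np.int)  # fails on NumPy >= 1.24: np.int was removed
      indices = np.zeros(5, dtype=int)        # builtin int works on all NumPy versions
      sizes = np.zeros(5, dtype=np.int64)     # or pick an explicit width
      ```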
      
      Fix these problems, and in addition, rework the build recipe to avoid
      installing unneeded packages. Altogether, the changes massively shrink
      the Docker image size (from ~14 GB to ~2 GB).
      
      I didn't update the GPU version, because a) the first issue doesn't
      affect it, b) the second issue is already fixed in it, and c) I don't
      have a GPU to test it on.
      
      ### How has this been tested?
      Manual testing with CVAT.
      
      ### Checklist
      - [x] I submit my changes into the `develop` branch
      - [x] I have added a description of my changes into the [CHANGELOG](https://github.com/opencv/cvat/blob/develop/CHANGELOG.md) file
      - ~~[ ] I have updated the documentation accordingly~~
      - ~~[ ] I have added tests to cover my changes~~
      - ~~[ ] I have linked related issues (see [GitHub docs](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword))~~
      - ~~[ ] I have increased versions of npm packages if it is necessary ([cvat-canvas](https://github.com/opencv/cvat/tree/develop/cvat-canvas#versioning), [cvat-core](https://github.com/opencv/cvat/tree/develop/cvat-core#versioning), [cvat-data](https://github.com/opencv/cvat/tree/develop/cvat-data#versioning) and [cvat-ui](https://github.com/opencv/cvat/tree/develop/cvat-ui#versioning))~~
      
      ### License
      
      - [x] I submit _my code changes_ under the same [MIT License](https://github.com/opencv/cvat/blob/develop/LICENSE) that covers the project.
        Feel free to contact the maintainers if that's a concern.
      Co-authored-by: Boris Sekachev <boris.sekachev@yandex.ru>
    • Modernize OpenVINO-based Nuclio functions and allow them to run on Kubernetes (#6129) · 98616c72
      Roman Donchenko authored
      Currently, OpenVINO-based functions assume that a local directory will
      be mounted into the container. In Kubernetes, that isn't possible, so
      implement an alternate approach: create a separate base image and
      inherit the function image from it.
      
      In addition, implement some modernizations:
      
      * Upgrade the version of OpenVINO to the latest (2022.3) and make the
      necessary updates to the code. Note that 2022.1 introduced an entirely
      new inference API, but I haven't switched to it yet to minimize changes
      (a sketch of the old API in use follows this list).
      
      * Use the runtime version of the Docker image as the base instead of the
      dev version. This significantly reduces the size of the final image (by
      ~3GB).
      
      * Replace the `faster_rcnn_inception_v2_coco` model with
      `faster_rcnn_inception_resnet_v2_atrous_coco`, as the former has been
        removed from OMZ.
      
      * Ditto with `person-reidentification-retail-0300` -> `0277`.
      
      * The IRs used in the DEXTR function are not supported by OpenVINO
      anymore (format too old), so rewrite the build process to create them
      from the original code/weights instead.
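      As referenced above, a minimal sketch of the older (pre-2022.1) OpenVINO inference API that the functions keep using; the model paths and input name are hypothetical, and this is an illustration rather than the actual function code:
      ```
      # Old-style OpenVINO inference with IECore, still available in the 2022.3 runtime.
      from openvino.inference_engine import IECore

      ie = IECore()
      net = ie.read_network(model="model.xml", weights="model.bin")  # hypothetical IR files
      exec_net = ie.load_network(network=net, device_name="CPU")
      # outputs = exec_net.infer(inputs={"input": preprocessed_image})
      ```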
  8. 11 May 2023, 1 commit
    • Running SAM backbone on frontend (#6019) · 0712d7d0
      Boris Sekachev authored
      ### Motivation and context
      Resolved #5984 
      Resolved #6049
      Resolved #6041
      
      - Compatible only with ``sam_vit_h_4b8939.pth`` weights. To support other
      weights, the ONNX mask decoder needs to be re-exported with some custom
      model changes (see below), or the pre-exported decoders can be downloaded
      using the links below.
      - The serverless function needs to be redeployed because its interface has
      changed.
      
      Decoders for other weights:
      sam_vit_l_0b3195.pth:
      [Download](https://drive.google.com/file/d/1Nb5CJKQm_6s1n3xLSZYso6VNgljjfR-6/view?usp=sharing)
      sam_vit_b_01ec64.pth:
      [Download](https://drive.google.com/file/d/17cZAXBPaOABS170c9bcj9PdQsMziiBHw/view?usp=sharing)
      
      Changes made in the ONNX export code:
      ```
      git diff scripts/export_onnx_model.py
      diff --git a/scripts/export_onnx_model.py b/scripts/export_onnx_model.py
      index 8441258..18d5be7 100644
      --- a/scripts/export_onnx_model.py
      +++ b/scripts/export_onnx_model.py
      @@ -138,7 +138,7 @@ def run_export(
      
           _ = onnx_model(**dummy_inputs)
      
      -    output_names = ["masks", "iou_predictions", "low_res_masks"]
      +    output_names = ["masks", "iou_predictions", "low_res_masks", "xtl", "ytl", "xbr", "ybr"]
      
           with warnings.catch_warnings():
               warnings.filterwarnings("ignore", category=torch.jit.TracerWarning)
       git diff segment_anything/utils/onnx.py
      diff --git a/segment_anything/utils/onnx.py b/segment_anything/utils/onnx.py
      index 3196bdf..85729c1 100644
      --- a/segment_anything/utils/onnx.py
      +++ b/segment_anything/utils/onnx.py
      @@ -87,7 +87,15 @@ class SamOnnxModel(nn.Module):
               orig_im_size = orig_im_size.to(torch.int64)
               h, w = orig_im_size[0], orig_im_size[1]
               masks = F.interpolate(masks, size=(h, w), mode="bilinear", align_corners=False)
      -        return masks
      +        masks = torch.gt(masks, 0).to(torch.uint8)
      +        nonzero = torch.nonzero(masks)
      +        xindices = nonzero[:, 3:4]
      +        yindices = nonzero[:, 2:3]
      +        ytl = torch.min(yindices).to(torch.int64)
      +        ybr = torch.max(yindices).to(torch.int64)
      +        xtl = torch.min(xindices).to(torch.int64)
      +        xbr = torch.max(xindices).to(torch.int64)
      +        return masks[:, :, ytl:ybr + 1, xtl:xbr + 1], xtl, ytl, xbr, ybr
      
           def select_masks(
               self, masks: torch.Tensor, iou_preds: torch.Tensor, num_points: int
      @@ -132,7 +140,7 @@ class SamOnnxModel(nn.Module):
               if self.return_single_mask:
                   masks, scores = self.select_masks(masks, scores, point_coords.shape[1])
      
      -        upscaled_masks = self.mask_postprocessing(masks, orig_im_size)
      +        upscaled_masks, xtl, ytl, xbr, ybr = self.mask_postprocessing(masks, orig_im_size)
      
               if self.return_extra_metrics:
                   stability_scores = calculate_stability_score(
      @@ -141,4 +149,4 @@ class SamOnnxModel(nn.Module):
                   areas = (upscaled_masks > self.model.mask_threshold).sum(-1).sum(-1)
                   return upscaled_masks, scores, stability_scores, areas, masks
      
      -        return upscaled_masks, scores, masks
      +        return upscaled_masks, scores, masks, xtl, ytl, xbr, ybr
      ```
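      For reference, a hedged sketch of exercising the re-exported decoder with onnxruntime to see the extra outputs; the file name, input names, and shapes below follow the upstream SAM export script and are assumptions here, not part of the PR:
      ```
      import numpy as np
      import onnxruntime as ort

      session = ort.InferenceSession("sam_decoder.onnx")  # hypothetical path to the re-exported decoder
      inputs = {
          "image_embeddings": np.zeros((1, 256, 64, 64), dtype=np.float32),
          "point_coords": np.array([[[500.0, 375.0]]], dtype=np.float32),
          "point_labels": np.array([[1.0]], dtype=np.float32),
          "mask_input": np.zeros((1, 1, 256, 256), dtype=np.float32),
          "has_mask_input": np.zeros(1, dtype=np.float32),
          "orig_im_size": np.array([1200, 1800], dtype=np.float32),
      }
      # With the patch above, the decoder also returns the bounding box of the mask,
      # so the cropped mask plus (xtl, ytl, xbr, ybr) come back in one call.
      masks, iou_predictions, low_res_masks, xtl, ytl, xbr, ybr = session.run(None, inputs)
      ```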
      
      ### How has this been tested?
      
      ### Checklist
      - [x] I submit my changes into the `develop` branch
      - [x] I have added a description of my changes into the [CHANGELOG](https://github.com/opencv/cvat/blob/develop/CHANGELOG.md) file
      - [ ] I have updated the documentation accordingly
      - [ ] I have added tests to cover my changes
      - [x] I have linked related issues (see [GitHub docs](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword))
      - [x] I have increased versions of npm packages if it is necessary ([cvat-canvas](https://github.com/opencv/cvat/tree/develop/cvat-canvas#versioning), [cvat-core](https://github.com/opencv/cvat/tree/develop/cvat-core#versioning), [cvat-data](https://github.com/opencv/cvat/tree/develop/cvat-data#versioning) and [cvat-ui](https://github.com/opencv/cvat/tree/develop/cvat-ui#versioning))
      
      ### License
      
      - [x] I submit _my code changes_ under the same [MIT License](https://github.com/opencv/cvat/blob/develop/LICENSE) that covers the project.
        Feel free to contact the maintainers if that's a concern.
  9. 05 May 2023, 1 commit
  10. 01 May 2023, 1 commit
  11. 28 April 2023, 1 commit
  12. 27 April 2023, 1 commit
  13. 25 April 2023, 2 commits
  14. 19 April 2023, 2 commits
  15. 14 April 2023, 5 commits
  16. 13 April 2023, 1 commit
  17. 12 April 2023, 1 commit
  18. 11 April 2023, 2 commits
  19. 09 April 2023, 1 commit
  20. 06 April 2023, 4 commits
  21. 05 April 2023, 1 commit
  22. 30 March 2023, 2 commits
  23. 29 March 2023, 2 commits
    • Remove support for redundant request media types in the API (#5874) · 409fac52
      Roman Donchenko authored
      Currently, every API endpoint that takes a request body supports (or at
      least declares to support) 4 media types:
      
      * `application/json`
      * `application/offset+octet-stream`
      * `application/x-www-form-urlencoded`
      * `multipart/form-data`
      
      Supporting multiple media types has a cost. We need to test that the
      various media types actually work, and we need to document their use
      (e.g., providing examples for each supported type). In practice, we
      mostly don't... but we still need to. In addition, the user, seeing the
      list of supported types, has to decide which one to use.
      
      Now, the cost could be worthwhile if the multiple type support provided
      value. However, for the most part, it doesn't:
      
      * `application/offset+octet-stream` only makes sense for the TUS
      endpoints. Moreover, for those endpoints it's the only type that makes
      sense.
      
      * `application/x-www-form-urlencoded` is strictly inferior to JSON. It
      doesn't support compound values, and it doesn't carry type information,
      so you can't, for example, distinguish a string from a null. It's
      potentially susceptible to CSRF attacks (we have protections against
      those, but we could accidentally disable them and not notice). Its main
      use is for form submissions, but we don't use HTML-based submissions.
      
      * `multipart/form-data` shares the downsides of
      `application/x-www-form-urlencoded`; however, it does have a redeeming
      quality: it allows binary files to be uploaded efficiently. Therefore, it
      has legitimate uses in endpoints that accept such files.
      
      Therefore, I believe it is justified to reduce the API surface area as
      follows:
      
      * Restrict `application/offset+octet-stream` to TUS endpoints and remove
      support for other types from those endpoints.
      
      * Remove `application/x-www-form-urlencoded` support entirely.
      
      * Restrict `multipart/form-data` support to endpoints dealing with file
      uploads.
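      As an illustration of what such a restriction can look like in Django REST Framework (a generic sketch, not the actual CVAT code; the view and field names are hypothetical):
      ```
      from rest_framework.parsers import JSONParser, MultiPartParser
      from rest_framework.response import Response
      from rest_framework.views import APIView

      class ExampleUploadView(APIView):  # hypothetical endpoint that accepts file uploads
          # JSON for ordinary requests; multipart is kept only because files are uploaded here.
          parser_classes = [JSONParser, MultiPartParser]

          def post(self, request):
              return Response({"received_fields": list(request.data.keys())})
      ```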
      
      Note that I had to keep `multipart/form-data` support for `POST
      /api/cloudstorages` and `PATCH /api/cloudstorages/<id>`. That's because
      they accept a file-type parameter (`key_file`). I don't especially like
      this. Key files are not big, so the efficiency benefits of
      `multipart/form-data` don't matter. Therefore, I don't think we really
      need to support this type here; it would be more elegant to just use
      JSON and Base64-encode the key file contents. However, I don't have time
      to make that change right now, so I'm
      leaving it for another time.
    • Fix export of a job from a task with multiple jobs (#5928) · ce635657
      Anastasia Yasakova authored
      Fixed #5927 
  24. 27 March 2023, 1 commit
  25. 24 March 2023, 1 commit
    • Fix Nuclio function invocations when deployed via the Helm chart (#5917) · 7b7b5b4e
      Roman Donchenko authored
      The `CVAT_NUCLIO_FUNCTION_NAMESPACE` environment variable needs to be
      defined consistently in order for the Nuclio integration to work.
      Currently, it's set to `cvat` for the main CVAT server process, but not for
      any other CVAT process (which means it defaults to `nuclio` in those
      processes). Since it's the annotation worker process that actually invokes
      the Nuclio functions, the invocation fails.
      
      Fix it by synchronizing the Nuclio environment variables across all
      backend deployments. Technically, I think only the server and annotation
      worker deployments need these variables, but since they're accessed by
      `cvat/settings/base.py` in every process that loads Django, define them
      everywhere to be sure.
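      A sketch of the kind of settings lookup involved (the setting names here are illustrative, not the actual cvat/settings/base.py code; the `nuclio` default matches the description above):
      ```
      # Each Django process reads the Nuclio namespace from the environment,
      # so the variable must be set consistently across all backend deployments.
      import os

      NUCLIO = {
          "FUNCTION_NAMESPACE": os.getenv("CVAT_NUCLIO_FUNCTION_NAMESPACE", "nuclio"),
      }
      ```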
      
      Fixes #5626.