1. 02 Jun 2023: 1 commit
  2. 19 May 2023: 1 commit
  3. 18 May 2023: 5 commits
  4. 17 May 2023: 2 commits
  5. 16 May 2023: 1 commit
  6. 15 May 2023: 4 commits
  7. 12 May 2023: 2 commits
  8. 11 May 2023: 1 commit
    • Running SAM backbone on frontend (#6019) · 0712d7d0
      Committed by Boris Sekachev
      
      ### Motivation and context
      Resolved #5984 
      Resolved #6049
      Resolved #6041
      
      - Compatible only with ``sam_vit_h_4b8939.pth`` weights. To support other
      weights, the ONNX mask decoder must be re-exported with some custom model
      changes (see below), or pre-exported decoders can be downloaded using the
      links below.
      - The serverless function must be redeployed because its interface has
      changed.
      
      Decoders for other weights:
      - ``sam_vit_l_0b3195.pth``: [Download](https://drive.google.com/file/d/1Nb5CJKQm_6s1n3xLSZYso6VNgljjfR-6/view?usp=sharing)
      - ``sam_vit_b_01ec64.pth``: [Download](https://drive.google.com/file/d/17cZAXBPaOABS170c9bcj9PdQsMziiBHw/view?usp=sharing)
      
      Changes made to the ONNX export code:
      ```
      git diff scripts/export_onnx_model.py
      diff --git a/scripts/export_onnx_model.py b/scripts/export_onnx_model.py
      index 8441258..18d5be7 100644
      --- a/scripts/export_onnx_model.py
      +++ b/scripts/export_onnx_model.py
      @@ -138,7 +138,7 @@ def run_export(
      
           _ = onnx_model(**dummy_inputs)
      
      -    output_names = ["masks", "iou_predictions", "low_res_masks"]
      +    output_names = ["masks", "iou_predictions", "low_res_masks", "xtl", "ytl", "xbr", "ybr"]
      
           with warnings.catch_warnings():
               warnings.filterwarnings("ignore", category=torch.jit.TracerWarning)
      git diff segment_anything/utils/onnx.py
      diff --git a/segment_anything/utils/onnx.py b/segment_anything/utils/onnx.py
      index 3196bdf..85729c1 100644
      --- a/segment_anything/utils/onnx.py
      +++ b/segment_anything/utils/onnx.py
      @@ -87,7 +87,15 @@ class SamOnnxModel(nn.Module):
               orig_im_size = orig_im_size.to(torch.int64)
               h, w = orig_im_size[0], orig_im_size[1]
               masks = F.interpolate(masks, size=(h, w), mode="bilinear", align_corners=False)
      -        return masks
      +        masks = torch.gt(masks, 0).to(torch.uint8)
      +        nonzero = torch.nonzero(masks)
      +        xindices = nonzero[:, 3:4]
      +        yindices = nonzero[:, 2:3]
      +        ytl = torch.min(yindices).to(torch.int64)
      +        ybr = torch.max(yindices).to(torch.int64)
      +        xtl = torch.min(xindices).to(torch.int64)
      +        xbr = torch.max(xindices).to(torch.int64)
      +        return masks[:, :, ytl:ybr + 1, xtl:xbr + 1], xtl, ytl, xbr, ybr
      
           def select_masks(
               self, masks: torch.Tensor, iou_preds: torch.Tensor, num_points: int
      @@ -132,7 +140,7 @@ class SamOnnxModel(nn.Module):
               if self.return_single_mask:
                   masks, scores = self.select_masks(masks, scores, point_coords.shape[1])
      
      -        upscaled_masks = self.mask_postprocessing(masks, orig_im_size)
      +        upscaled_masks, xtl, ytl, xbr, ybr = self.mask_postprocessing(masks, orig_im_size)
      
               if self.return_extra_metrics:
                   stability_scores = calculate_stability_score(
      @@ -141,4 +149,4 @@ class SamOnnxModel(nn.Module):
                   areas = (upscaled_masks > self.model.mask_threshold).sum(-1).sum(-1)
                   return upscaled_masks, scores, stability_scores, areas, masks
      
      -        return upscaled_masks, scores, masks
      +        return upscaled_masks, scores, masks, xtl, ytl, xbr, ybr
      ```
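
      For reference, the snippet below is a minimal sketch (not part of this
      patch) of driving the re-exported decoder with onnxruntime in Python. The
      model path, the random embeddings, and the click coordinates are
      placeholders; in a real setup the embeddings come from the SAM image
      encoder and click coordinates are first mapped to the model's input
      resolution.

      ```
      import numpy as np
      import onnxruntime as ort

      # Hypothetical path to a decoder exported with the patched script above.
      session = ort.InferenceSession("sam_decoder.onnx")

      # Placeholder embeddings; real values come from the SAM image encoder.
      image_embeddings = np.random.randn(1, 256, 64, 64).astype(np.float32)

      # One positive click plus the required padding point (label -1).
      point_coords = np.array([[[500.0, 375.0], [0.0, 0.0]]], dtype=np.float32)
      point_labels = np.array([[1.0, -1.0]], dtype=np.float32)

      outputs = session.run(None, {
          "image_embeddings": image_embeddings,
          "point_coords": point_coords,
          "point_labels": point_labels,
          "mask_input": np.zeros((1, 1, 256, 256), dtype=np.float32),
          "has_mask_input": np.zeros(1, dtype=np.float32),
          "orig_im_size": np.array([750, 1000], dtype=np.float32),
      })

      # With the patch, the mask comes back already cropped to its bounding
      # box, and the box coordinates are returned alongside it.
      masks, iou_predictions, low_res_masks, xtl, ytl, xbr, ybr = outputs
      print(masks.shape, int(xtl), int(ytl), int(xbr), int(ybr))
      ```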
      
      ### How has this been tested?
      
      ### Checklist
      - [x] I submit my changes into the `develop` branch
      - [x] I have added a description of my changes into the [CHANGELOG](https://github.com/opencv/cvat/blob/develop/CHANGELOG.md) file
      - [ ] I have updated the documentation accordingly
      - [ ] I have added tests to cover my changes
      - [x] I have linked related issues (see [GitHub docs](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword))
      - [x] I have increased versions of npm packages if it is necessary ([cvat-canvas](https://github.com/opencv/cvat/tree/develop/cvat-canvas#versioning), [cvat-core](https://github.com/opencv/cvat/tree/develop/cvat-core#versioning), [cvat-data](https://github.com/opencv/cvat/tree/develop/cvat-data#versioning) and [cvat-ui](https://github.com/opencv/cvat/tree/develop/cvat-ui#versioning))
      
      ### License
      
      - [x] I submit _my code changes_ under the same [MIT License](https://github.com/opencv/cvat/blob/develop/LICENSE) that covers the project.
        Feel free to contact the maintainers if that's a concern.
  9. 10 May 2023: 4 commits
  10. 09 May 2023: 1 commit
  11. 05 May 2023: 2 commits
  12. 01 May 2023: 1 commit
  13. 28 Apr 2023: 1 commit
  14. 27 Apr 2023: 1 commit
  15. 25 Apr 2023: 7 commits
  16. 21 Apr 2023: 1 commit
    • Dockerfile: build PyAV and its dependencies in a separate stage (#6054) · c657c82b
      Committed by Roman Donchenko
      This way, the PyAV build can run in parallel with pip fetching all other
      packages in the main build stage. In practice, I find that the PyAV build
      finishes before pip is done downloading, so we basically get it for free
      (in terms of time).
      
      With this change, I measured a build time of 9:09 (starting from
      scratch).
  17. 20 Apr 2023: 1 commit
    • Implement Python dependency pinning via pip-compile-multi (#6048) · 157f7e35
      Committed by Roman Donchenko
      This improves the reproducibility of the server build process. Now new
      versions of dependencies can no longer break the server unless we
      explicitly upgrade to them.
      
      To minimize changes, I did not update any of the version constraints we
      currently have; however, in the future, we should be able to relax a lot
      of them.
      
      Resolves #5310.
  18. 19 Apr 2023: 3 commits
    • Stop adding package source code into the release Docker images (#6040) · a3534979
      Committed by Roman Donchenko
      The policy that mandated this is no longer relevant now that CVAT is no
      longer developed by Intel. Moreover, the source code included was not
      even complete (it didn't contain Python or NPM packages).
      
      This saves ~1.6 GB in the unpacked image (and probably a bunch of build
      time too, but I didn't measure it).
    • Update Redis and Redis accessories (#6016) · b8cdd4ef
      Committed by Roman Donchenko
      This originally started as a security update for redis-py (see
      <https://github.com/redis/redis-py/releases/tag/v4.5.3>,
      <https://github.com/redis/redis-py/releases/tag/v4.5.4>). However, I
      also had to update other Redis-related components because of
      incompatibilities.
      
      * The old version of fakeredis is not compatible with redis-py 4.x, so I
      bumped it too. This also allowed me to remove the six workaround.
      
      * redis-py 4.1.0 and newer don't support Redis < 5, so I bumped Redis
      itself in `docker-compose.yml`. Note that the Helm chart is already
      using Redis 7.0.x.
      
      Obsoletes #5946.
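
      As an aside, here is a minimal sketch (not CVAT test code; the key names
      are made up) of fakeredis standing in for a redis-py 4.x client, which is
      the interface the bumped fakeredis version is compatible with:

      ```
      import fakeredis

      # FakeRedis implements the redis.Redis (redis-py) client interface in
      # memory, so code written against redis-py 4.x can be exercised in tests
      # without a running Redis server.
      client = fakeredis.FakeRedis()

      client.set("cvat:example-key", "42")
      assert client.get("cvat:example-key") == b"42"

      client.hset("cvat:example-hash", mapping={"status": "queued"})
      assert client.hgetall("cvat:example-hash") == {b"status": b"queued"}
      ```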
    • Dockerfile: retain the pip download cache between builds (#6035) · 65d43aae
      Committed by Roman Donchenko
      This speeds up the build when the entire step can't be cached (e.g. the
      requirements file changed), but the package list remains mostly the
      same.
      
      The savings are... rather underwhelming, actually. I have observed about
      a minute in savings, although it obviously depends on the network
      connection speed. I think this is because pip is inefficient at loading
      from its own cache (I have observed it loading the entire cached file
      into memory, for example).
      
      Still, savings are savings, and we're getting them basically for free,
      so why not.
      
      Note that I only persist the HTTP cache, and not the wheel cache. That's
      because any wheels that pip builds could depend on the system packages,
      and I don't want old wheels to be reused if the system packages change.
      
      Also, disable pip's check for new versions of itself, which isn't much of
      an optimization, but it gets rid of some pointless warnings.
  19. 17 Apr 2023: 1 commit