Unverified · Commit 0baf7946 authored by Nikita Manovich and committed by GitHub

Tutorial about serverless functions (#3124)

Co-authored-by: Roman Donchenko <roman.donchenko@intel.com>
Parent 330b8a83
......@@ -13,6 +13,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Support of cloud storage without copying data into CVAT: server part (<https://github.com/openvinotoolkit/cvat/pull/2620>)
- Filter `is_active` for user list (<https://github.com/openvinotoolkit/cvat/pull/3235>)
- Ability to export/import tasks (<https://github.com/openvinotoolkit/cvat/pull/3056>)
- Add a tutorial for semi-automatic/automatic annotation (<https://github.com/openvinotoolkit/cvat/pull/3124>)
- Explicit "Done" button when drawing any polyshapes (<https://github.com/openvinotoolkit/cvat/pull/3417>)
### Changed
......
......@@ -91,6 +91,7 @@ For more information about supported formats look at the
| [Inside-Outside Guidance](/serverless/pytorch/shiyinzhang/iog/nuclio) | interactor | PyTorch | X | |
| [Faster RCNN](/serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio) | detector | TensorFlow | X | X |
| [Mask RCNN](/serverless/tensorflow/matterport/mask_rcnn/nuclio) | detector | TensorFlow | X | X |
| [RetinaNet](/serverless/pytorch/facebookresearch/detectron2/retinanet/nuclio) | detector | PyTorch | X | X |
<!--lint enable maximum-line-length-->
......@@ -162,8 +163,8 @@ Other ways to ask questions and get our support:
- [DataIsKey](https://dataiskey.eu/annotation-tool/) uses CVAT as their prime data labeling tool
to offer annotation services for projects of any size.
- [Human Protocol](https://hmt.ai) uses CVAT as a way of adding annotation service to the human protocol.
<!-- prettier-ignore-start -->
<!-- Badges -->
[docker-server-pulls-img]: https://img.shields.io/docker/pulls/openvino/cvat_server.svg?style=flat-square&label=server%20pulls
[docker-server-image-url]: https://hub.docker.com/r/openvino/cvat_server
......
......@@ -6,7 +6,9 @@ FUNCTIONS_DIR=${1:-$SCRIPT_DIR}
nuctl create project cvat
for func_config in $(find "$FUNCTIONS_DIR" -name "function.yaml")
shopt -s globstar
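# With globstar enabled, the ** pattern below recursively matches every function.yaml under FUNCTIONS_DIR.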
for func_config in "$FUNCTIONS_DIR"/**/function.yaml
do
func_root=$(dirname "$func_config")
echo Deploying $(dirname "$func_root") function...
......
......@@ -2,24 +2,19 @@
# Sample commands to deploy nuclio functions on GPU
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
FUNCTIONS_DIR=${1:-$SCRIPT_DIR}
nuctl create project cvat
nuctl deploy --project-name cvat \
--path "$SCRIPT_DIR/tensorflow/faster_rcnn_inception_v2_coco/nuclio" \
--platform local --base-image tensorflow/tensorflow:2.1.1-gpu \
--desc "GPU based Faster RCNN from Tensorflow Object Detection API" \
--image cvat/tf.faster_rcnn_inception_v2_coco_gpu \
--triggers '{"myHttpTrigger": {"maxWorkers": 1}}' \
--resource-limit nvidia.com/gpu=1 --verbose
nuctl deploy --project-name cvat \
--path "$SCRIPT_DIR/tensorflow/matterport/mask_rcnn/nuclio" \
--platform local --base-image tensorflow/tensorflow:1.15.5-gpu-py3 \
--desc "GPU based implementation of Mask RCNN on Python 3, Keras, and TensorFlow." \
--image cvat/tf.matterport.mask_rcnn_gpu\
--triggers '{"myHttpTrigger": {"maxWorkers": 1}}' \
--resource-limit nvidia.com/gpu=1 --verbose
shopt -s globstar
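# Only functions that ship a GPU-specific config (function-gpu.yaml) are deployed by this loop.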
for func_config in "$FUNCTIONS_DIR"/**/function-gpu.yaml
do
func_root=$(dirname "$func_config")
echo "Deploying $(dirname "$func_root") function..."
nuctl deploy --project-name cvat --path "$func_root" \
--volume "$SCRIPT_DIR/common:/opt/nuclio/common" \
--file "$func_config" --platform local
done
nuctl get function
......@@ -8,7 +8,7 @@ def init_context(context):
context.logger.info("Init context... 0%")
model = ModelHandler()
setattr(context.user_data, 'model', model)
context.user_data.model = model
context.logger.info("Init context...100%")
......@@ -16,7 +16,7 @@ def handler(context, event):
context.logger.info("call handler")
data = event.body
points = data["pos_points"]
buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8')))
buf = io.BytesIO(base64.b64decode(data["image"]))
image = Image.open(buf)
polygon = context.user_data.model.handle(image, points)
......
......@@ -8,15 +8,15 @@ def init_context(context):
context.logger.info("Init context... 0%")
model = ModelHandler()
setattr(context.user_data, 'model', model)
context.user_data.model = model
context.logger.info("Init context...100%")
def handler(context, event):
context.logger.info("Run person-reidentification-retail-0300 model")
data = event.body
buf0 = io.BytesIO(base64.b64decode(data["image0"].encode('utf-8')))
buf1 = io.BytesIO(base64.b64decode(data["image1"].encode('utf-8')))
buf0 = io.BytesIO(base64.b64decode(data["image0"]))
buf1 = io.BytesIO(base64.b64decode(data["image1"]))
threshold = float(data.get("threshold", 0.5))
max_distance = float(data.get("max_distance", 50))
image0 = Image.open(buf0)
......
......@@ -9,20 +9,22 @@ def init_context(context):
context.logger.info("Init context... 0%")
# Read labels
functionconfig = yaml.safe_load(open("/opt/nuclio/function.yaml"))
with open("/opt/nuclio/function.yaml", 'rb') as function_file:
functionconfig = yaml.safe_load(function_file)
labels_spec = functionconfig['metadata']['annotations']['spec']
labels = {item['id']: item['name'] for item in json.loads(labels_spec)}
# Read the DL model
model = ModelHandler(labels)
setattr(context.user_data, 'model', model)
context.user_data.model = model
context.logger.info("Init context...100%")
def handler(context, event):
context.logger.info("Run semantic-segmentation-adas-0001 model")
data = event.body
buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8')))
buf = io.BytesIO(base64.b64decode(data["image"]))
threshold = float(data.get("threshold", 0.5))
image = Image.open(buf)
......
......@@ -9,20 +9,21 @@ def init_context(context):
context.logger.info("Init context... 0%")
# Read labels
functionconfig = yaml.safe_load(open("/opt/nuclio/function.yaml"))
with open("/opt/nuclio/function.yaml", 'rb') as function_file:
functionconfig = yaml.safe_load(function_file)
labels_spec = functionconfig['metadata']['annotations']['spec']
labels = {item['id']: item['name'] for item in json.loads(labels_spec)}
# Read the DL model
model = ModelHandler(labels)
setattr(context.user_data, 'model', model)
context.user_data.model = model
context.logger.info("Init context...100%")
def handler(context, event):
context.logger.info("Run text-detection-0004 model")
data = event.body
buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8')))
buf = io.BytesIO(base64.b64decode(data["image"]))
pixel_threshold = float(data.get("pixel_threshold", 0.8))
link_threshold = float(data.get("link_threshold", 0.8))
image = Image.open(buf)
......
......@@ -9,20 +9,22 @@ def init_context(context):
context.logger.info("Init context... 0%")
# Read labels
functionconfig = yaml.safe_load(open("/opt/nuclio/function.yaml"))
with open("/opt/nuclio/function.yaml", 'rb') as function_file:
functionconfig = yaml.safe_load(function_file)
labels_spec = functionconfig['metadata']['annotations']['spec']
labels = {item['id']: item['name'] for item in json.loads(labels_spec)}
# Read the DL model
model = ModelHandler(labels)
setattr(context.user_data, 'model', model)
context.user_data.model = model
context.logger.info("Init context...100%")
def handler(context, event):
context.logger.info("Run faster_rcnn_inception_v2_coco model")
data = event.body
buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8')))
buf = io.BytesIO(base64.b64decode(data["image"]))
threshold = float(data.get("threshold", 0.5))
image = Image.open(buf)
......
......@@ -9,20 +9,22 @@ def init_context(context):
context.logger.info("Init context... 0%")
# Read labels
functionconfig = yaml.safe_load(open("/opt/nuclio/function.yaml"))
with open("/opt/nuclio/function.yaml", 'rb') as function_file:
functionconfig = yaml.safe_load(function_file)
labels_spec = functionconfig['metadata']['annotations']['spec']
labels = {item['id']: item['name'] for item in json.loads(labels_spec)}
# Read the DL model
model = ModelHandler(labels)
setattr(context.user_data, 'model', model)
context.user_data.model = model
context.logger.info("Init context...100%")
def handler(context, event):
context.logger.info("Run mask_rcnn_inception_resnet_v2_atrous_coco model")
data = event.body
buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8')))
buf = io.BytesIO(base64.b64decode(data["image"]))
threshold = float(data.get("threshold", 0.2))
image = Image.open(buf)
......
......@@ -9,20 +9,22 @@ def init_context(context):
context.logger.info("Init context... 0%")
# Read labels
functionconfig = yaml.safe_load(open("/opt/nuclio/function.yaml"))
with open("/opt/nuclio/function.yaml", 'rb') as function_file:
functionconfig = yaml.safe_load(function_file)
labels_spec = functionconfig['metadata']['annotations']['spec']
labels = {item['id']: item['name'] for item in json.loads(labels_spec)}
# Read the DL model
model = ModelHandler(labels)
setattr(context.user_data, 'model', model)
context.user_data.model = model
context.logger.info("Init context...100%")
def handler(context, event):
context.logger.info("Run yolo-v3-tf model")
data = event.body
buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8')))
buf = io.BytesIO(base64.b64decode(data["image"]))
threshold = float(data.get("threshold", 0.5))
image = Image.open(buf)
......
metadata:
name: pth.facebookresearch.detectron2.retinanet_r101
namespace: cvat
annotations:
name: RetinaNet R101
type: detector
framework: pytorch
spec: |
[
{ "id": 1, "name": "person" },
{ "id": 2, "name": "bicycle" },
{ "id": 3, "name": "car" },
{ "id": 4, "name": "motorcycle" },
{ "id": 5, "name": "airplane" },
{ "id": 6, "name": "bus" },
{ "id": 7, "name": "train" },
{ "id": 8, "name": "truck" },
{ "id": 9, "name": "boat" },
{ "id":10, "name": "traffic_light" },
{ "id":11, "name": "fire_hydrant" },
{ "id":13, "name": "stop_sign" },
{ "id":14, "name": "parking_meter" },
{ "id":15, "name": "bench" },
{ "id":16, "name": "bird" },
{ "id":17, "name": "cat" },
{ "id":18, "name": "dog" },
{ "id":19, "name": "horse" },
{ "id":20, "name": "sheep" },
{ "id":21, "name": "cow" },
{ "id":22, "name": "elephant" },
{ "id":23, "name": "bear" },
{ "id":24, "name": "zebra" },
{ "id":25, "name": "giraffe" },
{ "id":27, "name": "backpack" },
{ "id":28, "name": "umbrella" },
{ "id":31, "name": "handbag" },
{ "id":32, "name": "tie" },
{ "id":33, "name": "suitcase" },
{ "id":34, "name": "frisbee" },
{ "id":35, "name": "skis" },
{ "id":36, "name": "snowboard" },
{ "id":37, "name": "sports_ball" },
{ "id":38, "name": "kite" },
{ "id":39, "name": "baseball_bat" },
{ "id":40, "name": "baseball_glove" },
{ "id":41, "name": "skateboard" },
{ "id":42, "name": "surfboard" },
{ "id":43, "name": "tennis_racket" },
{ "id":44, "name": "bottle" },
{ "id":46, "name": "wine_glass" },
{ "id":47, "name": "cup" },
{ "id":48, "name": "fork" },
{ "id":49, "name": "knife" },
{ "id":50, "name": "spoon" },
{ "id":51, "name": "bowl" },
{ "id":52, "name": "banana" },
{ "id":53, "name": "apple" },
{ "id":54, "name": "sandwich" },
{ "id":55, "name": "orange" },
{ "id":56, "name": "broccoli" },
{ "id":57, "name": "carrot" },
{ "id":58, "name": "hot_dog" },
{ "id":59, "name": "pizza" },
{ "id":60, "name": "donut" },
{ "id":61, "name": "cake" },
{ "id":62, "name": "chair" },
{ "id":63, "name": "couch" },
{ "id":64, "name": "potted_plant" },
{ "id":65, "name": "bed" },
{ "id":67, "name": "dining_table" },
{ "id":70, "name": "toilet" },
{ "id":72, "name": "tv" },
{ "id":73, "name": "laptop" },
{ "id":74, "name": "mouse" },
{ "id":75, "name": "remote" },
{ "id":76, "name": "keyboard" },
{ "id":77, "name": "cell_phone" },
{ "id":78, "name": "microwave" },
{ "id":79, "name": "oven" },
{ "id":80, "name": "toaster" },
{ "id":81, "name": "sink" },
{ "id":83, "name": "refrigerator" },
{ "id":84, "name": "book" },
{ "id":85, "name": "clock" },
{ "id":86, "name": "vase" },
{ "id":87, "name": "scissors" },
{ "id":88, "name": "teddy_bear" },
{ "id":89, "name": "hair_drier" },
{ "id":90, "name": "toothbrush" }
]
spec:
description: RetinaNet R101 from Detectron2 optimized for GPU
runtime: 'python:3.8'
handler: main:handler
eventTimeout: 30s
build:
image: cvat/pth.facebookresearch.detectron2.retinanet_r101
baseImage: ubuntu:20.04
directives:
preCopy:
- kind: ENV
value: DEBIAN_FRONTEND=noninteractive
- kind: RUN
value: apt-get update && apt-get -y install curl git python3 python3-pip
- kind: WORKDIR
value: /opt/nuclio
- kind: RUN
value: pip3 install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
- kind: RUN
value: pip3 install 'git+https://github.com/facebookresearch/detectron2@v0.4'
- kind: RUN
value: curl -O https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/retinanet_R_101_FPN_3x/190397697/model_final_971ab9.pkl
- kind: RUN
value: ln -s /usr/bin/pip3 /usr/local/bin/pip
triggers:
myHttpTrigger:
maxWorkers: 1
kind: 'http'
workerAvailabilityTimeoutMilliseconds: 10000
attributes:
maxRequestBodySize: 33554432 # 32MB
resources:
limits:
nvidia.com/gpu: 1
platform:
attributes:
restartPolicy:
name: always
maximumRetryCount: 3
mountMode: volume
metadata:
name: pth.facebookresearch.detectron2.retinanet_r101
namespace: cvat
annotations:
name: RetinaNet R101
type: detector
framework: pytorch
spec: |
[
{ "id": 1, "name": "person" },
{ "id": 2, "name": "bicycle" },
{ "id": 3, "name": "car" },
{ "id": 4, "name": "motorcycle" },
{ "id": 5, "name": "airplane" },
{ "id": 6, "name": "bus" },
{ "id": 7, "name": "train" },
{ "id": 8, "name": "truck" },
{ "id": 9, "name": "boat" },
{ "id":10, "name": "traffic_light" },
{ "id":11, "name": "fire_hydrant" },
{ "id":13, "name": "stop_sign" },
{ "id":14, "name": "parking_meter" },
{ "id":15, "name": "bench" },
{ "id":16, "name": "bird" },
{ "id":17, "name": "cat" },
{ "id":18, "name": "dog" },
{ "id":19, "name": "horse" },
{ "id":20, "name": "sheep" },
{ "id":21, "name": "cow" },
{ "id":22, "name": "elephant" },
{ "id":23, "name": "bear" },
{ "id":24, "name": "zebra" },
{ "id":25, "name": "giraffe" },
{ "id":27, "name": "backpack" },
{ "id":28, "name": "umbrella" },
{ "id":31, "name": "handbag" },
{ "id":32, "name": "tie" },
{ "id":33, "name": "suitcase" },
{ "id":34, "name": "frisbee" },
{ "id":35, "name": "skis" },
{ "id":36, "name": "snowboard" },
{ "id":37, "name": "sports_ball" },
{ "id":38, "name": "kite" },
{ "id":39, "name": "baseball_bat" },
{ "id":40, "name": "baseball_glove" },
{ "id":41, "name": "skateboard" },
{ "id":42, "name": "surfboard" },
{ "id":43, "name": "tennis_racket" },
{ "id":44, "name": "bottle" },
{ "id":46, "name": "wine_glass" },
{ "id":47, "name": "cup" },
{ "id":48, "name": "fork" },
{ "id":49, "name": "knife" },
{ "id":50, "name": "spoon" },
{ "id":51, "name": "bowl" },
{ "id":52, "name": "banana" },
{ "id":53, "name": "apple" },
{ "id":54, "name": "sandwich" },
{ "id":55, "name": "orange" },
{ "id":56, "name": "broccoli" },
{ "id":57, "name": "carrot" },
{ "id":58, "name": "hot_dog" },
{ "id":59, "name": "pizza" },
{ "id":60, "name": "donut" },
{ "id":61, "name": "cake" },
{ "id":62, "name": "chair" },
{ "id":63, "name": "couch" },
{ "id":64, "name": "potted_plant" },
{ "id":65, "name": "bed" },
{ "id":67, "name": "dining_table" },
{ "id":70, "name": "toilet" },
{ "id":72, "name": "tv" },
{ "id":73, "name": "laptop" },
{ "id":74, "name": "mouse" },
{ "id":75, "name": "remote" },
{ "id":76, "name": "keyboard" },
{ "id":77, "name": "cell_phone" },
{ "id":78, "name": "microwave" },
{ "id":79, "name": "oven" },
{ "id":80, "name": "toaster" },
{ "id":81, "name": "sink" },
{ "id":83, "name": "refrigerator" },
{ "id":84, "name": "book" },
{ "id":85, "name": "clock" },
{ "id":86, "name": "vase" },
{ "id":87, "name": "scissors" },
{ "id":88, "name": "teddy_bear" },
{ "id":89, "name": "hair_drier" },
{ "id":90, "name": "toothbrush" }
]
spec:
description: RetinaNet R101 from Detectron2
runtime: 'python:3.8'
handler: main:handler
eventTimeout: 30s
build:
image: cvat/pth.facebookresearch.detectron2.retinanet_r101
baseImage: ubuntu:20.04
directives:
preCopy:
- kind: ENV
value: DEBIAN_FRONTEND=noninteractive
- kind: RUN
value: apt-get update && apt-get -y install curl git python3 python3-pip
- kind: WORKDIR
value: /opt/nuclio
- kind: RUN
value: pip3 install torch==1.8.1+cpu torchvision==0.9.1+cpu torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
- kind: RUN
value: pip3 install 'git+https://github.com/facebookresearch/detectron2@v0.4'
- kind: RUN
value: curl -O https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/retinanet_R_101_FPN_3x/190397697/model_final_971ab9.pkl
- kind: RUN
value: ln -s /usr/bin/pip3 /usr/local/bin/pip
triggers:
myHttpTrigger:
maxWorkers: 2
kind: 'http'
workerAvailabilityTimeoutMilliseconds: 10000
attributes:
maxRequestBodySize: 33554432 # 32MB
platform:
attributes:
restartPolicy:
name: always
maximumRetryCount: 3
mountMode: volume
import json
import base64
import io
from PIL import Image
import torch
from detectron2.model_zoo import get_config
from detectron2.data.detection_utils import convert_PIL_to_numpy
from detectron2.engine.defaults import DefaultPredictor
from detectron2.data.datasets.builtin_meta import COCO_CATEGORIES
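# MODEL.WEIGHTS points at the checkpoint that the function.yaml build step downloads into /opt/nuclio.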
CONFIG_OPTS = ["MODEL.WEIGHTS", "model_final_971ab9.pkl"]
CONFIDENCE_THRESHOLD = 0.5
def init_context(context):
context.logger.info("Init context... 0%")
cfg = get_config('COCO-Detection/retinanet_R_101_FPN_3x.yaml')
if torch.cuda.is_available():
CONFIG_OPTS.extend(['MODEL.DEVICE', 'cuda'])
else:
CONFIG_OPTS.extend(['MODEL.DEVICE', 'cpu'])
cfg.merge_from_list(CONFIG_OPTS)
cfg.MODEL.RETINANET.SCORE_THRESH_TEST = CONFIDENCE_THRESHOLD
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = CONFIDENCE_THRESHOLD
cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = CONFIDENCE_THRESHOLD
cfg.freeze()
predictor = DefaultPredictor(cfg)
context.user_data.model_handler = predictor
context.logger.info("Init context...100%")
def handler(context, event):
context.logger.info("Run retinanet-R101 model")
data = event.body
buf = io.BytesIO(base64.b64decode(data["image"]))
threshold = float(data.get("threshold", 0.5))
image = convert_PIL_to_numpy(Image.open(buf), format="BGR")
predictions = context.user_data.model_handler(image)
instances = predictions['instances']
pred_boxes = instances.pred_boxes
scores = instances.scores
pred_classes = instances.pred_classes
results = []
for box, score, label in zip(pred_boxes, scores, pred_classes):
label = COCO_CATEGORIES[int(label)]["name"]
if score >= threshold:
results.append({
"confidence": str(float(score)),
"label": label,
"points": box.tolist(),
"type": "rectangle",
})
return context.Response(body=json.dumps(results), headers={},
content_type='application/json', status_code=200)
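For reference, a detector function like the one above can be exercised directly over HTTP once deployed: the handler expects a JSON body with a base64-encoded `image` and an optional `threshold`, and returns a JSON list of objects with `confidence`, `label`, `points`, and `type`. The sketch below is illustrative only; the port is assigned by nuclio at deploy time (see `nuctl get function`), and `localhost:55555` and `sample.jpg` are placeholders.

```python
# Minimal client sketch for invoking a deployed CVAT nuclio detector.
# Assumptions: the function listens on localhost:55555 (check `nuctl get function`)
# and sample.jpg is any local test image.
import base64

import requests

with open("sample.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

resp = requests.post(
    "http://localhost:55555",
    json={"image": encoded, "threshold": 0.5},
    timeout=30,
)
resp.raise_for_status()

# The handler returns a list of {"confidence", "label", "points", "type"} records.
for obj in resp.json():
    print(obj["label"], obj["confidence"], obj["points"])
```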
......@@ -9,14 +9,14 @@ def init_context(context):
# Read the DL model
model = ModelHandler()
setattr(context.user_data, 'model', model)
context.user_data.model = model
context.logger.info("Init context...100%")
def handler(context, event):
context.logger.info("Run SiamMask model")
data = event.body
buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8')))
buf = io.BytesIO(base64.b64decode(data["image"]))
shape = data.get("shape")
state = data.get("state")
image = Image.open(buf)
......
......@@ -12,7 +12,7 @@ def init_context(context):
context.logger.info("Init context... 0%")
model = ModelHandler()
setattr(context.user_data, 'model', model)
context.user_data.model = model
context.logger.info("Init context...100%")
......@@ -22,7 +22,7 @@ def handler(context, event):
pos_points = data["pos_points"]
neg_points = data["neg_points"]
threshold = data.get("threshold", 0.5)
buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8')))
buf = io.BytesIO(base64.b64decode(data["image"]))
image = Image.open(buf)
polygon = context.user_data.model.handle(image, pos_points,
......
......@@ -13,7 +13,7 @@ def init_context(context):
context.logger.info("Init context... 0%")
model = ModelHandler()
setattr(context.user_data, 'model', model)
context.user_data.model = model
context.logger.info("Init context...100%")
......@@ -24,7 +24,7 @@ def handler(context, event):
neg_points = data["neg_points"]
obj_bbox = data.get("obj_bbox", None)
threshold = data.get("threshold", 0.8)
buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8')))
buf = io.BytesIO(base64.b64decode(data["image"]))
image = Image.open(buf)
if obj_bbox is None:
......
metadata:
name: tf-faster-rcnn-inception-v2-coco
namespace: cvat
annotations:
name: Faster RCNN via Tensorflow
type: detector
framework: tensorflow
spec: |
[
{ "id": 1, "name": "person" },
{ "id": 2, "name": "bicycle" },
{ "id": 3, "name": "car" },
{ "id": 4, "name": "motorcycle" },
{ "id": 5, "name": "airplane" },
{ "id": 6, "name": "bus" },
{ "id": 7, "name": "train" },
{ "id": 8, "name": "truck" },
{ "id": 9, "name": "boat" },
{ "id":10, "name": "traffic_light" },
{ "id":11, "name": "fire_hydrant" },
{ "id":13, "name": "stop_sign" },
{ "id":14, "name": "parking_meter" },
{ "id":15, "name": "bench" },
{ "id":16, "name": "bird" },
{ "id":17, "name": "cat" },
{ "id":18, "name": "dog" },
{ "id":19, "name": "horse" },
{ "id":20, "name": "sheep" },
{ "id":21, "name": "cow" },
{ "id":22, "name": "elephant" },
{ "id":23, "name": "bear" },
{ "id":24, "name": "zebra" },
{ "id":25, "name": "giraffe" },
{ "id":27, "name": "backpack" },
{ "id":28, "name": "umbrella" },
{ "id":31, "name": "handbag" },
{ "id":32, "name": "tie" },
{ "id":33, "name": "suitcase" },
{ "id":34, "name": "frisbee" },
{ "id":35, "name": "skis" },
{ "id":36, "name": "snowboard" },
{ "id":37, "name": "sports_ball" },
{ "id":38, "name": "kite" },
{ "id":39, "name": "baseball_bat" },
{ "id":40, "name": "baseball_glove" },
{ "id":41, "name": "skateboard" },
{ "id":42, "name": "surfboard" },
{ "id":43, "name": "tennis_racket" },
{ "id":44, "name": "bottle" },
{ "id":46, "name": "wine_glass" },
{ "id":47, "name": "cup" },
{ "id":48, "name": "fork" },
{ "id":49, "name": "knife" },
{ "id":50, "name": "spoon" },
{ "id":51, "name": "bowl" },
{ "id":52, "name": "banana" },
{ "id":53, "name": "apple" },
{ "id":54, "name": "sandwich" },
{ "id":55, "name": "orange" },
{ "id":56, "name": "broccoli" },
{ "id":57, "name": "carrot" },
{ "id":58, "name": "hot_dog" },
{ "id":59, "name": "pizza" },
{ "id":60, "name": "donut" },
{ "id":61, "name": "cake" },
{ "id":62, "name": "chair" },
{ "id":63, "name": "couch" },
{ "id":64, "name": "potted_plant" },
{ "id":65, "name": "bed" },
{ "id":67, "name": "dining_table" },
{ "id":70, "name": "toilet" },
{ "id":72, "name": "tv" },
{ "id":73, "name": "laptop" },
{ "id":74, "name": "mouse" },
{ "id":75, "name": "remote" },
{ "id":76, "name": "keyboard" },
{ "id":77, "name": "cell_phone" },
{ "id":78, "name": "microwave" },
{ "id":79, "name": "oven" },
{ "id":80, "name": "toaster" },
{ "id":81, "name": "sink" },
{ "id":83, "name": "refrigerator" },
{ "id":84, "name": "book" },
{ "id":85, "name": "clock" },
{ "id":86, "name": "vase" },
{ "id":87, "name": "scissors" },
{ "id":88, "name": "teddy_bear" },
{ "id":89, "name": "hair_drier" },
{ "id":90, "name": "toothbrush" }
]
spec:
description: Faster RCNN from Tensorflow Object Detection API optimized for GPU
runtime: 'python:3.6'
handler: main:handler
eventTimeout: 30s
build:
image: cvat/tf.faster_rcnn_inception_v2_coco
baseImage: tensorflow/tensorflow:2.1.1-gpu
directives:
preCopy:
- kind: RUN
value: apt install curl
- kind: WORKDIR
value: /opt/nuclio
postCopy:
- kind: RUN
value:
curl -O http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz &&
tar -xzf faster_rcnn_inception_v2_coco_2018_01_28.tar.gz && rm faster_rcnn_inception_v2_coco_2018_01_28.tar.gz
- kind: RUN
value: ln -s faster_rcnn_inception_v2_coco_2018_01_28 faster_rcnn
- kind: RUN
value: pip install pillow pyyaml
triggers:
myHttpTrigger:
maxWorkers: 1
kind: 'http'
workerAvailabilityTimeoutMilliseconds: 10000
attributes:
maxRequestBodySize: 33554432 # 32MB
resources:
limits:
nvidia.com/gpu: 1
platform:
attributes:
restartPolicy:
name: always
maximumRetryCount: 3
mountMode: volume
......@@ -108,9 +108,9 @@ spec:
postCopy:
- kind: RUN
value: curl -O http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz
- kind: RUN
value: tar -xzf faster_rcnn_inception_v2_coco_2018_01_28.tar.gz && rm faster_rcnn_inception_v2_coco_2018_01_28.tar.gz
value:
curl -O http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz &&
tar -xzf faster_rcnn_inception_v2_coco_2018_01_28.tar.gz && rm faster_rcnn_inception_v2_coco_2018_01_28.tar.gz
- kind: RUN
value: ln -s faster_rcnn_inception_v2_coco_2018_01_28 faster_rcnn
- kind: RUN
......
......@@ -10,17 +10,20 @@ def init_context(context):
context.logger.info("Init context... 0%")
model_path = "/opt/nuclio/faster_rcnn/frozen_inference_graph.pb"
model_handler = ModelLoader(model_path)
setattr(context.user_data, 'model_handler', model_handler)
functionconfig = yaml.safe_load(open("/opt/nuclio/function.yaml"))
context.user_data.model_handler = model_handler
with open("/opt/nuclio/function.yaml", 'rb') as function_file:
functionconfig = yaml.safe_load(function_file)
labels_spec = functionconfig['metadata']['annotations']['spec']
labels = {item['id']: item['name'] for item in json.loads(labels_spec)}
setattr(context.user_data, "labels", labels)
context.user_data.labels = labels
context.logger.info("Init context...100%")
def handler(context, event):
context.logger.info("Run faster_rcnn_inception_v2_coco model")
data = event.body
buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8')))
buf = io.BytesIO(base64.b64decode(data["image"]))
threshold = float(data.get("threshold", 0.5))
image = Image.open(buf)
......
metadata:
name: tf-matterport-mask-rcnn
namespace: cvat
annotations:
name: Mask RCNN via Tensorflow
type: detector
framework: tensorflow
spec: |
[
{ "id": 0, "name": "BG" },
{ "id": 1, "name": "person" },
{ "id": 2, "name": "bicycle" },
{ "id": 3, "name": "car" },
{ "id": 4, "name": "motorcycle" },
{ "id": 5, "name": "airplane" },
{ "id": 6, "name": "bus" },
{ "id": 7, "name": "train" },
{ "id": 8, "name": "truck" },
{ "id": 9, "name": "boat" },
{ "id": 10, "name": "traffic_light" },
{ "id": 11, "name": "fire_hydrant" },
{ "id": 12, "name": "stop_sign" },
{ "id": 13, "name": "parking_meter" },
{ "id": 14, "name": "bench" },
{ "id": 15, "name": "bird" },
{ "id": 16, "name": "cat" },
{ "id": 17, "name": "dog" },
{ "id": 18, "name": "horse" },
{ "id": 19, "name": "sheep" },
{ "id": 20, "name": "cow" },
{ "id": 21, "name": "elephant" },
{ "id": 22, "name": "bear" },
{ "id": 23, "name": "zebra" },
{ "id": 24, "name": "giraffe" },
{ "id": 25, "name": "backpack" },
{ "id": 26, "name": "umbrella" },
{ "id": 27, "name": "handbag" },
{ "id": 28, "name": "tie" },
{ "id": 29, "name": "suitcase" },
{ "id": 30, "name": "frisbee" },
{ "id": 31, "name": "skis" },
{ "id": 32, "name": "snowboard" },
{ "id": 33, "name": "sports_ball" },
{ "id": 34, "name": "kite" },
{ "id": 35, "name": "baseball_bat" },
{ "id": 36, "name": "baseball_glove" },
{ "id": 37, "name": "skateboard" },
{ "id": 38, "name": "surfboard" },
{ "id": 39, "name": "tennis_racket" },
{ "id": 40, "name": "bottle" },
{ "id": 41, "name": "wine_glass" },
{ "id": 42, "name": "cup" },
{ "id": 43, "name": "fork" },
{ "id": 44, "name": "knife" },
{ "id": 45, "name": "spoon" },
{ "id": 46, "name": "bowl" },
{ "id": 47, "name": "banana" },
{ "id": 48, "name": "apple" },
{ "id": 49, "name": "sandwich" },
{ "id": 50, "name": "orange" },
{ "id": 51, "name": "broccoli" },
{ "id": 52, "name": "carrot" },
{ "id": 53, "name": "hot_dog" },
{ "id": 54, "name": "pizza" },
{ "id": 55, "name": "donut" },
{ "id": 56, "name": "cake" },
{ "id": 57, "name": "chair" },
{ "id": 58, "name": "couch" },
{ "id": 59, "name": "potted_plant" },
{ "id": 60, "name": "bed" },
{ "id": 61, "name": "dining_table" },
{ "id": 62, "name": "toilet" },
{ "id": 63, "name": "tv" },
{ "id": 64, "name": "laptop" },
{ "id": 65, "name": "mouse" },
{ "id": 66, "name": "remote" },
{ "id": 67, "name": "keyboard" },
{ "id": 68, "name": "cell_phone" },
{ "id": 69, "name": "microwave" },
{ "id": 70, "name": "oven" },
{ "id": 71, "name": "toaster" },
{ "id": 72, "name": "sink" },
{ "id": 73, "name": "refrigerator" },
{ "id": 74, "name": "book" },
{ "id": 75, "name": "clock" },
{ "id": 76, "name": "vase" },
{ "id": 77, "name": "scissors" },
{ "id": 78, "name": "teddy_bear" },
{ "id": 79, "name": "hair_drier" },
{ "id": 80, "name": "toothbrush" }
]
spec:
description: Mask RCNN optimized for GPU
runtime: 'python:3.6'
handler: main:handler
eventTimeout: 30s
env:
- name: MASK_RCNN_DIR
value: /opt/nuclio/Mask_RCNN
build:
image: cvat/tf.matterport.mask_rcnn
baseImage: tensorflow/tensorflow:1.15.5-gpu-py3
directives:
postCopy:
- kind: WORKDIR
value: /opt/nuclio
- kind: RUN
value: apt update && apt install --no-install-recommends -y git curl
- kind: RUN
value: git clone --depth 1 https://github.com/matterport/Mask_RCNN.git
- kind: RUN
value: curl -L https://github.com/matterport/Mask_RCNN/releases/download/v2.0/mask_rcnn_coco.h5 -o Mask_RCNN/mask_rcnn_coco.h5
- kind: RUN
value: pip3 install numpy cython pyyaml keras==2.1.0 scikit-image Pillow
triggers:
myHttpTrigger:
maxWorkers: 1
kind: 'http'
workerAvailabilityTimeoutMilliseconds: 10000
attributes:
maxRequestBodySize: 33554432 # 32MB
resources:
limits:
nvidia.com/gpu: 1
platform:
attributes:
restartPolicy:
name: always
maximumRetryCount: 3
mountMode: volume
......@@ -10,19 +10,20 @@ import yaml
def init_context(context):
context.logger.info("Init context... 0%")
functionconfig = yaml.safe_load(open("/opt/nuclio/function.yaml"))
with open("/opt/nuclio/function.yaml", 'rb') as function_file:
functionconfig = yaml.safe_load(function_file)
labels_spec = functionconfig['metadata']['annotations']['spec']
labels = {item['id']: item['name'] for item in json.loads(labels_spec)}
model_handler = ModelLoader(labels)
setattr(context.user_data, 'model_handler', model_handler)
context.user_data.model_handler = model_handler
context.logger.info("Init context...100%")
def handler(context, event):
context.logger.info("Run tf.matterport.mask_rcnn model")
data = event.body
buf = io.BytesIO(base64.b64decode(data["image"].encode('utf-8')))
buf = io.BytesIO(base64.b64decode(data["image"]))
threshold = float(data.get("threshold", 0.2))
image = Image.open(buf)
......
# REST API design principles
---
title: 'REST API design principles'
linkTitle: 'REST API design principles'
weight: 100
---
## REST API scheme
Common scheme for our REST API is `<VERB> [namespace] <objects> <id> <action>`.
- `VERB` can be `POST`, `GET`, `PATCH`, `PUT`, `DELETE`.
- `namespace` should scope some specific functionality like `auth`, `lambda`.
It is optional in the scheme.
......@@ -13,25 +18,27 @@ Common scheme for our REST API is `<VERB> [namespace] <objects> <id> <action>`.
should not duplicate other endpoints without a reason.
## Design principles
- Use nouns instead of verbs in endpoint paths. For example,
`POST /api/v1/tasks` instead of `POST /api/v1/tasks/create`.
- Accept and respond with JSON whenever it is possible
- Name collections with plural nouns (e.g. `/tasks`, `/projects`)
- Try to keep the API structure flat. Prefer two separate endpoints
for `/projects` and `/tasks` instead of `/projects/:id1/tasks/:id2`. Use
filters to extract necessary information like `/tasks/:id2?project=:id1`.
In some cases it is useful to get all `tasks`. If the structure is
hierarchical, it cannot be done easily. Also you have to know both `:id1`
and `:id2` to get information about the task.
_Note: for now we accept `GET /tasks/:id2/jobs` but it should be replaced
by `/jobs?task=:id2` in the future_ (see the client-side sketch after this list).
- Handle errors gracefully and return standard HTTP status codes (e.g. `201`, `400`)
- Allow filtering, sorting, and pagination
- Maintain good security practices
- Cache data to improve performance
- Versioning our APIs (e.g. `/api/v1`, `/api/v2`). It should be done when you
delete an endpoint or modify its behaviors.
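A hypothetical client-side sketch of these conventions follows; the server address, authentication header, and ids are assumptions for illustration, not part of this document:

```python
# Illustrative only: the endpoint shapes follow the principles above;
# the base URL, token, and ids are placeholders.
import requests

BASE = "http://localhost:8080/api/v1"  # assumed CVAT server
session = requests.Session()
session.headers["Authorization"] = "Token <token>"  # placeholder credentials

# Plural nouns, versioned prefix, no verbs in the path:
tasks = session.get(f"{BASE}/tasks").json()

# Flat structure with a filter instead of nesting:
# /tasks?project=1 rather than /projects/1/tasks
project_tasks = session.get(f"{BASE}/tasks", params={"project": 1}).json()

# Create via POST on the collection, not /tasks/create:
resp = session.post(f"{BASE}/tasks", json={"name": "example task", "labels": []})
print(resp.status_code)  # expect a standard status code such as 201
```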
## Links
- [Best practices for REST API design](https://stackoverflow.blog/2020/03/02/best-practices-for-rest-api-design/)
- [Flat vs. nested resources](https://stackoverflow.com/questions/20951419/what-are-best-practices-for-rest-nested-resources)
Diffs of several files in this commit are suppressed by .gitattributes.