
usls

A Rust library integrated with ONNXRuntime, providing a collection of Computer Vision and Vision-Language models, including YOLOv8 (classification, segmentation, detection, and pose estimation), YOLOv9, RT-DETR, CLIP, DINOv2, FastSAM, YOLO-World, BLIP, and others. Many execution providers are supported, such as CUDA, TensorRT, and CoreML.

Supported Models

Model                   Example   CUDA(f32)          CUDA(f16)          TensorRT(f32)      TensorRT(f16)
YOLOv8-detection        demo
YOLOv8-pose             demo
YOLOv8-classification   demo
YOLOv8-segmentation     demo
YOLOv8-OBB              TODO      TODO               TODO               TODO               TODO
YOLOv9                  demo
RT-DETR                 demo
FastSAM                 demo
YOLO-World              demo
DINOv2                  demo
CLIP                    demo      visual / textual   visual / textual
BLIP                    demo      visual / textual   visual / textual
OCR(DB, SVTR)           TODO      TODO               TODO               TODO               TODO

Solution Models

This repo also provides several solution models, such as pedestrian fall detection, head detection, trash detection, and more.

Model                     Example   Result
face-landmark detection   demo
head detection            demo
fall detection            demo
trash detection           demo

Demo

cargo run -r --example yolov8   # fastsam, yolov9, blip, clip, dinov2, yolo-world...

Integrate into your own project

1. Install ort

Check the ort guide.

For Linux or macOS users:
  • First, download the latest release from ONNXRuntime Releases
  • Then link it by setting ORT_DYLIB_PATH:
    export ORT_DYLIB_PATH=/Users/qweasd/Desktop/onnxruntime-osx-arm64-1.17.1/lib/libonnxruntime.1.17.1.dylib
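  • On Linux, point ORT_DYLIB_PATH at the shared object instead of the .dylib (the path below is an assumption based on the onnxruntime-linux-x64-1.17.1 archive layout):
    export ORT_DYLIB_PATH=/path/to/onnxruntime-linux-x64-1.17.1/lib/libonnxruntime.so.1.17.1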
    

2. Add usls as a dependency to your project's Cargo.toml:

[dependencies]
usls = "0.0.1"

3. Set the model Options, build the model, and you're ready to go.

use usls::{models::YOLO, DataLoader, Options};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 1.build model
    let options = Options::default()
        .with_model("../models/yolov8m-seg-dyn-f16.onnx")
        .with_trt(0) // use TensorRT on device 0 (CUDA device 0 is used by default if omitted)
        // for models with dynamic input shapes, specify the dynamic axes:
        .with_i00((1, 2, 4).into()) // dynamic batch
        .with_i02((416, 640, 800).into())   // dynamic height
        .with_i03((416, 640, 800).into())   // dynamic width
        .with_confs(&[0.4, 0.15]) // person: 0.4, others: 0.15
        .with_saveout("YOLOv8");    // save results
    let mut model = YOLO::new(&options)?;

    // 2.build dataloader
    let dl = DataLoader::default()
        .with_batch(model.batch.opt as usize)
        .load("./assets/")?;

    // 3.run
    for (xs, _paths) in dl {
        let _y = model.run(&xs)?;
    }
    Ok(())
}

Script: convert an ONNX model from float32 to float16

import onnx
from pathlib import Path
from onnxconverter_common import float16

# load the float32 model and convert its weights to float16
model_f32 = "onnx_model.onnx"
model_f16 = float16.convert_float_to_float16(onnx.load(model_f32))

# save next to the original with a "-f16" suffix, e.g. onnx_model-f16.onnx
saveout = Path(model_f32).with_name(Path(model_f32).stem + "-f16.onnx")
onnx.save(model_f16, saveout)
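
Once converted, the f16 model can be loaded with the same Options API from step 3. Below is a minimal sketch: the file name is the hypothetical "-f16" output of the script above, TensorRT is chosen only as an example, and if your export has dynamic shapes you would keep the .with_i00/.with_i02/.with_i03 calls shown earlier.

use usls::{models::YOLO, DataLoader, Options};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // point Options at the f16 model produced by the conversion script
    let options = Options::default()
        .with_model("../models/onnx_model-f16.onnx") // hypothetical output path
        .with_trt(0)              // exercises the TensorRT(f16) path for an f16 model
        .with_confs(&[0.4, 0.15]) // same per-class thresholds as in step 3
        .with_saveout("YOLOv8-f16");
    let mut model = YOLO::new(&options)?;

    // run over a folder of images, exactly as in the step-3 example
    let dl = DataLoader::default()
        .with_batch(model.batch.opt as usize)
        .load("./assets/")?;
    for (xs, _paths) in dl {
        let _y = model.run(&xs)?;
    }
    Ok(())
}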