
usls

A Rust library integrated with ONNXRuntime, providing a collection of Computer Vision and Vision-Language models, including YOLOv5, YOLOv8, YOLOv9, RT-DETR, CLIP, DINOv2, FastSAM, YOLO-World, BLIP, PaddleOCR, and others.

Recently Updated

  • YOLOP-v2
  • Face-Parsing
  • Text-Detection
  • YOLOv8-Obb

Supported Models

| Model | Task / Type | Example |
| :---- | :---------- | :-----: |
| YOLOv8-obb | Oriented Object Detection | demo |
| YOLOv8-detection | Object Detection | demo |
| YOLOv8-pose | Keypoint Detection | demo |
| YOLOv8-classification | Classification | demo |
| YOLOv8-segmentation | Instance Segmentation | demo |
| YOLOv9 | Object Detection | demo |
| RT-DETR | Object Detection | demo |
| FastSAM | Instance Segmentation | demo |
| YOLO-World | Object Detection | demo |
| DINOv2 | Vision Self-Supervised | demo |
| CLIP | Vision-Language | demo (visual, textual) |
| BLIP | Vision-Language | demo (visual, textual) |
| DB | Text Detection | demo |
| SVTR | Text Recognition | demo |
| RTMO | Keypoint Detection | demo |
| YOLOPv2 | Panoptic Driving Perception | demo |
| YOLOv5-classification | Classification | demo |
| YOLOv5-segmentation | Instance Segmentation | demo |

Solution Models

This repo also provides several solution models built on top of the library.
| Model | Example |
| :---- | :-----: |
| Lane Line Segmentation, Drivable Area Segmentation, Car Detection | demo |
| Face Parsing | demo |
| Text Detection (PPOCR-det v3, v4) | demo |
| Text Recognition (PPOCR-rec v3, v4, Chinese & English) | demo |
| Face-Landmark Detection | demo |
| Head Detection | demo |
| Fall Detection | demo |
| Trash Detection | demo |

Demo

cargo run -r --example yolov8   # yolov9, blip, clip, dinov2, svtr, db, yolo-world...

Installation

Check the ort guide first.

For Linux or macOS users
  • First, download the latest release from ONNXRuntime Releases
  • Then link it by setting ORT_DYLIB_PATH to the downloaded library, e.g.:
    export ORT_DYLIB_PATH=/Users/qweasd/Desktop/onnxruntime-osx-arm64-1.17.1/lib/libonnxruntime.1.17.1.dylib
    

Integrate into your own project

Check Here

1. Add usls as a dependency to your project's Cargo.toml

cargo add --git https://github.com/jamjamjon/usls
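Equivalently, you can declare the git dependency in Cargo.toml by hand (a minimal sketch; version or revision pinning is omitted):

```toml
[dependencies]
usls = { git = "https://github.com/jamjamjon/usls" }
```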

2. Set Options and build model

let options = Options::default()
    .with_model("../models/yolov8m-seg-dyn-f16.onnx");
let mut model = YOLO::new(&options)?;
  • If you want to run your model with TensorRT or CoreML

    let options = Options::default()
        .with_trt(0) // TensorRT on device 0 (CUDA is used by default)
        // .with_coreml(0) // or CoreML
    
  • If your model has dynamic shapes

    let options = Options::default()
        .with_i00((1, 2, 4).into()) // dynamic batch
        .with_i02((416, 640, 800).into())   // dynamic height
        .with_i03((416, 640, 800).into())   // dynamic width
    
  • If you want to set a confidence level for each category

    let options = Options::default()
        .with_confs(&[0.4, 0.15]) // person: 0.4, others: 0.15
    
  • Go check Options for more model options.
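The per-class confidence idea behind with_confs can be sketched independently of usls: one threshold per class index, with later classes falling back to the last value (this fallback rule is inferred from the "person: 0.4, others: 0.15" comment above; threshold_for and the sample detections are illustrative, not usls API):

```rust
// Minimal sketch of per-class confidence filtering (hypothetical helper,
// not part of usls). confs[0] applies to class 0; any class index past
// the end of the slice falls back to the last value.
fn threshold_for(confs: &[f32], class_id: usize) -> f32 {
    *confs
        .get(class_id)
        .unwrap_or_else(|| confs.last().expect("confs must be non-empty"))
}

fn main() {
    let confs = [0.4_f32, 0.15]; // person: 0.4, others: 0.15
    // (class_id, score) pairs from a hypothetical detector output.
    let detections = [(0_usize, 0.35_f32), (0, 0.50), (7, 0.20)];
    let kept: Vec<(usize, f32)> = detections
        .iter()
        .copied()
        .filter(|&(c, s)| s >= threshold_for(&confs, c))
        .collect();
    // class 0 @ 0.35 is dropped (< 0.4); class 0 @ 0.50 and class 7 @ 0.20 survive
    println!("{kept:?}");
}
```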

3. Prepare inputs, and then you're ready to go

  • Build DataLoader to load images
let dl = DataLoader::default()
    .with_batch(model.batch.opt as usize)
    .load("./assets/")?;

for (xs, _paths) in dl {
    let _y = model.run(&xs)?;
}
  • Or simply read one image
let x = vec![DataLoader::try_read("./assets/bus.jpg")?];
let y = model.run(&x)?;
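The batching that DataLoader does for you can be mimicked with plain iterators; a minimal sketch (the paths and batch size here are illustrative, and this stands in for, rather than reproduces, the usls implementation):

```rust
fn main() {
    // Hypothetical image paths; in usls, DataLoader reads these for you.
    let paths: Vec<String> = (0..5).map(|i| format!("./assets/img{i}.jpg")).collect();
    let batch = 2; // mirrors .with_batch(model.batch.opt as usize)

    // chunks() yields full batches plus a final partial batch,
    // much like iterating a DataLoader.
    for (i, xs) in paths.chunks(batch).enumerate() {
        println!("batch {i}: {} image(s): {:?}", xs.len(), xs);
    }
}
```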

4. Annotate and save results

let annotator = Annotator::default().with_saveout("YOLOv8");
annotator.annotate(&x, &y);