## Features

- Support Classification, Segmentation, Detection, and Pose (Keypoints) Detection tasks.
- Support FP16 & FP32 ONNX models.
- Support CoreML, CUDA, and TensorRT execution providers to accelerate computation.
- Support dynamic input shapes (batch, width, height).
- Support dynamic confidence (DynConf) for each class in the Detection task (see the sketch below).
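As a concrete illustration of the DynConf feature, the sketch below feeds one confidence threshold per class index through the same `Options` builder used in the Quick Start. The model path is a placeholder, the `use usls::{Options, YOLO}` import path is an assumption about this crate's re-exports, and the "class 0 gets the first value, the rest fall back to the last" reading is taken from the example's own comment rather than documented behavior.

```rust
// Minimal sketch, assuming the crate root re-exports Options and YOLO.
use usls::{Options, YOLO};

fn main() {
    // DynConf: one confidence threshold per class index. Following the
    // example's comment, class 0 ("person" in COCO ordering) uses 0.4,
    // while the remaining classes fall back to 0.15.
    let options = Options::default()
        .with_model("yolov8m.onnx") // placeholder: path to your exported ONNX model
        .with_confs(&[0.4, 0.15]);
    let _model = YOLO::new(&options).expect("failed to load the ONNX model");
}
```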
## Quick Start

```bash
cargo run -r --example yolov8
```
Or you can do it manually:
1. Export YOLOv8 ONNX Models

```bash
pip install -U ultralytics

# export onnx models with dynamic shapes
yolo export model=yolov8m.pt format=onnx simplify dynamic
yolo export model=yolov8m-cls.pt format=onnx simplify dynamic
yolo export model=yolov8m-pose.pt format=onnx simplify dynamic
yolo export model=yolov8m-seg.pt format=onnx simplify dynamic

# export onnx models with fixed shapes
yolo export model=yolov8m.pt format=onnx simplify
yolo export model=yolov8m-cls.pt format=onnx simplify
yolo export model=yolov8m-pose.pt format=onnx simplify
yolo export model=yolov8m-seg.pt format=onnx simplify
```
2. Specify the ONNX model path in main.rs

```rust
let options = Options::default()
    .with_model("ONNX_PATH")   // <= modify this
    .with_confs(&[0.4, 0.15])  // person: 0.4, others: 0.15
    .with_saveout("YOLOv8");
let mut model = YOLO::new(&options)?;
```
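The snippet above only constructs the model; image loading and the actual inference call live in the yolov8 example. Continuing inside the same function, a hedged sketch of those remaining steps is shown below, where `image::open` comes from the `image` crate and the `run` method with a batch-of-images argument is an assumption inferred from the example rather than a documented API (check `examples/yolov8` for the exact calls):

```rust
// Sketch only: the `run` call and its input shape are assumptions.
let xs = vec![image::open("assets/bus.jpg")?]; // hypothetical test image path
let _ys = model.run(&xs)?; // annotated output is saved under the `with_saveout` name
```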
3. Then, run:

```bash
cargo run -r --example yolov8
```
## Result
| Task | Annotated image |
|---|---|
| Instance Segmentation | ![]() |
| Classification | ![]() |
| Detection | ![]() |
| Pose | ![]() |



