YOLO-Series
Demo gallery (images omitted): Detection, Instance Segmentation, Pose, Classification, Obb, Head Detection, Fall Detection, Trash Detection, YOLO-World, Face Parsing, FastSAM.
Quick Start
# Classify
cargo run -r --example yolo -- --task classify --ver v5 # YOLOv5
cargo run -r --example yolo -- --task classify --ver v8 # YOLOv8
# Detect
cargo run -r --example yolo -- --task detect --ver v5 # YOLOv5
cargo run -r --example yolo -- --task detect --ver v6 # YOLOv6
cargo run -r --example yolo -- --task detect --ver v7 # YOLOv7
cargo run -r --example yolo -- --task detect --ver v8 # YOLOv8
cargo run -r --example yolo -- --task detect --ver v9 # YOLOv9
cargo run -r --example yolo -- --task detect --ver v10 # YOLOv10
cargo run -r --example yolo -- --task detect --ver rtdetr # YOLOv8-RTDETR
cargo run -r --example yolo -- --task detect --ver v8 --model yolov8s-world-v2-shoes.onnx # YOLOv8-world
# Pose
cargo run -r --example yolo -- --task pose --ver v8 # YOLOv8-Pose
# Segment
cargo run -r --example yolo -- --task segment --ver v5 # YOLOv5-Segment
cargo run -r --example yolo -- --task segment --ver v8 # YOLOv8-Segment
cargo run -r --example yolo -- --task segment --ver v8 --model FastSAM-s-dyn-f16.onnx # FastSAM
# Obb
cargo run -r --example yolo -- --task obb --ver v8 # YOLOv8-Obb
Other options (a combined example follows this list):
--source: specify the input image(s)
--model: specify the ONNX model file
--width, --height: specify the input resolution
--nc: specify the number of model classes
--plot: annotate the inputs with the inference results
--profile: profile the inference
--cuda, --trt, --coreml, --device_id: select the execution device
--half: use float16 when using the TensorRT EP
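These flags can be combined in a single invocation. A sketch, where the source path, resolution, and device id are placeholder values to adjust:
cargo run -r --example yolo -- --task detect --ver v8 --source path/to/images --width 640 --height 640 --plot --cuda --device_id 0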
YOLO configs with Options
Use official YOLO Models
let options = Options::default()
.with_yolo_version(YOLOVersion::V5) // YOLOVersion: V5, V6, V7, V8, V9, V10, RTDETR
.with_yolo_task(YOLOTask::Classify) // YOLOTask: Classify, Detect, Pose, Segment, Obb
.with_model("xxxx.onnx")?;
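With the options built, the model can be constructed and run. The following is a minimal end-to-end sketch: the names YOLO::new, DataLoader::try_read, run, and Annotator follow this repo's examples but may differ between versions, and the image path is a placeholder.
// Minimal end-to-end sketch; names and paths are assumptions, see the note above.
use usls::{models::YOLO, Annotator, DataLoader, Vision};

let mut model = YOLO::new(options)?;                       // build the model from the options above
let xs = vec![DataLoader::try_read("path/to/image.jpg")?]; // load an input image (placeholder path)
let ys = model.run(&xs)?;                                  // run inference
let annotator = Annotator::default().with_saveout("YOLO"); // draw and save the annotated results
annotator.annotate(&xs, &ys);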
Customize your own YOLO model
// This config is for YOLOv8-Segment
use usls::{AnchorsPosition, BoxType, ClssType, YOLOPreds};

let options = Options::default()
    .with_yolo_preds(
        YOLOPreds {
            bbox: Some(BoxType::Cxcywh),           // boxes encoded as center-x, center-y, width, height
            clss: ClssType::Clss,                  // class scores only, no separate objectness
            coefs: Some(true),                     // mask coefficients for the segmentation branch
            anchors: Some(AnchorsPosition::After), // anchor dimension comes after the prediction channels
            ..Default::default()
        }
    )
    .with_model("xxxx.onnx")?;
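For comparison, a detection-only head can be described with the same fields by leaving the mask coefficients unset. This sketch reuses only the variant names imported above; whether this layout matches a given model's output is an assumption to verify:
// Sketch: the segment config above minus the mask-coefficient branch
let options = Options::default()
    .with_yolo_preds(
        YOLOPreds {
            bbox: Some(BoxType::Cxcywh),
            clss: ClssType::Clss,
            anchors: Some(AnchorsPosition::After),
            ..Default::default() // coefs left unset: no mask branch
        }
    )
    .with_model("xxxx.onnx")?;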
Other YOLOv8 Solution Models
Model | Weights | Datasets
---|---|---
Face-Landmark Detection | yolov8-face-dyn-f16 |
Head Detection | yolov8-head-f16 |
Fall Detection | yolov8-falldown-f16 |
Trash Detection | yolov8-plastic-bag-f16 |
Face Parsing | yolov8-face-parsing-dyn | CelebAMask-HQ ([Processed YOLO labels], [Python Script])
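These weights plug into the same example binary. As a sketch, the file name and class count below are assumptions to adjust for the chosen model:
cargo run -r --example yolo -- --task detect --ver v8 --nc 1 --model yolov8-head-f16.onnx # e.g. head detection with a single class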
Export ONNX Models
YOLOv5, YOLOv6, YOLOv7: export ONNX models following each project's own instructions.
YOLOv8
pip install -U ultralytics
# export onnx model with dynamic shapes
yolo export model=yolov8m.pt format=onnx simplify dynamic
yolo export model=yolov8m-cls.pt format=onnx simplify dynamic
yolo export model=yolov8m-pose.pt format=onnx simplify dynamic
yolo export model=yolov8m-seg.pt format=onnx simplify dynamic
yolo export model=yolov8m-obb.pt format=onnx simplify dynamic
# export onnx model with fixed shapes
yolo export model=yolov8m.pt format=onnx simplify
yolo export model=yolov8m-cls.pt format=onnx simplify
yolo export model=yolov8m-pose.pt format=onnx simplify
yolo export model=yolov8m-seg.pt format=onnx simplify
yolo export model=yolov8m-obb.pt format=onnx simplify
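An exported model can then be passed straight to the example. A sketch, with the file name and resolution as placeholders:
cargo run -r --example yolo -- --task detect --ver v8 --model yolov8m.onnx --width 800 --height 640 # a dynamic-shape export accepts arbitrary input sizes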