Mirror of https://github.com/mii443/usls.git (synced 2025-08-22 15:45:41 +00:00)
This demo shows how to use BLIP for conditional and unconditional image captioning.
Quick Start
cargo run -r --example blip
Results
[Unconditional]: a group of people walking around a bus
[Conditional]: three man walking in front of a bus
Some(["three man walking in front of a bus"])
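The two modes above differ only in whether a text prompt seeds the decoder: unconditional captioning starts from scratch, while conditional captioning continues from the given prompt. A minimal sketch of how this might look in code, assuming an API shaped like this repo's blip example (the model paths and the `Blip::new`, `encode_images`, and `caption` names here are illustrative, not verified signatures):

```rust
// Hypothetical sketch; see examples/blip in this repo for the actual API.
use usls::{models::Blip, DataLoader, Options};

fn main() -> anyhow::Result<()> {
    // Build the visual and textual sub-models (paths are placeholders).
    let mut model = Blip::new(
        Options::default().with_model("blip-visual.onnx"),
        Options::default().with_model("blip-textual.onnx"),
    )?;

    // Encode the image once; both captioning modes reuse the embedding.
    let images = [DataLoader::try_read("./assets/bus.jpg")?];
    let embeddings = model.encode_images(&images)?;

    // Unconditional: the decoder generates a caption with no prompt.
    let unconditional = model.caption(&embeddings, None, true)?;

    // Conditional: the prompt steers the generated caption.
    let conditional = model.caption(&embeddings, Some("three man"), true)?;

    println!("{unconditional:?} {conditional:?}");
    Ok(())
}
```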
TODO
- Multi-batch inference for image captioning
- VQA
- Retrieval
- TensorRT support for textual model