This demo shows how to use BLIP for conditional or unconditional image captioning.

Quick Start

cargo run -r --example blip
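
Under the hood, the example roughly follows the pattern below. This is only a minimal sketch: the split into visual/textual ONNX models and the names Options::with_model, Blip::new, DataLoader::try_read, and Blip::caption are assumptions about the crate's API at the time of this example, so treat examples/blip/main.rs as the source of truth.

use usls::{models::Blip, DataLoader, Options};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build the visual and textual ONNX models (file names here are placeholders).
    // NOTE: option and constructor names are assumptions; see examples/blip/main.rs.
    let options_visual = Options::default().with_model("blip-visual-base.onnx");
    let options_textual = Options::default().with_model("blip-textual-base.onnx");
    let mut model = Blip::new(options_visual, options_textual)?;

    // Load one image (this demo uses a batch size of 1).
    let xs = vec![DataLoader::try_read("./assets/bus.jpg")?];

    // Unconditional captioning: no text prompt is given.
    let _captions = model.caption(&xs, None, true)?;

    // Conditional captioning: the caption is continued from a text prompt.
    let _captions = model.caption(&xs, Some("three man"), true)?;

    Ok(())
}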

BLIP ONNX Model

Results

[Unconditional image captioning]: a group of people walking around a bus
[Conditional image captioning]: three man walking in front of a bus
Some(["three man walking in front of a bus"])

TODO

  • Multi-batch inference for image captioning
  • VQA
  • Retrieval
  • TensorRT support for the textual model