diff --git a/README.md b/README.md
index 70174631..bab7d36c 100644
--- a/README.md
+++ b/README.md
@@ -27,6 +27,13 @@ versatility.
 - Normalization comes with alignments tracking. It's always possible to get
   the part of the original sentence that corresponds to a given token.
 - Does all the pre-processing: Truncate, Pad, add the special tokens your
   model needs.
+
+## Bindings
+
+We provide bindings to the following languages (more to come!):
+ - [Rust](https://github.com/huggingface/tokenizers/tree/master/tokenizers) (Original implementation)
+ - [Python](https://github.com/huggingface/tokenizers/tree/master/bindings/python)
+ - [Node.js](https://github.com/huggingface/tokenizers/tree/master/bindings/node)
 
 ## Quick examples using Python:
@@ -69,13 +76,6 @@ And training a new vocabulary is just as easy:
 [00:00:00] Count pairs                    ████████████████████████████████████████ 20993/20993
 [00:00:03] Compute merges                 ████████████████████████████████████████ 19375/19375
 ```
-
-## Bindings
-
-We provide bindings to the following languages (more to come!):
- - [Rust](https://github.com/huggingface/tokenizers/tree/master/tokenizers) (Original implementation)
- - [Python](https://github.com/huggingface/tokenizers/tree/master/bindings/python)
- - [Node.js](https://github.com/huggingface/tokenizers/tree/master/bindings/node)
 
 ## Contributors
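
The second hunk's context shows the progress output ("Count pairs", "Compute merges") from training a new vocabulary via the Python bindings. As a rough sketch of what produces that output (not part of this patch; it assumes `pip install tokenizers`, and the corpus file here is a tiny stand-in rather than a real training corpus):

```python
# Sketch: train a small BPE vocabulary with the `tokenizers` Python bindings.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

# A tiny stand-in corpus; in practice you would train on real text files.
with open("corpus.txt", "w") as f:
    f.write("a tokenizer turns raw text into tokens\n" * 100)

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()          # split on whitespace before learning merges
trainer = BpeTrainer(special_tokens=["[UNK]"])  # learns merges; prints the progress bars
tokenizer.train(files=["corpus.txt"], trainer=trainer)

print(tokenizer.get_vocab_size())  # number of learned tokens
```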