. Minor | - Fix | + Addition | ^ Improvement | ! Change | * Refactor

IMPORTANT:
- 0.1.x will still have some breaking changes in patches. Make sure to lock your version, e.g., genai = "=0.1.16".
- Version 0.2.x will follow semver more strictly.
- API changes will be denoted as "! API CHANGE ...".
2025-01-02 - v0.1.16
. MessageContent::text_into_string/str return None when Parts (to avoid leak)
^ Image support - Add Test, Image update, API Update (constructors, ImageSource variants with data)
+ Image Support - Initial (Thanks to @AdamStrojek)
  - For OpenAI, Gemini, Anthropic. (Only OpenAI supports URL images; others require base64.)
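The image support above distinguishes URL-based images from inline base64 data, and only some providers accept URLs. A minimal std-only sketch of such a source type (the names here are illustrative, not genai's exact API):

```rust
// Illustrative sketch of an image-source type with URL and inline-data
// variants; genai's actual ImageSource may differ in names and fields.
#[derive(Debug, Clone, PartialEq)]
enum ImageSource {
    /// Remote image by URL (only some providers, e.g. OpenAI, accept this).
    Url(String),
    /// Inline image: media type plus base64-encoded bytes.
    Base64 { content_type: String, data: String },
}

impl ImageSource {
    /// Providers that cannot fetch URLs need inline base64 data.
    fn is_inline(&self) -> bool {
        matches!(self, ImageSource::Base64 { .. })
    }
}

fn main() {
    let url_img = ImageSource::Url("https://example.com/cat.png".to_string());
    let b64_img = ImageSource::Base64 {
        content_type: "image/png".to_string(),
        data: "iVBORw0KGgo=".to_string(),
    };
    println!("url inline? {}", url_img.is_inline()); // false
    println!("b64 inline? {}", b64_img.is_inline()); // true
}
```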
2024-12-08 - v0.1.15
+ add back AdapterKind::default_key_env_name
2024-12-08 - 0.1.14
+ adapter - xAI adapter
+ ServiceTargetResolver added (support for custom endpoints) (check out examples/c06-starget-resolver.rs)
. ollama - now uses the OpenAI v1 API to list models
. test - add test for Client::all_model_names
* major internal refactor
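The custom-endpoint support above follows a resolver-closure pattern: a function inspects the requested model and optionally overrides where the request is sent. A std-only sketch of that idea (hypothetical names; see examples/c06-starget-resolver.rs for the real ServiceTargetResolver usage):

```rust
// Illustrative sketch of an endpoint-resolver closure; this is not
// genai's ServiceTargetResolver API, just the underlying pattern.
type EndpointResolverFn = Box<dyn Fn(&str) -> Option<String>>;

fn make_resolver() -> EndpointResolverFn {
    Box::new(|model: &str| {
        // Route xAI models to their own endpoint; let others use defaults.
        if model.starts_with("grok") {
            Some("https://api.x.ai/v1".to_string())
        } else {
            None // None = fall back to the adapter's default endpoint
        }
    })
}

fn main() {
    let resolver = make_resolver();
    println!("{:?}", resolver("grok-beta"));   // Some("https://api.x.ai/v1")
    println!("{:?}", resolver("gpt-4o-mini")); // None
}
```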
2024-12-07 - 0.1.13
. ollama - removed workaround for lack of multi-system support (for old ollama)
+ add stop_sequences support for cohere
+ stop_sequences - for openai, ollama, groq, gemini, cohere
+ stop_sequences - for anthropic (thanks @semtexzv)
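Stop sequences ask the provider to end generation at the first occurrence of any listed string. A std-only sketch of that semantics (this mirrors what providers do server-side; it is not genai code):

```rust
// Illustrative: cut generated text at the earliest occurrence of any
// stop sequence, mimicking provider-side stop_sequences behavior.
fn apply_stop_sequences(text: &str, stops: &[&str]) -> String {
    let cut = stops
        .iter()
        .filter_map(|s| text.find(*s)) // byte index of each stop, if present
        .min()                          // earliest match wins
        .unwrap_or(text.len());         // no stop found: keep everything
    text[..cut].to_string()
}

fn main() {
    let out = apply_stop_sequences("Once upon a time. THE END. More text.", &["THE END"]);
    println!("{out}"); // "Once upon a time. "
}
```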
2024-11-18 - 0.1.12
. minor update on LLM model names
^ ChatRole - impl Display
^ ChatRequest - added from_messages and append_messages
2024-11-04 - 0.1.11
^ anthropic - updated the default max_tokens to the max for the given model (i.e., 3.5 will be 8k)
+ tool - First pass at adding Function Calling for OpenAI and Anthropic (rel #24)
  - NOTE: Tool support is still a work in progress, but this should be a good first start.
. update version to 0.1.11-WIP
2024-10-05 - 0.1.10
(minor release)
^ ChatRequest - add ChatRequest::from_user(...)
. openai - add o1-preview, o1-mini to the openai list
. update groq models (llama 3.2)
. Added .github with GitHub Bug Report template (#26)
. minor readme update to avoid a browser issue when scrolling down to the video section
2024-09-18 - 0.1.9
^ AdapterKind - expose default_key_env_name
. openai - add 'o1-' model prefix to point to the OpenAI Adapter
. comments proofing (using genai with custom devai script)
. #23 - add documentation
. fix printer comment
. updated to v0.1.9-wip
2024-09-06 - 0.1.8
. printer - now uses printer::Error (rather than box dyn) (rel #21)
+ NEW - structured output - for Gemini & OpenAI
! soft deprecation (for now) - use ChatResponseFormat::JsonMode (was the ChatOptions::json_mode flag)
* Make most public types De/Serializable
. openai - fix chatgpt prefix; update current model lists
. add json test for anthropic
. makes webc::Error public (relates to: #12)
2024-08-14 - 0.1.7
+ Added ModelMapper scheme (client_builder::with_model_mapper_fn)
! API CHANGE - Removed AdapterKindResolver (use ModelMapper instead) (see examples/c03-mapper.rs)
! API CHANGE - Renamed ModelInfo to ModelIden
! API CHANGE - AuthResolver - refactor AuthResolver scheme/API (see examples/c02-auth.rs)
! API CHANGE - completely remove AdapterConfig (see AuthResolver)
. test groq - switch to llama3-groq-8b-8192-tool-use-preview for testing so that test_chat_json works as expected
^ chore: make stream Send
. test - ChatOptions - add tests for temperature
- Fix a typo in the OpenAI adapter that made the temperature chat option unusable
. unit test - first value_ext insert
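The ModelMapper scheme above boils down to a closure that rewrites a requested model name before the request goes out. A std-only sketch of that pattern (hypothetical names; see examples/c03-mapper.rs for the real with_model_mapper_fn usage):

```rust
// Illustrative sketch of a model-mapper closure; not genai's actual
// ModelMapper type, just the rewrite-the-model-name idea behind it.
type ModelMapperFn = Box<dyn Fn(&str) -> String>;

fn make_mapper() -> ModelMapperFn {
    Box::new(|model: &str| {
        // Redirect a legacy alias to a current model; pass others through.
        match model {
            "gpt-3.5-turbo" => "gpt-4o-mini".to_string(),
            other => other.to_string(),
        }
    })
}

fn main() {
    let mapper = make_mapper();
    println!("{}", mapper("gpt-3.5-turbo"));  // gpt-4o-mini
    println!("{}", mapper("claude-3-haiku")); // claude-3-haiku
}
```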
2024-07-26 - 0.1.6
+ ChatOptions - add JSON mode for OpenAI-type models
. groq - added the Llama 3.1 previews and groq-..-tool-use.. to the groq model list names
! now chat::printer::print_chat_stream (was utils::print_chat_stream)
! now ChatOptions (was ChatRequestOptions)
! Remove client_builder.with_default_chat_request_options (available via client_builder.with_chat_options)
. readme - add YouTube video docs
2024-07-21 - 0.1.5
! API CHANGE - now ClientBuilder::insert_adapter_config (was with_adapter_config)
. code clean
2024-07-19 - 0.1.4
! API CHANGE - refactor Error
  - With new ModelInfo
  - Back to genai::Error (adapter::Error was wrongly exposing internal responsibility)
. update tests and examples from 'gpt-3.5-turbo' to 'gpt-4o-mini'
- Fix naming ClientConfig::with_adapter_kind_resolver (was wrongly ...auth_resolver)
* refactor code layout, internal Adapter calls to use ModelInfo
+ Add ModelName and ModelInfo types for more efficient request/error context
! API CHANGE - now Client::resolve_model_info(model) (was Client::resolve_adapter_kind(model))
^ ChatRequest - add ChatRequest::from_system
. updated provider supported list
2024-07-18 - 0.1.3
^ openai - added gpt-4o-mini and switched all openai examples/tests to it
! API CHANGE - New MessageContent type for ChatMessage.content, ChatResponse.content, and StreamEnd.captured_content (only the ::Text variant for now)
  - This is in preparation for multimodal support
! API CHANGE - (should be minor, as Into is implemented) - ChatMessage now takes MessageContent, with only the ::Text(String) variant for now
! API CHANGE - Error refactor - added genai::adapter::Error and genai::resolver::Error, and updated genai::Error with appropriate Froms
+ Added token usage for ALL adapters/providers - ChatResponse.usage and ChatRequestOption.capture_usage / .capture_content (for streaming) support for all Adapters (see the note in the Readme about Ollama streaming)
! API CHANGE: ClientConfig::with_chat_request_options (was with_default_chat_request_options)
! API CHANGE: PrintChatStreamOptions::from_print_events (was from_stream_events)
^ AdapterKind - added as_str and as_lower_str
^ ChatRequest - added .iter_systems() and .combine_systems() (includes the eventual chat_req.system as part of the system messages)
! API CHANGE: Client::all_model_names(..) (was Client::list_models(..))
^ groq - add gemma2-9b-it to the list of Groq models
! API CHANGE: genai::Client (was genai::client::Client; same for ClientBuilder, ClientConfig)
- groq - remove the groq whisper model from list_models, as it is not a chat-completion model
^ ollama - implement live list_models for ollama
! Make AdapterDispatcher crate-only (should be internal only)
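The MessageContent change above is described as minor because Into is implemented, so call sites can keep passing plain strings. A std-only sketch of that design (a single ::Text variant with From impls; genai's real type has since grown more variants):

```rust
// Illustrative sketch of a MessageContent-style type: one Text variant,
// plus From impls so plain strings still work at call sites.
#[derive(Debug, Clone, PartialEq)]
enum MessageContent {
    Text(String),
}

impl From<&str> for MessageContent {
    fn from(s: &str) -> Self {
        MessageContent::Text(s.to_string())
    }
}

impl From<String> for MessageContent {
    fn from(s: String) -> Self {
        MessageContent::Text(s)
    }
}

fn main() {
    // Because Into is implemented, callers can pass "..." or String directly.
    let content: MessageContent = "Hello".into();
    println!("{content:?}"); // Text("Hello")
}
```

This is the usual trick for evolving a String field into a richer enum without breaking existing callers.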
2024-07-08 - 0.1.2
+ ChatRequestOptions - added temperature, max_tokens, top_p for all adapters (see the readme for property mapping)
! SyncAdapterKindResolverFn - change signature to return Result<Option> (rather than Result)
. made client.resolve_adapter_kind(model) public
+ implement groq completions
2024-06-12 - 0.1.1
- gemini - proper stream message error handling
2024-06-11 - 0.1.0
. print_chat_stream - minor refactor to ensure flush
2024-06-10 - 0.0.14
- ollama - improve the Ollama Adapter to support multiple system messages
- gemini - fix adapter to set "systemInstruction" (supported in v1beta)
2024-06-10 - 0.0.13
+ Added AdapterKindResolver
- Adapter::list_models API impl and change
^ chat_printer - added PrintChatStreamOptions with print_events