chore(deps): update dependency litellm to v1.55.2
This MR contains the following updates:
Package | Type | Update | Change |
---|---|---|---|
litellm | dependencies | minor | `1.54.0` -> `1.55.2` |
⚠️ Warning: Some dependencies could not be looked up. Check the warning logs for more information.
WARNING: this job ran in a Renovate pipeline that doesn't support the configuration required for common-ci-tasks Renovate presets.
Release Notes
BerriAI/litellm (litellm)
v1.55.2
What's Changed
- Litellm dev 12 12 2024 by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7203
- Litellm dev 12 11 2024 v2 by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7215
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.55.1...v1.55.2
Docker Run LiteLLM Proxy
```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.2
```
🎉 Don't want to maintain your internal proxy? Get in touch - Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
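The running container exposes an OpenAI-compatible API on port 4000. Below is a minimal sketch of calling it with the `openai` Python client; the base URL matches the `docker run` command above, while the model alias and API key are placeholders rather than values from this MR.

```python
# Minimal sketch: call a locally running LiteLLM proxy via its
# OpenAI-compatible endpoint. Assumes the container above listens on
# localhost:4000; the model alias and key are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # LiteLLM proxy started by the docker run above
    api_key="sk-placeholder",          # replace with a virtual key configured on the proxy
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model alias; must exist in the proxy's model list
    messages=[{"role": "user", "content": "Hello from the LiteLLM proxy"}],
)
print(response.choices[0].message.content)
```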
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed | 250.0 | 285.12950797290193 | 6.288841435893255 | 0.0033415735578603907 | 1882 | 1 | 149.6715149999659 | 2193.2730590000347 |
Aggregated | Passed | 250.0 | 285.12950797290193 | 6.288841435893255 | 0.0033415735578603907 | 1882 | 1 | 149.6715149999659 | 2193.2730590000347 |
v1.55.1
What's Changed
- (feat) add `response_time` to StandardLoggingPayload - logged on `datadog`, `gcs_bucket`, `s3_bucket`, etc. by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7199 (a logging sketch follows this list)
- build(deps): bump nanoid from 3.3.7 to 3.3.8 in /ui by @dependabot in https://github.com/BerriAI/litellm/pull/7198
- (Feat) DataDog Logger - Add `HOSTNAME` and `POD_NAME` to DataDog logs by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7189
- (feat) add `error_code`, `error_class`, `llm_provider` to `StandardLoggingPayload` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7200
- (docs) Document StandardLoggingPayload Spec by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7201
- fix: Support WebP image format and avoid token calculation error by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7182
- (feat) UI - Disable Usage Tab once SpendLogs is 1M+ Rows by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7208
- (minor fix proxy) Clarify Proxy Rate limit errors are showing hash of litellm virtual key by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7210
- (fix) latency fix - revert prompt caching check on litellm router by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7211
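The `response_time`, `error_code`, `error_class`, and `llm_provider` fields above land in the StandardLoggingPayload emitted by LiteLLM's logging callbacks. Below is a minimal sketch of wiring up the DataDog callback so those fields are exported; the environment variable names and model are assumptions based on LiteLLM's DataDog integration, not values taken from this MR.

```python
# Hypothetical sketch: send LiteLLM success/failure logs to DataDog so the new
# StandardLoggingPayload fields (response_time, error_code, error_class,
# llm_provider) appear in the emitted payloads. Env var names are assumptions.
import os
import litellm

os.environ["DD_API_KEY"] = "<your-datadog-api-key>"  # assumed DataDog credential
os.environ["DD_SITE"] = "datadoghq.com"              # assumed DataDog site

litellm.success_callback = ["datadog"]  # successful calls log response_time
litellm.failure_callback = ["datadog"]  # failed calls log error_code / error_class / llm_provider

response = litellm.completion(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "hello"}],
)
```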
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.55.0...v1.55.1
Docker Run LiteLLM Proxy
```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.1
```
🎉 Don't want to maintain your internal proxy? Get in touch - Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed | 250.0 | 274.17864765330575 | 6.170501674094568 | 0.0 | 1846 | 0 | 212.15181599995958 | 2203.3609819999356 |
Aggregated | Passed | 250.0 | 274.17864765330575 | 6.170501674094568 | 0.0 | 1846 | 0 | 212.15181599995958 | 2203.3609819999356 |
v1.55.0
What's Changed
- Litellm code qa common config by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7113
- (Refactor) Code Quality improvement - use Common base handler for Cohere by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7117
- (Refactor) Code Quality improvement - Use Common base handler for `clarifai/` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7125
- (Refactor) Code Quality improvement - Use Common base handler for `cloudflare/` provider by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7127
- (Refactor) Code Quality improvement - Use Common base handler for Cohere /generate API by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7122
- (Refactor) Code Quality improvement - Use Common base handler for `anthropic_text/` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7143
- docs: document code quality by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7149
- (Refactor) Code Quality improvement - stop redefining LiteLLMBase by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7147
- LiteLLM Common Base LLM Config (pt.2) by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7146
- LiteLLM Common Base LLM Config (pt.3): Move all OAI compatible providers to base llm config by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7148
- refactor(sagemaker/): separate chat + completion routes + make them b… by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7151
- rename `llms/OpenAI/` -> `llms/openai/` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7154
- Code Quality improvement - remove symlink to `requirements.txt` from within litellm by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7155
- LiteLLM Common Base LLM Config (pt.4): Move Ollama to Base LLM Config by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7157
- Code Quality Improvement - remove `file_apis`, `fine_tuning_apis` from `/llms` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7156
- Revert "LiteLLM Common Base LLM Config (pt.4): Move Ollama to Base LLM Config" by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7160
- Litellm ollama refactor by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7162
- Litellm vllm refactor by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7158
- Litellm merge pr by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7161
- Code Quality Improvement - remove `tokenizers/` from `/llms` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7163
- build(deps): bump nanoid from 3.3.7 to 3.3.8 in /docs/my-website by @dependabot in https://github.com/BerriAI/litellm/pull/7159
- (Refactor) Code Quality improvement - remove `/prompt_templates/`, `base_aws_llm.py` from `/llms` folder by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7164
- Code Quality Improvement - use `vertex_ai/` as folder name for vertexAI by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7166
- Code Quality Improvement - move `aleph_alpha` to deprecated_providers by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7168
- (Refactor) Code Quality improvement - rename `text_completion_codestral.py` -> `codestral/completion/` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7172
- (Code Quality) - Add test to enforce all folders in `/llms` are a litellm provider by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7175
- fix(get_supported_openai_params.py): cleanup by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7176
- fix(acompletion): support fallbacks on acompletion by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7184
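A minimal sketch of what the fallbacks support on `acompletion` from the last item might look like in use; the model names and fallback list below are placeholders, and the exact parameter semantics should be checked against the LiteLLM docs for this release.

```python
# Hypothetical sketch: pass a fallbacks list to the async completion call,
# per the "support fallbacks on acompletion" change. Model names are placeholders.
import asyncio
import litellm

async def main():
    response = await litellm.acompletion(
        model="gpt-4o-mini",                    # primary model (placeholder)
        messages=[{"role": "user", "content": "hello"}],
        fallbacks=["claude-3-haiku-20240307"],  # assumed fallback models, tried if the primary call fails
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```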
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.54.1...v1.55.0
Docker Run LiteLLM Proxy
```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.55.0
```
🎉 Don't want to maintain your internal proxy? Get in touch - Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed | 250.0 | 286.19507948581224 | 5.886697197840291 | 0.0033409178194326278 | 1762 | 1 | 211.68456200001629 | 3578.4067740000296 |
Aggregated | Passed | 250.0 | 286.19507948581224 | 5.886697197840291 | 0.0033409178194326278 | 1762 | 1 | 211.68456200001629 | 3578.4067740000296 |
v1.54.1
What's Changed
- refactor - use consistent file naming convention `AI21/` -> `ai21` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7090
- refactor - use consistent file naming convention AzureOpenAI/ -> azure by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/7092
- Litellm dev 12 07 2024 by @krrishdholakia in https://github.com/BerriAI/litellm/pull/7086
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.54.0...v1.54.1
Docker Run LiteLLM Proxy
```shell
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.54.1
```
🎉 Don't want to maintain your internal proxy? Get in touch - Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Failed | 280.0 | 340.7890831504466 | 5.986291177372485 | 0.0 | 1788 | 0 | 236.28402200000664 | 4047.592437999981 |
Aggregated | Failed | 280.0 | 340.7890831504466 | 5.986291177372485 | 0.0 | 1788 | 0 | 236.28402200000664 | 4047.592437999981 |
Configuration
- [ ] If you want to rebase/retry this MR, check this box
This MR has been generated by Renovate Bot.